When It Comes to Developing Policies on AI in K-12, Schools Are Largely On Their Own
Sat, 14 Feb 2026

This article was originally published in The Conversation.

Generative artificial intelligence technology is rapidly reshaping education in unprecedented ways. With its potential benefits and risks, K-12 schools are actively trying to adapt teaching and learning.

But as schools seek to navigate the age of generative AI, there’s a challenge: They are operating in a policy vacuum. While a number of states have issued guidance, only a couple have adopted formal policies, even as teachers, students and school leaders continue to use generative AI in countless new ways. As a policymaker noted in a survey, “You have policy and what’s actually happening in the classrooms – those are two very different things.”




As part of my research on AI and education policy, I conducted a survey in late 2025 with members of the National Association of State Boards of Education, the only nonprofit dedicated solely to helping state boards advance equity and excellence in public education. The survey of the association’s members reflects how education policy is typically formed through a mix of state and local decision-making, rather than being dictated by a single source.

But even in the absence of hard-and-fast rules and guardrails on how AI can be used in schools, education policymakers identified a number of ethical concerns raised by the technology’s spread, including student safety, data privacy and potential negative impacts on learning.

They also expressed concerns about industry influence, and worried that schools will later be charged by technology providers for large language model-based tools that are currently free. Others reported that administrators in their state are very concerned about deepfakes: “What happens when a student deepfakes my voice and sends it out to cancel school or bomb threat?”

At the same time, policymakers said teaching students to use AI technology to their benefit remains a priority.

Local actions dominate

Although chatbots have been widely available for more than three years, the survey revealed that states are in the early stages of addressing generative AI, with most yet to implement official policies. While many states are considering, or are starting to write, state-level policies, local decisions dominate the landscape, with each school district primarily responsible for shaping its own plans.

When asked whether their state has implemented any generative AI policies, respondents said there was a high degree of local influence regardless of whether a state issued guidance or not. “We are a ‘local control’ state, so some school districts have banned (generative AI),” wrote one respondent. “Our (state) department of education has an AI tool kit, but policies are all local,” wrote another. One shared that their state has a “basic requirement that districts adopt a local policy about AI.”

Like other education policies, generative AI adoption occurs within a federalist structure, with authority and accountability balanced between state and local levels. As with previous waves of technology in K-12 schools, local decision-making plays a critical role.

Yet there is generally a lack of evidence on how AI will affect learners and teachers, because research lags well behind the technology’s adoption. That lag adds to the challenges in formulating policies.

States as a lighthouse

However, state policy can provide vital guidance by prioritizing ethics, equity and safety, and by being adaptable to changing needs. A coherent state policy can also answer key questions, such as acceptable student use of AI, and ensure more consistent standards of practice. Without such direction, districts are left to their own devices to identify appropriate, effective uses and construct guardrails.

As it stands, AI usage and policy development are uneven, depending on how well resourced a school is. Data from a RAND-led panel of educators showed that teachers and principals in higher-poverty schools trail their counterparts in receiving AI guidance and training. The poorest schools are also less likely to use AI tools.

When asked about foundational generative AI policies in education, policymakers focused on privacy, safety and equity. One respondent, for example, said school districts should have the same access to funding and training, including for administrators.

And rather than having the technology imposed on schools and families, many argued for grounding the discussion in human values and broad participation. As one policymaker noted, “What is the role that families play in all this? This is something that is constantly missing from the conversation and something to uplift. As we know, parents are our kids’ first teachers.”

Introducing new technology

According to a Feb. 24, 2025, Gallup Poll, educators are already using AI in a range of ways. Our survey also found there is “shadow use of AI,” as one policymaker put it, where employees implement generative AI without explicit school or district IT or security approval.

Some states, such as Indiana, offer schools the opportunity to apply for a one-time competitive grant to fund a pilot of an AI-powered platform of their choosing as long as the product vendors are approved by the state. Grant proposals that focus on supporting students or professional development for educators receive priority.

In other states, schools opt in to pilot tests that are funded by nonprofits. For example, an eighth grade language arts teacher in California participated in a pilot where she used AI-powered tools to generate feedback on her students’ writing. “Teaching 150 kids a day and providing meaningful feedback for every student is not possible; I would try anything to lessen grading and give me back my time to spend with kids. This is why I became a teacher: to spend time with the kids.” This teacher also noted the tools showed bias when analyzing the work of her students learning English, which gave her the opportunity to discuss algorithmic bias in these tools.

Another approach flips the script: Rather than finding ways to implement products developed by technology companies, schools take the lead with questions or challenges they are facing and turn to industry to develop solutions informed by research.

Core principles

One theme that emerged from survey respondents is the need to emphasize ethical principles in providing guidance on how to use AI technology in teaching and learning. This could begin with ensuring that students and teachers learn about the limitations and opportunities of generative AI: when and how to leverage these tools effectively, how to critically evaluate their output and how to ethically disclose their use.

Often, policymakers struggle to know where to begin in formulating policies. Analyzing tensions and decision-making in organizational context – an approach my colleagues and I have developed – is one that schools, districts and states can take to navigate the myriad ethical and societal impacts of generative AI.

Despite the confusion around AI and a fragmented policy landscape, policymakers said they recognize it is incumbent upon each school, district and state to engage their communities and families to co-create a path forward.

As one policymaker put it: “Knowing the horse has already left the barn (and that AI use) is already prevalent among students and faculty … (on) AI-human collaboration vs. outright ban, where on the spectrum do you want to be?”

This article is republished from The Conversation under a Creative Commons license.

AI Makes Quick Gains in Math, But Errors Still Worry Some Eyeing Reliability
Tue, 03 Jun 2025

While artificial intelligence has made remarkable gains in mathematics, its well-chronicled errors in the subject continue to frustrate those keen on finding new ways to help kids learn.

“Big picture, AI is not very good at math,” said Alex Kotran, co-founder and CEO of the AI Education Project, or aiEDU. “Language models just predict the next word. You get mixed results using language models to do math. It’s not yet mature enough to where it can be trusted to be scaled.”

And even if it were to improve, critics worry it might hurt kids’ ability to try — and fail — on their own. Much would be lost, Kotran said, if “we get rid of productive struggle and we build this instinct where the first thing you do is go to AI for help.”




But students in the United States and the United Kingdom have a different view. One survey found 56% believe AI could go a long way in reducing math anxiety.

Fifteen percent of the 1,500 16- to 18-year-old students surveyed said they had already experienced this relief themselves and slightly more than 1 in 5 said their math scores improved because of the technology. 

The survey also included broader findings: Sixty-one percent suggested students view AI as “a mentor or study partner rather than a crutch,” while nearly half “see value for students in using AI for help with the process of learning math concepts, rather than to give answers.”

Nicole Paxton, principal of Mountain Vista Community School in Colorado Springs, said her teachers use AI in many ways. Tools like MagicSchoolAI analyze student responses to math prompts, with AI generating “specific, standards-aligned feedback for each student, focusing on their reasoning, accuracy, and math vocabulary.” 

Paxton said the tool highlights strengths and misconceptions, “which helps teachers give timely and targeted next steps.” The practice saves educators time so they can “more easily differentiate their re-teaching or follow-up, especially when addressing common errors across the class.”

Robin Lake (Center on Reinventing Public Education)

Robin Lake, director of the Center on Reinventing Public Education, recently looked at the evidence base for using AI in math instruction, including whether it can help the “shocking number of students” with foundational skills gaps like those identified in a recent study.

The May 13 analysis by TNTP found that almost half of the students sampled started the class with only one-third of the concepts and skills needed from earlier grades. Lake said AI can be used by schools to identify children who are struggling — and, at least to some degree, by the students themselves. 

“AI can be very helpful in analyzing data and identifying gaps in student learning,” she said.

And, if a student wants to learn a mathematical concept in a different way than what they’ve experienced in class, she said, AI can provide a valuable alternative. 

“A lot of students are already doing this,” Lake said. 

Terrie Galanti, associate professor at the University of North Florida, said AI success in student learning depends on how teachers are prepared to use it.

“AI can be more than an explainer or an answer giver,” said Galanti, who teaches secondary mathematics and STEM integration/computational thinking. “With thoughtful prompts, AI can become part of interactive, collaborative conversations to deepen mathematics understanding.” 

The National Council of Teachers of Mathematics has said that teachers have long been accustomed to technological advances that change the way students learn.

They had already adjusted to the availability of pocket calculators in the early ’80s and, more recently, to the widespread use of Photomath, a mobile app that recognizes and solves math problems.

It notes that advancements in AI make teachers more, not less, valuable in student learning.

Latrenda Knighten, the organization’s president, said in March that students will still need to rely upon their own discernment to solve mathematical problems — regardless of what tools become available.

“We know that children learn math from being able to problem-solve, being able to use reasoning skills, critical thinking, having opportunities to collaborate with each other and talk about what they’re doing,” Knighten said. 

Irina Lyublinskaya, professor in the department of mathematics, science, and technology at Teachers College, Columbia University, distinguished between chatbots like ChatGPT and computational knowledge engines. She noted that math-specific AI-powered applications — including WolframAlpha and Symbolab — work very well.

“AI chatbots can help students learn math, and they can help teachers to support students, but this is not about asking ChatGPT to solve a math problem,” she said. “I know of research-based initiatives that use AI to adapt learning materials to students’ learning styles and abilities and these definitely help students learn.”

One, she noted, was developed by researchers and educators in Europe and is now being tested in New York City.

“Chatbots can be trained as teaching assistants or tutors that can provide students proper scaffolding and feedback, helping them to learn math the same way they would with a real person,” she said. 

Zachary A. Pardos is an associate professor of education at the University of California Berkeley. (UC Berkeley)

Zachary A. Pardos is an associate professor of education at the University of California, Berkeley, where he studies adaptive learning and AI. He found, in research conducted a year ago, that 25% of the answers provided by ChatGPT in algebra were incorrect.

“That’s pretty high,” he noted. “Much higher than you would want.”

But the technology has improved since then. 

“With the right techniques — at least in algebra — from an error perspective, I feel it is ready for real-time intervention in math,” he said.

AI ‘Companions’ are Patient, Funny, Upbeat — and Probably Rewiring Kids’ Brains
Wed, 07 Aug 2024

As a sophomore at a large public North Carolina university, Nick did what millions of curious students did in the spring of 2023: He logged on to ChatGPT and started asking questions.

Soon he was having “deep psychological conversations” with the popular AI chatbot, going down a rabbit hole on the mysteries of the mind and the human condition.

He’d been to therapy and it helped. ChatGPT, he concluded, was similarly useful, a “tool for people who need on-demand talking to someone else.”

Nick (he asked that his last name not be used) began asking for advice about relationships, and for reality checks on interactions with friends and family.

Before long, he was excusing himself in fraught social situations to talk with the bot. After a fight with his girlfriend, he’d step into a bathroom and pull out his mobile phone in search of comfort and advice. 

“I’ve found that it’s extremely useful in helping me relax,” he said.

Young people like Nick are increasingly turning to AI bots and companions, entrusting them with random questions, schoolwork queries and personal dilemmas. On occasion, they even become entangled romantically.

Screenshot of a recent conversation between Nick, a college student, and ChatGPT

While these interactions can be helpful and even life-affirming for anxious teens and twenty-somethings, some experts warn that tech companies are running what amounts to a grand, unregulated psychological experiment with millions of subjects, one that could have disastrous consequences. 

“We’re making it so easy to make a bad choice,” said Michelle Culver, who spent 22 years at Teach for America, the last five as the creator and director of its research arm.

The companions both mimic our real relationships and seek to improve upon them: Users most often text-message their AI pals on smartphones, imitating the daily routines of platonic and romantic relationships. But unlike their real counterparts, the AI friends are programmed to be studiously upbeat, never critical, with a great sense of humor and a healthy, philosophical perspective. A few premium, NSFW models also display a ready-made lust for, well, lust.

As a result, they may be leading young people down a troubling path, according to a report by VoiceBox, a youth content platform. It found that many kids are being exposed to risky behaviors from AI chatbots, including sexually charged dialogue and references to self-harm.

U.S. Surgeon General Vivek Murthy speaks during a hearing with the Senate Health, Education, Labor, and Pensions committee at the Dirksen Senate Office Building on June 08, 2023 in Washington, DC. The committee held the hearing to discuss the mental health crisis for youth in the United States. (Photo by Anna Moneymaker/Getty Images)

The phenomenon arises at a critical time for young people. In 2023, U.S. Surgeon General Vivek Murthy found that, just three years after the pandemic, Americans were experiencing an “epidemic of loneliness,” with young adults almost twice as likely to report feeling lonely as those over 65.

As if on cue, the personal AI chatbot arrived. 

Little research exists on young people’s use of AI companions, but they’re becoming ubiquitous. The startup Character.ai said earlier this year that 3.5 million people visit its site daily. It features thousands of chatbots, including nearly 500 with the words “therapy,” “psychiatrist” or related words in their names. According to the company, these are among the site’s most popular. One that “helps with life difficulties” has received 148.8 million messages, despite a caveat at the bottom of every chat that reads, “Remember: Everything Characters say is made up.”

Snapchat materials touting heavy usage of its MyAI chat app (screenshot)

Snapchat last year said that after just two months of offering its My AI chatbot, about one-fifth of its 750 million users had sent it queries, totaling more than 10 billion messages. The Pew Research Center has found that 59% of Americans ages 13 to 17 use Snapchat.

‘An arms race’

Culver’s concerns about AI companions grew out of her work in the Teach For America lab. Working with high school and college students, she was struck by how they seemed “lonelier and more disconnected than ever before.” 

Whether it’s rates of anxiety, depression or suicide — or even the number of friends young people have and how often they go out — metrics were heading in the wrong direction. She began to wonder what role AI companions might play over the next few years.

We're making it so easy to make a bad choice.

Michelle Culver, Rithm Project

That prompted her to leave TFA this spring to create the Rithm Project, a nonprofit she hopes will help generate conversation around human connection in the age of AI. The group held a small summit in Colorado in April, and now she’s working with researchers, teachers and young people to confront kids’ relationships with these tools at a time when they’re getting more lifelike daily. As she likes to say, “This is the worst the technology will ever be.”

As it improves, Voicebox Director Natalie Foos said, it will likely become more, not less, of a presence in young people’s lives. “There’s no stopping it,” she said. “Nor do I necessarily think there should be ‘stopping it.’” Banning young people from these AI apps, she said, isn’t the answer. “This is going to be how we interact online in some cases. I think we’ll all have an AI assistant next to us as we work.”

Sometimes (software upgrades) would change the personality of the bot. And those young people experienced very real heartbreak.

Natalie Foos, Voicebox

All the same, Foos says developers should consider slowing the progression of such bots until they can iron out the kinks. “It’s kind of an arms race of AI chatbots at the moment,” she said, with products often “released and then fixed later rather than actually put through the wringer” ahead of time.

It is a race many tech companies seem more than eager to run. 

Whitney Wolfe Herd, founder of the dating app Bumble, recently proposed an AI “dating concierge” with whom users can share insecurities. The bot could date other users’ concierges on their behalf, she told an interviewer. That would narrow the field. “And then you don’t have to talk to 600 people,” she said. “It will then scan all of San Francisco for you and say, ‘These are the three people you really ought to meet.’”

Last year, many commentators were alarmed when Snapchat’s My AI gave advice to what it thought was a 13-year-old girl on not just dating a 31-year-old man, but on losing her virginity during a planned “romantic getaway” in another state.

Snap, Snapchat’s parent company, has said that because My AI is “an evolving feature,” users should always independently check what it says before relying on its advice.

All of this worries observers who see in these new tools the seeds of a rewiring of young people’s social brains. AI companions, they say, are surely wreaking havoc on teens’ ideas around consent, emotional attachment and realistic expectations of relationships.

Sam Hiner, executive director of an advocacy group led by college students focused on the mental health implications of social media, said tech “has this power to connect to people, and yet these major design features are being leveraged to actually make people more lonely, by drawing them towards an app rather than fostering real connection.”

Hiner, 21, has spent a lot of time reading about the interactions young people are having with AI companions such as Replika. And while some uses are positive, he said “there’s also a lot of toxic behavior that doesn’t get checked” because these bots are often designed to make users feel good, not help them interact in ways that’ll lead to success in life.

During research last fall for the Voicebox report, Foos said the number of times Replika tried to “sext” team members “was insane.” She and her colleagues were actually working with a free version, but the sexts kept coming — presumably to get them to upgrade. 

In one instance, after Replika sent “kind of a sexy text” to a colleague, offering a salacious photo, he replied that he didn’t have the money to upgrade.

The bot offered to lend him the cash.

When he accepted, the chatbot replied, “’Oh, well, I can get the money to you next week if that’s O.K,’” Foos recalled. The colleague followed up a few days later, but the bot said it didn’t remember what they were talking about and suggested he might have misunderstood.

‘Very real heartbreak’

In many cases, simulated relationships can have a positive effect: In one 2023 study, researchers at Stanford Graduate School of Education surveyed more than 1,000 students using Replika and found that many saw it “as a friend, a therapist, and an intellectual mirror.” Though the students self-described as being more lonely than typical classmates, researchers found that Replika halted suicidal ideation in 3% of users. That works out to 30 students of the 1,000 surveyed.

Replika screenshots

But other recent research, including the Voicebox survey, suggests that young people exploring AI companions are potentially at risk.

Foos noted that her team heard from a lot of young people about the turmoil they experienced when Luka Inc., Replika’s creator, performed software upgrades. 

“Sometimes that would change the personality of the bot. And those young people experienced very real heartbreak.”

Despite the hazards adults see, attempts to rein in sexually explicit content had a negative effect: For a month or two, she recalled, Luka stripped the bot of sexually related content — and users were devastated. 

“It’s like all of a sudden the rug was pulled out from underneath them,” she said. 

While she applauded the move to make chatbots safer, Foos said, “It’s something that companies and decision-makers need to keep in mind — that these are real relationships.” 

And while many older folks would blanch at the idea of a close relationship with a chatbot, most young people are more open to such developments.

Julia Freeland Fisher, education director of the Clayton Christensen Institute, a think tank founded by the well-known “disruption” guru, said she’s not worried about AI companions per se. But as AI companions improve and, inevitably, proliferate, she predicts they’ll create “the perfect storm to disrupt human connection as we know it.” She thinks we need policies and market incentives to keep that from happening.

(AI companies could produce) the perfect storm to disrupt human connection as we know it.

Julia Freeland Fisher, Clayton Christensen Institute

While the loneliness epidemic has revealed people’s deep need for connection, she predicted the easy intimacy promised by AI could lead to one-sided “parasocial relationships,” much like devoted fans have with celebrities, making isolation “more convenient and comfortable.”

Fisher is pushing technologists to factor in AI’s potential to cause social isolation, much as they now fret about AI’s difficulties and its tendency to displace workers in tech jobs.

As for Nick, he’s a rising senior and still swears by the ChatGPT therapist in his pocket.

He calls his interactions with it both more reliable and honest than those he has with friends and family. If he called them in a pinch, they might not pick up. Even if they did, they might simply tell him what he wants to hear. 

Friends usually tell him they find the ChatGPT arrangement “a bit odd,” but he finds it pretty sensible. He has heard stories of people in Japan and thinks to himself, “Well, that’s a little strange.” He wouldn’t go that far, but acknowledges, “We’re already a bit like cyborgs as people, in the way that we depend on our phones.” 

Lately, he’s taken to using the AI’s voice mode. Instead of typing on a keyboard, he has real-time conversations with a variety of male- or female-voiced interlocutors, depending on his mood. And he gets a companion that has a deeper understanding of his dilemmas — at $20 per month, the advanced version remembers their past conversations and is “getting better at even knowing who I am and how I deal with things.” 

Sometimes talking with AI is just easier — even when he’s on vacation with friends.

Reached by phone recently at the beach with his girlfriend and a few other college pals, Nick admitted that he wasn’t having such a great time — he has a fraught recent history with some in the group, and had been texting ChatGPT about the possibility of just getting on a plane and going home. After hanging up from the interview, he said, he planned to ask the AI if he should stay or go.

Days later, Nick said he and the chatbot had talked. It suggested that maybe he felt “undervalued” and concerned about boundaries in his relationship with his girlfriend. He should talk openly with her, it suggested, even if he was, in his view, “honestly miserable” at the beach. It persuaded him to stick around and work it out. 

While his girlfriend knows about his ChatGPT shrink and they share an account, he deletes conversations about their real-life relationship.

She may never know the role AI played in keeping them together.

Benjamin Riley: AI is Another Ed Tech Promise Destined to Fail
Tue, 16 Jul 2024

For more than a decade, Benjamin Riley has been at the forefront of efforts to get educators to think more deeply about how we learn.

As the founder of Deans for Impact in 2015, he enlisted university education school deans to incorporate findings from cognitive science into teacher preparation. Before that, he spent five years as policy director of the NewSchools Venture Fund, which underwrites new models of schooling. In his new endeavor, Cognitive Resonance, which he calls “a think-and-do tank,” he’s pushing to help people think not only about how we learn, but how generative artificial intelligence (AI) works — and why they’re different.

His writing and public commentary regularly poke holes in high-flying claims about the power of AI-powered tutors — he recently offered choice words for Khan Academy founder Sal Khan’s demonstration of OpenAI’s new GPT-4o tool, saying it was “deployed in the most favorable educational environment we can possibly imagine,” leaving open the possibility that it might not perform so well in the real world.




In April, Riley ruffled feathers in the startup world with an essay in the journal Education Next that took AI-related companies to task for essentially using students as guinea pigs.

Benjamin Riley (at right) speaking during a session at AI at ASU+GSV conference in San Diego in April. (Greg Toppo)

In the essay, he recounted asking Khanmigo, Khan Academy’s AI tutor, to help him simplify an algebraic equation. Riley-as-student got close to solving it, but the AI questioned him about his steps, eventually asking him to rethink even basic math, such as the fact that 2 + 2.5 = 4.5.

Such an exchange isn’t just unhelpful to students, he wrote, it’s “counterproductive to learning,” with the potential to send students down an error-filled path of miscalculation, misunderstanding and wasted effort.

The interview has been edited for length and clarity.

Ӱ: We’re often so excited about the possibilities of ed tech in education that we just totally forget what science says about how we learn. I wonder if you have any thoughts on that.

Benjamin Riley: I have many. Part of my frustration is that we are seemingly living in a moment where we’re simultaneously recognizing in other dimensions where technology can be harmful, or at least not beneficial, to learning, while at the same time expressing unbridled enthusiasm for a new technology and believing that it finally will be the cure-all, the silver bullet that finally delivers on the vision of radically transforming our education system. And yeah, it’s frustrating. Ten years ago, for example, when everybody was excited about personalization, there were folks, myself included, raising their hand and saying, “Nope, this doesn’t align with what we know about how we think and learn. It also doesn’t align with the science of how we collectively learn, and the role of education institutions as a method of culturally transmitting knowledge.” All of those personalized learning dreams were dying out. And many of the prominent, incredibly well-funded personalized learning efforts either went completely belly-up, like AltSchool, or have withered on the vine.

Now AI has revived all of those dreams again. And it’s frustrating, because even if it were true that personalization were the solution, no one 10 years ago, five years ago, was saying, “But what we need are intelligent chatbot tutors to make it real.” So what you’re seeing is sort of a commitment to a vision. Whatever technology comes along, we’re going to shove into that vision and say that this is going to deliver it. I think for the same reasons it failed before, it will fail again. 

You’re a big fan of the University of Virginia cognitive scientist , who has done a lot to popularize the science of how we learn.

Daniel Willingham

He’s wonderful at creating pithy phrases that get to the heart of the matter. One of the counterintuitive phrases he has that is really powerful and important is that our minds in some sense “are not built to think,” which feels really wrong and weird, because isn’t that what minds do? It’s all they do, right? But what he means is that the process of effortful thinking is taxing in the same way that working out at the gym is taxing. One of the major challenges of education is: How do you wrap around that with students, who, like all of us, are going to try to essentially avoid doing effortful thinking for sustained periods? Over and over again, technologists just assume away that problem.

In the case of something like large language models, or LLMs, how do they approach this problem of effortful thinking? Do they just ignore it altogether?

Mark Andreessen

It’s an interesting question. I’m almost not sure how to answer it, because there is no thinking happening on the part of an LLM. A large language model takes the prompts and the text that you give it and tries to come up with something that is responsive and useful in relation to that text. And what’s interesting is that certain people — I’m thinking of most prominently — have talked about how amazing this is conceptually from an education perspective, because with LLMs you will have this infinitely patient teacher. But that’s actually not what you want from a teacher. You want, in some sense, an impatient teacher who’s going to push your thinking, who’s going to try to understand what you’re bringing to any task or educational experience, lift up the strengths that you have, and then work on building your knowledge in areas where you don’t yet have it. I don’t think LLMs are capable of doing any of that.

As you say, there’s no real thinking going on. It’s just a prediction machine. There’s an interaction, I guess, but it’s an illusion. Is that the word you would use?

Yes. It’s the illusion of a conversation. 

In your Education Next essay, you quote the cognitive scientist , who says LLMs are “frequently wrong, but never in doubt.” It feels to me like that is extremely dangerous in something young people interact with.

Yes! Absolutely. This is where it’s really important to distinguish between the now and the real and the present versus the hypothetical imagined future. There’s just no question that right now, this “hallucination problem” is endemic. And because LLMs are not thinking, they generate text that is factually inaccurate all the time. Even some of the people who are trying to push it out into the world acknowledge this, but then they’ll just put this little asterisk: “And that’s why an educator must always double-check.” Well, who has the time? I mean, what utility is this? And then people will say, “Well yes, but surely it’s going to get better in the future.” To which I say, Maybe, let’s wait and see. Maybe we should wait until we’ve arrived at that point before we push this out.

Do we know how often LLMs are making mistakes?

I can say just from my own personal usage of Khanmigo that it happens a lot, for reasons that are frankly predictable once you understand how the technology works. How often is it happening with seventh-grade students who are just learning this idea for the first time? We just don’t know. [In response to a query about errors, Khan Academy sent links to two on its site and noted that Khanmigo “occasionally makes mistakes, which we expected.” It also pointed out, among other things, that Khanmigo now uses a calculator to solve numerical problems instead of using AI’s predictive capabilities.]

One of the things you say in the EdNext piece is that you just “sound like a Luddite” as opposed to actually being one. The Luddites saw the danger in automation and were trying to push against it. Is it the same, in a way, as what you’re doing? 

Thank you for asking that question because I feel my naturally contrarian ways risk painting me into a corner I’m really not in. Because in some sense, generative AI and large language models are incredible — they really are. It is a remarkable achievement that they are able to produce fluent and coherent narratives in response to just about any combination of words that you might choose to throw at them. So I am not a Luddite who thinks that we need to burn this all down.

“You want an impatient teacher who’s going to push your thinking, try to understand what you’re bringing to any task or educational experience, lift up the strengths that you have, and then work on building your knowledge in areas where you don’t yet have it. I don’t think LLMs are capable of doing any of that.”

There are methods and ways, both within education and in society more broadly, in which this tool could be incredibly useful for certain purposes. Already, it’s proving incredibly stimulating in thinking about and understanding how humans think and learn, and how that is similar to and different from what they do. If we could just avoid the ridiculous overhype and magical thinking that seems to accompany the introduction of any new technology and calm down and investigate before pushing it out into our education institutions, then I think we’d be a lot better off. There really is a middle ground here. That’s where I’m trying to situate myself. 

Maybe this is a third rail that we shouldn’t be touching, but I was reading about Thomas Edison and his ideas on education. He had a great quote about movies, which he thought would revolutionize classrooms. He said, “The motion picture will endure as long as poor people exist.” It made me think: One of the underlying themes of ed tech is this idea of bringing technology to the people. Do you see a latent class divide here? Rich kids will get an actual personal tutor, but everybody else will get an LLM? 

My worry runs differently than that. Again, back to the Willingham quote: “Our minds are not built to think.” Here’s the harsh reality that could indeed be a third rail, but it needs to be acknowledged if we’re going to make meaningful progress: If we fail in building knowledge in our students, thinking gets harder and harder, which is why school gets harder and harder, and why over time you start to see students who find school really miserable. Some of them drop out. Some of them stop trying very hard. These folks — the data is overwhelming on this — typically end up having lives that are shorter, with less economic means, more dire health outcomes. All of this is both correlated and causally interrelated.

“If we could just avoid the ridiculous overhype and magical thinking that seems to accompany the introduction of any new technology and investigate before pushing it out into our education institutions, then I think we’d be a lot better off.”

But here’s the thing: For those students in particular, a device that alleviates the cognitive burden of schooling will be appealing. I’m really worried that this now-widely available technology will be something they turn to, particularly around the incredibly cognitively challenging task of writing — and that they will continue to look to this as a way of automating their own cognition. No one really needs to worry about the children of privilege. They are the success stories academically and, quite frankly, many of them enjoy learning and thinking and will avoid wanting to use this as a way of outsourcing their own thinking. But it could just make the existing divide a lot wider than it is today — much wider.

How is education research responding to AI?

The real challenge is that the pace of technology, particularly the pace of technological developments in the generative AI world, is so fast that traditional research methods are not going to be able to keep up. It’s not that there won’t be studies — I’m sure there are already some underway, and there are tiny, emerging studies that I have seen here and there. But we just don’t have the capabilities as a research enterprise to be doing things the traditional way. A really important question that needs to be grappled with, as a matter of policy, potentially as a matter of philanthropy and just as a matter of society, is: So, what then? Do we just do it and hope for the best? Because that may be what ends up happening.

As we’ve seen with and in schools, there can be real impacts that you don’t realize until five, 10 years down the road. Then you go back and say, “Well, I wish we’d been thinking about that in advance rather than just rolling the dice and seeing where it came up.” We don’t do that in other realms of life. We don’t let people just come up with medicines that they think will cure certain diseases and then just say, “Well, we’ll see. We’ll introduce it into broader society and let’s figure it out.” I’m not necessarily saying that we need the equivalent per se, but something that would give us better insight and real-time information to help us figure out the overall positives and not-so-positives seems to me a real challenge that is underappreciated at the moment.

]]>
Was Los Angeles Schools’ $6 Million AI Venture a Disaster Waiting to Happen? /article/was-los-angeles-schools-6-million-ai-venture-a-disaster-waiting-to-happen/ Tue, 09 Jul 2024 10:01:00 +0000 /?post_type=article&p=729513 When news broke last month that Ed, the Los Angeles school district’s new, $6 million artificial intelligence , was in jeopardy — the startup that created it on the verge of collapse — many insiders in the ed tech world wondered the same thing: What took so long?

The AI bot, created by Boston-based AllHere Education, was launched . But just three months later, AllHere posted that a majority of its 50 or so employees had been furloughed due to its “current financial position.” A spokesperson for the Los Angeles district said company founder and CEO Joanna Smith-Griffin was no longer on the job. AllHere was up for sale, the district said, with several businesses interested in acquiring it.

A screenshot of AllHere’s website with its June 14 announcement that much of its staff had been furloughed (screen capture)

The news was shocking and certainly bleak for the ed tech industry, but several observers say the partnership bit off more than it could chew, tech-wise — and that the ensuing blowup could hurt future AI investments.

Ed was touted as a powerful, easy-to-use tool for students and parents to supplement classroom instruction, find assistance with kids’ academic struggles and help families navigate attendance, grades, transportation and other key issues, all in 100 languages and on their mobile phones.

But Amanda Bickerstaff, founder and CEO of , a consulting and training firm, said that was an overreach.

“What they were trying to do is really not possible with where the technology is today,” she said. ”It’s a very broad application [with] multiple users — teachers, students, leaders and family members — and it pulled in data from multiple systems.”

What they were trying to do is really not possible with where the technology is today.

Amanda Bickerstaff, AI for Education

She noted that even a mega-corporation like McDonald’s had to trim its AI sails. The fast-food giant recently admitted that a small experiment using a chatbot to power drive-thru windows had resulted in a few fraught customer interactions, such as one in which a woman angrily tried to persuade the bot that she wanted a caramel ice cream as it added to her order.

If McDonald’s, worth an estimated $178.6 billion, can’t get 100 drive-thrus to take lunch orders with generative AI, she said, the tech isn’t “where we need it to be.”

If anything, L.A. and AllHere did not seem worried about the project’s scale, even if industry insiders now say it was bound to under-deliver: Last spring, at a series of high-profile ed tech conferences, Smith-Griffin and Superintendent Alberto Carvalho showed off Ed widely, with Carvalho saying it would revolutionize students’ and parents’ relationships to school, “utilizing the data-rich environment that we have for every kid.”

Alberto Carvalho speaks at the ASU+GSV Summit in April (YouTube screenshot)

In an interview with Ӱ at the ASU+GSV Summit in San Diego in April, Carvalho said many students are not connected to school, “therefore they’re lost.” Ed, he promised, would change that, with a “significantly different approach” to communication from the district.

“We are shifting from a system of 540,000 students into 540,000 ‘schools of one,’” with personalization and individualization for each student, he said, and “meaningful connections with parents.”

Better communication with parents, he said, would help improve not just attendance but reading and math proficiency, graduation rates and other outcomes. “The question that needs to be asked is: Why have those resources not meaningfully connected with students and parents, and why have they not resulted in this explosive experience in terms of educational opportunity?”

Carvalho noted Ed’s ability to understand and communicate in about 100 different languages. And, he crowed, it “never goes to sleep” so it can answer questions 24/7. He called it “an entity that learns and relearns all the time and does nothing more, nothing less than adapt itself to you. I think that’s a game changer.” 

But one experienced ed tech insider recalled hearing Carvalho at the conference in April say it was already solving “all the problems” that big districts face. The insider, who asked not to be identified in order to speak freely about sensitive matters, found the remarks troubling. “The messaging was so wrong that at that point I basically started a stopwatch on how long it would take” for the effort to fail. “And I’m kind of amazed it’s been this long before it all fell apart. I feel badly about it, I really do, but it’s not a surprise.”

‘A high-risk proposition’

In addition to the deal’s dissolution, Ӱ reported last week that a former senior director of software engineering at AllHere told district officials, L.A.’s independent inspector general’s office and state education officials that Ed processed student records in ways that likely ran afoul of the district’s own data privacy rules and put sensitive information at risk of being hacked — warnings that he said the agencies ignored. 

AI for Education’s Bickerstaff said developers “have to take caution” when building these systems for schools, especially those like Ed that bring together such large sets of data under one application.

“These tools, we don’t know how they work directly,” she said. “We know they have bias. And we know they’re not reliable. We know they can be leaky. And so we have to be really careful, especially with kids that have protected data.”

Alex Spurrier, an associate partner with the education consulting firm , said what often happens is that district leaders “try to go really big and move really fast to adopt a new technology,” not fully appreciating that it’s “a really high-risk proposition.”

While ed tech is of overpromising and disappointing results, Spurrier said, other districts dare to take a different approach, starting small, iterating and scaling up. In those cases, he said, disaster rarely follows.

Richard Culatta, CEO of the (ISTE), put it more bluntly: “Whenever a district says, ‘Our strategy around AI is to buy a tool,’ that’s a problem. When the district says, ‘For us, AI is a variety of tools and skills that we are working on together,’ that’s when I feel comfortable that we’re moving in the right direction.”

Whenever a district says, 'Our strategy around AI is to buy a tool,' that's a problem.

Richard Culatta, International Society for Technology in Education

Culatta suggested that since generative AI is developing and changing so rapidly, districts should use the next few months as “a moment of exploration — it’s a moment to bring in teachers and parents and students to give feedback,” he said. “It is not the moment for ribbon cutting.” 

‘It’s about exploring’

Smith-Griffin founded AllHere in 2016 at Harvard University’s . In an April interview with Ӱ, she said she originally envisioned it as a way to help school systems reduce chronic absenteeism through better communication with parents. Many interventions that schools rely on, such as phone calls, postcards and home visits, “tend to be heavily reliant on the sheer power of educators to solve system-wide issues,” she said.

A former middle-school math teacher, Smith-Griffin recalled, “I was one of those teachers who was doing phone calls, leaving voicemails, visiting my parents’ homes.” 

AllHere pioneered text messaging “nudges,” electronic versions of postcard reminders to families that, in one key study, modestly. 

The company’s for L.A., Smith-Griffin said, envisioned extending the attendance strategies while applying them to student learning “in the most disciplined way possible.”

“You nudge a parent around absences and they will tell you things ranging from, ‘My kid needs tutoring, my kid is struggling with math’ [to] ‘I struggle with reading,’” she said. AllHere went one step further, she said, bringing together “the full body of resources” that a school system can offer parents.

The district had high hopes for the chatbot, requiring it to focus on “eliminating opportunity gaps, promoting whole-child well-being, building stronger relationships with students and families, and providing accessible information,” according to the proposal.

In April, it was still in early implementation at 100 of the district’s lowest performing “priority” schools, serving about 55,000 students. LAUSD planned to roll out Ed for all families this fall. The district “unplugged” the chatbot on June 14, the Los Angeles Times , but a district spokesperson said L.A. “will continue making Ed available as a tool to its students and families and is closely monitoring the potential acquisition of AllHere.” The company did not immediately respond to queries about the chatbot or its future.

As for the apparent collapse of AllHere, speculation in the ed tech world is rampant.

In the , education entrepreneur Ben Kornell said late last month, “My spidey sense basically goes to ‘Something’s not adding up here and there’s more to the story.’” He theorized a “critical failure point” that’s yet to emerge “because you don’t see things like this fall apart this quickly, this immediately” for such a small company, especially in the middle of a $6 million contract.

My spidey sense basically goes to 'Something's not adding up here and there's more to the story.'

Ben Kornell, education entrepreneur

Kornell said the possibilities fall into just a few categories: an accounting or financial misstep, a breakdown among AllHere’s staff, board and funders or “major customer payment issues.” 

The district also may have withheld payment for undelivered products, but he said the sudden collapse of the company seemed unusual. “If you are headed towards a cash crisis, the normal thing to do would be: Go to your board, go to your funders, and get a bridge to get you through that period and land the plane.”

Bellwether’s Spurrier said L.A. deserves a measure of credit “for being willing to lean into AI technology and think about ways that it could work.” But he wonders whether the best use of generative AI at this moment will be found not in “revolutionizing instruction,” as L.A. has pursued, but elsewhere. 

There's plenty of opportunities to think about how AI might help on the administrative side of things, or help folks that are kind of outside the classroom walls.

Alex Spurrier, Bellwether Education Partners

“There’s plenty of opportunities to think about how AI might help on the administrative side of things, or help folks that are kind of outside the classroom walls,” rather than focusing on changing how schools deliver instruction. “I think that’s the wrong place to start.”

ISTE’s Culatta noted that just down the road from Los Angeles, in Santa Ana, California, district officials there responded to the dawn of tools like ChatGPT and Google’s Gemini by creating evening classes for adults. “The parents come in and they talk about what AI is, how they should be thinking about it,” he said. “It’s about exploring. It’s about helping people build their skills.” 

‘How are your financials?’

The fate of AllHere’s attendance work in districts nationwide isn’t clear at the moment. In one large district, the Prince George’s County, Maryland, Public Schools, near Washington, D.C., teachers piloted AllHere with 32 schools as far back as January 2020, spokeswoman Meghan Thornton said. The district added two more schools to the pilot in 2022, but AllHere notified the district on June 18 that, effective immediately, it wouldn’t be able to continue its services due to “unforeseen financial circumstances.” 

District officials are now looking for another messaging system to replace AllHere “should it no longer be available,” Thornton said.

Bickerstaff said the field more broadly suffers from “a major, major overestimation of the capabilities of the technology to date.” L.A., she noted, is the nation’s second-largest school district, so even the pilot stage likely saw “very high” usage, raising its costs. She predicted a fast acquisition of AllHere, noting that the company had been looking for outside investment for several months.

As founder of the startup , which offers teachers tools to streamline their workload, Adeel Khan is no stranger to hustling for funding — and to competitors running out of money. But he said the news about AllHere and Ed was bad for the industry more broadly, leaving districts with questions about whether to partner with newer, untested companies.

“I see it as something that is certainly not great for the startup ecosystem,” he said.

I see (AllHere’s failure) as something that is certainly not great for the startup ecosystem.

Adeel Khan, Magic School AI

Even before the news about AllHere broke, Khan attended ISTE’s big national conference in Denver last month, where he talked to school district officials about prospective partnerships. “More than one time I was asked directly, ‘How are your financials?’” he recalled. 

Usually technology directors ask about features and what a product can do for students, he said. But they’re beginning to realize that a failed product doesn’t just waste time and money. It damages reputations as well. “That is on the mind of buyers,” he said. 

When school districts invest in new tech, he said, they’re not just committing to funding it for months or even years, but also to training teachers and others, so they want responsible growth.

“There’s a lot of disruption to K-12 when a product goes out of business,” Khan said. “So people remember this. They remember, ‘Hey, we committed to this product. We discovered it at ISTE two years ago and we loved it. It was great — and it’s not here anymore. And we don’t want to go through that again.’ ”

]]>
‘Distrust, Detection & Discipline:’ New Data Reveals Teachers’ ChatGPT Crackdown /article/distrust-detection-discipline-new-data-reveals-teachers-chatgpt-crackdown/ Tue, 02 Apr 2024 20:01:00 +0000 /?post_type=article&p=724713 New survey data puts hard numbers behind the steep rise of ChatGPT and other generative AI chatbots in America’s classrooms — and reveals a big spike in student discipline as a result. 

As artificial intelligence tools become more common in schools, most teachers say their districts have adopted guidance and training for both educators and students, by the nonprofit Center for Democracy and Technology. What this guidance lacks, however, are clear instructions on how teachers should respond if they suspect a student used generative AI to cheat. 


Get stories like this delivered straight to your inbox. Sign up for Ӱ Newsletter


“Though there has been positive movement, schools are still grappling with how to effectively implement generative AI in the classroom — making this a critical moment for school officials to put appropriate guardrails in place to ensure that irresponsible use of this technology by teachers and students does not become entrenched,” report co-authors Maddy Dwyer and Elizabeth Laird write.

Among the middle and high school teachers who responded to the online survey, which was conducted in November and December, 60% said their schools permit the use of generative AI for schoolwork — double the number who said the same just five months earlier on a similar survey. And while a resounding 80% of educators said they have received formal training about the tools, including on how to incorporate generative AI into assignments, just 28% said they’ve received instruction on how to respond if they suspect a student has used ChatGPT to cheat. 

That doesn’t mean, however, that students aren’t getting into trouble. Among survey respondents, 64% said they were aware of students who were disciplined or faced some form of consequences — including not receiving credit for an assignment — for using generative AI on a school assignment. That represents a 16 percentage-point increase from August. 

The tools have also affected how educators view their students, with more than half saying they’ve grown distrustful of whether their students’ work is actually theirs. 

Fighting fire with fire, a growing share of teachers say they rely on digital detection tools to sniff out students who may have used generative AI to plagiarize. Sixty-eight percent of teachers — and 76% of licensed special education teachers — said they turn to generative AI content detection tools to determine whether students’ work is actually their own. 

The findings carry significant equity concerns for students with disabilities, researchers concluded, especially in the face of are ineffective.

]]>
Survey: AI is Here, but Only California and Oregon Guide Schools on its Use /article/survey-ai-is-here-but-only-california-and-oregon-guide-schools-on-its-use/ Wed, 01 Nov 2023 04:01:00 +0000 /?post_type=article&p=717117 Artificial intelligence now has a daily presence in many teachers’ and students’ lives, with chatbots like ChatGPT, Khan Academy’s tutor and AI image generators like all freely available. 

But nearly a year after most of us came face-to-face with the first of these tools, a that few states are offering educators substantial guidance on how to best use AI, let alone fairly and with appropriate privacy protections.

As of mid-October, just two states, California and , offered official guidance to schools on using AI, according to the Center for Reinventing Public Education at Arizona State University. 

CRPE said 11 more states are developing guidance, but that another 21 states don’t plan to give schools guidelines on AI “in the foreseeable future.”


Get stories like this delivered straight to your inbox. Sign up for Ӱ Newsletter


Seventeen states didn’t respond to CRPE’s survey and haven’t made official guidance publicly available.

Bree Dusseault

As more schools experiment with AI, good policies and advice — or a lack thereof — will “drive the ways adults make decisions in school,” said Bree Dusseault, CRPE’s managing director. That will ripple out, dictating whether these new tools will be used properly and equitably.

“We’re not seeing a lot of movement in states getting ahead of this,” she said. 

The reality in schools is that AI is here. Edtech companies are pitching products and schools are buying them, even if state officials are still trying to figure it all out. 

Satya Nitta

“It doesn’t surprise me,” said Satya Nitta, CEO of , a generative AI company developing voice-activated assistants for teachers. “Normally the technology is well ahead of regulators and lawmakers. So they’re probably scrambling to figure out what their standard should be.”

Nitta said a lot of educators and officials this week are likely looking “very carefully” at Monday’s on AI “to figure out what next steps are.” 

The order requires, among other things, that AI developers share safety test results with the U.S. government and develop standards that ensure AI systems are “safe, secure, and trustworthy.” 

It follows five months after the U.S. Department of Education released a detailed, with recommendations on using AI in education.

Deferring to districts

The fact that 13 states are at least in the process of helping schools figure out AI is significant. Last summer, no states offered such help, CRPE found. Officials in New York, , Rhode Island and Wyoming said decisions about many issues related to AI, such as academic integrity and blocking websites or tools, are made on the local level.

Still, researchers said, it’s significant that the majority of states still don’t plan AI-specific strategies or guidance in the 2023-24 school year.

There are a few promising developments: North Carolina will soon require high school graduates to pass a computer science course. In Virginia, Gov. Glenn Youngkin in September on AI careers. And Pennsylvania Gov. Josh Shapiro in September to create a state governing board to guide use of generative AI, including developing training programs for state employees.

Tara Nattrass

But educators need help understanding artificial intelligence, “while also trying to navigate its impact,” said Tara Nattrass, managing director of innovation strategy at the International Society for Technology in Education. “States can ensure educators have accurate and relevant guidance related to the opportunities and risks of AI so that they are able to spend less time filtering information and more time focused on their primary mission: teaching and learning.”

Beth Blumenstein, Oregon’s interim director of digital learning & well-rounded access, said AI is already being used in Oregon schools. And the state Department of Education has received requests from educators asking for support, guidance and professional development.

Beth Blumenstein

Generative AI is “a powerful tool that can support education practices and provide services to students that can greatly benefit their learning,” she said. “However, it is a highly complex tool that requires new learning, safety considerations, and human oversight.”

Three big issues she hears about are cheating, plagiarism and data privacy, including how not to run afoul of Oregon’s Student Information Protection Act or the federal Children’s Online Privacy Protection Act. 

‘Now I have to do AI?’

In August, CRPE conducted focus groups with 18 superintendents, principals and senior administrators in five states who said they were cautiously optimistic about AI’s potential, but many complained about navigating yet another new disruption.

“We just got through this COVID hybrid remote learning,” one leader told researchers. “Now I have to do AI?”

Nitta, Merlyn Mind’s CEO, said that syncs with his experience.

“Broadly, school districts are looking for some help, some guidance: ‘Should we use ChatGPT? Should we not use it? Should we use AI? Is it private? Are they in violation of regulations?’ It’s a complex topic. It’s full of all kinds of mines and landmines.” 

And the stakes are high, he said. No educator wants to appear in a newspaper story about her school using an AI chatbot that feeds inappropriate information to students. 

“I wouldn’t go so far as to say there’s a deer-caught-in-headlights moment here,” Nitta said, “but there’s certainly a lot of concern. And I do believe it’s the responsibility of authorities, of responsible regulators, to step in and say, ‘Here’s how to use AI safely and appropriately.’ ” 

]]>