The 74 – America's Education News Source

Schools Are Paying for Ed Tech That Students Never Use — Could A New Contract Model Change That?

Mon, 06 Apr 2026 16:30:00 +0000

When school districts sign contracts for educational technology, they typically buy a set number of licenses. The software company delivers the product and the district cuts a check. Whether students actually benefit from the tools, or even use them, doesn’t factor into it.

Over the past few decades, that arrangement has generated growing tension among parents and educators, who have begun questioning the value of that spending.




But a new kind of funding scheme may turn that dynamic on its head: A new report finds that a different approach to buying classroom technology may not only be workable but, in many cases, produces results that traditional contracts don’t. Called outcomes-based contracting, the model ties what companies get paid, at least in part, to whether students actually learn.

The findings, from the nonprofit groups behind the pilot, also come as school budgets are tightening after COVID relief funds dried up and district leaders find themselves under growing pressure to justify spending.

The report examined a group of school districts piloting an outcomes-based model. It finds that the arrangement offers a new way to determine whether tech is actually working for kids: a portion of vendors’ payments depends on students meeting a set of agreed-upon benchmarks. If students don’t reach them, vendors don’t collect the full contract amount.

But the model also builds in a layer of shared accountability: Districts must commit to making sure students use the tools at the levels, or “dosage,” necessary to produce results.

Brittany Miller, the center’s executive director, said that forces everyone to take implementation seriously.

“What this model does is it tells everybody across the ecosystem: ‘Prioritize this,’” she said. “You have to get to this level of implementation integrity, which translates into dosage, in order to actually have a meaningful experience for a student.”

Kids ‘not getting the dosage they need’

Before looking at whether a tech product improves student outcomes, Miller said, there’s a more basic question that districts rarely ask: Are students using these tools at all?

The answer is often, “No.” 

The report found that more than 65% of purchased ed tech licenses typically go unused, with school districts paying full price for products that sit idle. But districts participating in the outcomes-based pilot met dosage requirements for as many as 95% of students. Overall usage rates were typically 10 times higher than under traditional contracts.

“We talk a lot about dosage, and kids not getting the dosage that they need,” Miller said. “And that, to me, is a proxy for being a responsible consumer of tech: Are our kids actually using it in a way that will drive outcomes?”

Miller said part of what drives the usage shift is that both districts and vendors share a direct financial stake in students actually using the products. Under the model, if a student falls behind on usage, the district must find out why and get that student back on track. If they don’t, there’s a record of that and the district is on the hook for payments, even if the student’s achievement didn’t improve. 
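Stripped to its essentials, the payment logic described above can be sketched as a short rule. The sketch below is purely illustrative — the dollar amounts, the even base/bonus split and the function itself are hypothetical, not drawn from the pilot's actual contract terms:

```python
# Illustrative sketch of an outcomes-based contract payment rule.
# All figures and names here are hypothetical, not from the report.

BASE_FEE = 60.0       # hypothetical guaranteed payment per student
OUTCOME_BONUS = 40.0  # hypothetical bonus paid only if the benchmark is met


def payment_per_student(met_dosage: bool, met_benchmark: bool) -> float:
    """Return what the district owes the vendor for one student."""
    if not met_dosage:
        # The district failed its implementation ("dosage") commitment,
        # so it is on the hook for the full amount even if achievement
        # didn't improve: the vendor never got a fair test.
        return BASE_FEE + OUTCOME_BONUS
    # Dosage was delivered, so the bonus now rides on the outcome.
    return BASE_FEE + (OUTCOME_BONUS if met_benchmark else 0.0)
```

Under a rule like this, a district that fails the usage commitment pays in full regardless of outcomes, while a vendor whose product was used as agreed but didn't move the benchmark forfeits only the outcome portion — the shared accountability Miller describes.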


It’s only fair in cases like these, she said. “The provider wasn’t able to prove that their product worked because kids didn’t actually use it.”

Beyond usage statistics, the report found that districts in the pilot reported greater instructional coherence. Technology was being used with more intention tied to specific learning goals rather than as a general add-on to existing lessons. And teachers were more deliberate about how they integrated tech into their instruction.

Miller, who formerly led large-scale tutoring implementation in Denver Public Schools, said she has sat in classrooms and watched students working with these products, typically supplemental literacy and math tools. She said many of them can make a difference, but only if used properly. 

“We’re talking about technology that has the ability to help students pronounce words correctly, support their fluency and break down words for them,” she said. “In mathematics, we’re talking about students using technology to really try different ways of solving problems and getting them exactly what they need in the moment.”

The report also found that tech companies benefited from the model in unexpected ways: Because outcomes-based contracts require detailed, real-time data on how students are using a product, companies got access to information about their tools’ effectiveness that most standard contracts never generate.

Fewer tools, better results

Perhaps most counterintuitively, the report found, districts that rely on outcomes-based contracting actually end up buying fewer tech products.

That’s because the process of building such a contract requires district leaders to clearly define what problem they’re trying to solve, what success looks like and whether a given product is actually the right tool for the job. That level of scrutiny, said Miller, produces a kind of natural audit.

“We’ve seen in a lot of districts as they’ve taken this on, the number of ed tech tools they’re purchasing just [goes] way down at the district level,” she said.

In one district, Miller said, officials found they’d purchased licenses for more than 1,000 tools. As they examined the list, they said, “If there is not a clear reason and purpose that we’re using this in the classroom that’s actually driving student learning, then we’re not going to pay for that tool anymore.”

She added, “It just shifts the mindset of the system to really say, ‘Let’s look at what we’re purchasing more carefully, figure out what is and isn’t working, and start to cut down on the noise.’”

The center grew out of research conducted at Harvard University under economist Tom Kane, who in 2021 convened a small group of tutoring providers and school districts to examine whether outcomes-based contracting — already used in healthcare and workforce development — could be adapted for K-12 education.

The project eventually moved to the foundation, with Denver among the early participants. Miller, then a district leader, got involved through Denver’s tutoring pilot.

As of February, Miller’s center had worked with 87 education institutions ranging from school districts to state education agencies and tracked results for more than 63,000 students.

In addition, six states — California, Texas, Florida, Arkansas, Indiana and Louisiana — have launched initiatives around the model. Together they represent more than 28% of total U.S. K-12 education spending, constituting a potentially fundamental shift in how schools spend money. That shift, Miller said, could have a huge impact on children’s achievement if educators are asking the right questions. 

“There’s a student at the end of the day that’s being served by this,” she said. “How are you really humanizing their lived experience in the classroom and making sure that they’re achieving the outcomes that we know they’re able to?”

Wizard Chess, Robot Bikes and More: Six Students Creating Cool Stuff with AI

Sun, 25 Feb 2024 15:30:00 +0000

More than a year after ChatGPT’s surprise launch thrust artificial intelligence into public view, many educators and policymakers still fear that students will primarily use the technology for cheating. A survey found that two-thirds of high school and college instructors are so concerned about AI that they’re rethinking assignments, with many planning to require handwritten assignments, in-class writing or even oral exams.

But a few students see things differently. They’re not only fearless about AI, they’re building their studies and future professional lives around it. While many of their teachers are scrambling to outsmart AI in the classroom, these students are embracing the technology, often spending hours at home, in classrooms and dorm rooms building tools they hope will launch their careers.

In a recent survey, ACT, the nonprofit that runs the college entrance exam of the same name, found that nearly half of high school students who’d signed up for the June 2023 exam had used AI tools, most commonly ChatGPT. Almost half of those who had used such tools relied on them for school assignments.

The 74 went looking for young people diving head-first into AI and found several doing substantial research and development as early as high school.

The six students we found, a few as young as 15, are thinking much more deeply about AI than most adults, their hands in the technology in ways that would have seemed impossible just a generation ago. Many are immigrants to the West or come from families that immigrated here. Edtech podcaster Alex Sarlin, who also writes a newsletter focused on edtech and founded an edtech consultancy, wasn’t surprised by the demographics. He explained that while U.S. companies typically make headlines in AI, the phenomenon has “truly been a product of global collaboration, and many of its major innovators have been immigrants,” often with training and professorships at top North American universities.

These young people are programming everything from autonomous bicycles to postpartum depression apps for new mothers to 911 chatbots, homework helpers and Harry Potter-inspired robotic chess boards. 

All have a clear message about AI: Don’t fear it. Learn about it.

Isabela Ferrer

Age 17

Hometown Bogota, Colombia

School MAST Academy, Miami, Fla.

What she’s working on: A high school junior at MAST, a public magnet high school focused on maritime studies and science, Ferrer plans to return to Colombia this spring and study computer science in college. She has been working with a foundation that takes in abandoned and abused children in her home country. She’s developing an AI tool to help the children learn how to read and write Spanish more easily.

“They enter a public school system that expects them to know how to read, but they don’t have these skills,” she said. 

Ferrer is also considering adding more features in the future, such as one that uses AI voice recognition to identify trauma in a student’s voice. 

Once she graduates, she’d like to take a gap year to “get a little more involved in the Colombian startup ecosystem and culture. I also want to travel internationally and possibly keep working on projects like the one I’m working on right now, but on an international scale.” 

What most people misunderstand about AI: “Something I think most people don’t get about AI is that it’s very accessible to everyone,” Ferrer said. “Coding API [application programming interface, which allows two applications to talk to each other] and creating AI models for any specific purpose is very easy and, if done correctly, can be beneficial for different purposes.” 

All the same, she also worries that AI is often used to tackle “very superficial problems” like productivity or data processing. “But I think there’s a huge opportunity to use these technologies to solve real problems in the world … There’s a huge opportunity to close different gaps that exist in emerging markets and in developing countries. And it’s very worth exploring.” 

Shanzeh Haji

Age 16

Hometown Toronto, Canada

School Bayview Secondary School, Richmond Hill, Ontario

What she’s working on: Once she learned about postpartum depression, Haji began talking to new mothers and family members, including her own mother, who had experienced it. “I realized how big the problem was and how closely connected I was to it.” Haji finished coding the AI chatbot for the as-yet unnamed app and is working on the symptom recognition platform.

What most people misunderstand about AI: “If you look at some of the people who are working in AI and some of the significant impact that AI has made on so many different problems,” she said, “whether it be climate change or medicine or drug discovery, you can just see that AI has significant potential — it can literally transform our lives in a positive way. It really allows for this radical innovation. And I feel like people see more of the negative side of artificial intelligence rather than the positive and the significance that it has on our lives.” 

Aditya Syam

Age 20

Hometown Mumbai, India

School Cornell University

What he’s working on: A math and computer science double major, Syam is part of a longstanding team at Cornell that is developing an AI-powered, self-navigating bicycle, basically a robot bike. “The kinds of applications we are thinking of for this are deliveries and basically just getting things from point A to point B without having a human intervene at any point,” he said. Syam, who is working on the bike’s navigation team, has been honing its obstacle avoidance algorithm, which keeps it from hitting things.

The project began about a decade ago, he said. “Back then, it was just a theory.” Now they plan to showcase an actual prototype of the bike this spring, probably in March or April, so everyone who has contributed to the project “can see what we’ve built.”

What most people misunderstand about AI: “It’s technology that’s been around for decades,” he said. “It’s just been rebranded in a different way.” ChatGPT, for instance, combines Natural Language Processing and Web access, which results in a kind of “miracle” product. “It seems so great — it can just pull something off the web for you, it can write essays for you, it can edit software code for you. But in its essence, it’s not that different from technologies that have been around before.”

Vinitha Marupeddi

Age 21

Hometown San Jose, Calif.

School Purdue University

What she’s working on: A senior studying computer science, data science and applied statistics, Marupeddi recently led two student teams — one in voice recognition and another in computer vision — developing a robotic, voice-activated chess set modeled after wizard’s chess, the game in the Harry Potter books in which the pieces come to life. “We were able to do a lot of high-level robotics using that one project, so I thought that was very cool,” she said. Though the game is still far from being playable, Marupeddi calls it a good use case “to get people interested in robotics and machine learning.”

Last summer, she interned at a John Deere warehouse in Moline, Ill., where she was set free to work on any project that struck her fancy. Marupeddi looked around the warehouse and saw that Deere had a robot that was being used to track inventory, so she expanded its abilities to cover a wider area. She also worked on a computer vision algorithm that used security camera footage to detect how full certain areas of the warehouse were and determine how much more inventory they could hold.

What most people misunderstand about AI: ”Honestly, I think a good chunk of people are just obsessed with the cheating part of it. They’re like, ‘Oh, ChatGPT can just write my essay. It can do my homework. I don’t have to worry about it.’ But they don’t try to actually understand the material. The people that do use ChatGPT to understand the material are actually going to use it as tutors or use it to ask questions if they don’t understand something.” That divide, between those who reject AI and those who learn how to control it, could grow larger if unaddressed. But learning about AI, she said, will “give people the resources, if they have the drive.”

Vinaya Sharma

Age 18

Hometown Toronto, Canada

School Castlebrooke Secondary School, Brampton, Ontario

What she’s working on: Actually, the better question might be: What isn’t she working on? Sharma, a high school senior, writes code like most of us speak. In part, her work is a response to how little challenge she gets in school these days. “After COVID, I feel schools have gone easier on students,” she said. “I skip school as much as I can so I can code in my room.” The result has been a flurry of applications, from an AI-powered chatbot to handle 911 calls to a power grid simulator to a pharmaceutical app to aid in drug discovery.

The 911 chatbot is still in search of customers, she said, but would be especially valuable in cases where multiple people are calling about the same emergency, such as a car crash. The AI would geolocate the calls and determine if callers were using similar words to describe what they saw. To those who balk at talking to a 911 chatbot, Sharma said the current system in Toronto is often backed up. “It’ll be 100% better than being put on hold and no one assisting you at all.”

The idea for the power grid simulator was born after she began talking to engineers and energy policymakers and realized that, in her words, “The engineers were very technical, looking at things on a scale of voltages and currents. And the policymakers had trouble communicating with these grid engineers. And I realized that that was one of the bottlenecks slowing down the process so much.” She used design principles pioneered by one of her favorite video games to give the two groups a drag-and-drop simulation that both could understand.

Sharma got interested in drug discovery after learning that Lululemon founder Chip Wilson has a rare form of muscular dystrophy that makes it difficult to walk. He’s investing $100 million in treatments and research for a cure. Sharma said she “fell down a research rabbit hole” and soon realized that the drug discovery process “is honestly broken. It takes more than a decade to bring a drug to market, and it costs, on average, $1 billion to $2 billion,” or about $743 million to nearly $1.5 billion in U.S. dollars.

Her app, BioBytes, aims to bring down both the cost and time needed to bring drugs to market. 

What most people misunderstand about AI: “With any new emerging tech, there’s going to be bad actors that will abuse the system or use it for harm,” she said. “But personally I believe the pros outweigh it. Instead of taking these tools away from us in order to prevent these bad things from happening, I think that people need to realize that the tools are here and people are going to use them. So there needs to be a greater focus on education, of how to use the tools and how to use [them] for good and how it can actually support us.” 

Krishiv Thakuria 

Age 15

Hometown Mississauga, Ontario, Canada

School The Woodlands Secondary School, Mississauga

What he’s working on: Thakuria founded a startup called Aceflow and is building a set of AI-powered learning tools to help students study more efficiently. The tools let users upload any class materials — study notes, a PDF of a textbook chapter or entire novel or even a teacher’s PowerPoint. From there they can create “an infinite set of practice questions” keyed to the course, Thakuria said. If students get stuck, they can click on an AI tutor customized to the material they uploaded.

The tutoring function is similar to Khanmigo, Khan Academy’s AI-powered teaching assistant, but Thakuria said Aceflow’s tool has an advantage: Khanmigo only works, for now, on Khan Academy materials. “In a lot of classes, teachers teach content in very different ways,” he said. “If you can personalize an AI tool to study the material of your teachers, you get learning that’s far more personalized and far more relevant to you, making your studying sessions more effective.” Aceflow users can also create timed study sessions, something neither Khanmigo nor ChatGPT users can currently do.

The new tool is being beta-tested by a focus group of 20, with a 1,400-person waitlist, he said. He and his partners plan to offer it on a “freemium” model, with charges for premium features. Even paying a small amount for unlimited use of the tool makes it available to many families who can’t afford a tutor, Thakuria said, since private tutoring can cost upwards of $10,000 a year. 

What most people misunderstand about AI: That its impact on education will be “binary,” he said. People believe “it’s either a good thing or a bad thing. I think that it can do both. For all the people who worry about AI being a bad thing, I would argue that, well, a hammer can be a bad thing when you give your kid a hammer for the first time to help you out with carpentry work. You have to teach your kid how to use it, right? And without teaching your kid how to use a tool, the tool is not going to be used properly, and that hammer is going to break something.”

It’s the same with AI. “If we can teach kids that smoking is bad for the body, we should teach kids that using AI in certain ways is bad for the brain. But we shouldn’t just focus on the negative effects, because then we’re closing off a future of using AI to solve educational inequity in so many beautiful ways. AI is a technology that can help us scale private tutoring to far more families than can actually afford it now. I think no one should underestimate the positive effects of AI while also safeguarding [against] the negative effects, because two things can be true at once.” 
