ISTE – The 74, America's Education News Source

Was Los Angeles Schools’ $6 Million AI Venture a Disaster Waiting to Happen?
Tue, 09 Jul 2024 10:01:00 +0000

When news broke last month that Ed, the Los Angeles school district’s new, $6 million artificial intelligence chatbot, was in jeopardy — the startup that created it on the verge of collapse — many insiders in the ed tech world wondered the same thing: What took so long?

The AI bot, created by Boston-based AllHere Education, was launched in March. But just three months later, AllHere announced that a majority of its 50 or so employees had been furloughed due to its “current financial position.” A spokesperson for the Los Angeles district said company founder and CEO Joanna Smith-Griffin was no longer on the job. AllHere was up for sale, the district said, with several businesses interested in acquiring it.

A screenshot of AllHere’s website with its June 14 announcement that much of its staff had been furloughed (screen capture)

The news was shocking and certainly bleak for the ed tech industry, but several observers say the partnership bit off more than it could chew, tech-wise — and that the ensuing blowup could hurt future AI investments.

Ed was touted as a powerful, easy-to-use tool for students and parents to supplement classroom instruction, find assistance with kids’ academic struggles and help families navigate attendance, grades, transportation and other key issues, all in 100 languages and on their mobile phones.

But Amanda Bickerstaff, founder and CEO of AI for Education, a consulting and training firm, said that was an overreach.

“What they were trying to do is really not possible with where the technology is today,” she said. “It’s a very broad application [with] multiple users — teachers, students, leaders and family members — and it pulled in data from multiple systems.”

What they were trying to do is really not possible with where the technology is today.

Amanda Bickerstaff, AI for Education

She noted that even a mega-corporation like McDonald’s had to trim its AI sails. The fast-food giant recently admitted that a small experiment using a chatbot to power drive-thru windows had resulted in a few fraught customer interactions, such as one in which a woman angrily tried to persuade the bot that she wanted a caramel ice cream as it added to her order.

If McDonald’s, worth an estimated $178.6 billion, can’t get 100 drive-thrus to take lunch orders with generative AI, she said, the tech isn’t “where we need it to be.”

If anything, L.A. and AllHere did not seem worried about the project’s scale, even if industry insiders now say it was bound to under-deliver: Last spring, at a series of high-profile ed tech conferences, Smith-Griffin and Superintendent Alberto Carvalho showed off Ed widely, with Carvalho saying it would revolutionize students’ and parents’ relationships to school, “utilizing the data-rich environment that we have for every kid.”

Alberto Carvalho speaks at the ASU+GSV Summit in April (YouTube screenshot)

In an interview with The 74 at the ASU+GSV Summit in San Diego in April, Carvalho said many students are not connected to school, “therefore they’re lost.” Ed, he promised, would change that, with a “significantly different approach” to communication from the district.

“We are shifting from a system of 540,000 students into 540,000 ‘schools of one,’” with personalization and individualization for each student, he said, and “meaningful connections with parents.”

Better communication with parents, he said, would help improve not just attendance but reading and math proficiency, graduation rates and other outcomes. “The question that needs to be asked is: Why have those resources not meaningfully connected with students and parents, and why have they not resulted in this explosive experience in terms of educational opportunity?”

Carvalho noted Ed’s ability to understand and communicate in about 100 different languages. And, he crowed, it “never goes to sleep” so it can answer questions 24/7. He called it “an entity that learns and relearns all the time and does nothing more, nothing less than adapt itself to you. I think that’s a game changer.” 

But one experienced ed tech insider recalled hearing Carvalho at the conference in April say it was already solving “all the problems” that big districts face. The insider, who asked not to be identified in order to speak freely about sensitive matters, found the remarks troubling. “The messaging was so wrong that at that point I basically started a stopwatch on how long it would take” for the effort to fail. “And I’m kind of amazed it’s been this long before it all fell apart. I feel badly about it, I really do, but it’s not a surprise.”

‘A high-risk proposition’

In addition to the deal’s dissolution, The 74 reported last week that a former senior director of software engineering at AllHere told district officials, L.A.’s independent inspector general’s office and state education officials that Ed processed student records in ways that likely ran afoul of the district’s own data privacy rules and put sensitive information at risk of being hacked — warnings that he said the agencies ignored.

AI for Education’s Bickerstaff said developers “have to take caution” when building these systems for schools, especially those like Ed that bring together such large sets of data under one application.

“These tools, we don’t know how they work directly,” she said. “We know they have bias. And we know they’re not reliable. We know they can be leaky. And so we have to be really careful, especially with kids that have protected data.”

Alex Spurrier, an associate partner with the education consulting firm Bellwether Education Partners, said what often happens is that district leaders “try to go really big and move really fast to adopt a new technology,” not fully appreciating that it’s “a really high risk proposition.”

While ed tech has a long history of overpromising and disappointing results, Spurrier said, other districts dare to take a different approach, starting small, iterating and scaling up. In those cases, he said, disaster rarely follows.

Richard Culatta, CEO of the International Society for Technology in Education (ISTE), put it more bluntly: “Whenever a district says, ‘Our strategy around AI is to buy a tool,’ that’s a problem. When the district says, ‘For us, AI is a variety of tools and skills that we are working on together,’ that’s when I feel comfortable that we’re moving in the right direction.”

Whenever a district says, 'Our strategy around AI is to buy a tool,' that's a problem.

Richard Culatta, International Society for Technology in Education

Culatta suggested that since generative AI is developing and changing so rapidly, districts should use the next few months as “a moment of exploration — it’s a moment to bring in teachers and parents and students to give feedback,” he said. “It is not the moment for ribbon cutting.” 

‘It’s about exploring’

Smith-Griffin founded AllHere in 2016 at Harvard University. In an April interview with The 74, she said she originally envisioned it as a way to help school systems reduce chronic absenteeism through better communication with parents. Many interventions that schools rely on, such as phone calls, postcards and home visits, “tend to be heavily reliant on the sheer power of educators to solve system-wide issues,” she said.

A former middle-school math teacher, Smith-Griffin recalled, “I was one of those teachers who was doing phone calls, leaving voicemails, visiting my parents’ homes.” 

AllHere pioneered text messaging “nudges,” electronic versions of postcard reminders to families that, in one key study, modestly reduced absenteeism.

The company’s proposal for L.A., Smith-Griffin said, envisioned extending the attendance strategies while applying them to student learning “in the most disciplined way possible.”

“You nudge a parent around absences and they will tell you things ranging from, ‘My kid needs tutoring, my kid is struggling with math’ [to] ‘I struggle with reading,’” she said. AllHere went one step further, she said, bringing together “the full body of resources” that a school system can offer parents.

The district had high hopes for the chatbot, requiring it to focus on “eliminating opportunity gaps, promoting whole-child well-being, building stronger relationships with students and families, and providing accessible information,” according to the proposal.

In April, it was still in early implementation at 100 of the district’s lowest-performing “priority” schools, serving about 55,000 students. LAUSD planned to roll out Ed for all families this fall. The district “unplugged” the chatbot on June 14, the Los Angeles Times reported, but a district spokesperson said L.A. “will continue making Ed available as a tool to its students and families and is closely monitoring the potential acquisition of AllHere.” The company did not immediately respond to queries about the chatbot or its future.

As for the apparent collapse of AllHere, speculation in the ed tech world is rampant.

Education entrepreneur Ben Kornell said late last month, “My spidey sense basically goes to ‘Something’s not adding up here and there’s more to the story.’” He theorized that a “critical failure point” has yet to emerge, “because you don’t see things like this fall apart this quickly, this immediately” for such a small company, especially in the middle of a $6 million contract.

My spidey sense basically goes to 'Something's not adding up here and there's more to the story.'

Ben Kornell, education entrepreneur

Kornell said the possibilities fall into just a few categories: an accounting or financial misstep; a breakdown among AllHere’s staff, board and funders; or “major customer payment issues.”

The district also may have withheld payment for undelivered products, but he said the sudden collapse of the company seemed unusual. “If you are headed towards a cash crisis, the normal thing to do would be: Go to your board, go to your funders, and get a bridge to get you through that period and land the plane.”

Bellwether’s Spurrier said L.A. deserves a measure of credit “for being willing to lean into AI technology and think about ways that it could work.” But he wonders whether the best use of generative AI at this moment will be found not in “revolutionizing instruction,” as L.A. has pursued, but elsewhere. 

There's plenty of opportunities to think about how AI might help on the administrative side of things, or help folks that are kind of outside the classroom walls.

Alex Spurrier, Bellwether Education Partners

“There’s plenty of opportunities to think about how AI might help on the administrative side of things, or help folks that are kind of outside the classroom walls,” he said, rather than focusing on changing how schools deliver instruction. “I think that’s the wrong place to start.”

ISTE’s Culatta noted that just down the road from Los Angeles, in Santa Ana, California, district officials there responded to the dawn of tools like ChatGPT and Google’s Gemini by creating evening classes for adults. “The parents come in and they talk about what AI is, how they should be thinking about it,” he said. “It’s about exploring. It’s about helping people build their skills.” 

‘How are your financials?’

The fate of AllHere’s attendance work in districts nationwide isn’t clear at the moment. In one large district, Prince George’s County Public Schools in Maryland, near Washington, D.C., teachers piloted AllHere in 32 schools as far back as January 2020, spokeswoman Meghan Thornton said. The district added two more schools to the pilot in 2022, but AllHere notified the district on June 18 that, effective immediately, it wouldn’t be able to continue its services due to “unforeseen financial circumstances.”

District officials are now looking for another messaging system to replace AllHere “should it no longer be available,” Thornton said.

Bickerstaff said the field more broadly suffers from “a major, major overestimation of the capabilities of the technology to date.” L.A., she noted, is the nation’s second-largest school district, so even the pilot stage likely saw “very high” usage, raising its costs. She predicted a fast acquisition of AllHere, noting that the company had been looking for outside investment for several months.

As founder of the startup Magic School AI, which offers teachers tools to streamline their workload, Adeel Khan is no stranger to hustling for funding — and to competitors running out of money. But he said the news about AllHere and Ed was bad for the industry more broadly, leaving districts with questions about whether to partner with newer, untested companies.

“I see it as something that is certainly not great for the startup ecosystem,” he said.

I see (AllHere’s failure) as something that is certainly not great for the startup ecosystem.

Adeel Khan, Magic School AI

Even before the news about AllHere broke, Khan attended ISTE’s big national conference in Denver last month, where he talked to school district officials about prospective partnerships. “More than one time I was asked directly, ‘How are your financials?’” he recalled.

Usually technology directors ask about features and what a product can do for students, he said. But they’re beginning to realize that a failed product doesn’t just waste time and money. It damages reputations as well. “That is on the mind of buyers,” he said. 

When school districts invest in new tech, he said, they’re not just committing to funding it for months or even years, but also to training teachers and others, so they want responsible growth.

“There’s a lot of disruption to K-12 when a product goes out of business,” Khan said. “So people remember this. They remember, ‘Hey, we committed to this product. We discovered it at ISTE two years ago and we loved it. It was great — and it’s not here anymore. And we don’t want to go through that again.’ ”

Survey: AI is Here, but Only California and Oregon Guide Schools on its Use
Wed, 01 Nov 2023 04:01:00 +0000

Artificial intelligence now has a daily presence in many teachers’ and students’ lives, with chatbots like ChatGPT and Khan Academy’s tutor, and AI image generators, all freely available.

But nearly a year after most of us came face-to-face with the first of these tools, a new survey finds that few states are offering educators substantial guidance on how best to use AI, let alone how to use it fairly and with appropriate privacy protections.

As of mid-October, just two states, California and Oregon, offered official guidance to schools on using AI, according to the Center for Reinventing Public Education at Arizona State University.

CRPE said 11 more states are developing guidance, but that another 21 states don’t plan to give schools guidelines on AI “in the foreseeable future.”




Seventeen states didn’t respond to CRPE’s survey and haven’t made official guidance publicly available.


As more schools experiment with AI, good policies and advice — or a lack thereof — will “drive the ways adults make decisions in school,” said Bree Dusseault, CRPE’s managing director. That will ripple out, dictating whether these new tools will be used properly and equitably.

“We’re not seeing a lot of movement in states getting ahead of this,” she said. 

The reality in schools is that AI is here. Ed tech companies are pitching products and schools are buying them, even if state officials are still trying to figure it all out.


“It doesn’t surprise me,” said Satya Nitta, CEO of Merlyn Mind, a generative AI company developing voice-activated assistants for teachers. “Normally the technology is well ahead of regulators and lawmakers. So they’re probably scrambling to figure out what their standard should be.”

Nitta said a lot of educators and officials this week are likely looking “very carefully” at Monday’s executive order on AI “to figure out what next steps are.”

The order requires, among other things, that AI developers share safety test results with the U.S. government and develop standards that ensure AI systems are “safe, secure, and trustworthy.” 

It comes five months after the U.S. Department of Education released a detailed report with recommendations on using AI in education.

Deferring to districts

The fact that 13 states are at least in the process of helping schools figure out AI is significant. Last summer, no states offered such help, CRPE found. Officials in New York, Rhode Island and Wyoming said decisions about many issues related to AI, such as academic integrity and blocking websites or tools, are made on the local level.

Still, researchers said, it’s significant that the majority of states still don’t plan AI-specific strategies or guidance in the 2023-24 school year.

There are a few promising developments: North Carolina will soon require high school graduates to pass a computer science course. In Virginia, Gov. Glenn Youngkin in September issued a directive on AI careers. And Pennsylvania Gov. Josh Shapiro in September signed an executive order to create a state governing board to guide use of generative AI, including developing training programs for state employees.


But educators need help understanding artificial intelligence, “while also trying to navigate its impact,” said Tara Nattrass, managing director of innovation strategy at the International Society for Technology in Education. “States can ensure educators have accurate and relevant guidance related to the opportunities and risks of AI so that they are able to spend less time filtering information and more time focused on their primary mission: teaching and learning.”

Beth Blumenstein, Oregon’s interim director of digital learning & well-rounded access, said AI is already being used in Oregon schools. And the state Department of Education has received requests from educators asking for support, guidance and professional development.


Generative AI is “a powerful tool that can support education practices and provide services to students that can greatly benefit their learning,” she said. “However, it is a highly complex tool that requires new learning, safety considerations, and human oversight.”

Three big issues she hears about are cheating, plagiarism and data privacy, including how not to run afoul of Oregon’s Student Information Protection Act or the federal Children’s Online Privacy Protection Act.

‘Now I have to do AI?’

In August, CRPE conducted focus groups with 18 superintendents, principals and senior administrators in five states who said they were cautiously optimistic about AI’s potential, but many complained about navigating yet another new disruption.

“We just got through this COVID hybrid remote learning,” one leader told researchers. “Now I have to do AI?”

Nitta, Merlyn Mind’s CEO, said that syncs with his experience.

“Broadly, school districts are looking for some help, some guidance: ‘Should we use ChatGPT? Should we not use it? Should we use AI? Is it private? Are they in violation of regulations?’ It’s a complex topic. It’s full of all kinds of mines and landmines.” 

And the stakes are high, he said. No educator wants to appear in a newspaper story about her school using an AI chatbot that feeds inappropriate information to students. 

“I wouldn’t go so far as to say there’s a deer-caught-in-headlights moment here,” Nitta said, “but there’s certainly a lot of concern. And I do believe it’s the responsibility of authorities, of responsible regulators, to step in and say, ‘Here’s how to use AI safely and appropriately.’ ” 

Exclusive: For Busy Teachers, AI Could Crack Open the Dense World of Ed Research
Wed, 06 Sep 2023 11:15:00 +0000

As students across the U.S. enter their first full school year with access to powerful AI tools like ChatGPT and Bard, many educators remain skeptical of their usefulness — and preoccupied with their potential for misuse.

But this fall, a few educators are quietly charting a different course they believe could change everything: At least two groups are pushing to create new AI chatbots that would offer teachers unlimited access to sometimes confusing and often paywalled peer-reviewed research on the topics that most bedevil them. 

Their aspiration is to offer new tools that are more focused and helpful than wide-ranging ones like ChatGPT, which tends to stumble over research questions with competing findings. And like many kids faced with questions they can’t answer, it has a frustrating tendency to make things up.




Tapping into curated research bases and filtering out lousy results would also make the bots more reliable: If all goes according to plan, they’d cite their sources.

The result, supporters say, could revolutionize education. If their work takes hold, millions of teachers for the first time could routinely access high-quality research and make it part of their everyday workflow. Such tools could also help stamp out adherence to stubborn but ill-supported fads in areas from “learning styles” to reading instruction.

So far, the two groups are each feeling their way around the vast undertaking, with slightly different approaches.

In June, the International Society for Technology in Education introduced a tool built on content vetted by ISTE and the Association for Supervision and Curriculum Development. (The two groups merged in 2022.) ISTE has made it available in beta to selected users. All of the chatbot’s content is educator-focused, and it’s trained solely on materials developed or approved by the two organizations.


Now its creators say that within about six months, they expect that the tool will also be able to scour outside, peer-reviewed education research and return “pretty understandable, pretty meaningful results” from vetted journals, said Richard Culatta, ISTE’s CEO.

“There’s this big gap between what we know in the research and what happens in practice,” he said. One reason: Most research is published in a format that “is just totally inaccessible to teachers.”

Case in point: A set of studies by the Jefferson Education Exchange, a nonprofit supported by the University of Virginia’s Curry School of Education, found that while educators prefer research they can act on — and that’s presented in a way that applies to their work — only about 16% of teachers actually use research to inform instruction.

So he and others are building a digital tool, “purpose-built for educators by educators,” that can translate research into practice, using “very practical language that teachers understand.”

For instance, a teacher could ask the chatbot, “What does the research say about creating a healthy school culture?” or “What’s the evidence for teaching phonics to developing readers?” One could also ask it to suggest activities that are appropriate for middle school students learning about digital citizenship.

Joseph South, ISTE’s chief learning officer, said teachers want the latest research, but are up against formidable obstacles. “They have to find the article in the journal that happens to relate to the thing that they want to do,” he said. “They have to somehow understand academic-speak. They have to have the time to read this, and they have to translate it into something useful.”

While ChatGPT can comb through journals it has access to, translate and summarize the research, he said, it’s not reliable. The typical chatbot — and thus the typical end user — doesn’t know whether the results are from a credible, peer-reviewed journal or not, and it may not necessarily care.


“We do, though,” he said. “So we can do that filtering and let the AI do its magic.”

As with its beta version, the new chatbot will also cite the sources used to generate each response. And it’ll let users know when it simply doesn’t have enough information to return a reliable response.

Developers are still in the early stages of deciding what academic journals to include. For now, they’re experimenting with a handful of key research articles, but will expand the chatbot’s range if initial prototypes prove helpful to educators.

Culatta and South, both veterans of the U.S. Department of Education, have spent years working on the research-to-practice problem, offering, in effect, translation services for research findings. “We’ve spent so much work trying to figure out how to do it and it’s just never really worked,” he said. “It’s just always been a struggle. And we actually think that this could be the first for-real, sustainable, scalable approach to taking research and getting it into language that actually could be used by teachers.”


Daniel Willingham, a professor of psychology at the University of Virginia and a well-known translator of education research, said his limited experience with ChatGPT has shown that when asked about a subject where there’s general consensus, such as “What is the effect of sleep on memory?” it produces helpful results. But it isn’t very good at synthesizing conflicting findings.

It’s also inconsistent in its willingness to reveal, in Willingham’s words, that “‘I really don’t know anything about that.’ And so it, you know, just makes things up.”

A paid ChatGPT subscriber, Willingham said he gets “really useful” results only about 20% of the time. “But it requires plenty of verification on my part. And this is all within my area of expertise, so it’s not very hard for me to verify.”

Tapping ‘What Works’

ISTE isn’t the only organization pushing to make education research more widely accessible via chatbot. The Learning Agency, a Washington, D.C.-based consulting firm, is also testing a prototype of a bot designed to answer education research queries.

Unlike ISTE’s, the agency’s tool taps an already existing, if finite, resource: the U.S. Department of Education’s What Works Clearinghouse, or more specifically its Doing What Works Library, a curated collection of materials developed by the department’s Institute of Education Sciences.

“We were inspired to basically create a special version of ChatGPT that was exposed to more high-quality educational data and research evidence on what works,” said Perpetual Baffour, the group’s research director.

In a sense, she said, much of the work had already been done, since the library, though limited, exists to translate research findings into more digestible forms for educators. The result is a prototype that offers what Baffour calls “evidence-based education advice” on topics from adolescent literacy to dropout prevention and school turnarounds.


Baffour noted that an app development tool allowed the agency to create a “simple but robust” prototype within a day.

At the moment, the version they’re testing is “very basic,” Baffour said. The agency is still considering what it’ll look like in the future. 

“It only knows as much as the Doing What Works Library tells it,” she said. So queries about topics that are in the library return rich results. But asking it about topics that aren’t can be problematic. 

For example, ask it about myths around the aforementioned learning styles and it defaults to a more general knowledge base scraped from Wikipedia articles, transcripts of recorded conversations and materials from “different corners of the Internet,” Baffour said. “And as you can imagine, those sources might not have the most up-to-date and accurate information about education.”

Indeed, a query about the topic returns this: “The myth concerning learning styles is that there is one specific style that works best for everyone. This is not true, as different people learn in different ways and have different preferences for how they learn. Additionally, there is no evidence that suggests that focusing on one particular learning style is more effective than focusing on multiple styles.”

Not exactly accurate or helpful.

In the first place, the widely believed “myth” holds that people with different learning styles learn best when their preferred mode of learning is indulged — not that one style works best for everyone. At a more basic level, while many people may express preferences for ways to take in new information and study — receiving instruction verbally, for example, instead of via pictures — scientists have yet to find good evidence that material tuned to these preferences improves learning.

Unfortunately, at the moment the agency’s bot doesn’t confess whether it knows a lot or a little about a topic. Baffour said they want to change that soon. For now, however, that’s just an aspiration.

“I think you’re more likely to get a confident chatbot producing inaccurate information than you are to get a self-aware chatbot admitting its false and incomplete knowledge,” she said. 

Willingham, the UVA researcher, said a useful education-focused chatbot would not just have to incorporate reliable findings, but put them in context. For example, an answer to a query about the evidence for phonics instruction would properly note that, while the record is fairly strong, a lot of mediocre research and “hyperbolic claims” made in support of alternative methods serve to cloud the overall picture — a delicate but accurate detail.

“How is an aggregator going to negotiate that?” he said. 

Asked if he thought a chatbot might soon replace him, Willingham, the author of books and a blog that translate learning science into plain English, said he wouldn’t make any predictions.

“I was never much of a futurist, but I hocked my crystal ball 15 years ago,” he said.
