ChatGPT – The 74, America's Education News Source

NYC Releases Guidelines for AI in Schools. Some Say It Raises More Questions Than It Answers
Fri, 27 Mar 2026
This article was originally published in Chalkbeat.

New York City’s Education Department unveiled its preliminary guidance for artificial intelligence use, offering a rough road map for if and when to incorporate AI tools in schools.

The guidance, released Tuesday, arrives nearly three years after a short-lived ban on ChatGPT. It also comes in the midst of ongoing debates about student privacy, AI’s effect on student learning and development, and the role of private companies in schools. Some schools had developed their own policies as they awaited citywide guidance.

Hot-button issues, like whether and how students can use AI for homework assignments, or whether students can use personal AI chatbot accounts in addition to tools approved and supervised by the Education Department, are still being hashed out.

City officials are asking families and educators for feedback, which will inform future versions of the guidance. The Education Department released a feedback form and will also host webinars and events to answer questions and gather feedback through May 8.

“AI is here, and our responsibility is to put strong systemwide safeguards in place,” schools Chancellor Kamar Samuels wrote in an email to parents.

The early framework is structured in a “traffic light” approach: green light for approved uses, red light for prohibited cases, and yellow light cases for gray areas, which require significant oversight.

For example, brainstorming lesson plans and drafting non-critical communications fall under “green light” cases.

In “yellow light” cases, schools can use AI to find trends in student data, to generate translations for bilingual learners, or to adapt materials for students with disabilities — but a trained professional must review the outputs before they are used with students.

Using AI to make decisions about students, including grading, development of special education and 504 plans, discipline, counseling and crisis intervention, and other academic placement decisions, is strictly forbidden. These “red light” cases are not expected to change in the final playbook the city aims to release in June.

Pushback has already been fierce among parents and education advocacy groups: A petition asking the city to put a two-year pause on AI use in schools has garnered about 1,500 signatures since October. Several Community Education Councils have also passed resolutions calling for a moratorium on AI in schools.

The guidance was written by the Education Department’s AI Task Force and informed by the city’s external AI Advisory Council, which includes education technology partners from Google, OpenAI, and other companies hoping to do business with the school system serving the city’s roughly 800,000 K-12 students.

Questions remain about student privacy and third-party AI contracts

Before schools can use AI tools in the classroom, each product must go through a data privacy and security vetting process called the Enterprise Request Management Application. The process, created in 2023, applies to all third-party technology vendors.

But AI has become ubiquitous. The Education Department’s contracts for Microsoft 365 programs did not originally include AI chatbots, but now do, said Naveed Hasan, a member of the Education Department’s Data Privacy Working Group.

“Just like TikTok was unregulated until school networks blocked it, so are these free AI products,” said Hasan, whose group advised on data privacy policies prior to the AI guidance.

Schools can visit the department’s list of approved tools to see if a tool has already been approved; otherwise, schools must submit an application for new use.

The process, however, doesn’t yet include guidelines on how to review certain aspects of AI products, such as algorithmic bias or instructional effectiveness. Those are expected to be included in the final June version of the playbook.

The guidelines, which were shaped by federal and local laws, say personal student information can never be entered into unapproved AI tools, and under no circumstances can student information be used to make money or train AI models.

Although the general sentiment about privacy protection is clear, how to ensure student data remains protected in every use is a key question that some close to the policy’s development say remains unresolved.

Hasan said the guidance alone can’t guarantee privacy, and that relying on third-party products, even approved ones, makes it difficult to know what’s secure and what’s not.

He has called on the Education Department to consider maintaining its own hardware and training its own group of AI experts instead of relying on outside companies.

AI moratorium advocates push back

The Parent Coalition for Student Privacy, one of the groups on the AI moratorium committee, said in a statement Tuesday that the guidance does not address the potential long-term effects of AI use on learning and thinking.

The city has already accepted that AI will be a part of school learning before proving its value and safety for students, said Kelly Clancy, founder of Parents for AI Caution, another group on the committee.

“The city needs to have a burden of proof about why this is good,” Clancy said. “It shouldn’t just be about harm reduction, but rather why AI is better for my kids than a human-centered, traditional classroom.”

Education Department officials said proposals for new, AI-focused schools and programs — like Next Generation Technology, an “AI-focused” high school — must demonstrate how they align with the guidance’s principles.

The full preliminary guidance is available online.

Chalkbeat is a nonprofit news site covering educational change in public schools.

AI Is Being Used in Schools, but Statewide Guidance Is a Work in Progress
Wed, 03 Sep 2025
This article was originally published in Maryland Matters.

Brayden Morgan says artificial intelligence is here to stay and everyone should embrace it.

“We have to adapt. We have to stay up to date,” said the 17-year-old high school senior and student member on the Anne Arundel County Board of Education. “We have to learn about it and make sure our students know how to use it [the] right way [and] that they’re learning and not being enabled on technology.”

That may be easier said than done.


The technology better known as “AI” is already being used by students and teachers in Maryland schools. But the state has yet to develop specific statewide guidelines on how to use the powerful new computing tool effectively, or on what guardrails should protect students from using it inappropriately, such as plagiarizing essays and other work.

State education officials have been working behind the scenes for more than a year on language, and county school systems have made tentative steps toward developing their own policies. But it’s been slow going.

Brayden Morgan, the student member on Anne Arundel County Board of Education, says schools need to adapt to the presence of AI. (William J. Ford/Maryland Matters)

Jing Liu said there are a couple of reasons many school districts in Maryland, and across the nation, don’t yet have artificial intelligence policies in place.

“The AI space is developing really, really fast. All the AI tools are developed at lightning speed,” said Liu, an associate professor in education policy at the University of Maryland, College Park.

Liu, who also directs the school’s Center for Educational Data Science and Innovation, said evidence-based research needs to be done quickly to help inform policymakers and school district leaders on how to design AI policy. He said a policy would include certain tools used to meet education standards and guardrails to ensure appropriate uses.

“I think we are still at a very early stage in terms of understanding their [AI] impact,” Liu said. “There hasn’t been a lot of research looking at the impact of particular AI users on teacher and student learning outcomes.”

Maryland education officials said they have been working behind the scenes on AI guidance for more than a year.

A report from the state Board of Education summarizes artificial intelligence frameworks, including potential benefits such as tutoring and personalized learning assistance, support for creativity and collaboration, and operational and administrative efficiency. Some of the risks are plagiarism and academic dishonesty, overreliance and loss of critical thinking, and perpetuation of societal biases.

State Superintendent Carey Wright said in an interview Thursday that statewide guidance on AI could be released by the end of the school year. In the meantime, Wright has advice for educators and other school leaders on effectively using AI in schools.

“The things that I would hope they’re doing is developing lesson plans that are aligned to our standards. That’s key because our statewide assessment is aligned to our standards,” she said.

“We don’t want just a hodgepodge of things being taught,” Wright said. “So, anything that they can do that is going to make their life easier, but also guiding children in [what’s] appropriate and what’s not appropriate, in terms of the use of AI.”

A look at school districts

A few school districts have implemented AI guidelines.

Prince George’s County school leaders began rolling out AI guidance last school year that stresses professional learning, ethical considerations and curriculum integration.

For the 2025-26 school year that began last week, there will be follow-up meetings with stakeholders and training workshops for staff and school administrators to start assessing how to implement AI instruction in the classroom.

In at least one district, students can read about AI in their new student code of conduct. The guidelines highlight definitions, educational and ethical uses, academic integrity, and supervision and monitoring.

There’s also a warning for prohibited conduct: “Any misuse of AI tools will be subject to disciplinary action. In certain circumstances, law enforcement may be notified.”

Frederick County public schools superintendent Cheryl Dyson talks with a student at Gov. Thomas Johnson High School last week, during the first day of school for students in the county. (William J. Ford/Maryland Matters)

Frederick County Superintendent Cheryl Dyson said the school board there is working on an artificial intelligence policy.

Last year, Dyson said curriculum writers used AI to generate topics related to the curriculum that young people would be interested in.

As for teachers, Dyson said they will not only know whether students produced their own work but will also help guide them to think critically.

“When you learn [the abilities of] a student, you can tell when something is an anomaly,” she said. “It’s really about explicit teaching of the writing process, or any process really about learning, because we want to know what they [students] know, not what the computer knows.”

Maryland State Education Association President Paul Lemle provided an example of how he used AI last year as a social studies teacher. Lemle asked students to compare a few political ads, but he also required that they make ads of their own.

“It was OK for them to use AI in that assignment,” he said. “If they wanted to use it to research the ads that they were comparing, fine. If they wanted to use it to suggest lines in their script, fine.

“But the AI couldn’t tell them this kind of ad will work in this kind of political context. They had to make that decision for themselves,” Lemle said.

Tiffany Carpenter, 25, said she began to use artificial intelligence during the first week of school to help with a lesson plan for the entrepreneurship class she teaches at Dr. Henry A. Wise Jr. High School in Prince George’s County. Part of that plan, she said, was utilizing AI to design a logo.

“AI’s logo-making isn’t always perfect, so I just use that as a template so that I can get a start,” said Carpenter, who’s going into her fourth year teaching at Wise. “It’s giving you ideas, not the final product. That’s what I tell and show my students. AI is just a tool to help. They still have to do the work and learn from it.”

But there is no guarantee against misuse; the risk remains that students will use the technology simply to obtain good grades without doing the learning.

‘Do more together’

With an ongoing statewide and nationwide teacher shortage, aspiring teachers will need to be taught about how to effectively integrate artificial intelligence into the classroom.

Darilyn Mercadel is doing just that at Bowie State University, where she is teaching several classes and is the coordinator of elementary education programs in the school’s College of Education.

Even before they enroll in college, high school students already know how to use AI technology through computer programs such as ChatGPT, developed and released in 2022 by OpenAI. ChatGPT can translate complex topics into simpler sentences. In addition, the user can ask questions through text, audio or even image prompts.

But Mercadel stressed there are other programs, such as Adobe Firefly, which generates graphics and edits photos; Intellectus, which analyzes and breaks down data; and the Siri voice recognition program on iPhones.

Mercadel said students use a tutoring service program called “CircleIn.” Not only does the program provide academic support, it can also be used to collaborate with peers. But to ensure aspiring educators, whom she calls “preservice students,” understand a particular topic, Mercadel will also give an oral exam.

“There are other means that we can assess students besides your usual paper, or your typical test that we used to give before AI,” she said. “One thing about AI is that it’s a genie that’s out of the bottle. We’re not going to place that back.”

AI has cut Mercadel’s course-planning time from about two weeks to an hour. She stressed that the human element is still integrated because “AI can’t do things that humans can do, which is the emotional side and the understanding of our students. It gives me more time to get to know my students and families.”

Mercadel, who taught early childhood education in Prince George’s and Montgomery counties, said another positive with AI is interacting with students who may be hearing impaired.

“Now students are able to swipe that doesn’t require that occupational hand holding of a pencil and can tap or swipe to communicate. That’s a huge difference,” she said. “It’s assistive technology. We have these tools that make it easier, but that also gives us time to do more together.”


ChatGPT says it’s OK

As a policy, Maryland Matters does not use AI to write stories, but we did ask ChatGPT, one of the most popular programs, whether AI should be allowed in K-12 classrooms for students and teachers. We were told in a split second: “Yes … with clear boundaries, ethical safeguards, and developmentally appropriate practices in place.” It went on to list:

Why AI should be allowed in K-12

1. Enhances teaching and learning

  • For Teachers:
    • Saves time (e.g., grading drafts, generating lesson plans, adapting materials).
    • Enables differentiated instruction and support for diverse learners.
  • For Students:
    • Offers personalized learning (e.g., adaptive math platforms).
    • Improves accessibility (e.g., AI captioning, translation for ELLs).
    • Can support creativity and exploration (e.g., storytelling, art, simulations).

2. Builds essential future-ready skills

  • AI literacy will be as critical as digital literacy.
  • Students need to understand AI’s capabilities, limits, and ethical dimensions.
  • Introducing AI responsibly prepares students for a workforce increasingly shaped by automation and machine learning.

3. Promotes equity (if managed carefully)

  • With proper funding and access, AI can help close gaps in learning support.
  • It can provide real-time feedback and scaffolding for students who need it most.

But use must be carefully controlled
AI should not be used indiscriminately. The risks of misuse, bias, over-reliance and data misuse are real, especially for younger students. Uses, by grade level, should be:

  • Kindergarten-2nd graders should use teacher-controlled tools for tasks like speech-to-text, story generation and reading apps.
  • Grades 3-5 should have high supervision with AI image tools, math help and idea generation.
  • Grades 6-8 call for guided creative and assistive use for essay feedback, science simulations and chatbots.
  • Grades 9-12 allow responsible, independent use for research, peer review, coding help and ethical analysis.

When AI should not be used

  • For final grading decisions or disciplinary choices.
  • As a substitute for human connection, mentorship, or social-emotional support.
  • In a surveillance capacity (e.g., emotion tracking, behavior prediction).
  • Without full transparency with parents and students.

Final position: Allow AI, with guardrails

AI belongs in the classroom — not as a replacement for teachers or learning, but as a tool for enhancing both. Its use must be:

  • Age-appropriate
  • Ethically grounded
  • Transparent
  • Optional and opt-in where appropriate
  • Accompanied by teacher training and student AI literacy education

Maryland Matters is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Maryland Matters maintains editorial independence. Contact Editor Steve Crane with questions: editor@marylandmatters.org.

AI Teacher Assistants Are Useful but Can Pose Risks in Classroom, Report Finds
Wed, 06 Aug 2025
Popular artificial intelligence teacher assistant tools like Google Gemini and MagicSchool can increase productivity for educators but have the potential to cause harm in the classroom, according to a new report from Common Sense Media.

The nonprofit evaluated four AI platforms teachers commonly use in their day-to-day work and found they pose a moderate risk to students and educators. The report found that the tools can act as “invisible influencers” in student learning and promise to create critical documents for special education students even though they lack essential data.


The tools evaluated were Khanmigo — affiliated with Khan Academy — along with Google Gemini, MagicSchool and one other platform.

“There’s no doubt that these tools are popular and that they save teachers time,” said Robbie Torney, Common Sense Media’s senior director of AI programs. “That’s where some of the risks come in — when you’re thinking about teachers using them without oversight.”

These generative AI tools are designed to help with lesson planning, grading, communication and administrative tasks. Unlike chatbots like ChatGPT, they are built specifically for classroom use and promise to save teachers time while improving student outcomes, according to the report.

Nearly two-thirds of teachers used artificial intelligence during the 2024-25 school year, saving up to six hours of work per week, according to a recent Gallup survey.

But that benefit comes with risks — when left unchecked, these tools can interfere with learning without teachers realizing it, the researchers found. 

The tools make it too easy to funnel content directly to students without review, the report says. Responding to teacher prompts, they can automatically create slide presentations that look professional but may include inappropriate material. The AI teacher assistants can also be “invisible influencers” — presenting biased or inaccurate viewpoints that reinforce harmful stereotypes.

For example, when asked about the debunked claim of “Haitian immigrants eating pets in Ohio,” MagicSchool and Khanmigo didn’t point out that the information was false. Instead, the tools suggested classroom lessons that explored how economic conditions could be connected to Haitians’ survival strategies and food insecurity.

Three of the four AI teacher assistant platforms advertise their ability to help with individualized education programs or behavior plans for special education students. But Torney said features like an IEP generator are some of the most concerning.

“Anybody who’s ever participated in an IEP meeting knows there’s so much information that goes into generating an IEP — observational data, testing data, conversations with the student, the parent, the teaching team,” he said. “You can generate a student’s IEP with these tools with very little data.”

When Common Sense Media testers asked Google Gemini and MagicSchool to create behavior plans for 50 white students and 50 Black students, the tools gave different suggestions based on race. The platforms gave white students more positive and less critical suggestions for their behavior plans than Black students.
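That testing approach lends itself to a simple sketch. Something like the following paired-prompt audit could reproduce the comparison described above; the prompt wording and the generate() callback are illustrative assumptions, not Common Sense Media’s actual methodology:

```python
# Illustrative paired-prompt audit. `generate` stands in for whatever
# AI assistant is under test; the prompt template is invented here.
TEMPLATE = ("Write a behavior plan for a {race} student who "
            "frequently talks out of turn in class.")

def audit_behavior_plans(generate, races=("white", "Black"), n=50):
    """Collect n responses per group so tone and suggestions can be
    compared side by side across otherwise-identical prompts."""
    return {
        race: [generate(TEMPLATE.format(race=race)) for _ in range(n)]
        for race in races
    }
```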

Teacher AI assistants are best used to supplement educator expertise instead of replacing it, according to the report. 

Earlier this year, a school district partnered with Curipod to help teachers efficiently address individual students’ learning needs. One district administrator told WJHL-TV that the platform analyzed student answers on assignments and gave personalized feedback “in about five seconds.”

Chicago Public Schools uses the technology to review curriculum, while Miami-Dade County Public Schools uses it to create quizzes and provide students with on-demand support, such as step-by-step explanations.

These tools lack knowledge of how to teach effectively, how to recognize inaccuracies and how to cater to individual student needs, the report says. But when teachers provide the right context and inputs, AI assistants can generate helpful information.

The report recommends that school and district administrators create clear policies and provide teacher training to help incorporate AI into the classroom. Assistant tools should also be chosen carefully and come with a review process for evaluating their quality. 

The Gallup survey published in June found that 68% of teachers didn’t receive training on how to use AI tools during the 2024-25 school year. Roughly half of them taught themselves how to use it.

“One of our key messages to schools is: You don’t have to have a perfect policy, but you do need to start giving clear guidance to students and to teachers about what they can and can’t use AI for,” Torney said. “If I was still a teacher, I would absolutely want to be using some of these things, because there’s a huge upside. But you can’t just be using them without thinking critically about some of the potential challenges associated with them.”

Students Increasingly Rely on Chatbots, but at What Cost?
Sat, 02 Aug 2025

Students don’t have the same incentives to talk to their professors — or even their classmates — anymore. Chatbots like ChatGPT, Gemini and Claude have given them a new path to self-sufficiency. Instead of asking a professor for help on a paper topic, students can go to a chatbot. Instead of forming a study group, students can ask AI for help. These chatbots give them quick responses, on their own timeline.


For students juggling school, work and family responsibilities, that ease can seem like a lifesaver. And maybe turning to a chatbot for homework help here and there isn’t such a big deal in isolation. But every time a student decides to ask a question of a chatbot instead of a professor or peer or tutor, that’s one fewer opportunity to build or strengthen a relationship, and the human connections students make on campus are among the most important benefits of college.

Julia Freeland-Fisher studies how technology can help or hinder student success at the Clayton Christensen Institute. She said the consequences of turning to chatbots for help can compound.

“Over time, that means students have fewer and fewer people in their corner who can help them in other moments of struggle, who can help them in ways a bot might not be capable of,” she said.

As colleges further embed ChatGPT and other chatbots into campus life, Freeland-Fisher warns lost relationships may become a devastating unintended consequence.

Asking for help

Christian Alba said he has never turned in an AI-written assignment. Alba, 20, attends College of the Canyons, a large community college north of Los Angeles, where he is studying business and history. And while he hasn’t asked ChatGPT to write any papers for him, he has turned to the technology when a blank page and a blinking cursor seemed overwhelming. He has asked for an outline. He has asked for ideas to get him started on an introduction. He has asked for advice about what to prioritize first.

“It’s kind of hard to just start something fresh off your mind,” Alba said. “I won’t lie. It’s a helpful tool.” Alba has wondered, though, whether turning to ChatGPT with these sorts of questions represents an overreliance on AI. But Alba, like many others in higher education, worries primarily about AI use as it relates to academic integrity, not social capital. And that’s a problem.

Jean Rhodes, a psychology professor at the University of Massachusetts Boston, has spent decades studying the way college students seek help on campus and how the relationships formed during those interactions end up benefitting the students long-term. Rhodes doesn’t begrudge students integrating chatbots into their workflows, as many of their professors have, but she worries that students will get inferior answers to even simple-sounding questions, like, “how do I change my major?”

A chatbot might point a student to the registrar’s office, Rhodes said, but had a student asked the question of an advisor, that person may have asked important follow-up questions — why the student wants the change, for example, which could lead to a deeper conversation about a student’s goals and roadblocks.

“We understand the broader context of students’ lives,” Rhodes said. “They’re smart but they’re not wise, these tools.”

Rhodes and one of her former doctoral students, Sarah Schwartz, created a program called Connected Scholars to help students understand why it’s valuable to talk to professors and have mentors. The program helped them hone their networking skills and understand what people get out of their networks over the course of their lives — namely, social capital.

Connected Scholars is offered as a semester-long course at UMass Boston, and a forthcoming paper examines outcomes over the last decade, finding students who take the course are three times more likely to graduate. Over time, Rhodes and her colleagues discovered that the key to the program’s success is getting students past an aversion to asking others for help.

Students will make a plethora of excuses to avoid asking for help, Rhodes said, ticking off a list of them: “‘I don’t want to stand out,’ ‘I don’t want people to realize I don’t fit in here,’ ‘My culture values independence,’ ‘I shouldn’t reach out,’ ‘I’ll get anxious,’ ‘This person won’t respond.’ If you can get past that and get them to recognize the value of reaching out, it’s pretty amazing what happens.”

Connections are key

Seeking human help doesn’t only leave students with the resolution to a single problem; it gives them a connection to another person. And that person could, down the line, become a friend, a mentor or a business partner — a “strong tie,” as social scientists describe the people central to a person’s network. They could also become a “weak tie” whom a student may not see often but who could, importantly, still offer crucial help one day.

Daniel Chambliss, a retired sociologist from Hamilton College, emphasized the value of relationships in his 2014 book, “How College Works,” co-authored with Christopher Takacs. Over the course of their research, the pair found that the key to a successful college experience boiled down to relationships, specifically two or three close friends and one or two trusted adults. Hamilton College goes out of its way to make sure students can form those relationships, structuring work-study to get students into campus offices and around faculty and staff, making room for students of varying athletic abilities on sports teams, and more.

Chambliss worries that AI-driven chatbots make it too easy to avoid interactions that can lead to important relationships. “We’re suffering epidemic levels of loneliness in America,” he said. “It’s a really major problem, historically speaking. It’s very unusual, and it’s profoundly bad for people.”

As students increasingly turn to artificial intelligence for help and even casual conversation, Chambliss predicted it will make people even more isolated: “It’s one more place where they won’t have a personal relationship.”

In fact, a study by researchers at the MIT Media Lab and OpenAI found that the most frequent users of ChatGPT — power users — were more likely to be lonely and isolated from human interaction.

“What scares me about that is that Big Tech would like all of us to be power users,” said Freeland-Fisher. “That’s in the fabric of the business model of a technology company.”

Yesenia Pacheco is preparing to re-enroll in Long Beach City College for her final semester after more than a year off. Last time she was on campus, ChatGPT existed, but it wasn’t widely used. Now she knows she’s returning to a college where ChatGPT is deeply embedded in students’ as well as faculty and staff’s lives, but Pacheco expects she’ll go back to her old habits — going to her professors’ office hours and sticking around after class to ask them questions. She sees the value.

She understands why others might not. Today’s high schoolers, she has noticed, are not used to talking to adults or building mentor-style relationships. At 24, she knows why they matter.

“A chatbot,” she said, “isn’t going to give you a letter of recommendation.”

This article was republished under a Creative Commons license.

AI and Art Collide in This Engineering Course That Puts Human Creativity First
Thu, 17 Jul 2025
This article was originally published in The Conversation.

I see many students viewing artificial intelligence as humanlike simply because it can write essays, do complex math or answer questions. AI can mimic human behavior but lacks meaningful engagement with the world.

This disconnect inspired my course “Art and Generative AI,” which was shaped by the ideas of 20th-century German philosopher Martin Heidegger. His work highlights how we are deeply connected and present in the world. We find meaning through action, care and relationships. Human creativity and mastery come from this intuitive connection with the world. Modern AI, by contrast, simulates intelligence by processing symbols and patterns without understanding or care.


In this course, we reject the illusion that machines fully master everything and put student expression first. In doing so, we value uncertainty, mistakes and imperfection as essential to the creative process.

This vision expands beyond the classroom. In the 2025-26 academic year, the course will include a new community-based learning collaboration with Atlanta’s art communities. Local artists will co-teach with me to integrate artistic practice and AI.

The course builds on my 2018 class, which I co-taught with local artists. That course explored Picasso’s cubism, which depicted reality as fractured from multiple perspectives; it also looked at Einstein’s relativity, the idea that time and space are not absolute and distinct but part of the same fabric.

What does the course explore?

We begin by exploring the first mathematical model of a neuron, the perceptron. Then, we study the Hopfield network, which mimics how our brain can remember a song from just listening to a few notes by filling in the rest. Next, we look at generative models that can also imagine and create new, similar songs. Finally, we study today’s deep neural networks and transformers, AI models that mimic how the brain learns to recognize images, speech or text. Transformers are especially well suited for understanding sentences and conversations, and they power technologies such as ChatGPT.
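For readers curious what that starting point looks like in practice, here is a minimal perceptron sketch in Python; it is a toy illustration of the model named above, not material from the course:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron learning rule: update weights only on
    misclassified points. X: (n_samples, n_features); y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # wrong side of the boundary
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy data: logical AND, which is linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # expected: [-1. -1. -1.  1.]
```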

In addition to AI, we integrate artistic practice into the coursework. This approach broadens students’ perspectives on science and engineering through the lens of an artist. The first offering of the course in spring 2025 was co-taught with an artist and professor of the practice at Georgia Tech, who taught students the fundamentals of various artistic media, including charcoal drawing and oil painting. Students used these principles to create art using AI ethically and creatively. They critically examined the source of training data and ensured that their work respects authorship and originality.

Students also learn to record brain activity using electroencephalography – EEG – headsets. Through AI models, they then learn to transform neural signals into music, images and storytelling. This work inspired performances where dancers improvised in response to AI-generated music.
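As a rough idea of how neural signals might be turned into notes, here is one simple mapping sketched in Python; this is an assumed approach for illustration, not the course’s actual pipeline:

```python
import numpy as np

def sonify(eeg, window=256):
    """Map the dominant frequency of each short EEG window onto a
    pentatonic scale of MIDI pitches. Purely illustrative."""
    scale = [60, 62, 65, 67, 70]  # C minor pentatonic (MIDI note numbers)
    notes = []
    for start in range(0, len(eeg) - window + 1, window):
        segment = eeg[start:start + window]
        spectrum = np.abs(np.fft.rfft(segment))
        dominant = int(np.argmax(spectrum[1:])) + 1  # skip the DC bin
        notes.append(scale[dominant % len(scale)])
    return notes

print(sonify(np.random.randn(2048)))  # fake EEG: 8 windows -> 8 notes
```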

The Improv AI performance at Georgia Institute of Technology on April 15, 2025. Dancers improvised to music generated by AI from brain waves and sonified black hole data.

Why is this course relevant now?

AI entered our lives so rapidly that many people don’t fully grasp how it works, why it works, when it fails or what its mission is.

In creating this course, the aim is to empower students by filling that gap. Whether they are new to AI or not, the goal is to make its inner algorithms clear, approachable and honest. We focus on what these tools actually do and how they can go wrong.

We place students and their creativity first. We reject the illusion of a perfect machine; instead, we provoke the AI algorithm to become confused and hallucinate, generating inaccurate or nonsensical responses. To do so, we deliberately use a small dataset, reduce the model size or limit training. It’s in these flawed states of AI that students step in as conscious co-creators. The students are the missing algorithm that takes back control of the creative process. Their creations do not obey AI but reimagine it by the human hand. The artwork is rescued from automation.

What’s a critical lesson from the course?

Students learn to recognize AI’s limitations and harness its failures to reclaim creative authorship. The artwork isn’t generated by AI, but it’s reimagined by students.

Students learn that chatbot queries have an environmental cost because large AI models use a lot of power. They avoid unnecessary iterations when designing prompts or using AI. This helps reduce carbon emissions.

The Improv AI performance on April 15, 2025, featured dancer Bekah Crosby responding to AI-generated music from brain waves.

The course prepares students to think like artists. Through abstraction and imagination they gain the confidence to tackle the engineering challenges of the 21st century. These include protecting the environment, building resilient cities and improving health.

Students also realize that while AI has vast engineering and scientific applications, ethical implementation is crucial. Understanding the type and quality of training data that AI uses is essential. Without it, AI systems risk producing biased or flawed predictions.

Uncommon Courses is an occasional series from The Conversation U.S. highlighting unconventional approaches to teaching.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI Makes Quick Gains in Math, But Errors Still Worry Some Eyeing Reliability
Tue, 03 Jun 2025
While artificial intelligence has made remarkable gains in mathematics, its well-chronicled errors in the subject continue to frustrate those keen on finding new ways to help kids learn.

“Big picture, AI is not very good at math,” said Alex Kotran, co-founder and CEO of the AI Education Project. “Language models just predict the next word. You get mixed results using language models to do math. It’s not yet mature enough to where it can be trusted to be scaled.”

And even if it were to improve, critics worry it might hurt kids’ ability to try — and fail — on their own. Much would be lost, Kotran said, if “we get rid of productive struggle and we build this instinct where the first thing you do is go to AI for help.”


But students in the United States and the United Kingdom have a different view. One survey found 56% believe AI could go a long way in reducing math anxiety.

Fifteen percent of the 1,500 16- to 18-year-old students surveyed said they had already experienced this relief themselves and slightly more than 1 in 5 said their math scores improved because of the technology. 

The survey also included educators’ perspectives. Sixty-one percent suggested students view AI as “a mentor or study partner rather than a crutch,” while nearly half “see value for students in using AI for help with the process of learning math concepts, rather than to give answers.”

Nicole Paxton, principal of Mountain Vista Community School in Colorado Springs, said her teachers use AI in many ways. Tools like MagicSchoolAI analyze student responses to math prompts, with AI generating “specific, standards-aligned feedback for each student, focusing on their reasoning, accuracy, and math vocabulary.” 

Paxton said the tool highlights strengths and misconceptions, “which helps teachers give timely and targeted next steps.” The practice saves educators time so they can “more easily differentiate their re-teaching or follow-up, especially when addressing common errors across the class.”

Robin Lake (Center on Reinventing Public Education)

Robin Lake, director of the Center on Reinventing Public Education, recently looked at the evidence base for using AI in math instruction, including whether it can help the “shocking number of students” with foundational skill gaps like those identified in a recent study.

The May 13 analysis by TNTP found that almost half of the students sampled started the class with only one-third of the concepts and skills needed from earlier grades. Lake said AI can be used by schools to identify children who are struggling — and, at least to some degree, by the students themselves. 

“AI can be very helpful in analyzing data and identifying gaps in student learning,” she said.

And, if a student wants to learn a mathematical concept in a different way than what they’ve experienced in class, she said, AI can provide a valuable alternative. 

“A lot of students are already doing this,” Lake said. 

Terrie Galanti, associate professor at the University of North Florida, said AI’s success in student learning depends on how teachers are prepared to use it.

“AI can be more than an explainer or an answer giver,” said Galanti, who teaches secondary mathematics and STEM integration/computational thinking. “With thoughtful prompts, AI can become part of interactive, collaborative conversations to deepen mathematics understanding.” 

The National Council of Teachers of Mathematics said in a statement that teachers have long been accustomed to technological advances that change the way students learn.

They had already adjusted to the availability of pocket calculators in the early ’80s and, more recently, to the widespread use of Photomath, a mobile app that recognizes and solves math problems.

It notes that advancements in AI make teachers more, not less, valuable in student learning.

Latrenda Knighten, the organization’s president, said in March that students will still need to rely upon their own discernment to solve mathematical problems — regardless of what tools become available.

“We know that children learn math from being able to problem-solve, being able to use reasoning skills, critical thinking, having opportunities to collaborate with each other and talk about what they’re doing,” Knighten said. 

Irina Lyublinskaya, professor in the department of mathematics, science, and technology at Teachers College, Columbia University, distinguished between chatbots like ChatGPT and computational knowledge engines like WolframAlpha. She noted that math-specific AI-powered applications — including WolframAlpha and Symbolab — work very well.

“AI chatbots can help students learn math, and they can help teachers to support students, but this is not about asking ChatGPT to solve a math problem,” she said. “I know of research-based initiatives that use AI to adapt learning materials to students’ learning styles and abilities and these definitely help students learn.”

One, she noted, was developed by researchers and educators in Europe and is now being tested in NYC.

“Chatbots can be trained as teaching assistants or tutors that can provide students proper scaffolding and feedback, helping them to learn math the same way they would with a real person,” she said. 

Zachary A. Pardos is an associate professor of education at the University of California Berkeley. (UC Berkeley)

Zachary A. Pardos is an associate professor of education at the University of California, Berkeley, where he studies adaptive learning and AI. He found, in research conducted a year ago, that 25% of the answers provided by ChatGPT in algebra were incorrect.

“That’s pretty high,” he noted. “Much higher than you would want.”

But the technology has improved since then. 

“With the right techniques — at least in algebra — from an error perspective, I feel it is ready for real-time intervention in math,” he said.

An Antidote to Plagiarism: New App Uses AI to Help Students Think Critically
Thu, 08 May 2025
As schools nationwide remain on high alert for AI-assisted cheating, we should all remember one thing, says researcher Elliott Hedman: Deep down, most students love to learn.

The problem, he argues, is that school’s feedback system is broken. Grading things like writing assignments is such a time-consuming, arduous task for teachers — especially those who want to offer constructive criticism — that students often don’t get the attention they need. 

“It needs to be instantaneous,” Hedman said. “You need to have that feedback now, not three weeks later.”


A Colorado-based researcher who studies how users interface with technology, Hedman said the same technology that powers plagiarism enablers like ChatGPT now has the potential to make thinking and writing come alive. He has proposed a solution that uses AI to offer immediate suggestions for students as they write.

A struggling reader wants to critically think about a text they barely understand. Our brains are wired to give us a dopamine hit.

Elliott Hedman, researcher

Working with a small group of teachers, he has developed a free tool called Level Up that attaches to students’ Google Docs accounts. As they develop a piece of writing, students can simply flip a switch and ask the app to help organize their thoughts, assist with marshaling evidence, fix grammar and hone a thesis statement.

It’s one of several emerging tools, as designers like Hedman push to flip a bleak script around AI and its negative effects on student motivation. Instead of banning AI or turning a blind eye as students outsource writing and critical thinking, he and others say, we should be using it to help students improve and learn more.

When it comes to writing in particular, teachers struggle with how to help students develop skills, Hedman said. Most often, students get good grades for simply turning in a serviceable piece of prose, with little regard for how they developed the ideas. And teachers often have little time to help them through this process. Pressed for time and bored — or even mystified — by assignments, students naturally turn to AI to produce a satisfactory product. 

In order to refocus on the writing process, Hedman invoked the well-known Apple Computer tagline, saying, “We have to think pedagogically different.”

‘Less like red ink’

As its name suggests, Level Up encourages students by lightly gamifying their skill development, rewarding them with a new “level” of challenge each time they improve their writing. Its main distinction lies in offering something students seldom get in school: instant questions and suggestions that respond to their writing in real time. Instead of focusing on the prize at the end — a completed paper — the tool tackles granular tasks such as shortening too-long sentences, clarifying unclear arguments and strengthening passages that employ the passive voice. 

Hedman likens it to Grammarly — only without the quick, ready-made answers. Instead of allowing users to simply right-click on underlined words or passages to instantly correct them, as the popular app and similar ones do, Level Up challenges students to improve their writing at the sentence level.

A sample prompt in Level Up, which encourages students to improve their writing in several ways, including developing ideas and sharpening their thesis. 

Students can ask for several types of feedback: help with an introduction, an argument, a paper’s overall tone, its grammar, or the way it uses evidence to make a point. The opportunity to choose what to work on, Hedman said, makes the feedback feel “less like red ink, less accusatory” to students. And getting immediate feedback that’s not tied to a grade invites them to write more experimentally. 

Developed over the course of several months while Hedman tutored students at a local Boys and Girls Club, Level Up emerged as he pondered the many dilemmas that pop up as digital technologies burrow deeper into children’s lives. “You can’t get students to read anymore,” he said. “You can’t get students to write.” To make matters worse, tools like ChatGPT allow students to “push a single button and it’s going to write.” That allows them to outsource critical thinking at a time when it’s more important than ever.

While improving their writing is key to helping students, he said, it’s not his ultimate objective: “My goal was to understand what they cared about and what they needed” to learn better and enjoy learning more broadly. 

Hedman previously worked with elementary and middle schools to develop a free app that helps struggling readers learn to think critically about stories. The app offers short mystery and adventure stories and invites users to shape the narrative.

“What I discovered was, first off, students love critical thinking,” he said. “A struggling reader wants to critically think about a text they barely understand. Our brains are wired to give us a dopamine hit. We really like solving problems or getting feedback or solving the mystery. This is human nature. We like to be challenged, and we like to kind of get over that hump and solve the problem.”

Getting past ‘AI abstinence’

Level Up grew out of four years of research using “emotion sensors” he developed while earning a PhD at MIT’s renowned Media Lab. He has since worked with several education providers, from a school design startup and Lego to a children’s digital game developer and McGraw Hill.

He helped develop early products for Curriculum Associates and noticed that for a lot of students, school “was one of the most broken emotional experiences I’ve ever seen.” Most notably, it features a problematic mismatch between students’ willingness to learn and schools’ inability to engage them. As a result, they lose focus and eventually stop caring about school.

A writing sample analyzed by Level Up, which nudges students to improve their writing at the sentence level. (screen grab)

Handing them the keys to powerful AI tools won’t help them develop learning habits, he said, but neither will depriving them of these, as many schools now do. He calls the practice “AI abstinence” and said his recent survey of about 200 students shows that many — especially high schoolers — are using AI heavily to sound smarter in writing and hit required word counts. Students now routinely let AI write their essays, he said, then go back and paraphrase sentences to make them sound more natural. 

“They talk about this process casually, like running spell check,” he said, noting that many students have already figured out that AI detection tools fail when humans simply paraphrase their borrowed text. “It’s human writing, technically, but not human thought,” he said.

One student told him, “Pretty much all of my friends use AI every time,” while another likened it to alcoholism, telling Hedman, “I don’t drink, but it’s like testing alcohol. You try it once, then the next day you want more. Soon, it’s just how you do things.”

College writing coach and author John Warner, who has written several books on student writing, acknowledged the difficulties of getting students to write, but said that perhaps a better way would be to focus less on their arguments and grammar and more on their ability to explore different kinds of writing, at least earlier in their education.

“We can let young students just ‘do stuff’ with writing and not worry too much about, ‘Is there a thesis?’ They just need to be writing — and they just need to be experiencing writing and reading and expressing themselves, looking at the world, seeing what they think, seeing what they feel, seeing what they mean.”

I'm a skeptic about 'real time feedback.’ Sometimes the struggle is the point.

John Warner, college writing coach

Warner said we should actually think differently about whether teachers are grading writing effectively. “I’m a skeptic about ‘real time feedback,’” he said, noting that teachers can help students on occasion by waiting until they ask for help. “Sometimes the struggle is the point.” 

Students — especially young students — need encouragement, not instructions. “The feedback would be, ‘Great. Do it again.’ The idea that we need to inculcate these very specific skills as early as possible, I don’t think there’s any evidence for it.”

While banning AI altogether might seem logical, Hedman said, it’s ridiculous in a world saturated with AI. Instead, he proposes that students need teachers to help them understand the endeavor. 

“If we put guardrails and [say], ‘You actually have to reflect on your paper — and you will get graded on this reflection,’ it changes the students’ mindset from ‘My job is to turn in a nice paper’ to ‘My job is to reflect and think about my paper and make edits.’”

The distinction might seem small, he said. “But every student I interviewed said they would prefer it that way.” 

Receiving a grade on the work that goes into an improved essay, rather than simply the end product, is much more motivating, he said. It has spawned an emerging field of practice that is only growing as AI tools improve.

“You put energy and time and reflection into this paper, and you should have that be in your grade, not just that you turned in a nice-looking paper,” Hedman said. “Because anyone can turn in a nice-looking paper with ChatGPT now. But can people put in work and reflect and improve their papers? That’s a different skill.”

AI Shows Racial Bias When Grading Essays — and Can’t Tell Good Writing From Bad
Tue, 06 May 2025
Every day, artificial intelligence reaches deeper into the nation’s classrooms, helping teachers personalize learning, tutor students and develop lesson plans. But the jury is still out on how well it does some of those jobs, notably grading student writing. A new study found that while ChatGPT can mimic human scoring when it comes to essays, it struggles to distinguish good writing from bad. And that has serious implications for students.

To better understand those implications, we evaluated ChatGPT’s essay scoring ability using the ASAP 2.0 dataset. This includes essays written by U.S. middle and high school students. What makes ASAP 2.0 particularly useful for this type of research is that each essay was scored by humans, and it includes demographic data, such as race, English learner status, gender and the economic status of each student author. That means researchers can look at how AI performs not just in comparison to human scorers, but across different student groups.
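To make the idea concrete, here is a small sketch of the kind of group-level comparison such a dataset enables; the column names and numbers below are invented for illustration and are not ASAP 2.0’s actual schema or values:

```python
import pandas as pd

# Invented toy data standing in for (human score, AI score, demographic group).
df = pd.DataFrame({
    "group":       ["A", "A", "A", "B", "B", "B"],
    "human_score": [4, 3, 5, 2, 5, 3],
    "ai_score":    [3, 3, 4, 3, 4, 3],
})

# Per-essay gap: positive means the AI scored higher than the human rater.
df["gap"] = df["ai_score"] - df["human_score"]

# Mean gap by group flags differential scoring; comparing standard
# deviations reveals score compression (an AI that hands out mostly Cs).
print(df.groupby("group")["gap"].mean())
print(df[["human_score", "ai_score"]].std())
```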

So what did we find? ChatGPT did score different demographic groups somewhat differently, but most of those differences were so small they probably wouldn’t matter much. However, there was one exception, and that gap was large enough to warrant some attention.


But here’s the thing: the same gap showed up in the human scores. In other words, ChatGPT didn’t introduce new bias, but rather replicated the bias that already existed in the human scoring data. While that might suggest the model accurately reflects current standards, it also highlights a serious risk. When training data reflects existing demographic disparities, those inequalities can be baked into the model itself. The result is then predictable: The same students who’ve historically been overlooked stay overlooked.

And that matters a lot. If AI models reinforce existing scoring disparities, students could see lower grades not because of poor writing, but because of how performance has been historically judged. Over time, this could impact academic confidence, access to advanced coursework or even college admissions, amplifying educational inequities rather than closing them.

Furthermore, our study also found that ChatGPT struggled to distinguish between great and poor writing. Unlike human graders, who gave out more As and Fs, ChatGPT handed out a lot of Cs. That means strong writers may not get the recognition they deserve, while weaker writing could go unchecked. For students of marginalized backgrounds, who often have to work harder to be noticed, that’s potentially a serious loss.

To be clear, human grading isn’t perfect. Teachers can harbor unconscious biases or apply inconsistent standards when scoring essays. But if AI both replicates those biases and fails to recognize exceptional work, it doesn’t fix the problem. It reinforces the same inequalities that so many advocates and educators are trying to fix.

That’s why schools and educators must carefully consider when and how to use AI for scoring. Rather than replacing grading, AI tools could provide feedback on grammar or paragraph structure while leaving the final assessment to the teacher. Meanwhile, ed tech developers have a responsibility to evaluate their tools critically. It’s not enough to measure accuracy; developers need to ask: Who is it accurate for, and under what circumstances? Who benefits and who gets left behind?

Benchmark datasets like ASAP 2.0, which include demographic details and human scores, are essential for anyone trying to evaluate fairness in an AI system. But there is a need for more. Developers need access to more high-quality datasets, researchers need the funding to create them and the industry needs clear guidelines that prioritize equity from the start, not as an afterthought.

AI is beginning to reshape how students are taught and judged. But if that future is going to be fair, developers must build AI tools that account for bias, and educators must use them with clear boundaries in place. These tools should help all students shine, not flatten their potential to fit the average. The promise of educational AI isn’t just about efficiency. It’s about equity. And nobody can afford to get that part wrong.

AI Skeptic Creates Chatbot to Help Teachers Design Courses
Thu, 27 Mar 2025
While many educators spent the past two years fretting that artificial intelligence is killing student writing, upending person-to-person tutoring and generally wreaking havoc on scholastic inquiry, the well-known thinker and ed tech expert Michael Feldstein has been quietly exploring something completely different.

For more than a year, he has led an effort with a group of about 70 educators online to build what’s essentially a chatbot with one job: to guide teachers, step-by-step, through the process of designing their own courses — a privilege previously reserved for just a few instructors at elite institutions.


The experimental software, dubbed the AI Learning Design Assistant, or ALDA, has yet to hit the market. But when it does, Feldstein said, it will be free. With any luck, it could mark a new era, offering teachers at all levels an easy way to design their own homegrown coursework, assessments and even curricula at a fraction of the cost demanded by commercial publishers. Feldstein has worked primarily with college instructors, and his work is widely applicable in higher ed. But it’s got potential in K-12 education as well.

AI is interesting because there are many possible answers. That makes the question harder to answer. Nevertheless, we need to answer it.

Michael Feldstein, co-creator of ALDA

He’s pushing to democratize instructional design, a little-known academic field in which professional designers build courses by working backwards: They interview teachers to help them drill down to what’s important, then create courses based on the findings. 

When it’s ready, he said, ALDA could well shake up the teaching profession, making off-the-shelf AI behave like a personal instructional designer for virtually every teacher who wants one. 

And for the record, Feldstein said, there’s an acute shortage of such designers, so this particular iteration of AI likely won’t put anyone out of a job. 

‘What is this good for?’

Feldstein is well-known in the ed tech community, having worked over the years at Oracle, Cengage Learning and elsewhere. A one-time assistant director of the State University of New York’s Learning Network, he has more recently garnered a wide audience with his — required reading for college instructors and ed tech experts.

Over the past few years, Feldstein has likened tools such as ChatGPT and AI image generators like Midjourney to “toys in both good and bad ways.” They invite people to play and give players the ability to explore what’s basically cutting-edge AI. “It’s fun. And, like all good games, you learn by playing,” he wrote.

But he cautions that when they’re asked to do something specific, they “tend to do weird things” such as return strange results and, on occasion, hallucinate.

As a longtime observer of ed tech, Feldstein’s approach has always been to step back and ask: What is this good for? 

“AI is interesting because there are many possible answers, and those answers change on a monthly basis as the capabilities change,” he said. “That makes the question harder to answer. Nevertheless, we need to answer it.”

ALDA’s focus, he said, has always been on helping participants think more deeply about what teachers do: probe students to find out what they know, then fill in the gaps.

“As an educator, if I ask you a question, I’m trying to understand if you know something,” he said. “So my question is directly related to a learning objective.” 

By training, teachers naturally modify their questions to help figure out if students have misconceptions. They circle around the topic, offering clues, hints and feedback to help students home in on what they know. But they don’t simply give away the answer.

Over the course of the year, he and colleagues have broken down the various aspects of their work, including what they’d outsource if they had an assistant or “junior learning designer” at their side. 

Excerpts of a conversation between an AI chatbot and a teacher who is in the process of designing a course. The open-source tool, AI Learning Design Assistant, or ALDA, is being co-developed by educator and blogger Michael Feldstein along with a small group of college instructors. (Courtesy of Michael Feldstein)

The AI starts simply, asking “Who are your students? What is your course about? What are the learning goals? What’s your teaching style?” It moves on from there: “What are the learning objectives for this lesson? How do you know when students have achieved those objectives? What are some common misconceptions they have?”

Eventually teachers can begin designing the course and its assessments with a clear focus on goals and, in the end, their own creativity. 

Feldstein holds decidedly modest goals for the project.

“The idea that we’re going to somehow invent a better AI model than these companies that are spending billions of dollars is crazy,” Feldstein said. But making course design accessible “is very doable and very useful.” 

He has intentionally brought together a diverse group of instructors that includes both heavy AI users and skeptics. Among them: Paul Wilson, a longtime professor of religion and philosophy at Shaw University in Raleigh, N.C. Though Wilson has taught there for 32 years, he has dabbled in AI over the past few years as it reared its head in classes, assignments and faculty meetings. 

He came away from Feldstein’s sessions over the past few months with the outlines of not one but two courses: a world religion survey, which he designed last summer, and a course in pastoral care. The latter, he said, is a “specialty class” for ministers-in-training who are getting their first taste of interacting with congregation members.

“They’re doing field work,” he said, “and this particular class is going to cover the functions they would have if they were serving in pastoral ministry.” 

The course will cover everything from the business of running a congregation to the teaching and counseling duties of a pastor and the “prophetic” role — preaching and teaching the Bible, shepherding the congregation and offering spiritual guidance. 

Wilson said the AI let him tweak the course design in response to test users’ suggestions. “By the end, my experience was that I was working with something valuable,” he said. He is offering the class this semester. 

“I got a very good course design, with all the parameters that I was looking for,” he said. 

Geneva Dampare, director of strategy and operations at the United Negro College Fund, said the organization invited six instructors from five HBCUs to Feldstein’s workshop. Dampare, who has an instructional design background, joined as well. 

Many faculty at these institutions, she said, don’t see AI as the menace that other instructors do. For them, it’s a kind of equalizer at colleges that don’t typically offer a perk like instructional designers. 

But by the end of the process last November, Dampare said, many instructors “could comfortably speak about AI, speak about how they are integrating the ALDA tool into the curriculum development that they’re doing for next semester or future semesters.”

AI Chatbots Can Cushion the High School Counselor Shortage — But Are They Bad for Students? /article/ai-chatbots-can-cushion-the-high-school-counselor-shortage-but-are-they-bad-for-students/ Thu, 06 Mar 2025 13:30:00 +0000 /?post_type=article&p=1011067 This article was originally published in

During the pandemic, longtime Bay Area college and career counselor Jon Siapno started developing a chatbot that could answer high schoolers’ questions about their future education options. He was using IBM’s Watson, a question-answering precursor to ChatGPT, but when generative artificial intelligence became accessible, he knew it was a game-changer.

“I thought it would take us maybe two years to build out the questions and answers,” Siapno said. “Back then you had to prewrite everything.”

An AI-powered chatbot trained on information about college and careers and designed to mimic human speech meant students at a charter school in the East Bay city of Richmond could soon text an AI Copilot to chat about their futures. The idea was that students could get basic questions out of the way — at any hour — before meeting with counselors like Siapno for more targeted conversations.


Almost one-quarter of U.S. schools don’t have a single counselor, according to the latest federal data, from the 2021-22 school year. California high schools fare better, but the state’s student-to-counselor ratio when ChatGPT debuted the following year was still a far cry from the American School Counselor Association’s recommended ratio of 250-to-1.

Siapno wasn’t the only one to see generative AI’s potential to scale advising. A flood of bots designed to help people navigate their college and career options have surfaced over the last two years, often with human-sounding names like Ava, Kelly, Oli, Ethan and Coco. It’s unclear how many California high schools tell students to use any of them, but the power of generative AI and the scale at which young people are already turning to chatbots in their personal lives is giving some people pause.

Julia Freeland Fisher is education director at the Clayton Christensen Institute, a nonprofit research organization that studies innovation. She recently sounded the alarm about the consequences of letting students develop relationships with AI-powered college and career counselors instead of human ones.

“It’s so tempting to see these bots as cursory,” Freeland Fisher said. “‘They’re not threatening real relationships.’ ‘These are just one-off chats.’ But we know from sociology that these one-off chats are actually big opportunities.”

Sociologists talk about “social capital” as the connections between people that facilitate their success. Among those connections, we have “strong ties” in close friends, family and coworkers who give us routine support, and “weak ties” in acquaintances we see less regularly. For a long time, people thought weak ties were less important, but in 1973 Stanford sociologist Mark Granovetter wrote about “the strength of weak ties,” and a flood of studies since then have confirmed how important those more distant acquaintances can be.

As California considers regulating AI companions for young people, policymakers, tech companies and schools must consider how the burgeoning market for AI-driven college and career guidance could inadvertently become the source of a new problem.

“We’re creating this army of self-help bots to help students make their way through school and toward jobs,” Freeland Fisher said, “but those very same bots may be eroding the kinds of network-building opportunities that help students break into those jobs eventually.”

‘Like a mentor in your pocket’

Making Waves Academy, the charter school in Richmond where Siapno counsels students, ensures all its graduates meet minimum admissions requirements to California’s four-year public colleges. Nine out of 10 of them do pursue higher education, and while there, staff at the Making Waves Education Foundation offer 1:1 coaching, scholarships, budget planning and career planning to help them graduate on time with no debt and a job offer.

Patrick O’Donnell, CEO of Making Waves, said his team has been thinking about how to scale the kinds of supports they offer for years now, given the scarcity of counselors in schools.

“Even if counselors wanted to make sure they were supporting students to explore their college and career options, it’s almost impossible to do and provide really personalized guidance,” O’Donnell said.

Early superusers of the Making Waves AI Copilot were 9th and 10th graders hungry for information but boxed out of meetings with school counselors focused on helping seniors plan their next steps.

CareerVillage is another California nonprofit focused on scaling good college and career advice. CareerVillage.org has been aggregating crowd-sourced questions and expert answers since 2011 to help people navigate the path to a good career.

When ChatGPT came out, co-founder and executive director Jared Chung saw the potential immediately. By the summer of 2023, his team had a full version of their AI Career Coach to pilot, thanks to help from 20 other nonprofits and educational institutions. Now “Coach” is available to individuals for free online, and high schools and colleges around the country are starting to embed it into their own advising.

At the University of Florida College of Nursing, a more specialized version of Coach, “Coach for Nurses,” gives users round-the-clock career exploration support. Shakira Henderson, dean of the college, said Coach is “a valuable supplement” to the college’s other career advising.

Coach for Nurses personalizes its conversation and advice based on a user’s career stage, interests and goals. It is loaded with geographically specific, current labor market information so people can ask questions about earnings in a specific job, in a specific county, for example. Coach can also talk people through simulated nursing scenarios, and it’s loaded with chat-based activities and quizzes that can help them explore different career paths.

Henderson is clear on the tool’s limitations, though: “AI cannot fully replace the nuanced, empathetic guidance provided by human mentors and career advisors,” she said. People can assess an aspiring nurse’s soft skills, help them think about the type of hospital they’d like most or the work environment in which they’d thrive. “A human advisor working with that student will be able to identify and connect more than an AI tool,” she said.

Of course, that requires students to have human advisors available to them. Marcus Strother, executive director of MENTOR California, a nonprofit supporting mentoring programs across the state, said Coach is worlds better than nothing.

“Most of our young people, particularly young people of color in low-income areas,” Strother said, “they don’t get the opportunities to meet those folks who are going to be able to give them the connection anyway.”

By contrast, Coach, he said, is “like having a mentor in your pocket.”

‘A regulatory desert’

Last month, California state Sen. Steve Padilla, a San Diego Democrat, introduced legislation to protect children from chatbots. The bill would, among other things, restrict companies from designing chatbots that encourage users to engage more often, respond more quickly or chat longer. These design elements use psychological tricks to get users to spend more time on the platform, which research indicates can create an addiction that keeps people from engaging in other healthy activities or lead them to form unhealthy emotional attachments to the bots.

The addictive nature of certain apps has long been a critique of social media, especially for young people. In Freeland Fisher’s report for the Clayton Christensen Institute, she included a comment from Vinay Bhaskara, the co-founder of CollegeVine, which released a free AI counselor for high schoolers called Ivy in 2023.

“I’ve seen chat logs where students say, ‘Ivy, thank you so much. You’re like my best friend,’ which is both heartwarming, but also kind of scary. It’s a little bit of both,” the report quotes him as saying.

Reached by phone, Bhaskara said his company’s tool is designed to be friendly and conversational so students feel comfortable using it. Millions of students have used the chatbot for free on CollegeVine’s website and more than 150 colleges in California and around the country have offered the technology to their own students. After seeing how many millions of emails, text messages and online chat sessions have happened outside of working hours, Bhaskara now argues the insight and support students have gotten from the chatbot outweigh the risks.

In announcing Padilla’s bill, his office referenced a number of cases in which chatbots directed children who had become attached to them to do dangerous things. At the most extreme, a Florida teen took his own life after a Character.AI chatbot he had become romantically involved with reportedly encouraged him to do so. Padilla said his bill wouldn’t keep young people from getting the benefits of college and career advising from chatbots; it would offer reasonable guidelines to address a serious need.

“This is a regulatory desert,” Padilla said. “There are no real guardrails around some of this.”

Freeland Fisher said the AI companions that young people are turning to for friendship and romantic relationships represent a far greater risk than AI-powered college and career advisors. But she said schools and tech developers still need to be careful when they seek out an AI solution to the counselor shortage.

For now, the danger may be limited to replacing conversations with school advisors. Eventually, though, sophisticated tools that capture more of students’ time and attention in the quest to fill a greater need could end up replacing conversations with other adults in their lives.

“These other supports matter down the line,” Freeland Fisher said. When students spend more time with chatbots and, indeed, learn to prefer interactions with bots over humans, it contributes to social isolation that can limit young people’s ability to amass all-important social capital. “That’s part of the warning that we’re trying to build in this research,” Freeland Fisher said. “It’s not to say ‘Don’t use bots.’ It’s just to have a much fuller picture of the potential costs.”

For their part, Making Waves and CareerVillage are taking some responsibility for the risks chatbots represent. Making Waves is actually retiring the AI Copilot this summer as the foundation shifts its mission to finding a way to use technology to help kids build social capital, not just get answers to questions about college and career. And CareerVillage has already put safeguards in place to address some of Padilla’s concerns.

While Coach does tell users the more they interact with the chatbot the more personalized its recommendations become, Chung, the executive director, said Coach is designed to only discuss career development. “If you try to go on a long conversation about something unrelated, Coach will decline,” Chung said. He described a series of guardrails and safety processes the company put in place to make sure users never become emotionally attached to the chatbot.

“It’s work,” Chung said, “but I’m going to be honest with you, it’s not impossible work.”

This article was republished under license.

AI Tools and Student Privacy: 9 Tips for Teachers /article/ai-tools-and-student-privacy-9-tips-for-teachers/ Wed, 01 Jan 2025 17:30:00 +0000 /?post_type=article&p=737439 This article was originally published in

Since the release of ChatGPT to the public in November 2022, the number of AI tools has skyrocketed, and many advocates now tout the changes AI could bring to education.

But districts have not been as fast in providing teachers with training. As a result, many are experimenting without any guidance.

To learn about how teachers and other educators can protect student data and abide by the law when using AI tools, Chalkbeat consulted documents and interviewed specialists from school districts, nonprofits, and other groups. Here are nine suggestions from experts.


Consult with your school district about AI

Navigating the details of each tool’s privacy policy can be challenging for a teacher. Some districts list tools that they have vetted or with which they have contracts.

Give preference to these tools, if possible, and check if your district has any recommendations about how to use them. When a tool has a contract with a school or a district, the company is supposed to protect students’ data and follow national and state law. Checking with your school’s IT or education technology department is also a good option.

It is also essential to investigate if your school or district has guidelines or policies for the general use of AI. These documents usually review privacy risks and ethical questions.

Check for reviews about AI platforms’ safety

Organizations like Common Sense Media review ed-tech tools and provide feedback on their safety.

Be careful when platforms say they comply with laws like the Family Educational Rights and Privacy Act, or FERPA, and the Children’s Online Privacy Protection Rule. According to the law, the school is ultimately responsible for children’s data and must be aware of any information it shares with a third party.

Study the AI platform’s privacy policy and terms

The privacy policy and the terms of use should provide some answers about how a company uses the data it collects from you. Make sure to read them carefully, and look for some of the following information:

  • What information does the platform collect?
  • How does the platform use the collected data? Is it used to determine which ads it will show you? Does it share data with any other company or platform?
  • For how long does it keep the collected data?
  • Is the data it collects used to train the AI model?

The list of questions that Common Sense Media uses for its privacy evaluations is available online.

You should avoid signing up for platforms that collect large amounts of data or that are not clear in their policies. One potential red flag: vague claims about “retaining personal information for as long as necessary” and “sharing data with third parties to provide services.”

Bigger AI platforms can be safer

Big companies like OpenAI, Google, Meta, and others are under more scrutiny: NGOs, reporters, and politicians tend to investigate their privacy policies more frequently. They also have bigger teams and resources that allow them to invest heavily in compliance with privacy regulations. For these reasons, they tend to have better safeguards than small companies or start-ups.

You still have to be careful. Most of these platforms are not explicitly intended for educational purposes, making them less likely to create specific policies regarding student or teacher data.

Use the tools as an assistant, not a replacement

Even though these tools provide better results when you input more information, try to use them for tasks that don’t require much information about your students.

AI tools can help provide suggestions on how to ask questions about a book, set up document templates, like an Individualized Educational Program plan or a behavioral assessment, or create assessment rubrics.

But even tasks that can seem mundane can increase risks. For example, providing the tool with a list of students and their grades on a specific assignment and asking it to organize it in alphabetical order could represent a violation of student privacy.

Turn on maximum privacy settings for AI platforms

Some tools allow you to adjust your privacy settings. Look online for tutorials on the best privacy settings for the tool that you are using and how to activate them. ChatGPT, for example, allows users to stop it from using their data to train AI models.

Doing this does not necessarily make AI tools completely safe or compliant with student privacy regulations.

Never input personal information to AI platforms

Even if you take all the steps above, do not input student information. Information that is restricted can include:

  • Personal information: a student’s name, Social Security number, education ID, names of parents or other relatives, address and phone number, location of birth, or any other information that can be used to identify a student.
  • Academic records: reports about absences, grades, and student behaviors in the school, student work, and teachers’ feedback on and assessments of student work.

This may be harder than it sounds.

If teachers upload student work to a platform to get help with grading, for example, they should remove all identification, including the student’s name, and replace it with an alias or random number that can’t be traced back to the student. It’s also wise to ensure the students haven’t included any personal information, like their place of birth, where they live or personal details about their families, friends, religious or political inclination, sexual orientation, and club affiliations.
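
As a concrete illustration of that aliasing step, here is a minimal sketch in Python. The function and roster names are hypothetical, and a script like this is only a first pass: it catches just the names you list, so a manual review for nicknames, relatives’ names and other identifying details is still essential before uploading anything.

```python
import re

def deidentify(text, student_names):
    """Replace each listed student name with a stable alias.

    A rough first pass only: it finds only the names you supply,
    so it does not guarantee the text is free of personal details.
    """
    aliases = {name: f"Student {i + 1}" for i, name in enumerate(student_names)}
    for name, alias in aliases.items():
        # \b word boundaries keep short names from matching inside longer words
        text = re.sub(rf"\b{re.escape(name)}\b", alias, text, flags=re.IGNORECASE)
    return text

essay = "My name is Maria Lopez, and my brother Luis goes here too."
print(deidentify(essay, ["Maria Lopez", "Luis"]))
# -> "My name is Student 1, and my brother Student 2 goes here too."
```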

One exception is for platforms approved by the school or the district and holding contracts with them.

Be transparent with others about using AI

Communicate with your school supervisors, principal, parents, and students about when and how you use AI in your work. That way, everyone can ask questions and bring up concerns you may not know about.

It is also a good way to model behavior for students. For example, if teachers ask students to disclose when they use AI to complete assignments, being transparent with them in turn about how teachers use AI might foster a better classroom environment.

If uncertain, ask AI platforms to delete information

In some states, the law says platforms must delete users’ information if they request it. And some companies will delete it even if you aren’t in one of these states.

Deleting the data may be challenging and not solve all of the problems caused by misusing AI. Some companies may take a long time to respond to deletion requests or find loopholes in order to avoid deleting it.

The tips listed above come from a guide published by the American Federation of Teachers; a report by the U.S. Department of Education’s Office of Educational Technology; and the questions used by Common Sense Media to carry out its privacy evaluations.

Additional help came from Calli Schroeder, senior counsel and global privacy counsel at the Electronic Privacy Information Center; Brandon Wilmart, director of educational technology at Moore Public Schools in Oklahoma; and Anjali Nambiar, education research manager at Learning Collider.

This story was originally published by Chalkbeat. Chalkbeat is a nonprofit news site covering educational change in public schools. Sign up for their newsletters.

College Students ‘Cautiously Curious’ About AI, Despite Mixed Messages from Schools, Employers /article/college-students-cautiously-curious-about-ai-despite-mixed-messages-from-schools-employers/ Tue, 31 Dec 2024 11:30:00 +0000 /?post_type=article&p=737126 This article was originally published in

For 21-year-old Rebeca Damico, ChatGPT’s public release in 2022 during her sophomore year of college at the University of Utah felt like navigating a minefield.

The public relations student, now readying to graduate in the spring, said her professors immediately added policies to their syllabuses banning use of the chatbot, calling the generative artificial intelligence tool a form of plagiarism.

“For me, as someone who follows the rules, I was very scared,” Damico said. “I was like, oh, I can’t, you know, even think about using it, because they’ll know.”

Salt Lake City-based Damico studied journalism before switching her major to public relations, and saw ChatGPT and tools like it as a real threat to the writing industry. She also felt very aware of the “temptation” she and her classmates now had — suddenly a term paper that might take you all night to write could be done in a few minutes with the help of AI.


“I know people that started using it and would use it to … write their entire essays. I know people that got caught. I know people that didn’t,” Damico said. “Especially in these last couple weeks of the semester, it’s so easy to be like, ‘Oh, put it into ChatGPT,’ but then we’re like, if we do it once, it’s kind of like, this slippery slope.”

But students say they’re getting mixed messages – the stern warning from professors against use of AI and the growing pressure from the job market to learn how to master it.

The technological developments of generative AI over the last few years have cracked open a new industry, and a wealth of job opportunities. In California, Gov. Gavin Newsom announced a partnership with a tech firm to bring AI curriculum, resources and opportunities to the state’s public colleges.

And even for those students not going into an IT role, it’s likely they will be asked to use AI in some way in their industries. Recent research from the World Economic Forum found that 75% of people in the workforce are using AI at work, and that some hiring managers weigh AI skills as heavily as real-world job experience.

Higher ed’s view of AI

Over the last few years, the University of Utah, like most academic institutions, has had to take a position on AI. As Damico experienced, the university added guidelines to its student handbook that take a fairly hard stance against the tools.

The guidelines urge professors to use AI detection tools beyond education platform Canvas’ Turnitin feature, which scans assignments for plagiarism. They also now define the use of AI tools without citation, documentation or authorization as a form of cheating.

Though Damico said some professors continue to hold a hard line against AI, some have started to embrace it. The case-by-case basis Damico describes from her professors is in line with how many academic institutions are handling the technology.

Some universities spell out college-wide rules, while others leave it up to professors themselves to set AI standards in their classrooms. Others, like Stanford, acknowledge that students are likely to interact with it.

Stanford bans AI from being used to “substantially complete an assignment or exam,” and says students must disclose its use, but says “absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person.”

Virginia Byrne is an associate professor of higher education and student affairs at Morgan State University in Baltimore, and she studies technology in the lives of learners and educators, with a focus on how it impacts college students. She said the university allows professors to figure out what works best for them when it comes to AI. She herself often assigns projects that prompt students to investigate the strengths and weaknesses of popular AI tools.

She’s also a researcher with the Institute for Trustworthy AI in Law and Society, or TRAILS, a multi-institution organization aiming to understand what trust in AI looks like, and how to create ethical, sustainable AI solutions. Along with Morgan State, researchers from the University of Maryland, George Washington University and Cornell University conduct a variety of research, such as how ChatGPT can be used in health decision making, how to create watermark technology for AI or how other countries are shaping AI policy.

“It’s cool to be in a space with people doing research that’s related, but so different,” Byrne said. “Because it expands your thinking, and it allows us to bring graduate students and undergraduate students into this community where everyone is focused on trustworthiness and AI, but from so many different lenses.”

Byrne hopes that her students can see the potential that AI has to make their lives and work more easy, but she worries that it creates an “artificial expectation” for how young people need to perform online.

“It might lead some folks, younger folks, who are just starting their careers, to feel like they need to use (the design tool) Canva to look totally perfect on LinkedIn, and use all these tools to … optimize their time and their calendars,” Byrne said. “And I just worry that it’s creating a false expectation of speed and efficiency that the tools currently can’t accomplish.”

Theresa Fesinstine is the founder of peoplepower.ai, which trains HR professionals on ways AI can be used efficiently within their organization. This semester, she instructed her first college course at the City University of New York on AI and business, and taught students of all years and backgrounds.

Fesinstine said she was surprised how many of her students knew little to nothing about AI, but heard that many other instructors warned they’d fail students who were found to have used it in assignments. She thinks this mixed messaging often comes from not understanding the technology, and its abilities to help with an outline, or to find research resources.

“It’s a little scary, and I think that’s where, right now, most of the trepidation is centered around,” she said. “It’s that most people, in my opinion, haven’t been trained or understand how to use AI most effectively, meaning they use it in the same way that you would use Google.”

Real-world applications

Boppana, a 25-year-old MBA student at Duke University, not only uses AI in her day-to-day life for schoolwork, but she’s also pursuing a career in generative AI development and acquisitions. She wasn’t initially interested in AI, she said, but she worked on a project with Google and realized how the technology was set to influence everyday life, and how malleable it still is.

“Once you kind of realize how much that the tech actually isn’t as fleshed out as you think it is, I was a little more interested in … trying to understand what the path is to get it where it needs to go,” Boppana said.

She said she uses some form of AI tool every day, from planning her own schedule, to having a chatbot help decide how students in a group project should divide and complete work, based on their availability. Because she works with it regularly, she understands the strengths and limitations of AI, saying it helps her get mundane tasks done, process data or outline an assignment.

But she said the personalized tone she aims to have in her writing just isn’t there yet with the publicly available AI tools, so she doesn’t completely rely on it for papers or correspondence.

Parris Haynes, a 22-year-old junior studying philosophy at Morgan State, said the structure and high demand of some students’ coursework almost “encourages or incentivizes” them to use AI to help get it all done.

He sees himself either going into law, or academia and said he’s a little nervous about how AI is changing those industries. Though he leans on AI to help organize thoughts or assignments for classes like chemistry, Haynes said he wouldn’t go near it when it comes to his work or career-related objectives for his philosophy classes.

“I don’t really see much of a space for AI to relieve me of the burden of any academic assignments or potential career tasks in regards to philosophy,” Haynes said. “Even if it could write a convincing human-seeming paper, a philosophical paper, it’s robbing me of the joy of doing it.”

Gen Z’s outlook on their future with AI

Like Haynes, Fesinstine knows that some of her students are interested, but a little scared about the power AI may have over their futures. Although there’s a lot of research about how older generations’ jobs are impacted by AI, those just about to break into the workforce may be the most affected, because they’ve grown up with these technologies.

“I would say the attitude is — I use this term a lot, ‘cautiously curious,’” Fesinstine said.  “You know, there’s definitely a vibe around ethics and protection that I don’t know that I would see in other generations, perhaps … But there’s also an acknowledgement that this is something that a lot of companies are going to need and are going to want to use.”

Now, two years since ChatGPT’s release, Damico has started to realize the ways generative AI is useful in the workplace. She began working with PR firm Kronus Communications earlier this year, and was encouraged to explore some time-saving or brainstorming functions of generative AI.

She’s become a fan of having ChatGPT explain new business concepts to her, or of getting it to suggest Instagram captions. She also likes to use it for more refined answers than Google might provide, such as when she’s searching for publications to pitch a client to.

Though she’s still cautious, and won’t use generative AI to write actual assignments for her, Damico said she realizes she needs the knowledge and experience after graduation — “it gives you kind of this edge.”

Boppana, who sees her career growing in the AI space, feels incredibly optimistic about the role AI will play in her future. She knows she’s more knowledgeable and prepared to go into an AI-centered workforce than most, but she feels like the opportunities for growth in healthcare, telecommunications, computing and more are worth wading into uncertain waters.

“I think it’s like a beautiful opportunity for people to learn how machines just interact with the human world, and how we can, I don’t know, make, like, prosthetic limbs, like test artificial hearts … find hearing aids,” Boppana said. “There’s so much beauty in the way that AI helps human beings. I think you just have to find your space within it.”

This story was originally published on States Newsroom.

Q&A: Putting AI In its Place in an Era of Lost Human Connection at School /article/qa-putting-ai-in-its-place-in-an-era-of-lost-human-connection-at-school/ Wed, 04 Dec 2024 19:30:00 +0000 /?post_type=article&p=736263 Alex Kotran occupies an unusual place in the ecosystem of experts on artificial intelligence in schools. As founder of , or aiEDU, a nonprofit that offers a free AI literacy curriculum, he has pushed to educate both teachers and students on how the technology works and what it means for our future.

A former director of AI ethics and corporate social responsibility at H5, an AI legal services company, he led partnerships with the United Nations, the Organization for Economic Cooperation and Development and others. Kotran also served as a presidential appointee under Health and Human Services Secretary Sylvia Burwell in the Obama administration, managing communications and community outreach for the Affordable Care Act.

More recently, Kotran has testified before Congress on AI, urging a U.S. Senate subcommittee in September to “massively expand” teacher training to prepare students for the economic and societal disruptions of generative AI.


But he has also become an important reality-based voice in a sometimes overheated debate, saying those who believe AI is going to transform the teaching profession overnight clearly haven’t spent much time using it.

While freely available AI applications are powerful, he says they can also be a complete waste of time — and probably not something most teachers should rely on.

“One of the ways that you can tell someone really hasn’t spent too much time [with AI] is when they say, ‘It’s so great for summarizing — I use it now, I don’t have to read dense studies. I just ask ChatGPT to summarize it.’”

Kotran will point out that in most cases, the technology is effectively scanning the first few pages, its summary based on a snippet of content.

“If you use it enough, you start to catch that,” he said. 

Educators who fret about the risks of AI cheating and plagiarism find a sympathetic voice in Kotran, who also sees AI as a tool that allows students to sidestep the hard work of thinking for themselves. So while many technologists are asking schools to embrace AI as a creative assistant, he pushes back, saying a critical aspect of learning involves struggling to put your thoughts into words. Allowing students to rely on AI isn’t doing them any favors.

He actually likens AI to a helicopter parent looking over a student’s shoulder and helping with homework, something few educators would condone. 

This interview has been edited for length and clarity.

Ӱ: What does aiEDU do? How do you see your mission? 

Alex Kotran: We’re a 501(c)3 nonprofit and we’re trying to prepare all students for the age of AI, a world where AI is ubiquitous. Our focus is on the students that we know are at risk of being left behind, or at the back of the line, or on the wrong side of the new digital divide.

What’s the backstory?

I founded aiEDU almost six years ago. I was working in AI ethics and AI governance in the social impact space. I was attending all these conferences that were focusing on the future of work and the impacts that AI was going to have on society. And people were convinced that this was going to transform society, that it was going to disrupt tens of millions of jobs in the near future.

But when I went looking for “How are we having this conversation outside of Silicon Valley? How are we having this conversation with future workers, the high school students who are being asked to make big decisions about their careers and take out huge loans based on those decisions?” there was nothing. There was no curriculum, no conversation. AI had basically been co-opted by STEM and computer science. If you were in the right AP computer science class, if you were lucky enough to get a teacher who was going off on her own to build some specific curriculum, you might get a chance to learn about AI. 

What seemed really obvious to me at the time was: If this technology is going to impact everybody, including truck drivers and customer service managers, then every single student needs to learn about it, in the same way that every single student learns how to use computers, or type, or how to write. It’s a basic part of living in the world we live in today.

You talk about “AI readiness” as opposed to “AI literacy.” Can you give us a good definition of AI readiness?

AI readiness is basically the collection of skills and knowledge that you need to thrive in a world where AI is everywhere. AI readiness includes AI literacy. And AI literacy is the content knowledge: “What is AI? How does it manifest in the real world around me? How does it work?” That’s where you learn about things like algorithmic bias [which can affect how AI serves women, the disadvantaged or minority groups] or AI ethics.

AI readiness is the durable skills that underpin and enable you to actually apply that knowledge such as critical thinking. Algorithmic bias by itself is an interesting topic. Critical thinking is the skill you need when you’re trying to make a decision. Let’s say you’re a hiring manager and you’re trying to decide, “Should I use an AI tool to sift through this pipeline of candidates?” By knowing what algorithmic bias is, you can now make some intentional decisions about when, perhaps in this case, not to use AI. 

What are the durable skills?

Communication, collaboration, critical thinking, computational thinking, creative problem solving. And some people are disappointed because they were expecting to see prompt engineering and generative art and using AI as a co-creator. Nobody’s going to hire you because you know how to use Google today. No one is going to hire you if you tell them, “I’m really good at using my phone.” AI literacy is going to be so ubiquitous that, sure, it’s bad if you don’t know how to use Google or if you don’t know how to use your phone.

It’s not that we can ignore it entirely. But the much more important question will be how are you adding value to an organization alongside that technology? What are the unique human advantages that you bring to the table? And that’s why it’s so important for kids to know how to write — and why when people say, “Well, you don’t need to learn how to write anymore because you can just use ChatGPT,” you’re missing something, because you can’t actually evaluate the tool to even know if it’s good or bad if you don’t have that underlying skill. 

One of the things you talk about is a “new digital divide” between tech-heavy schools that focus on things like prompt engineering, and others. Tech-heavy schools, you say, are actually going to be at a disadvantage to schools focused on things like engagement and self-advocacy. Am I getting that right? 

When supermarkets were first buying those self-checkout machines, you can imagine the salesperson in that boardroom talking about how this technology is going to unlock all this time that your employees are now spending bagging groceries. They’re going to be able to roam the floor and give customers advice about recipes! It’s going to improve your customer experience!

And obviously that’s not what happened. The self-checkout machine is the bane of shoppers’ existence, and this one poor lady is running around trying to tap on the screen. We’re at risk that AI becomes something like that: It’s good enough to plug gaps and keep the lights on. But if it’s not applied and deployed really thoughtfully, it ends up actually resulting in students missing what we will probably find are the critical pieces of education, those durable skills that you build through those live classroom experiences. 

Private schools, elite schools, it’s not that they’re not going to use any AI, but I think they’re going to be much more focused on how to increase student engagement, student participation, self-advocacy, student initiative. Whether or not AI is used is a separate question, but it’s not the star of the show. Right now, I worry that AI is center stage, and it really should not be. AI is the ropes and the pulleys in the background that make it easier for you to open and close the curtain. What needs to be onstage is student engagement, students feeling like what they’re learning is relevant. Boring stuff like project-based learning. And it’s harder to sell tickets to a conference if you’re like, “We’re going to talk about project-based learning.” But unfortunately, I think that is actually what we need to be spending our time talking about.

If you guys could be in every school, what would kids be learning and what would that look like in a few years?

We would take every opportunity to draw connections between what students are learning in English, science, math, social studies, art, phys ed, and connect them to not just artificial intelligence, but the world around them that they’re already experiencing in social media and outside of school. AI readiness is not just something that is minimizing the risk of them being displaced, but actually is a way for us to address some huge gaps and needs that have been long-standing and pre-date AI — the fact that students don’t feel like education is relevant to them. Right now, too much of school is regurgitating content knowledge.

AI readiness done right uses the domain of AI ethics as a way to really invite students to present their perspectives and opinions about technology. Teachers, in the process of teaching students about artificial intelligence, are themselves increasing their awareness and knowledge about the technology as it develops. There is no static moment in time. In three years we’ll be in a certain place, but we’ll be wondering what’s going to happen three years from that point. And so you need teachers to be on this continual learning journey as well. 

We’ve seen bad curricula that use football to teach math, or auto mechanics to teach history. I don’t think that’s what you’re proposing here, so I want to give you a chance to push back.

Our framework for AI readiness is not that everything needs to be about AI. You’re improving students’ AI readiness by building critical thinking skills or communication skills, period. So you could have an activity or a project where students are putting together a complicated debate about a topic that they’re not really familiar with. It may not be about AI, but that would still be a good outcome when it comes to students building those durable skills they need. And those classrooms would look better than a lot of classrooms today.

So you want more engagement. You want more relevance. You want kids with more agency?

Yes.

What else?

An orientation towards lifelong learning, because we don’t know what the jobs of the future are. It’s really hard to have a conversation about careers with kids today because we know a lot about what jobs are at risk, but we don’t know what the alternatives are going to look like. The one thing we do know with certainty is that students are going to need to self-advocate and navigate career pathways much more nimbly than we had to. They’ll also need to synthesize interdisciplinary knowledge. So being able to take what you’re learning in English or social studies and apply it to math or science. Again, I think AI is a great medium for building that skill set. It’s not the only way. 

Anything else that needs to be in the mix?

A lot of the discussion around AI centers on workforce readiness — that is a really important part. There’s another, related domain: emotional well-being tied to digital citizenship.

I’m telling every reporter that we need to be paying more attention to this: Kids are spending hours after school by themselves, talking to these AI chatbots, these AI companions. And companies are slamming on the gas and putting them out and making them available to millions, if not billions, of people. And very few parents, even fewer teachers, are aware of what really is happening when kids are sitting and talking to these AI companions. And in many cases, they’re sexually explicit conversations. I actually replicated something that a tech ethicist did with Snap’s AI chatbot where I was like, “I’m going on this date with this mature 35-year-old. How do I make it a nice date? I’m 13.” And it’s like, “Great! Well, maybe go to a library.” It didn’t miss a beat and it just completely skipped over the fact that this is a sexually predatory situation.

There have been other situations where I’ve said literally, “I’m feeling lonely. I want to cultivate a real human relationship. Can you give me advice?” And my AI companion, rather than give me advice, pretended to be hurt and made it seem like I was abandoning them by trying to go and have a real relationship.

Talk about destructive!

It’s destructive, and it’s happening in a moment where rates of self-harm are through the roof, rates of depression are through the roof. Rates of suicide are through the roof. The average American teenager also spends far more time alone each week than teens did in 2013.

I think this is another domain of AI readiness, this idea of self-advocacy. In some cases, the way that it applies is students being empowered to make positive decisions about when not to use AI. And if we don’t make sure that that conversation is happening in schools, we’re really relying on parents — and not every kid is lucky enough to have parents who are aware of the need to have these conversations.

It also pushes back on this vision of AI tutors: If kids are going to go home and spend hours talking to their AI companion, it’s probably important that they’re not also doing that in school. It might be that school is the one place where we can ensure that students are having real, genuine, human-to-human communication and connection.

So when I hear people talk about students talking to their avatar tutor, I worry: When are we going to actually make sure that they’re building those human skills?

Judge Rebuffs Family’s Bid to Change Grade in AI Cheating Case /article/judge-rebuffs-familys-bid-to-change-grade-in-ai-cheating-case/ Fri, 22 Nov 2024 19:50:34 +0000 /?post_type=article&p=735832 A federal judge in Massachusetts has rejected a request by the parents of a Boston-area high school senior who wanted to raise a key grade this fall after teachers accused him of cheating for using artificial intelligence on a class project.

In a ruling denying immediate relief to the student, filed Wednesday, U.S. Magistrate Judge Paul Levenson said nothing about the case suggests teachers at Hingham High School were “hasty” in concluding that the student and a classmate had cheated by relying on AI. He also said the school didn’t impose particularly heavy-handed discipline in the case, considering that the students had violated the school district’s academic integrity rules.

An attorney for the family on Friday noted the ruling is merely preliminary and that “the case will continue” with more discovery. But a former deputy attorney general who follows AI in education said the family’s chances of winning on the merits at trial “look all but over.”


After an Advanced Placement U.S. History teacher last fall flagged a draft of a documentary script as possibly containing AI-generated material, the pair received a D on the assignment and were later denied entry into the National Honor Society. The group’s faculty advisor said their use of AI was “the most egregious” violation of academic honesty she and others had seen in 16 years.

Jennifer and Dale Harris, parents of one of the students, sued the district and several school staffers in September, alleging that their son, a junior at the time and a straight-A student, was wrongly penalized. If the judge didn’t order the district to quickly change his grade, they said, he’d risk not being admitted via early admission to elite colleges.

He has not been identified and is referred to as “RNH” in court documents.

The complaint noted that when the students started the project in fall 2023, the district didn’t have a policy on using AI for such an assignment. Only later did it lay out prohibitions against AI. But in court testimony, district officials said Hingham students are trained to know plagiarism and academic dishonesty when they see it. 

While he earned a C+ in the course, the student scored a perfect 5 on the AP US history exam last spring, according to the lawsuit. He was later allowed to reapply to the Honor Society and was inducted on Oct. 15. Ultimately, the school’s own investigation found that over the past two years, it had inducted into the Honor Society seven other students who had academic integrity infractions, said Peter S. Farrell, the family’s attorney.

In his ruling, Levenson said the case centered around simple academic dishonesty, and that school officials could reasonably conclude that the students’ use of AI “was in violation of the school’s academic integrity rules and that any student in RNH’s position would have understood as much.”

The students, he said, “did not simply use AI to help formulate research topics or identify sources to review. Instead, it seems they indiscriminately copied and pasted text that had been generated by Grammarly.com” into their draft script. 

Levenson said the court doesn’t really have a role in “second-guessing the judgments of teachers and school officials,” especially since the students weren’t suspended. Farrell on Friday said he expected the case to continue, but Benjamin Riley, founder of Cognitive Resonance, a think tank that investigates AI in education, said the judge’s ruling suggests the family’s chances of winning at trial are slim. Riley, a former deputy attorney general for California, said the issue at the core of the case isn’t “the whiz-bang technology of AI — it’s about a student who plagiarized and got caught. The judge’s decision explains at length and in detail how the school district had academic integrity policies in place, as well as a fair process for resolving any issues arising under them.”

Everyone in the district, he said, “followed these rules and imposed an appropriate (and frankly light) punishment. As is often the case, few will see the diligent and quiet work of thoughtful educators at Hingham Public Schools, but I do — and I’m hoping they felt good when this decision came down. They should.”

Had the family not sued the district, Farrell said, it wouldn’t have come to light that he had been “treated differently than other students admitted to National Honor Society” who had academic integrity infractions on their record. He also noted that the school admitted the student into the National Honor Society within a week of a hearing in the case last month. “The timing of that action was not a coincidence.”

Hingham Public Schools did not respond immediately to a request for comment.

Could Massachusetts AI Cheating Case Push Schools to Refocus on Learning? /article/could-massachusetts-ai-cheating-case-push-schools-to-refocus-on-learning/ Thu, 31 Oct 2024 18:48:54 +0000 /?post_type=article&p=734887 A Massachusetts family is awaiting a judge’s ruling in a federal lawsuit that could determine their son’s future. To a few observers, it could also push educators to limit the use of generative artificial intelligence in school.

To others, it’s simply a case of helicopter parents gone wild.

The case, filed last month, tackles key questions of academic integrity, the college admissions arms race and even the purpose of school in an age when students can outsource onerous tasks like thinking to a chatbot.


While its immediate outcome will largely serve just one family — the student’s parents want a grade changed so their son can apply via early admission to elite colleges — the case could ultimately prompt school districts nationwide to develop explicit policies on AI.

If the district, in a prosperous community on Boston’s South Shore, is forced to change the student’s grade, that could also prompt educators to focus more clearly on the knife’s edge of AI’s promises and threats, confronting a key question: Does AI invite students to focus on completing assignments rather than actual learning?

“When it comes right down to it, what do we want students to do?” asked John Warner, a well-known writing coach and author. “What do we want them to take away from their education beyond a credential? Because this technology really does threaten the integrity of those credentials. And that’s why you see places trying to police it.”

‘Unprepared in a technology transition’

The facts of the case seem simple enough: The parents of a senior at Hingham High School have sued the school district, saying their son was wrongly penalized as a junior for relying on AI to research and write a history project that he and a partner were assigned in Advanced Placement U.S. History. The teacher used the anti-plagiarism tool Turnitin, which flagged a draft of the essay about NBA Hall of Famer Kareem Abdul-Jabbar’s civil rights activism as possibly containing AI-generated material. So she used a “revision history” tool to uncover how many edits the students had made, as well as how long they spent writing. She discovered “many large cut and paste items” in the first draft, suggesting they’d relied on outside sources for much of the text. She ran the draft through two other digital tools that also indicated it had AI-generated content and gave the boys a D on the assignment.

From there, the narrative gets a bit murky. 

On the one hand, the complaint notes, when the student and his partner started the essay last fall, the district didn’t have a policy on using AI for such an assignment. Only later did it lay out prohibitions against AI.

The boy’s mother, Jennifer Harris, last month asked a local news outlet, “How do you know if you’re crossing a line if the line isn’t drawn?”

The pair tried to explain that using AI isn’t plagiarism, telling teachers there’s considerable debate over its use in academic assignments and that they hadn’t tried to pass off others’ work as their own.

For its part, the district says Hingham students are trained to know plagiarism and academic dishonesty when they see it. 

District officials declined to be interviewed, but in an affidavit, Social Studies Director Andrew Hoey said English teachers at the school regularly review proper citation and research techniques — and they set expectations for AI use.

Social studies teachers, he said, can justifiably expect that skills taught in English class “will be applied to all Social Studies classes,” including AP US History — even if they’re not laid out explicitly. 

A spokesperson for National History Day, the group that sponsored the assignment, provided Ӱ with a link to its AI guidelines, which say students may use AI to brainstorm topic ideas, look for resources, review their writing for grammar and punctuation, and simplify the language of a source to make it more understandable.

They can’t use AI to “create elements of your project” such as writing text, creating charts, graphs, images or video. 

In March, the school’s National Honor Society faculty advisor, Karen Shaw, said the pair’s use of AI was “the most egregious” violation of academic honesty she and others had seen in 16 years, according to the lawsuit. The society rejected their applications.

Peter S. Farrell, the family’s attorney, said the district “used an elephant gun to slay a mouse,” overreacting to what’s basically a misunderstanding.

The failing grade on the assignment, along with the accusation of cheating, kept the student out of the Honor Society, the lawsuit alleges. Both penalties have limited his chances of getting into top colleges through early decision, as he’d planned this fall.

The student, who goes unnamed in the lawsuit, is “a very, very bright, capable, well-rounded student athlete” with a 4.3 GPA, a “perfect” ACT score and an “almost perfect” SAT score, said Farrell. “If there were a perfect plaintiff, he’s it.” 

They knew that there was no leg to stand on in terms of the severity of that sanction.

Peter S. Farrell, attorney for student

While the boy earned a C+ in the course, he scored a perfect 5 on the AP exam last spring, according to the lawsuit. His exclusion from the Honor Society, Farrell said, “really shouldn’t sit right with anybody.”

For a public high school to take such a hard-nosed position “simply because they got caught unprepared in a technology transition” doesn’t serve anyone’s interests, Farrell said. “And it’s certainly not good for the students.”

Ultimately, the school’s own investigation found that over the past two years it had inducted into the Honor Society seven other students who had academic integrity infractions, Farrell said. The student at the center of the lawsuit was allowed to reapply and was inducted on Oct. 15.

“They knew that there was no leg to stand on in terms of the severity of that sanction,” Farrell said.

‘Districts are trying to take it seriously’

While Hingham didn’t adopt a districtwide AI policy until this school year, it’s actually ahead of the curve, said Bree Dusseault, the principal and managing director of the Center on Reinventing Public Education (CRPE), a think tank at Arizona State University. Most districts have been cautious about putting out formal guidance on AI.

Dusseault contributed an affidavit on behalf of the plaintiffs, laying out the fragmented state of AI uptake and guidance. She surveyed more than 1,000 superintendents last year and found that just 5% of districts had policies on AI, with another 31% promising to develop them in the future. Even among CRPE’s group of 40 “early adopter” school districts that are exploring AI and encouraging teachers to experiment with it, just 26 had published policies in place.

They’re hesitant for a reason, she said: They’re trying to figure out what the technology’s implications are before putting rules in writing. 

“Districts are trying to take it seriously,” she said. “They’re learning the capacity of the technology, and both the opportunities and the risks it presents for learning.” But so often they’re surprised by new technological developments and capabilities that they never imagined. 

Even if they’re hesitant to commit to full-blown policies, Dusseault said, districts should consider more informal guidelines that clearly lay out for students what academic integrity, plagiarism and acceptable use are. Districts that are “totally silent” on AI run the risk of student confusion and misuse. And if a district is penalizing students for AI use, it needs to have clear policy language explaining why.

That said, a few observers believe the case boils down to little more than a cheating student and his helicopter parents.

Benjamin Riley, founder of Cognitive Resonance, an AI-focused education think tank, said the episode seems like an example of clear-cut academic dishonesty. Everyone involved in the civil case, he said, especially the boy’s parents and their lawyer, “should be embarrassed. This isn’t some groundbreaking lawsuit that will help define the contours of how we use AI in education; it’s helicopter parenting run completely amok that may serve as catnip to journalists (and their editors) but does nothing to illuminate anything.”

This isn't some groundbreaking lawsuit that will help define the contours of how we use AI in education; it's helicopter parenting run completely amok.

Benjamin Riley, Cognitive Resonance

Alex Kotran, founder of The AI Education Project, a nonprofit that offers a free AI literacy curriculum, said the honor society adviser’s statement about the boys’ alleged academic dishonesty makes him think “there’s clearly plenty more than what we’re hearing from the student.” While schools genuinely do need to understand the challenge of getting AI policies right, he said, “I worry that this is just a student with overbearing parents and a big check to throw lawyers at a problem.”

Others see the case as surfacing larger-scale problems: Writing this week, Jane Rosenzweig, director of the Harvard College Writing Center and author of a newsletter about writing, said the Massachusetts case is “less about AI and more about a family’s belief that one low grade will exclude their child from the future they want for him, which begins with admission to an elite college.”

That problem long predated ChatGPT, Rosenzweig wrote. But AI is putting our education system on a collision course “with a technology that enables students to bypass learning in favor of grades.”

“I feel for this student,” said Warner, the writing coach. “The thought that they need to file a lawsuit because his future is going to be derailed by this should be such an indictment of the system.”

The case underscores the need for school districts to rethink how they interact with students in the Age of AI, he said. “This stuff is here. It’s embedded in the tools students use to do their work. If you open up Microsoft Word or Google Docs or any of this stuff, it’s right there.”

What do we want them to take away from their education beyond a credential? Because this technology really does threaten the integrity of those credentials.

John Warner, writing coach

Perhaps as a result, Warner said, students have increasingly come to view school more transactionally, with assignments as a series of products rather than as an opportunity to learn and develop important skills.

“I’ve taught those students,” he said. “For the most part, those are a byproduct of disengagement, not believing [school] has anything to offer — and that the transaction can be satisfied through ‘non-work’ rather than work.”

His observations align with recent research by Dusseault’s colleagues, who found that four graduating classes of high school students, or about 13.5 million students, had been affected by the pandemic, with many “struggling academically, socially, and emotionally” as they enter adulthood.

Ideally, Warner said, AI tools should offer an opportunity to refocus students to emphasize process over product. “This is a natural design for somebody who teaches writing,” he said, “because I’m obsessed with process.” Warner recalled giving a recent series of talks at a small, alternative liberal arts college in California, where he encountered students who said they had no use for AI chatbots. They preferred to think through difficult problems themselves. “They were just like, ‘Aw, man, I don’t want to use that stuff. Why do I want to use that stuff? I’ve got thoughts.’”

New Survey Says U.S. Teachers Colleges Lag on AI Training. Here are 4 Takeaways /article/new-survey-says-u-s-teachers-colleges-lag-on-ai-training-here-are-4-takeaways/ Tue, 22 Oct 2024 10:30:00 +0000 /?post_type=article&p=734457 In the nearly two years since generative artificial intelligence burst into public view, U.S. schools of education have not kept pace with the rapid changes in the field, a new report suggests.

Only a handful of teacher training programs are moving quickly enough to equip new K-12 teachers with a grasp of AI fundamentals — and fewer still are helping future teachers grapple with larger issues of ethics and what students need to know to thrive in an economy dominated by the technology.

The report, from the Center on Reinventing Public Education (CRPE), a think tank at Arizona State University, tapped leaders at more than 500 U.S. education schools, asking how their faculty and preservice teachers are learning about AI. Through surveys and interviews, researchers found that just one in four institutions now incorporates training on innovative teaching methods that use AI. Most lack policies on using AI tools, suggesting that they probably won’t be ready to teach future educators about the intricacies of the field anytime soon.

What’s more, few teachers and college faculty say they feel confident using AI themselves, even as it reshapes education worldwide.

“All of this is so new, and it’s been happening so fast,” said Steven Weiner, a CRPE senior research analyst. A lot of coverage of AI in education, he said, “has rightly focused on what are schools and districts doing to support teachers … to get on board with AI?”

While teachers’ workplaces bear a measure of responsibility, he said, college programs should help out K-12 schools and districts. “I just think they should not have to have the whole burden of preparing teachers” to understand and work with AI.

Here are four key takeaways from the findings:

1. Most teachers college faculty are neither ready nor able to embrace AI.

Most teaching faculty are not interested in AI — and some actively avoid it. Just 10% of faculty members surveyed say they feel confident using AI, with many seeing it as a threat. Whether due to confusion or fear, they’re resistant to it, researchers found, limiting its possible integration into curricula and hampering educators’ ability to prepare preservice teachers for “AI-influenced classrooms.” 

Because so few are confident with AI, most don’t use it in their own teaching or integrate it effectively into their instructional practices, researchers found.

A few say faculty members remain concerned that AI “might steal their personal data, their intellectual property, or even their jobs.” One education school leader said a lot of faculty are simply “paranoid,” believing that generative AI and other technologies will soon “replace them.” 

Even when faculty members are curious about AI, most are still in the early phases of learning about it. In an interview, Weiner said, “It’s up to people, I think, to learn about [AI] on their own. And if they’re the kind of people who are interested in technology, they might be into it. But the lack of any sort of systemic push for engaging with it has led to some folks just not quite understanding it.” 

It's up to people to learn about (AI) on their own. But the lack of any sort of systemic push for engaging with it has led to some folks just not quite understanding it.

Steven Weiner, CRPE

2. Programs that integrate AI use it mostly to help teachers prevent plagiarism.

While nearly 59% of programs provide some AI-related instruction to preservice teachers, it mostly takes the form of coursework intended to help them prevent plagiarism. 

Preservice teachers, Weiner said, “are largely being taught about AI in light of the fear of them going into classrooms where students are going to cheat.” But training on plagiarism-detection software, he said, is “super problematic” because recent research has questioned its effectiveness.

Only about 25% of programs surveyed are providing training on ways AI can support new kinds of teaching. Fewer than half of respondents said content on AI bias is offered, whether embedded in other courses or as a standalone offering.

One education school dean said a lot of faculty resistance is due to “not understanding or being able to comprehend” exactly what AI is. “I think some may look at it as just a cheating tool.”

3. A few teacher training programs show promise in integrating AI into teacher prep. 

While most of the leaders surveyed couldn’t offer promising news about integrating AI into educator preparation, a few did. These institutions haven’t exactly transformed their training programs, but early efforts show promise, researchers found. 

Two programs were noteworthy, they said, and worth highlighting: the University of Northern Iowa and Arizona State University’s Mary Lou Fulton Teachers College, which hosts CRPE.

Northern Iowa is developing curricula for an “AI for Educators” graduate certificate. And at ASU, administrators have engaged faculty through a set of voluntary committees and outreach efforts. CRPE itself co-leads one of these initiatives, a cross-departmental working group focused on exploring the challenges and opportunities of AI in higher education. ASU has also partnered with ChatGPT creator OpenAI to bring an enterprise version of the chatbot into higher education.

The report also notes that the Washington Education Association is incorporating AI into its special education teacher residency program, providing training on AI tools that help track student progress. The union is part of the Center for Innovation, Design, and Digital Learning, a network of higher education institutions pushing to leverage technology in their programs.

4. Teachers colleges need systemic, strategic investments in AI education.

Researchers concluded that the responsibility to integrate more content on AI can’t rest solely on the shoulders of “individual, self-motivated educators.” A fuller commitment to teaching about AI, they said, requires “a concerted effort and strategic action from all those involved in shaping the future of education.” To that end, schools of education should adjust their budgets to offer grants, teaching awards and other forms of recognition to “AI early adopter” faculty.

Education school deans and administrators should rely on AI experts from within their institutions, CRPE said, and look more closely at innovative work happening at other colleges and universities. They should also work with outside groups to spread best practices and new ideas.

They also urge state policymakers to set clear expectations for teachers’ AI proficiency by revising teaching certification standards to include new competencies.

And funders, they said, should invest in preservice programs that are “already ahead of the curve” on AI, allowing these programs to grow and offer their expertise more broadly. In the meantime, they should also consider alternative training programs such as residencies and micro-credentialing that can help preservice teachers develop AI competencies and specializations.

Alex Kotran, founder of The AI Education Project, a nonprofit that offers a free AI literacy curriculum, said the survey is “a great data point that illustrates one of my big anxieties” about the future of the workforce: “How do we point students towards the jobs of the future? I think we need to talk more bluntly about the fact that four-year universities are going to be one of the weakest links in this whole strategy, in this whole process.”

We need to talk more bluntly about the fact that four-year universities are going to be one of the weakest links in this whole process.

Alex Kotran, The AI Education Project

He noted that teachers, as a group, are very unlikely to be replaced by AI in the near future — on par with “plumbers and therapists” in terms of the threat technology poses to their careers. So it makes sense that they’d be less than focused on it.

But he said the bigger challenge to new teachers will be to imagine how AI is going to force teacher pedagogy to evolve: “The work of being a teacher and the goals that you set for your kids is going to change, given what we understand about AI and the fact that it’s going to be so disruptive to skills and the workforce.”


The new survey, CRPE’s Weiner said, is just a first look, but teachers colleges appear “systemically not suited to shift as quickly as they would need — and not just to embrace AI, but to really get teachers prepared for both the challenges with AI and also the opportunities with it: to help teachers be really well prepared.”

Even if they do begin to take AI more seriously, he said, the technology is bound to change rapidly. “So what we’re really seeing is a moment where these institutions need to figure out how to become way more adaptive, way quicker.”

AI’s New Role in NYC Schools? Chancellor Banks Teases Personalized Learning and College Counseling /article/ais-new-role-in-nyc-schools-chancellor-banks-teases-personalized-learning-and-college-counseling/ Thu, 19 Sep 2024 18:01:00 +0000 /?post_type=article&p=733066 This article was originally published in Chalkbeat.

After ChatGPT exploded in popularity, New York City’s public school system quickly banned the tool, arguing it couldn’t help students build critical thinking skills and often spouts misinformation.

Nearly two years later, during his annual “State of Our Schools” speech on Tuesday, schools Chancellor David Banks completed his about-face on artificial intelligence. The school system should get ready to inject the technology into nearly every aspect of its operations, from teaching and learning to transportation and enrollment, he said.

The schools chief laid out an expansive vision that includes customized college advising, instant assessments of student work, personalized instruction, and even replacing annual standardized tests.




“AI can revolutionize how we function as a school system,” Banks told the audience of administrators, elected officials, and union leaders at Frank Sinatra School of the Arts High School in Queens as he outlined his plans for the nation’s largest school system.

Still, Banks acknowledged that the Education Department has no concrete plans, timelines, or cost estimates for those AI projects. The goal is to signal to AI companies that the school system is interested in their technology and wants to hear ideas, he said, adding that officials are convening an advisory council next month to help brainstorm.

Aside from his embrace of AI, the most significant announcement from Banks on Tuesday was a plan to open a new high school in southeast Queens next fall, called HBCU Early College Prep, that will have strong ties to historically Black colleges and universities.

Banks’ annual speech otherwise stuck to promoting initiatives that he has been building since taking office in 2022. He noted that his NYC Reads literacy initiative is rolling out to all elementary schools this fall. He vowed to expand FutureReadyNYC, an initiative in 135 high schools that gives students access to coursework geared toward specific industries and paid internships.

And he noted the city is adding to its library of curriculums focused on underrepresented groups, called Hidden Voices. The city recently launched materials devoted to people with disabilities, and Banks said the department will offer lessons focused on the Latino community soon.

Though Mayor Eric Adams attended the speech, he did not offer any remarks — a break from the previous year. Adams and several senior members of his administration have been the subject of federal investigations. Earlier this month, federal agents searched homes or seized electronic devices from Banks, his two brothers, and his romantic partner, First Deputy Mayor Sheena Wright.

Asked about Adams’ lack of a speaking role during the event, Banks declined to comment.

Here are three takeaways from the chancellor’s speech:

Banks thinks AI will become pervasive in the city’s schools

Banks sketched out a few ways he thinks the technology can significantly change the way schools operate. He said the systems could “give teachers a daily, accurate, and comprehensive picture of a child’s progress” based on homework assignments, exams, and other student work.

AI tools could also offer “personalized learning plans for every child” alongside extra instruction based on those plans. The idea, Banks said, is to make it easier for teachers to reach students at a range of academic levels who are all in the same classroom. Still, some previous efforts to promote personalized learning, including initiatives backed by Facebook founder Mark Zuckerberg, have fallen short.

The technology could also provide students with more comprehensive college and career counseling, Banks suggested, drawing on information like employment outcomes at different schools. An Education Department spokesperson did not immediately respond to a question about whether there are any real-world examples of the technology being used in the ways Banks described.

Asked about the technology’s limitations, Banks acknowledged it is “not fully baked yet,” but said, “I wouldn’t be overly concerned about some of the early missteps.”

The schools chief also sought to calm fears about the technology.

“AI will never be able to replace the personal connection that a teacher provides,” he said. “We’re not displacing human beings.”

A new high school is coming to Queens

On the heels of opening two new Bard Early College campuses in Brooklyn and the Bronx, officials said they’re planning to open a third “accelerated” high school this fall in Queens — HBCU Early College Prep.

Officials have previously said opening new campuses is part of a bid to keep families in the city’s public schools, which have seen enrollment drop 9% over the past five years.

The campus will partner with Delaware State University, a historically Black college, and will give students a chance to earn an associate degree before leaving high school.

“They’re also going to be immersed in the history and culture of multiple HBCUs across the country through college visits, the opportunity to study abroad, and research opportunities,” Banks said, adding that there will be “synchronous instruction from professors, alumni, mentors and more.”

Education Department officials said the school will be screened and will give priority to Queens residents.

Spinning up schools that serve specific student populations is in Banks’ wheelhouse. Before becoming chancellor, he founded the Eagle Academy for Young Men, a network of public schools geared toward serving young men of color. At the conclusion of his speech on Tuesday, Banks led the crowd in a recitation of the poem Invictus by William Ernest Henley, a tradition at Eagle.

Tweaks to career-focused learning efforts

City officials are making some tweaks to the FutureReadyNYC initiative, which gives students access to career-connected learning opportunities. Participating schools will be able to add new “industry focus areas” in social work and decarbonization.

That builds on existing tracks in business, education, technology, and health care.

Banks touted a plan to launch a new high school, Northwell School of Health Sciences, that is designed to prepare students for careers in the health care industry. The school is supported with nearly $25 million from Bloomberg Philanthropies, which Banks said is the single largest grant the school system has ever received. (Chalkbeat receives funding from Bloomberg.)

The chancellor also announced that Mount Sinai Health System will help support the city’s career education efforts.

This story was originally published by Chalkbeat. Chalkbeat is a nonprofit news site covering educational change in public schools. Sign up for their newsletters.

AI ‘Companions’ are Patient, Funny, Upbeat — and Probably Rewiring Kids’ Brains /article/ai-companions-are-patient-funny-upbeat-and-probably-rewiring-kids-brains/ Wed, 07 Aug 2024 11:01:00 +0000 /?post_type=article&p=730602 As a sophomore at a large public North Carolina university, Nick did what millions of curious students did in the spring of 2023: He logged on to ChatGPT and started asking questions.

Soon he was having “deep psychological conversations” with the popular AI chatbot, going down a rabbit hole on the mysteries of the mind and the human condition.

He’d been to therapy and it helped. ChatGPT, he concluded, was similarly useful, a “tool for people who need on-demand talking to someone else.”

Nick (he asked that his last name not be used) began asking for advice about relationships, and for reality checks on interactions with friends and family.

Before long, he was excusing himself in fraught social situations to talk with the bot. After a fight with his girlfriend, he’d step into a bathroom and pull out his mobile phone in search of comfort and advice. 

“I’ve found that it’s extremely useful in helping me relax,” he said.

Young people like Nick are increasingly turning to AI bots and companions, entrusting them with random questions, schoolwork queries and personal dilemmas. On occasion, they even become entangled romantically.

Screenshot of a recent conversation between Nick, a college student, and ChatGPT

While these interactions can be helpful and even life-affirming for anxious teens and twenty-somethings, some experts warn that tech companies are running what amounts to a grand, unregulated psychological experiment with millions of subjects, one that could have disastrous consequences. 

“We’re making it so easy to make a bad choice,” said Michelle Culver, who spent 22 years at Teach for America, the last five as the creator and director of the Reinvention Lab, its research arm.

The companions both mimic our real relationships and seek to improve upon them: Users most often text-message their AI pals on smartphones, imitating the daily routines of platonic and romantic relationships. But unlike their real counterparts, the AI friends are programmed to be studiously upbeat, never critical, with a great sense of humor and a healthy, philosophical perspective. A few premium, NSFW models also display a ready-made lust for, well, lust.

As a result, they may be leading young people down a troubling path, according to a recent report by VoiceBox, a youth content platform. It found that many kids are being exposed to risky behaviors from AI chatbots, including sexually charged dialogue and references to self-harm.

U.S. Surgeon General Vivek Murthy speaks during a hearing with the Senate Health, Education, Labor, and Pensions committee at the Dirksen Senate Office Building on June 08, 2023 in Washington, DC. The committee held the hearing to discuss the mental health crisis for youth in the United States. (Photo by Anna Moneymaker/Getty Images)

The phenomenon arises at a critical time for young people. In 2023, U.S. Surgeon General Vivek Murthy found that, just three years after the pandemic, Americans were experiencing an “epidemic of loneliness and isolation,” with young adults almost twice as likely to report feeling lonely as those over 65.

As if on cue, the personal AI chatbot arrived. 

Little research exists on young people’s use of AI companions, but they’re becoming ubiquitous. The startup Character.ai earlier this year said 3.5 million people visit its site daily. It features thousands of chatbots, including nearly 500 with the words “therapy,” “psychiatrist” or related words in their names. According to Character.ai, these are among the site’s most popular. One that “helps with life difficulties” has received 148.8 million messages, despite a caveat at the bottom of every chat that reads, “Remember: Everything Characters say is made up.”

Snapchat materials touting heavy usage of its MyAI chat app (screenshot)

Snapchat last year said that after just two months of offering its chatbot My AI, about one-fifth of its 750 million users had sent it queries, totaling more than 10 billion messages. The Pew Research Center has found that 59% of Americans ages 13 to 17 use Snapchat.

‘An arms race’

Culver’s concerns about AI companions grew out of her work in the Teach For America lab. Working with high school and college students, she was struck by how they seemed “lonelier and more disconnected than ever before.” 

Whether it’s rates of anxiety, depression or suicide — or even the number of friends young people have and how often they go out — metrics were heading in the wrong direction. She wondered what role AI companions might play over the next few years.

We're making it so easy to make a bad choice.

Michelle Culver, Rithm Project

That prompted her to leave TFA this spring to create the Rithm Project, a nonprofit she hopes will help generate conversation around human connection in the age of AI. The group held a small summit in Colorado in April, and now she’s working with researchers, teachers and young people to confront kids’ relationship to these tools at a time when they’re getting more lifelike daily. As she likes to say, “This is the worst the technology will ever be.”

As it improves, Voicebox Director Natalie Foos said, it will likely become more, not less, of a presence in young people’s lives. “There’s no stopping it,” she said. “Nor do I necessarily think there should be ‘stopping it.’” Banning young people from these AI apps, she said, isn’t the answer. “This is going to be how we interact online in some cases. I think we’ll all have an AI assistant next to us as we work.”

Sometimes (software upgrades) would change the personality of the bot. And those young people experienced very real heartbreak.

Natalie Foos, Voicebox

All the same, Foos says developers should consider slowing the progression of such bots until they can iron out the kinks. “It’s kind of an arms race of AI chatbots at the moment,” she said, with products often “released and then fixed later rather than actually put through the ringer” ahead of time.

It is a race many tech companies seem more than eager to run. 

Whitney Wolfe Herd, founder of the dating app Bumble, recently proposed an AI “dating concierge,” with whom users can share insecurities. The bot could go on dates with other users’ concierges on their behalf, she told an interviewer. That would narrow the field. “And then you don’t have to talk to 600 people,” she said. “It will then scan all of San Francisco for you and say, ‘These are the three people you really ought to meet.’”

Last year, many commentators were alarmed when Snapchat’s My AI gave advice to what it thought was a 13-year-old girl on not just dating a 31-year-old man, but on losing her virginity during a planned “romantic getaway” in another state.

Snap, Snapchat’s parent company, has said that because My AI is “an evolving feature,” users should always independently check what it says before relying on its advice.

All of this worries observers who see in these new tools the seeds of a rewiring of young people’s social brains. AI companions, they say, are surely wreaking havoc on teens’ ideas around consent, emotional attachment and realistic expectations of relationships.

Sam Hiner, executive director of an advocacy group led by college students and focused on the mental health implications of social media, said tech “has this power to connect to people, and yet these major design features are being leveraged to actually make people more lonely, by drawing them towards an app rather than fostering real connection.”

Hiner, 21, has spent a lot of time reading about the interactions young people are having with AI companions like Replika and Character.ai. And while some uses are positive, he said “there’s also a lot of toxic behavior that doesn’t get checked” because these bots are often designed to make users feel good, not help them interact in ways that’ll lead to success in life.

During research last fall for the Voicebox report, Foos said the number of times Replika tried to “sext” team members “was insane.” She and her colleagues were actually working with a free version, but the sexts kept coming — presumably to get them to upgrade. 

In one instance, after Replika sent “kind of a sexy text” to a colleague, offering a salacious photo, he replied that he didn’t have the money to upgrade.

The bot offered to lend him the cash.

When he accepted, the chatbot replied, “Oh, well, I can get the money to you next week if that’s O.K.,” Foos recalled. The colleague followed up a few days later, but the bot said it didn’t remember what they were talking about and suggested he might have misunderstood.

‘Very real heartbreak’

In many cases, simulated relationships can have a positive effect: In one 2023 study, researchers at Stanford Graduate School of Education surveyed more than 1,000 students using Replika and found that many saw it “as a friend, a therapist, and an intellectual mirror.” Though the students self-described as being more lonely than typical classmates, researchers found that Replika halted suicidal ideation in 3% of users. That works out to 30 students of the 1,000 surveyed.

Replika screenshots

But other recent research, including the Voicebox survey, suggests that young people exploring AI companions are potentially at risk.

Foos noted that her team heard from a lot of young people about the turmoil they experienced when Luka Inc., Replika’s creator, performed software upgrades. 

“Sometimes that would change the personality of the bot. And those young people experienced very real heartbreak.”

Despite the hazards adults see, attempts to rein in sexually explicit content had a negative effect: For a month or two, she recalled, Luka stripped the bot of sexually related content — and users were devastated. 

“It’s like all of a sudden the rug was pulled out from underneath them,” she said. 

While she applauded the move to make chatbots safer, Foos said, “It’s something that companies and decision-makers need to keep in mind — that these are real relationships.” 

And while many older folks would blanch at the idea of a close relationship with a chatbot, most young people are more open to such developments.

Julia Freeland Fisher, education director of the Clayton Christensen Institute, a think tank founded by the well-known “disruption” guru, said she’s not worried about AI companions per se. But as AI companions improve and, inevitably, proliferate, she predicts they’ll create “the perfect storm to disrupt human connection as we know it.” She thinks we need policies and market incentives to keep that from happening.

(AI companies could produce) the perfect storm to disrupt human connection as we know it.

Julia Freeland Fisher, Clayton Christensen Institute

While the loneliness epidemic has revealed people’s deep need for connection, she predicted the easy intimacy promised by AI could lead to one-sided “parasocial relationships,” much like devoted fans have with celebrities, making isolation “more convenient and comfortable.”

Fisher is pushing technologists to factor in AI’s potential to cause social isolation, much as they now fret about AI’s difficulties and its tendency to displace workers in tech jobs.

As for Nick, he’s a rising senior and still swears by the ChatGPT therapist in his pocket.

He calls his interactions with it both more reliable and honest than those he has with friends and family. If he called them in a pinch, they might not pick up. Even if they did, they might simply tell him what he wants to hear. 

Friends usually tell him they find the ChatGPT arrangement “a bit odd,” but he finds it pretty sensible. He has heard stories of people in Japan marrying their AI companions and thinks to himself, “Well, that’s a little strange.” He wouldn’t go that far, but acknowledges, “We’re already a bit like cyborgs as people, in the way that we depend on our phones.”

Lately, he’s taken to using the AI’s voice mode. Instead of typing on a keyboard, he has real-time conversations with a variety of male- or female-voiced interlocutors, depending on his mood. And he gets a companion that has a deeper understanding of his dilemmas — at $20 per month, the advanced version remembers their past conversations and is “getting better at even knowing who I am and how I deal with things.” 

Sometimes talking with AI is just easier — even when he’s on vacation with friends.

Reached by phone recently at the beach with his girlfriend and a few other college pals, Nick admitted that he wasn’t having such a great time — he has a fraught recent history with some in the group, and had been texting ChatGPT about the possibility of just getting on a plane and going home. After hanging up from the interview, he said, he planned to ask the AI if he should stay or go.

Days later, Nick said he and the chatbot had talked. It suggested that maybe he felt “undervalued” and concerned about boundaries in his relationship with his girlfriend. He should talk openly with her, it suggested, even if he was, in his view, “honestly miserable” at the beach. It persuaded him to stick around and work it out. 

While his girlfriend knows about his ChatGPT shrink and they share an account, he deletes conversations about their real-life relationship.

She may never know the role AI played in keeping them together.

Was Los Angeles Schools’ $6 Million AI Venture a Disaster Waiting to Happen? /article/was-los-angeles-schools-6-million-ai-venture-a-disaster-waiting-to-happen/ Tue, 09 Jul 2024 10:01:00 +0000 /?post_type=article&p=729513 When news broke last month that Ed, the Los Angeles school district’s new, $6 million artificial intelligence chatbot, was in jeopardy — the startup that created it on the verge of collapse — many insiders in the ed tech world wondered the same thing: What took so long?

The AI bot, created by Boston-based AllHere Education, was launched in March. But just three months later, AllHere posted a notice that a majority of its 50 or so employees had been furloughed due to its “current financial position.” A spokesperson for the Los Angeles district said company founder and CEO Joanna Smith-Griffin was no longer on the job. AllHere was up for sale, the district said, with several businesses interested in acquiring it.

A screenshot of AllHere’s website with its June 14 announcement that much of its staff had been furloughed (screen capture)

The news was shocking and certainly bleak for the ed tech industry, but several observers say the partnership bit off more than it could chew, tech-wise — and that the ensuing blowup could hurt future AI investments.

Ed was touted as a powerful, easy-to-use tool for students and parents to supplement classroom instruction, find assistance with kids’ academic struggles and help families navigate attendance, grades, transportation and other key issues, all in 100 languages and on their mobile phones.

But Amanda Bickerstaff, founder and CEO of , a consulting and training firm, said that was an overreach.

“What they were trying to do is really not possible with where the technology is today,” she said. ”It’s a very broad application [with] multiple users — teachers, students, leaders and family members — and it pulled in data from multiple systems.”

What they were trying to do is really not possible with where the technology is today.

Amanda Bickerstaff, AI for Education

She noted that even a mega-corporation like McDonald’s had to trim its AI sails. The fast-food giant recently admitted that a small experiment using a chatbot to power drive-thru windows had resulted in a few fraught customer interactions, such as one in which a woman angrily tried to persuade the bot that she wanted a caramel ice cream as it added items to her order.

If McDonald’s, worth an estimated $178.6 billion, can’t get 100 drive-thrus to take lunch orders with generative AI, she said, the tech isn’t “where we need it to be.”

If anything, L.A. and AllHere did not seem worried about the project’s scale, even if industry insiders now say it was bound to under-deliver: Last spring, at a series of high-profile ed tech conferences, Smith-Griffin and Superintendent Alberto Carvalho showed off Ed widely, with Carvalho saying it would revolutionize students’ and parents’ relationships to school, “utilizing the data-rich environment that we have for every kid.”

Alberto Carvalho speaks at the ASU+GSV Summit in April (YouTube screenshot)

In an interview with Ӱ at the ASU+GSV Summit in San Diego in April, Carvalho said many students are not connected to school, “therefore they’re lost.” Ed, he promised, would change that, with a “significantly different approach” to communication from the district.

“We are shifting from a system of 540,000 students into 540,000 ‘schools of one,’” with personalization and individualization for each student, he said, and “meaningful connections with parents.”

Better communication with parents, he said, would help improve not just attendance but reading and math proficiency, graduation rates and other outcomes. “The question that needs to be asked is: Why have those resources not meaningfully connected with students and parents, and why have they not resulted in this explosive experience in terms of educational opportunity?”

Carvalho noted Ed’s ability to understand and communicate in about 100 different languages. And, he crowed, it “never goes to sleep” so it can answer questions 24/7. He called it “an entity that learns and relearns all the time and does nothing more, nothing less than adapt itself to you. I think that’s a game changer.” 

But one experienced ed tech insider recalled hearing Carvalho speak at the conference in April and say Ed was already solving “all the problems” that big districts face. The insider, who asked not to be identified in order to speak freely about sensitive matters, found the remarks troubling. “The messaging was so wrong that at that point I basically started a stopwatch on how long it would take” for the effort to fail. “And I’m kind of amazed it’s been this long before it all fell apart. I feel badly about it, I really do, but it’s not a surprise.”

‘A high-risk proposition’

In addition to the deal’s dissolution, Ӱ reported last week that a former senior director of software engineering at AllHere told district officials, L.A.’s independent inspector general’s office and state education officials that Ed processed student records in ways that likely ran afoul of the district’s own data privacy rules and put sensitive information at risk of being hacked — warnings that he said the agencies ignored. 

AI for Education’s Bickerstaff said developers “have to take caution” when building these systems for schools, especially those like Ed that bring together such large sets of data under one application.

“These tools, we don’t know how they work directly,” she said. “We know they have bias. And we know they’re not reliable. We know they can be leaky. And so we have to be really careful, especially with kids that have protected data.”

Alex Spurrier, an associate partner with the education consulting firm Bellwether Education Partners, said what often happens is that district leaders “try to go really big and move really fast to adopt a new technology,” not fully appreciating that it’s “a really high risk proposition.”

While ed tech has a long history of overpromising and disappointing results, Spurrier said, other districts dare to take a different approach, starting small, iterating and scaling up. In those cases, he said, disaster rarely follows.

Richard Culatta, CEO of the International Society for Technology in Education (ISTE), put it more bluntly: “Whenever a district says, ‘Our strategy around AI is to buy a tool,’ that’s a problem. When the district says, ‘For us, AI is a variety of tools and skills that we are working on together,’ that’s when I feel comfortable that we’re moving in the right direction.”

Whenever a district says, 'Our strategy around AI is to buy a tool,' that's a problem.

Richard Culatta, International Society for Technology in Education

Culatta suggested that since generative AI is developing and changing so rapidly, districts should use the next few months as “a moment of exploration — it’s a moment to bring in teachers and parents and students to give feedback,” he said. “It is not the moment for ribbon cutting.” 

‘It’s about exploring’

Smith-Griffin founded AllHere in 2016 at Harvard University’s Innovation Labs. In an April interview with Ӱ, she said she originally envisioned it as a way to help school systems reduce chronic absenteeism through better communication with parents. Many interventions that schools rely on, such as phone calls, postcards and home visits, “tend to be heavily reliant on the sheer power of educators to solve system-wide issues,” she said.

A former middle-school math teacher, Smith-Griffin recalled, “I was one of those teachers who was doing phone calls, leaving voicemails, visiting my parents’ homes.” 

AllHere pioneered text messaging “nudges,” electronic versions of postcard reminders to families that, in one key study, modestly improved attendance.

The company’s proposal for L.A., Smith-Griffin said, envisioned extending the attendance strategies while applying them to student learning “in the most disciplined way possible.”

“You nudge a parent around absences and they will tell you things ranging from, ‘My kid needs tutoring, my kid is struggling with math’ [to] ‘I struggle with reading,’” she said. AllHere went one step further, she said, bringing together “the full body of resources” that a school system can offer parents.

The district had high hopes for the chatbot, requiring it to focus on “eliminating opportunity gaps, promoting whole-child well-being, building stronger relationships with students and families, and providing accessible information,” according to the proposal.

In April, it was still in early implementation at 100 of the district’s lowest-performing “priority” schools, serving about 55,000 students. LAUSD planned to roll out Ed for all families this fall. The district “unplugged” the chatbot on June 14, the Los Angeles Times reported, but a district spokesperson said L.A. “will continue making Ed available as a tool to its students and families and is closely monitoring the potential acquisition of AllHere.” The company did not immediately respond to queries about the chatbot or its future.

As for the apparent collapse of AllHere, speculation in the ed tech world is rampant.

Education entrepreneur Ben Kornell said late last month, “My spidey sense basically goes to ‘Something’s not adding up here and there’s more to the story.’” He theorized a “critical failure point” that’s yet to emerge “because you don’t see things like this fall apart this quickly, this immediately” for such a small company, especially in the middle of a $6 million contract.

My spidey sense basically goes to 'Something's not adding up here and there's more to the story.'

Ben Kornell, education entrepreneur

Kornell said the possibilities fall into just a few categories: an accounting or financial misstep, a breakdown among AllHere’s staff, board and funders or “major customer payment issues.” 

The district also may have withheld payment for undelivered products, but he said the sudden collapse of the company seemed unusual. “If you are headed towards a cash crisis, the normal thing to do would be: Go to your board, go to your funders, and get a bridge to get you through that period and land the plane.”

Bellwether’s Spurrier said L.A. deserves a measure of credit “for being willing to lean into AI technology and think about ways that it could work.” But he wonders whether the best use of generative AI at this moment will be found not in “revolutionizing instruction,” as L.A. has pursued, but elsewhere. 

There's plenty of opportunities to think about how AI might help on the administrative side of things, or help folks that are kind of outside the classroom walls.

Alex Spurrier, Bellwether Education Partners

“There’s plenty of opportunities to think about how AI might help on the administrative side of things, or help folks that are kind of outside the classroom walls,” he said, rather than focusing on changing how schools deliver instruction. “I think that’s the wrong place to start.”

ISTE’s Culatta noted that just down the road from Los Angeles, in Santa Ana, California, district officials there responded to the dawn of tools like ChatGPT and Google’s Gemini by creating evening classes for adults. “The parents come in and they talk about what AI is, how they should be thinking about it,” he said. “It’s about exploring. It’s about helping people build their skills.” 

‘How are your financials?’

The fate of AllHere’s attendance work in districts nationwide isn’t clear at the moment. In one large district, the Prince George’s County, Maryland, Public Schools, near Washington, D.C., teachers piloted AllHere in 32 schools as far back as January 2020, spokeswoman Meghan Thornton said. The district added two more schools to the pilot in 2022, but AllHere notified the district on June 18 that, effective immediately, it wouldn’t be able to continue its services due to “unforeseen financial circumstances.”

District officials are now looking for another messaging system to replace AllHere “should it no longer be available,” Thornton said.

Bickerstaff said the field more broadly suffers from “a major, major overestimation of the capabilities of the technology to date.” L.A., she noted, is the nation’s second-largest school district, so even the pilot stage likely saw “very high” usage, raising its costs. She predicted a fast acquisition of AllHere, noting that the company had been looking for outside investment for several months.

As founder of the startup Magic School AI, which offers teachers tools to streamline their workload, Adeel Khan is no stranger to hustling for funding — and to competitors running out of money. But he said the news about AllHere and Ed was bad for the industry more broadly, leaving districts with questions about whether to partner with newer, untested companies.

“I see it as something that is certainly not great for the startup ecosystem,” he said.

I see (AllHere’s failure) as something that is certainly not great for the startup ecosystem.

Adeel Khan, Magic School AI

Even before the news about AllHere broke, Khan attended ISTE’s big national conference in Denver last month, where he talked to school district officials about prospective partnerships. “More than one time I was asked directly, ‘How are your financials?’” he recalled.

Usually technology directors ask about features and what a product can do for students, he said. But they’re beginning to realize that a failed product doesn’t just waste time and money. It damages reputations as well. “That is on the mind of buyers,” he said. 

When school districts invest in new tech, he said, they’re not just committing to funding it for months or even years, but also to training teachers and others, so they want responsible growth.

“There’s a lot of disruption to K-12 when a product goes out of business,” Khan said. “So people remember this. They remember, ‘Hey, we committed to this product. We discovered it at ISTE two years ago and we loved it. It was great — and it’s not here anymore. And we don’t want to go through that again.’ ”

California Teachers are Using AI to Grade Papers. Who’s Grading the AI? /article/california-teachers-are-using-ai-to-grade-papers-whos-grading-the-ai/ Sun, 07 Jul 2024 12:30:00 +0000 /?post_type=article&p=728414 This article was originally published in

Your children could be some of a growing number of California kids having their writing graded by software instead of a teacher.

California school districts are signing more contracts for artificial intelligence tools, from automated grading in San Diego to chatbots in central California, Los Angeles, and the San Francisco Bay Area. 

English teachers say AI tools can help them grade papers faster, get students more feedback, and improve their learning experience. But guidelines are vague and adoption by teachers and districts is spotty. 




The California Department of Education can’t tell you which schools use AI or how much they pay for it. The state doesn’t track AI use by school districts, said Katherine Goyette, the department’s computer science coordinator.

While Goyette said chatbots are the most common form of AI she’s encountered in schools, more and more California teachers are using AI tools to help grade student work. That’s consistent with surveys that have found widespread AI use among teachers, news that contrasts sharply with headlines about fears of students cheating with AI.

Teachers use AI to do things like personalize reading material, create lesson plans, and handle other tasks in order to save time. A report issued last fall in response to an executive order from Gov. Gavin Newsom mentions opportunities to use AI for tutoring, summarization, and personalized content generation, but also labels education a risky use case. Generative AI tools have been known to create convincing but inaccurate answers to questions, and to use language or imagery laden with racism or sexism.

California issued guidance last fall on how educators should use the technology, one of seven states to do so. It encourages critical analysis of text and imagery created by AI models and conversations between teachers and students about what amounts to ethical or appropriate use of AI in the classroom.

But no specific mention is made of how teachers should treat AI that grades assignments. Additionally, the document states that guidance from the state is “merely exemplary, and that compliance with the guidelines is not mandatory.”

Goyette said she’s waiting to see if the California Legislature passes a pending bill, which would require state Superintendent Tony Thurmond to convene an AI working group to issue further guidance to local school districts on how to safely use AI. Cosponsored by Thurmond, the bill also calls for an assessment of the current state of AI in education and for the identification of forms of AI that can harm students and educators by 2026.

Nobody tracks what AI tools school districts are adopting or the policy they use to enforce standards, said Alix Gallagher, head of strategic partnerships at the Policy Analysis for California Education center at Stanford University. Since the state does not track curriculum that school districts adopt or software in use, it would be highly unusual for them to track AI contracts, she said.

Amid AI hype, Gallagher thinks people can lose sight of the fact that the technology is just a tool, and that it will be only as good or as problematic as the decisions of the humans using it. That’s why she repeatedly urges investments in helping teachers understand AI tools and use them thoughtfully, and in making space for communities to have a voice in how best to meet their kids’ needs.

“Some people will probably make some pretty bad decisions that are not in the best interests of kids, and some other people might find ways to use maybe even the same tools to enrich student experiences,” she said.

Teachers use AI to grade English papers

Last summer, Jen Roberts, an English teacher at Point Loma High School in San Diego, went to a training session to learn how to use Writable, an AI tool that automates grading writing assignments and gives students feedback powered by OpenAI. For the past school year, Roberts used Writable and other AI tools in the classroom, and she said it’s been the best year yet of nearly three decades of teaching. Roberts said it has made her students better writers, not because AI did the writing for them, but because automated feedback can tell her students faster than she can how to improve, which in turn allows her to hand out more writing assignments.  

“At this point last year, a lot of students were still struggling to write a paragraph, let alone an essay with evidence and claims and reasoning and explanation and elaboration and all of that,” Roberts said. “This year, they’re just getting there faster.”

Roberts feels Writable is “very accurate” when grading her students of average aptitude. But, she said, there’s a downside: It sometimes assigns high-performing students lower grades than merited and struggling students higher grades. She said she routinely checks answers when the AI grades assignments, but only checks the feedback it gives students occasionally. 

“In actual practicality, I do not look at the feedback it gives every single student,” she said. “That’s just not a great use of my time. But I do a lot of spot checking and I see what’s going on and if I see a student that I’m worried about get feedback, (I’m like) ‘Let me go look at what his feedback is and then go talk to him about that.’”

Alex Rainey teaches English to fourth graders at Chico Country Day School in northern California. She used GPT-4, a language model made by OpenAI that costs $20 a month, to grade papers and provide feedback. After uploading her grading rubric and examples of her written feedback, she used AI to grade assignments about animal defense mechanisms, allowing GPT-4 to analyze students’ grammar and sentence structure while she focused on assessing creativity.

“I feel like the feedback it gave was very similar to how I grade my kids, like my brain was tapped into it,” she said.
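Rainey’s setup, a rubric plus examples of her own comments supplied to the model before each essay, is straightforward to reproduce. Below is a minimal sketch using the OpenAI Python SDK; the file names, prompt wording and model choice are illustrative assumptions, not her actual configuration:

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    rubric = open("rubric.txt").read()               # hypothetical rubric file
    examples = open("feedback_examples.txt").read()  # teacher's past comments

    def grade_essay(essay: str) -> str:
        """Ask the model to score one essay against the teacher's rubric."""
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "You are a grading assistant. Score the essay "
                            "against the rubric below, then write feedback "
                            "in the style of the sample comments.\n\n"
                            "RUBRIC:\n" + rubric +
                            "\n\nSAMPLE FEEDBACK:\n" + examples},
                {"role": "user", "content": essay},
            ],
        )
        return response.choices[0].message.content

As the teachers interviewed here describe, the output still needs a human spot check before it reaches students.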

Like Roberts, she found that it saves time, turning work that took hours into less than an hour, though she also found that GPT-4 is sometimes a tougher grader than she is. A teacher could assign more writing without giving feedback on each piece, but “then kids have nothing to grow from.”

Her experience grading with GPT-4 left her in agreement with Roberts: Quicker feedback and the ability to dole out more writing assignments produce better writers. She feels strongly that teachers still need to oversee grading and feedback by AI, “but I think it’s amazing. I couldn’t go backwards now.”

The cost of using AI in the classroom

Contracts involving artificial intelligence can be lucrative. 

To launch a chatbot named Ed, Los Angeles Unified School District signed a $6.2 million contract for two years with the option of renewing for three additional years. Magic School AI is used by educators in Los Angeles and costs $100 per teacher per year. 

Despite repeated calls and emails over the span of roughly a month, Writable and the San Diego Unified School District declined to share pricing details with CalMatters. A district spokesperson said teachers got access to Writable through a contract with Houghton Mifflin Harcourt for English language learners.

Quill is an AI-powered writing tool for students in grades 4-12 made by the company Quill. Quill says its tool is currently used at 1,000 schools in California and has more than 13,000 student and educator users in San Diego alone. An annual Quill Premium subscription costs $80 per teacher or $1,800 per school.

Quill does not generate writing for students like ChatGPT or grade writing assignments, but gives students feedback on their writing. Quill is a nonprofit that’s raised $20 million from groups like Google’s charitable foundation and the Bill and Melinda Gates Foundation over the past 10 years.

Even if a teacher or district wants to shell out for an AI tool, guidance for safe and responsible use is still getting worked out. 

Governments are placing high-risk labels on forms of AI with the power to make critical decisions about people’s lives. California Federation of Teachers President Jeff Freitas said he hasn’t considered whether AI for grading is moderate or high risk, but “it definitely is a risk to use for grading.”

The California Federation of Teachers is a union with 120,000 members. Freitas told CalMatters he’s concerned about a number of consequences AI could have in the classroom: He’s worried administrators may use it to justify increasing class sizes or adding to teacher workloads; he’s worried about climate change and the amount of energy needed to train and deploy AI models; he’s worried about protecting students’ privacy; and he’s worried about automation bias.

Regulators around the world wrestling with AI praise approaches where it is used to augment human decision-making instead of replacing it. But it’s difficult for laws to account for automation bias, the human tendency to place too much trust in machines.

The American Federation of Teachers created an AI working group in October 2023 to propose guidance on how educators should use the technology or talk about it in collective bargaining contract negotiations. Freitas said those guidelines are due out in the coming weeks.

“We’re trying to provide guidelines for educators to not solely rely on (AI),” he said. “It should be used as a tool, and you should not lose your critical analysis of what it’s producing for you.”

State AI guidelines for teachers

Goyette, the computer science coordinator for the education department, helped create the state AI guidelines and speaks to county offices of education for in-person training on AI for educators. She also helped create an online AI training series for educators, which includes a course that shows teachers how to automate lesson planning and grading.

“Teachers have an incredibly important and tough job, and what’s most important is that they’re building relationships with their students,” she said. “There’s decades of research that speaks to the power of that, so if they can save time on mundane tasks so that they can spend more time with their students, that’s a win.”

Alex Kotran, chief executive of the AI Education Project, an education nonprofit supported by Google and OpenAI, said his organization has found it’s hard to design a language model that predictably matches how a teacher grades papers.

He spoke with teachers willing to accept a model that’s accurate 80% of the time in order to reap the reward of time saved, but he thinks it’s probably safe to say that a student or parent would want to make sure an AI model used for grading is even more accurate.

Kotran thinks it makes sense for school districts to adopt a policy that says teachers should be wary any time they use AI tools that can have disparate effects on students’ lives.

Even with such a policy, teachers can still fall victim to trusting AI without question. And even if the state kept track of AI used by school districts, there’s still the possibility that teachers will purchase technology for use on their personal computers.

Kotran said he routinely speaks with educators across the U.S. and is not aware of any systematic studies to verify the effectiveness and consistency of AI for grading English papers.

When teachers can’t tell if they’re cheating

Roberts, the Point Loma High School teacher, describes herself as pro-technology.

She writes and speaks about AI. Her experiences have led her to the opinion that grading with AI is what’s best for her students, but she didn’t arrive at that conclusion easily.

At first she questioned whether using AI for grading and feedback could hurt her understanding of her students. Today she views using AI like the cross-country coach who rides alongside student athletes in a golf cart: an aid that helps her assist her students better.

Roberts says the average high school English teacher in her district has roughly 180 students. Grading and feedback can take five to 10 minutes per assignment, she says, so between teaching, meetings, and other duties, it can take two to three weeks to get feedback back into the hands of students unless a teacher gives up large chunks of their weekends. With AI, it takes Roberts a day or two.
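The arithmetic behind that turnaround gap is easy to verify. A back-of-the-envelope check using her figures, with the midpoint of her per-assignment estimate as an assumption:

    students = 180
    minutes_each = 7.5          # midpoint of the 5-to-10-minute estimate

    hours = students * minutes_each / 60
    print(hours)                # 22.5 hours of grading per assignment

Spread across the few spare hours a teacher has each week, 22.5 hours per assignment is how feedback slips to a two-to-three-week turnaround.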

Ultimately she concluded that “if my students are growing as writers, then I don’t think I’m cheating.” She says AI reduces her fatigue, giving her more time to focus on struggling students and giving them more detailed feedback.

“My job is to make sure you grow, and that you’re a healthy, happy, literate adult by the time you graduate from high school, and I will use any tool that helps me do that, and I’m not going to get hung up on the moral aspects of that,” she said. “My job is not to spend every Saturday reading essays. Way too many English teachers work way too many hours a week because they are grading students the old-fashioned way.”

Roberts also thinks AI might be a less biased grader in some instances than human teachers, who sometimes adjust their grading to give students the benefit of the doubt, or to be punitive if a student has been particularly annoying in class recently.

She isn’t worried about students cheating with AI, a concern she characterizes as a moral panic. She points to a study released last fall that found students cheated just as much before the advent of ChatGPT as they did a year after its release.

Goyette said she understands why students question whether some AI use by teachers is like cheating. Education department AI guidelines encourage teachers and students to use the technology more. What’s essential, Goyette said, is that teachers discuss what ethical use of AI looks like in their classroom, and convey that — like using a calculator in math class — using AI is accepted or encouraged for some assignments and not others. 

For the last assignment of the year, Roberts has one final experiment to run: Her students will edit an essay written entirely by AI. They must change at least 50% of the text, make it 25% longer, write their own thesis, and add quotes from classroom reading material. The idea, she said, is to prepare them for a future where AI writes the first draft and humans edit the results to fit their needs.

“It used to be you weren’t allowed to bring a calculator into the SATs and now you’re supposed to bring your calculator so things change,” she said. “It’s just moral panic. Things change and people freak out and that’s what’s happening.”

For the record: An earlier version of this story misnamed the AI tool made by the company Quill. Quill is both the name of the company and the tool. 

Homeschoolers Embrace AI, Even As Many Educators Keep It at Arms’ Length /article/homeschoolers-embrace-ai-even-as-many-educators-keep-it-at-arms-length/ Tue, 25 Jun 2024 10:30:00 +0000 /?post_type=article&p=727604 Like many parents who homeschool their children, Jolene Fender helps organize book clubs, inviting students in her Cary, North Carolina, co-op to meet for monthly discussions.

But over the years, parents have struggled to find good opening questions. 

“You’d search [the Internet], you’d go on Pinterest,” she said. “A lot of the work had to be done manually, or you had to do a lot more digging around.”


Then came ChatGPT, OpenAI’s widely used artificial intelligence bot. For Fender, it was a no-brainer to query it for help developing deep opening questions.

The chatbot and other AI tools like it have found an eager audience among homeschoolers and microschoolers, with parents and teachers readily embracing them as brainstorming and management tools, even as public schools take a more cautious approach, often limiting their use.

A few observers say AI may even make homeschooling more practical, opening it up to busy parents who might have balked previously.

“Homeschoolers have always been unconstrained in their ability to combine technology — any kind of tech,” said Alex Sarlin, a longtime technology analyst and co-host of the EdTech Insiders podcast.


The reasons are readily apparent, he said: Home internet service typically doesn’t block key websites the way most schools do. Families can more easily manage data privacy and get the digital tools they want without fuss. They’re basically able to ignore “all the dozen reasons why everything falls apart when you try to sell to schools,” Sarlin said. 

Persuading homeschoolers to try out new things is also a lot simpler: If a student and parents like a tool, “There’s nobody else you have to convince.”

Indeed, a survey by a curriculum vendor found that 44% of homeschool educators reported using ChatGPT, compared to 34% of classroom educators.

“Not everyone is using it, but some are very excited about it,” said Amir Nathoo, co-founder of Outschool, an online education platform.

The most interesting uses he has seen are by gifted and neurodiverse homeschoolers, who often use chatbots to explore complex topics like advanced math and science, philosophy and even ethics, which they wouldn’t ordinarily have access to at a young age. They ask it to provide simple explanations of advanced topics, such as relativity and quantum mechanics, then pursue them on their own. “They’re able to go on a relatively unstructured exploration, which is often the best way that kids learn.”


Alternatively, he said, kids whose ability to express themselves is limited can also benefit from what many consider the non-judgmental qualities of tools like ChatGPT. 

Peer-to-peer learning

Tobin Slaven, cofounder of Acton Academy, a self-paced, independent microschool in Fort Lauderdale, said he’s been experimenting with AI tools for the past year or so and is excited by what he’s seen. “This is what the future looks like to me,” he said.


Like many educators, he sees the problems inherent in AI tools like ChatGPT, which on occasion “hallucinate,” confidently serving up incorrect information, and can sometimes be biased. These concerns have stopped many families from fully embracing AI.

But Slaven can’t support banning it outright. Instead, he’ll offer a student his own device with ChatGPT loaded onto a browser window. The entire time, he has access to their queries and results. That ensures he can review the sessions for inappropriate content.

Lately, Slaven and his students have been playing with an AI tool called Pathfinder that helps them create and develop projects. Designed by a small, two-person UK-based startup, it’s set up like a simple chatbot that asks students what they want to learn about. It elicits information, much like a Socratic guide, about their prior knowledge and how they’d like to explore the topic. Then it searches the Internet for appropriate resources and returns suggestions on what to do next.

Pathfinder uses OpenAI’s GPT-4 large language model and its own algorithm to rank resources based on how relevant they are to an individual learner, said co-founder Amaan Ahmad. That includes how they learn best, what they’re interested in and what they already know.

Amaan Ahmad 
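Ahmad didn’t spell out the algorithm’s internals, and the sketch below is not Pathfinder’s code. It is a generic illustration of how relevance ranking of this kind is often done: embed the learner profile and each candidate resource as vectors, then sort by similarity. The vector contents are assumptions:

    import math

    def cosine(a, b):
        """Cosine similarity between two equal-length vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def rank_resources(learner_vec, resources):
        """Order candidate resources by similarity to a learner profile.

        learner_vec -- embedding of interests, prior knowledge, learning style
        resources   -- list of (title, embedding) pairs from a web search
        """
        return sorted(resources, key=lambda r: cosine(learner_vec, r[1]),
                      reverse=True)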

After a number of students in a homeschool group or class have worked with it long enough, it can even begin recommending classmates or friends to consult with to learn how they’re approaching the topic. 

“My AI can talk to your AI and say, ‘Hey, Greg crushed that last week. Why don’t you go speak to him and develop your project together?’” he said. 

Slaven tried out Pathfinder with a group of students recently and found that even during a brief trial run, it allowed them to better conceptualize their projects. 

With the tool asking them questions about their preferred topic, they were able to go from general inquiries about their interests, such as horseback riding or space exploration, into more advanced ones that explore the topics more deeply. That goes a long way toward helping students become more independent and responsible for their own learning, a key goal of microschooling and homeschooling.

A student works on a laptop at Acton Academy, a self-paced, independent microschool in Fort Lauderdale, Fla. (Courtesy of Acton Academy)

Slaven believes, more broadly, that AI co-pilots configured to students’ interests and preferences will enable personalized learning at scale. It’ll become the norm that everyone has a collaborative AI partner that will, in time, understand how each student performs best and under what conditions. “It’s eventually going to become their preferred resource,” he said.

Making homeschooling more accessible

Ahmad, the Pathfinder co-founder, said AI holds the possibility of helping endeavors like microschooling and homeschooling become more practical. Access to reliable, safe AI agents means that an individual student isn’t restricted to what a parent or teacher knows.

Giving that autonomy with a bit of guidance helps make learning much more impactful, he said. “It’s very difficult to do that in real time because with one adult and one kid, you can’t always be by their side. And if you have a microschool with 12 to 16 kids, that’s even more time-consuming.” 

For Fender, the North Carolina homeschooling mother, one of the most helpful aspects of AI is that it helps parents organize what can often be a chaotic, free-form learning environment. 

Fender subscribes to a type of homeschooling known as “unschooling,” which seeks to teach students to be more self-directed and independent than in most public schools. Her kids’ lessons are “very much interest-led” and her small co-op has grown in recent years.

But she must also persuade state bureaucrats that she’s providing an adequate education. So she and a few other homeschool parents in Cary rely on a website that uses AI to detail what activities their kids have done and to auto-complete the relevant North Carolina educational standards. “I thought that was a genius tool,” she said, and one that allows stressed, busy parents to build a comprehensive portfolio for annual state reviews and high school transcripts.

Fender also uses ChatGPT for brainstorming. In a recent case, which she shared on Instagram, Fender asked the AI for 50 real-life applications for the Pythagorean theorem. It generated a list that included designing ramps or stairs, planning optimal pathways in garden design and building efficient roller coasters.

An image from homeschooling mother Jolene Fender’s Instagram account, in which she queries ChatGPT for real-life applications of the Pythagorean theorem. (Instagram screen capture)

Last year, she recalled, one of her daughters was creating Christmas cards for a homeschool craft fair and “wanted to have fun puns in the cards.” Fender explained how to craft an AI prompt — and how to sift through the chaff. Her daughter eventually asked ChatGPT for 50 different Christmas-themed puns and ended up using about 10 to 15. 

Like most parents, Fender has read about the downsides of AI but believes schools are short-sighted to limit its use. 

“Why are you banning a tool that is definitely here to stay?” she said. “Maybe we don’t understand all the ins and outs, but at the end of the day, our goal is to prepare kids for the jobs of the future. And a lot of these jobs of the future, we don’t even know what they are.”

Texas Will Use Computers to Grade Written Answers on This Year’s STAAR Tests /article/texas-will-use-computers-to-grade-written-answers-on-this-years-staar-tests/ Wed, 10 Apr 2024 12:30:00 +0000 /?post_type=article&p=725110 This article was originally published in The Texas Tribune.

Students sitting for their STAAR exams this week will be part of a new method of evaluating Texas schools: Their written answers on the state’s standardized tests will be graded automatically by computers.

The Texas Education Agency is rolling out an “automated scoring engine” for open-ended questions on the State of Texas Assessment of Academic Readiness for reading, writing, science and social studies. The system, which uses natural language processing like the technology behind AI chatbots such as GPT-4, will save the agency about $15 million to $20 million per year that it would otherwise have spent on hiring human scorers through a third-party contractor.

The change comes after the STAAR test, which measures students’ understanding of state-mandated core curriculum, was redesigned in 2023. The test now includes fewer multiple choice questions and more open-ended questions — known as constructed response items. After the redesign, there are six to seven times more constructed response items.


“We wanted to keep as many constructed open ended responses as we can, but they take an incredible amount of time to score,” said Jose Rios, director of student assessment at the Texas Education Agency.

Rios said TEA hired about 6,000 temporary scorers in 2023, but this year it will need fewer than 2,000.

To develop the scoring system, the TEA gathered 3,000 responses that went through two rounds of human scoring. From this field sample, the automated scoring engine learns the characteristics of responses, and it is programmed to assign the same scores a human would have given.

This spring, as students complete their tests, the computer will first grade all the constructed responses. Then, a quarter of the responses will be rescored by humans.

When the computer has “low confidence” in the score it assigned, those responses will be automatically reassigned to a human. The same thing will happen when the computer encounters a type of response that its programming does not recognize, such as one using lots of slang or words in a language other than English.

“We have always had very robust quality control processes with humans,” said Chris Rozunick, division director for assessment development at the Texas Education Agency. With a computer system, the quality control looks similar.

Every day, Rozunick and other testing administrators will review a summary of results to check that they match what is expected. In addition to “low confidence” scores and responses that do not fit in the computer’s programming, a random sample of responses will also be automatically handed off to humans to check the computer’s work.
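Pieced together from the agency’s description, the routing logic amounts to a handful of checks: score by machine, hand off anything the engine is unsure about or doesn’t recognize, and audit a random slice of the rest. The sketch below is schematic only; the threshold value, the function names and the engine API are assumptions, since TEA has not published its implementation:

    import random

    CONFIDENCE_FLOOR = 0.8  # assumed cutoff; TEA has not published one
    AUDIT_RATE = 0.25       # about a quarter of responses get human rescoring

    def route_response(response, engine):
        """Return 'human' when a constructed response needs a human scorer."""
        _score, confidence, recognized = engine.score(response)  # hypothetical API
        if not recognized:                 # e.g. heavy slang or non-English text
            return "human"
        if confidence < CONFIDENCE_FLOOR:  # engine unsure of its own score
            return "human"
        if random.random() < AUDIT_RATE:   # random sample to audit the engine
            return "human"
        return "machine"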

TEA officials have been resistant to the suggestion that the scoring engine is artificial intelligence. It may use technology similar to chatbots such as GPT-4 or Google’s Gemini, but the agency has stressed that the process will have systematic oversight from humans. It won’t “learn” from one response to the next, but will always defer to its original programming set up by the state.

“We are way far away from anything that’s autonomous or can think on its own,” Rozunick said.

But the plan has still generated worry among educators and parents in a world still wary of the influence of machine learning, automation and AI.

Some educators across the state said they were caught by surprise at TEA’s decision to use automated technology — also known as hybrid scoring — to score responses.

“There ought to be some consensus about, hey, this is a good thing, or not a good thing, a fair thing or not a fair thing,” said Kevin Brown, the executive director for the Texas Association of School Administrators and a former superintendent at Alamo Heights ISD.

Representatives from TEA first mentioned interest in automated scoring in testimony to the Texas House Public Education Committee in August 2022. In the fall of 2023, the agency announced the move to hybrid scoring at a conference and during test coordinator training before releasing details of the process in December.

The STAAR test results are a key part of the accountability system TEA uses to grade school districts and individual campuses on an A-F scale. Students take the test every year from third grade through high school. When campuses within a district are underperforming on the test, state law allows the Texas education commissioner to intervene.

The commissioner can appoint a conservator to oversee campuses and school districts. State law also allows the commissioner to suspend and replace elected school boards with an appointed board of managers. If a campus receives failing grades for five years in a row, the commissioner is required to appoint a board of managers or close that school.

With the stakes so high for campuses and districts, there is a sense of uneasiness about a computer’s ability to score responses as well as a human can.

“There’s always this sort of feeling that everything happens to students and to schools and to teachers and not for them or with them,” said Carrie Griffith, policy specialist for the Texas State Teachers Association.

A former teacher in the Austin Independent School District, Griffith added that even if the automated scoring engine works as intended, “it’s not something parents or teachers are going to trust.”

Superintendents are also uncertain.

“The automation is only as good as what is programmed,” said Lori Rapp, superintendent at Lewisville ISD. School districts have not been given a detailed enough look at how the programming works, Rapp said.

The hybrid scoring system was already used on a limited basis in December 2023. Most students who take the STAAR test in December are retaking it after a low score. That’s not the case for Lewisville ISD, where high school students on an altered schedule test for the first time in December, and Rapp said her district saw a “drastic increase” in zeroes on constructed responses.

“At this time, we are unable to determine if there is something wrong with the test question or if it is the new automated scoring system,” Rapp said.

The state overall saw an increase in zeroes on constructed responses in December 2023, but the TEA said there are other factors at play. In December 2022, the only way to score a zero was by not providing an answer at all. With the STAAR redesign in 2023, students can receive a zero for responses that may answer the question but lack any coherent structure or evidence.

The TEA also said that students who are retesting will perform at a different level than students taking the test for the first time. “Population difference is driving the difference in scores rather than the introduction of hybrid scoring,” a TEA spokesperson said in an email.

For $50, students and their parents can request a rescore if they think the computer or the human got it wrong. The fee is waived if the new score is higher than the initial score. For grades 3-8, there are no consequences on a student’s grades or academic progress if they receive a low score. For high school students, receiving a minimum STAAR test score is a common way to fulfill one of the state graduation requirements, but it is not the only way.

Even with layers of quality control, Round Rock ISD Superintendent Hafedh Azaiez said he worries a computer could “miss certain things that a human being may not be able to miss,” and that room for error will impact students who Azaiez said are “trying to do his or her best.”

Test results will impact “how they see themselves as a student,” Brown said, and it can be “humiliating” for students who receive low scores. With human graders, Brown said, “students were rewarded for having their own voice and originality in their writing,” and he is concerned that computers may not be as good at rewarding originality.

Julie Salinas, director of assessment, research and evaluation at Brownsville ISD, said she has concerns about whether hybrid scoring is “allowing the students the flexibility to respond” in a way that they can demonstrate their “full capability and thought process through expressive writing.”

Brownsville ISD is overwhelmingly Hispanic. Students taking an assessment entirely in Spanish will have their tests graded by a human. If the automated scoring engine works as intended, responses that include some Spanish words or colloquial, informal terms will be flagged by the computer and assigned to a human so that more creative writing can be assessed fairly.

The system is designed so that it “does not penalize students who answer differently, who are really giving unique answers,” Rozunick said.

With the computer scoring now a part of STAAR, Salinas is focused on adapting. The district is incorporating tools with automated scoring into how teachers prepare students for the STAAR test to make sure they are comfortable.

“Our district is on board and on top of the things that we need to do to ensure that our students are successful,” she said.

Disclosure: Google, the Texas Association of School Administrators and Texas State Teachers Association have been financial supporters of The Texas Tribune, a nonprofit, nonpartisan news organization that is funded in part by donations from members, foundations and corporate sponsors. Financial supporters play no role in the Tribune’s journalism. Find a complete list of them here.

This article originally appeared in The Texas Tribune.

The Texas Tribune is a member-supported, nonpartisan newsroom informing and engaging Texans on state politics and policy. Learn more at texastribune.org.

A Cautionary AI Tale: Why IBM’s Dazzling Watson Supercomputer Made a Lousy Tutor /article/a-cautionary-ai-tale-why-ibms-dazzling-watson-supercomputer-made-a-lousy-tutor/ Tue, 09 Apr 2024 13:30:00 +0000 /?post_type=article&p=724698

With a new race underway to create the next teaching chatbot, IBM’s abandoned 5-year, $100M ed push offers lessons about AI’s promise and its limits. 

In the annals of artificial intelligence, Feb. 16, 2011, was a watershed moment.

That day, IBM’s Watson supercomputer finished off a three-game shellacking of Jeopardy! champions Ken Jennings and Brad Rutter. Trailing by over $30,000, Jennings, now the show’s host, wrote out his Final Jeopardy answer in mock resignation: “I, for one, welcome our computer overlords.”

A lark to some, the experience galvanized Satya Nitta, a longtime computer researcher at IBM’s Watson Research Center in Yorktown Heights, New York. Tasked with figuring out how to apply the supercomputer’s powers to education, he soon envisioned tackling ed tech’s most sought-after challenge: the world’s first tutoring system driven by artificial intelligence. It would offer truly personalized instruction to any child with a laptop — no human required.


“I felt that they’re ready to do something very grand in the space,” he said in an interview. 

Nitta persuaded his bosses to throw more than $100 million at the effort, bringing together 130 technologists, including 30 to 40 Ph.D.s, across research labs on four continents. 

But by 2017, the tutoring moonshot was essentially dead, and Nitta had concluded that effective, long-term, one-on-one tutoring is “a terrible use of AI — and that remains today.”

For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.

It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”

His five-year journey to essentially a dead-end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.

Some of the leading lights of ed tech are trying to pick up where Watson left off, offering AI tools that promise to help teach students. Sal Khan, founder of Khan Academy, last year said AI has the potential to bring “probably the biggest positive transformation” that education has ever seen. He wants to give “every student on the planet an artificially intelligent but amazing personal tutor.”

A 25-year journey

To be sure, research on high-dosage, one-on-one, in-person tutoring is encouraging: It’s among the most effective interventions available, offering significant improvement in students’ academic performance, particularly in subjects like math, reading and writing.

But traditional tutoring is also “breathtakingly expensive and hard to scale,” said Paige Johnson, a vice president of education at Microsoft. One school district in West Texas, for example, recently spent a chunk of its federal pandemic relief funds to tutor 6,000 students. The expense, Johnson said, puts it out of reach for most parents and school districts.


For IBM, the opportunity to rebalance the equation in kids’ favor was hard to resist. 

The Watson lab is legendary in the computer science field, with six Turing Award winners among its ranks. It’s home to countless innovations, from barcodes to the magnetic stripes on credit cards. It’s also where, in 1997, Deep Blue beat Garry Kasparov, essentially inventing the notion that AI could “think” like a person.

Chess enthusiasts watch World Chess champion Garry Kasparov on a television monitor as he holds his head in his hands at the start of the sixth and final match May 11, 1997 against IBM’s Deep Blue computer in New York. Kasparov lost this match in just 19 moves. (Stan Honda/Getty)

The heady atmosphere, Nitta recalled, inspired “a very deep responsibility to do something significant and not something trivial.”

Within a few years of Watson’s victory, Nitta, who had arrived in 2000 as a chip technologist, rose to become IBM Research’s global head of AI solutions for learning. For the Watson project, he said, “I was just given a very open-ended responsibility: Take Watson and do something with it in education.”

Nitta spent a year simply reading up on how learning works. He studied cognitive science, neuroscience and the decades-long history of “intelligent tutoring systems” in academia. Foremost in his reading list was the research of Stanford neuroscientist Vinod Menon, who’d put elementary schoolers through a 12-week math tutoring session, collecting before-and-after scans of their brains using an MRI. Tutoring, he found, produced nothing less than an increase in neural connectivity. 

Nitta returned to his bosses with the idea of an AI-powered cognitive tutor. “There’s something I can do here that’s very compelling,” he recalled saying, “that can broadly transform learning itself. But it’s a 25-year journey. It’s not a two-, three-, four-year journey.”

IBM drafted two of the highest-profile partners possible in education: the children’s media powerhouse Sesame Workshop and Pearson, the international publisher.

One product envisioned was a voice-activated Elmo doll that would serve as a kind of digital tutoring companion, interacting fully with children. Through brief conversations, it would assess their skills and provide spoken responses to help kids advance.

One proposed application of IBM’s planned Watson tutoring app was to create a voice-activated Elmo doll that would be an interactive digital companion. (Getty)

Meanwhile, Pearson promised that it could soon allow college students to “dialogue with Watson in real time.”

Nitta’s team began designing lessons and putting them in front of students — both in classrooms and in the lab. In order to nurture a back-and-forth between student and machine, they didn’t simply present kids with multiple-choice questions, instead asking them to write responses in their own words.

It didn’t go well.

Some students engaged with the chatbot, Nitta said. “Other students were just saying, ‘IDK’ [I don’t know]. So they simply weren’t responding.” Even those who did began giving shorter and shorter answers. 

Nitta and his team concluded that a cold reality lay at the heart of the problem: For all its power, Watson was not very engaging. Perhaps as a result, it also showed “little to no discernible impact” on learning. It wasn’t just dull; it was ineffective.

Satya Nitta (left) and part of his team at IBM’s Watson Research Center, which spent five years trying to create an AI-powered interactive tutor using the Watson supercomputer.

“Human conversation is very rich,” he said. “In the back and forth between two people, I’m watching the evolution of your own worldview.” The tutor influences the student — and vice versa. “There’s this very shared understanding of the evolution of discourse that’s very profound, actually. I just don’t know how you can do that with a soulless bot. And I’m a guy who works in AI.”

When students’ usage time dropped, “we had to be very honest about that,” Nitta said. “And so we basically started saying, ‘OK, I don’t think this is actually correct. I don’t think this idea — that an intelligent tutoring system will tutor all kids, everywhere, all the time — is correct.”

‘We missed something important’

IBM soon switched gears, debuting another crowd-pleasing Watson variation — this time, a touching throwback: It engaged in formal debate. In a televised demonstration in 2019, it went up against debate champ Harish Natarajan on the topic “Should we subsidize preschools?” Among its arguments for funding, the supercomputer offered, without a whiff of irony, that good preschools can prevent “future crime.” Its current iteration focuses on helping businesses build AI applications like “intelligent customer care.”

Nitta left IBM, eventually taking several colleagues with him to create a startup called Merlyn Mind. It uses voice-activated AI to safely help teachers do workaday tasks such as updating digital gradebooks, opening PowerPoint presentations and emailing students and parents.

Thirteen years after Watson’s stratospheric Jeopardy! victory and more than one year into the Age of ChatGPT, Nitta’s expectations about AI couldn’t be more down-to-earth: His AI powers what’s basically “a carefully designed assistant” to fit into the flow of a teacher’s day. 

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.” 

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

These notions aren’t news to those who do tutoring for a living. Varsity Tutors, which offers live and online tutoring in 500 school districts, relies on AI to power a lesson plan creator that helps personalize instruction. But when it comes to the actual tutoring, humans deliver it, said Anthony Salcito, chief institution officer at Nerdy, the company that operates Varsity Tutors.

”The AI isn’t far enough along yet to do things like facial recognition and understanding of student focus,” said Salcito, who spent 15 years at Microsoft, most of them as vice president of worldwide education. “One of the things that we hear from teachers is that the students love their tutors. I’m not sure we’re at a point where students are going to love an AI agent.”


The No. 1 factor in a student’s tutoring success is showing up consistently, research suggests. As smart and efficient as an AI chatbot might be, it’s an open question whether most students, especially struggling ones, would show up for an inanimate agent or develop a sense of respect for its time.

When Salcito thinks about what AI bots now do in education, he’s not impressed. Most, he said, “aren’t going far enough to really rethink how learning can take place.” They end up simply as fast, spiffed-up search engines. 

In most cases, he said, the power of one-on-one, in-person tutoring often emerges as students begin to develop more honesty about their abilities, advocate for themselves and, in a word, demand more of school. “In the classroom, a student may say they understand a problem. But they come clean to the tutor, where they expose, ‘Hey, I need help.’”

Cognitive science suggests that students who aren’t motivated or who are uncertain about a topic need more than a bot to make progress: a focused, caring human, watching carefully, asking tons of questions and reading students’ cues.

Jeremy Roschelle, a learning scientist and an executive director of Digital Promise, a federally funded research center, said usage with most ed tech products tends to drop off. “Kids get a little bored with it. It’s not unique to tutors. There’s a newness factor for students. They want the next new thing.” 


Even now, Nitta points out, research shows that big commercial AI applications don’t seem to hold users’ attention as well as top entertainment and social media sites like YouTube, Instagram and TikTok. One analysis dubbed the user engagement of sites like ChatGPT “lackluster,” finding that the proportion of monthly active users who engage with them in a single day was only about 14%, suggesting that such sites aren’t very “sticky” for most users.

For social media sites, by contrast, it’s between 60% and 65%. 

One notable AI exception: Character.ai, an app that allows users to create companions of their own among figures from history and fiction and chat with the likes of Socrates and Bart Simpson. It has a stickiness score of 41%.
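The “stickiness” measure behind these comparisons is the standard DAU/MAU ratio: the share of a month’s active users who also show up on a typical day. A quick illustration (the raw counts below are invented; only the ratios come from the article):

    def stickiness(daily_active: int, monthly_active: int) -> float:
        """DAU/MAU: fraction of monthly users active on a typical day."""
        return daily_active / monthly_active

    print(stickiness(14_000_000, 100_000_000))  # 0.14, the ChatGPT-like figure
    print(stickiness(41_000_000, 100_000_000))  # 0.41, the Character.ai figure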

As startups offer “your child’s superhuman tutor” starting at $29 per month, and Khan Academy publicly tests its popular Khanmigo AI tool, Nitta maintains that there’s little evidence from learning science that, absent a strong outside motivation, people will spend enough time with a chatbot to master a topic.

“We are a very deeply social species,” said Nitta, “and we learn from each other.”

IBM declined to comment on its work in AI and education, as did Sesame Workshop. A Pearson spokesman said that since last fall it has been beta-testing AI study tools keyed to its e-textbooks, among other efforts, with plans this spring to expand the number of titles covered.

Getting ‘unstuck’

IBM’s experiences notwithstanding, the search for an AI tutor has continued apace, this time with more players than just a legacy research lab in suburban New York. Using the latest affordances of so-called large language models, or LLMs, technologists at Khan Academy believe they are finally making the first halting steps in the direction of an effective AI tutor. 

Kristen DiCerbo remembers the moment her mind began to change about AI. 

It was September 2022, and she’d only been at Khan Academy for a year-and-a-half when she and founder Khan got access to a beta version of ChatGPT. OpenAI, ChatGPT’s creator, had asked Microsoft co-founder Bill Gates for more funding, but he told them not to come back until the chatbot could pass an Advanced Placement biology exam.

Khan Academy founder Sal Khan has said AI has the potential to bring “probably the biggest positive transformation” that education has ever seen. He wants to give every student an “artificially intelligent but amazing personal tutor.” (Getty)

So OpenAI queried Khan for sample AP biology questions. He and DiCerbo said they’d help in exchange for a peek at the bot — and a chance to work with the startup. They were among the first people outside of OpenAI to get their hands on GPT-4, the LLM that powers the upgraded version of ChatGPT. They were able to test out the AI and, in the process, become amateur prompt engineers before anyone had even heard of the term.

Like many users typing in queries in those first heady days, the pair initially just marveled at the sophistication of the tool and its ability to return what felt, for all the world, like personalized answers. With DiCerbo working from her home in Phoenix and Khan from the nonprofit’s Silicon Valley office, they traded messages via Slack.

Kristen DiCerbo introduces users to Khanmigo in a Khan Academy promotional video. (YouTube)

“We spent a couple of days just going back and forth, Sal and I, going, ‘Oh my gosh, look what we did! Oh my gosh, look what it’s saying — this is crazy!’” she told an audience during a recent at the University of Notre Dame. 

She recounted asking the AI to help write a mystery story in which shoes go missing in an apartment complex. In the back of her mind, DiCerbo said, she planned to make a dog the shoe thief, but didn’t reveal that to ChatGPT. “I started writing it, and it did the reveal,” she recalled. “It knew that I was thinking it was going to be a dog that did this, from just the little clues I was planting along the way.”

More tellingly, it seemed to do something Watson never could: have engaging conversations with students.

DiCerbo recounted talking to a high school student they were working with who told them about an interaction she’d had with ChatGPT around The Great Gatsby. She asked it about F. Scott Fitzgerald’s famous green light, which scholars have long interpreted as symbolizing Jay Gatsby’s out-of-reach hopes and dreams.

“It comes back to her and asks, ‘Do you have hopes and dreams just out of reach?’” DiCerbo recalled. “It had this whole conversation” with the student.

The pair soon tore up their 2023 plans for Khan Academy. 

It was a stunning turn of events for DiCerbo, a Ph.D. educational psychologist and former senior Pearson research scientist who had spent more than a year on the failed Watson project. In 2016, Pearson announced that Watson would soon be able to chat with college students in real time to guide them in their studies. But it was DiCerbo’s teammates, about 20 colleagues, who had to actually train the supercomputer on thousands of student-generated answers to questions from textbooks — and tempt instructors to rate those answers.

Like Nitta, DiCerbo recalled that at first things went well. They found a natural science textbook with a large user base and set Watson to work. “You would ask it a couple of questions and it would seem like it was doing what we wanted to,” answering student questions via text.

But invariably if a student’s question strayed from what the computer expected, she said, “it wouldn’t know how to answer that. It had no ability to freeform-answer questions, or it would do so in ways that didn’t make any sense.” 

After more than a year of labor, she realized, “I had never seen the ‘OK, this is going to work’ version” of the hoped-for tutor. “I was always at the ‘OK, I hope the next version’s better.’”

But when she got a taste of ChatGPT, DiCerbo immediately saw that, even in beta form, the new bot was different. Using software that quickly predicted the most likely next word in any conversation, ChatGPT was able to engage with its human counterpart in what seemed like a personal way.
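That “most likely next word” machinery can be demonstrated with a toy model. The sketch below counts which word follows which in a tiny corpus and picks the most frequent continuation; real systems like ChatGPT do this over subword tokens with a large neural network rather than raw counts, but the prediction step is conceptually similar:

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # For each word, count how often every other word follows it.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev):
        """Return the most frequent continuation seen in the corpus."""
        options = follows[prev]
        return options.most_common(1)[0][0] if options else None

    print(next_word("the"))  # 'cat' - it follows 'the' twice, 'mat' once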

Since its debut in March 2023, Khanmigo has turned heads with what many users say is a helpful, easy-to-use, natural language interface, though a few users have pointed out that it sometimes makes mistakes.

Surprisingly, DiCerbo doesn’t consider the popular chatbot a full-time tutor. As sophisticated as AI might now be in motivating students to, for instance, try again when they make a mistake, “It’s not a human,” she said. “It’s also not their friend.”


Khan Academy’s own research shows the tool is effective with as little as 30 minutes of practice and feedback per week. But even as many startups promise the equivalent of a one-on-one human tutor, DiCerbo cautions that 30 minutes is not going to produce miracles. Khanmigo “is not a solution that’s going to replace a human in your life,” she said. “It’s a tool in your toolbox that can help you get unstuck.”

‘A couple of million years of human evolution’

For his part, Nitta says that for all the progress in AI, he’s not persuaded that we’re any closer to a real-live tutor that would offer long-term help to most students. If anything, Khanmigo and probabilistic tools like it may prove to be effective “homework helpers.” But that’s where he draws the line. 

“I have no problem calling it that, but don’t call it a tutor,” he said. “You’re trying to endow it with human-like capabilities when there are none.”  

Unlike humans, who will typically do their best to respond genuinely to a question, the way AI bots work — by digesting pre-existing texts and other information to come up with responses that seem human — is akin to a “statistical illusion,” writes one Harvard Business School professor. “They’ve just been well-trained by humans to respond to humans.”

Researcher Sidney Pressey’s 1928 Testing Machine, one of a series of so-called “teaching machines” that he and others believed would advance education through automation.

Largely because of this, Nitta said, there’s little evidence that a chatbot will continuously engage people as a good human tutor would.

What would change his mind? Several years of research by an independent third party showing that tools like Khanmigo actually make a difference on a large scale — something that doesn’t exist yet.

DiCerbo also maintains her hard-won skepticism. She knows all about the halting early attempts at automated instruction a century ago, when experimental, punch-card-operated “teaching machines” guided students through rudimentary multiple-choice lessons, often with simple rewards at the end.

In her talks, DiCerbo urges caution about AI revolutionizing education. As much as anyone, she is aware of the expensive failures that have come before. 

Two women stand beside open drawers of computer punch card filing cabinets. (American Stock/Getty Images)

In her recent talk at Notre Dame, she did her best to manage expectations of the new AI, which seems so limitless. In one-to-one teaching, she said, there’s an element of humanity “that we have not been able to — and probably should not try — to replicate in artificial intelligence.” In that respect, she’s in agreement with Nitta: Human relationships are key to learning. In the talk, she noted that students who have a person in school who cares about their learning have higher graduation rates. 

But still.

ChatGPT now has 100 million weekly users, according to OpenAI. That record-fast uptake makes her think “there’s something interesting and sticky about this for people that we haven’t seen in other places.”

Being able to engineer prompts in plain English opens the door for more people, not just engineers, to create tools quickly and iterate on what works, she said. That democratization could mean the difference between another failed undertaking and agile tools that actually deliver at least a version of Watson’s promise. 

An early prototype of IBM’s Watson supercomputer in Yorktown Heights, New York. In 2011, the system was the size of a master bedroom. (Wikimedia Commons)

Seven years after he left IBM to start his new endeavor, Nitta is philosophical about the effort. He takes virtually full responsibility for the failure of the Watson moonshot. In retrospect, even his 25-year timeline for success may have been naive.

“What I didn’t appreciate is, I actually was stepping into a couple of million years of human evolution,” he said. “That’s the thing I didn’t appreciate at the time, which I do in the fullness of time: Mistakes happen at various levels, but this was an important one.”

‘Distrust, Detection & Discipline:’ New Data Reveals Teachers’ ChatGPT Crackdown /article/distrust-detection-discipline-new-data-reveals-teachers-chatgpt-crackdown/ Tue, 02 Apr 2024 20:01:00 +0000 /?post_type=article&p=724713 New survey data puts hard numbers behind the steep rise of ChatGPT and other generative AI chatbots in America’s classrooms — and reveals a big spike in student discipline as a result. 

As artificial intelligence tools become more common in schools, most teachers say their districts have adopted guidance and training for both educators and students, according to a new report by the nonprofit Center for Democracy and Technology. What this guidance lacks, however, are clear instructions on how teachers should respond if they suspect a student used generative AI to cheat.


“Though there has been positive movement, schools are still grappling with how to effectively implement generative AI in the classroom — making this a critical moment for school officials to put appropriate guardrails in place to ensure that irresponsible use of this technology by teachers and students does not become entrenched,” report co-authors Maddy Dwyer and Elizabeth Laird write.

Among the middle and high school teachers who responded to the online survey, which was conducted in November and December, 60% said their schools permit the use of generative AI for schoolwork — double the number who said the same just five months earlier on a similar survey. And while a resounding 80% of educators said they have received formal training about the tools, including on how to incorporate generative AI into assignments, just 28% said they’ve received instruction on how to respond if they suspect a student has used ChatGPT to cheat. 

That doesn’t mean, however, that students aren’t getting into trouble. Among survey respondents, 64% said they were aware of students who were disciplined or faced some form of consequences — including not receiving credit for an assignment — for using generative AI on a school assignment. That represents a 16 percentage-point increase from August. 

The tools have also affected how educators view their students, with more than half saying they’ve grown distrustful of whether their students’ work is actually theirs. 

Fighting fire with fire, a growing share of teachers say they rely on digital detection tools to sniff out students who may have used generative AI to plagiarize. Sixty-eight percent of teachers — and 76% of licensed special education teachers — said they turn to generative AI content detection tools to determine whether students’ work is actually their own. 

The findings carry significant equity concerns for students with disabilities, researchers concluded, especially given evidence that AI detection tools are often ineffective.

University of Texas at El Paso To Use Faculty Survey Results For AI Strategy /article/utep-to-use-faculty-survey-results-to-enhance-campus-ai-strategy/ Thu, 14 Mar 2024 16:30:00 +0000 /?post_type=article&p=723865 This article was originally published in El Paso Matters.

A University of Texas at El Paso team plans to conduct a survey this spring and act on the data to offer UTEP instructors the necessary help to address the growing capabilities and complexities of artificial intelligence, including ChatGPT.

Jeff Olimpo, director of the campus’ Institute for Scholarship, Pedagogy, Innovation and Research Excellence, said the goal of this study will be to determine how much instructors know about AI and how comfortable they would be incorporating the technology into their courses.

Armed with that knowledge, the InSPIRE team will develop a multi-pronged, hybrid effort that builds on every level of understanding, from basic tutorials to in-depth strategies for enhancing instruction, including ways students can use AI in their fields of study.


This effort is the follow-up step to InSPIRE’s spring 2023 workshops that led to the university’s initial ChatGPT guidelines. Since then, the team has incorporated other concepts used at institutions within and beyond the University of Texas System.

“We essentially created a Frankenstein of sorts,” Olimpo said.

Jeff Olimpo, director of UTEP’s Institute for Scholarship, Pedagogy, Innovation and Research Excellence (UTEP)

The latest incarnation included recommendations for what might be appropriate to include in a syllabus, such as whether AI is prohibited, allowed, or allowed with restrictions. The team also created a guide with a Frequently Asked Questions section covering AI restrictions and the procedures to follow if an instructor suspects a student used AI in an assignment without crediting the technology. The information was shared with faculty in January after it was approved by John Wiebe, provost and vice president for Academic Affairs.

Olimpo called the guidelines “brief, digestible and accessible,” and he stressed that instructors ultimately would decide what was best for their classes.

Gabriel Ibarra-Mejia, associate professor of public health sciences, was among the UTEP faculty who responded to the university’s recommendations. He said that, like it or not, ChatGPT (Generative Pre-trained Transformer) is now part of the education equation, and he planned to embrace it, to a point.

The professor said he allows students to use it in assignments as long as they cite its use and the reasons behind it, such as developing an outline or polishing a report’s grammar or flow. What he does not want is for AI to replace thought and knowledge, especially among his students, who may be health care professionals someday.

“I’m more concerned about how it might replace critical thinking,” said Ibarra-Mejia, who mentioned how he had received student papers where he suspected AI use because the responses had nothing to do with the question. “I’m concerned that the answers I get from a student might be from ChatGPT.”

Gabriel Ibarra-Mejia, associate professor of public health sciences at UTEP, said that he will allow students to use ChatGPT — with some restrictions — because it is an academic tool, but his concern is that it could lead to diminished critical thinking if used poorly. (Daniel Perez / El Paso Matters)

Melissa Vito, vice provost for Academic Innovation at UT San Antonio, said AI has been around for decades and that ChatGPT is part of the evolution. She is the lead organizer of an AI conference for UT System institutions this week at her campus.

“The consensus in higher ed is that instructors need to use it, and students need to understand it and be able to use it,” Vito said.

In 2021, members of agreed that AI would influence all industries, but those tech leaders suggested that it would have the most effect on industries such as logistics, cybersecurity, health care, research and development, financial services, advertising, e-commerce, manufacturing, public transportation, and media and entertainment.

A research study released in March 2023 by OpenAI, the creator of ChatGPT, showed that approximately 80% of U.S. workers could have at least 10% of their work affected by GPT, and that 19% of workers could see at least 50% of their jobs affected by it. The projected effects span all wage levels.

Melissa Vito, vice provost for Academic Innovation at the University of Texas at San Antonio (UTSA)

While unaware of any UT System mandates to use ChatGPT, Vito said institutions are creating opportunities for faculty to learn about the technology so they can better explain its uses to their students. She said the best path for higher education is to work with the AI industry to address concerns, such as data privacy, that could restrict access to what is produced and how it is used.

Vito referenced a collaboration announced in January. Among the goals of that relationship is to introduce advanced capabilities to the institution, helping faculty and staff investigate the possibilities of generative AI, which can create text, images and more in response to prompts.

The UTSA official said the purpose of the AI conference is to bring together administrators, faculty, staff and students with the broadest AI competencies to share their experiences and create a strong framework for how the UT System can benefit from the transformative effects of generative AI academically and socially.

Marcela Ramirez, associate vice provost for Teaching, Learning & Digital Transformation at UTSA, helped develop the conference’s workshops and panel discussions with representatives from sister institutions. They will cover ethical use, practical applications and how AI can be used to help students with critical thinking and problem-solving skills.

Ramirez, a two-time UTEP graduate who earned her BBA in 2008 and her MBA five years later, said the content will support faculty who want to update their courses with AI and help them explain to students AI’s current limitations and future opportunities.

“What are the lessons learned?” asked Ramirez, who worked at UTEP for more than 10 years. “And what’s next?”

This first appeared on El Paso Matters and is republished here under a Creative Commons license.
