data privacy – The 74, America's Education News Source

California Agency Fines Company For Violating Ed Tech Privacy Law
/article/california-agency-fines-company-for-violating-ed-tech-privacy-law/
Thu, 05 Mar 2026 19:30:00 +0000

This article was originally published in

This story was originally published by . Sign up for their newsletters.

Before they could attend school football games or school plays, high school students across California had to hand over their personal information to a ticketing platform, GoFan, which then sold that data to advertisers, state privacy regulators said. The parent company, PlayOn, which has contracted with roughly 1,400 California schools, repeatedly violated state privacy law in 2023 and 2024, according to a January order filed by the state’s privacy protection agency.

The California Privacy Protection Agency, sometimes known as CalPrivacy, announced the order Tuesday, saying it is fining PlayOn $1.1 million for failing to give students and families a way to opt out of their data collection.


Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter


PlayOn offers a slew of online products that coordinate ticket and merchandise sales for schools and youth sports organizations, along with other services, such as fundraising and streaming. Its subsidiaries include GoFan, MaxPreps, and NFHS Network, which are used by school districts stretching from Los Angeles and San Diego to Modoc, Mono, and Sierra counties, the order says. The company’s annual gross revenue is over $26 million.

When users tried to access tickets for school events through one of PlayOn’s platforms, GoFan, a pop-up appeared, prompting the ticket-holder to agree to the company’s privacy policy, which allowed the sale of personal data. There was no way to say no, the order said: The pop-up obscured the screen so that it was impossible to access the ticket without agreeing to the company’s terms.

“Students trying to go to prom or a high school football game shouldn’t have to leave their privacy rights at the door,” said Michael Macko, CalPrivacy’s head of enforcement, in a press release. “You couldn’t attend these events without showing your ticket, and you couldn’t show your ticket without being tracked for advertising. California’s privacy law does not work that way. Businesses must ensure they offer lawful ways for Californians to opt out, particularly with captive audiences.”

PlayOn “does not admit liability for any violation” of state law, according to the disciplinary order, which effectively functions as a settlement agreement. The order also notes that the company significantly changed its privacy policy in December 2024, allowing users to opt out of data collection, bringing the company into compliance with the state law. These data privacy matters have been “fully resolved” since then, said James Dickinson, the company’s senior vice president of marketing, in an email.

The fine marks the first time that the state privacy agency has gone after a company for violating the rights of students and schools, according to the press release. The agency was formed in 2020 when voters backed a ballot measure calling for increased enforcement of data privacy laws.

Exceptions to California’s privacy law

California has some of the strongest data privacy laws in the country, including a landmark 2018 law that requires large for-profit companies to give users a relatively easy way to opt out of data collection or delete their data.

Enforcing the law can prove tricky, though. Last year, an investigation found that more than 30 companies made it difficult for customers to exercise their privacy rights. While the companies were technically abiding by the law, which requires them to give customers a way to delete their information, they used special code to hide that information from Google search results.

The 2018 law also has a number of exceptions, including for non-profit organizations and for companies that buy, sell, or share data from fewer than 100,000 California residents or households.

The state privacy agency is responsible for enforcing the law. In the past 12 months, the agency has found violations by the menswear company , the rural supply retailer , and the automaker , each resulting in fines ranging from $345,000 to $1.35 million. In January, the state said it fined Datamasters, a data broker, for selling the names, addresses, phone numbers, and email addresses of “millions of people with Alzheimer’s disease, drug addiction, bladder incontinence, and other health conditions for targeted advertising.” The broker also traded data on individuals’ perceived race, political views, and banking activity.

California has additional protections regarding the collection and sale of students’ data, but those laws do not necessarily cover apps and services used outside of the classroom, even when that technology is a de facto requirement for participation in school sports or extracurriculars. Assemblymember , a San Luis Obispo Democrat, introduced a bill this year that would expand the number of tech companies that must abide by California education privacy rules, but the laws could still leave out many popular student services, according to reporting last month.

PlayOn did not respond to questions about its compliance with California school privacy law. The PlayOn privacy policy says the company doesn’t collect personal information from “minors under the age of 16 without proper consent,” but it doesn’t mention anything about students who are age 16 or 17.

California law prohibits companies from selling any K-12 student’s data, regardless of the student’s age.

This article was republished under the license.

Online Censorship in Schools Is ‘More Pervasive’ than Expected, New Data Shows
/article/schools-use-of-web-filtering-subjective-and-unchecked/
Thu, 23 Jan 2025 13:30:00 +0000

This article was originally published in

Aleeza Siddique, 15, was in a Spanish class earlier this year in her Northern California high school when a lesson about newscasts got derailed by her school鈥檚 internet filter. Her teacher told the class to open up their school-issued Chromebooks and explore a list of links he had curated from the Spanish language broadcast news giant Telemundo. The students tried, but every single link turned up the same page: a picture of a padlock. 

“None of it was available to us,” Aleeza said. “The site was completely blocked.”

She said her teacher scrambled to pivot and fill the 90-minute class with other activities. From what she recalls, they went over vocabulary lists and independently clicked through online quizzes from Quizlet — a decidedly less dynamic use of time.




New data released by the D.C.-based Center for Democracy & Technology shows just how often some of that blocking happens nationwide. The nonprofit digital rights advocacy organization conducted its fifth annual survey of middle and high school teachers and parents as well as high school students about a range of tech issues. About 70% of both teachers and students this year said web filters get in the way of students’ ability to complete their assignments.

Virtually all schools use some type of web filter to comply with the Children’s Internet Protection Act, which requires districts taking advantage of the federal E-rate program for discounted internet and telecommunications equipment to keep kids from seeing graphic and obscene images online. An investigation by The Markup, which is now a part of CalMatters, discovered far more expansive blocking by school districts than federal law requires, some of it political, mirroring culture war battles over what students have access to in school libraries. That investigation found school districts blocking access to sex education and LGBTQ+ resources, including suicide prevention. It also found routine blocking of websites students seek out for academic research. And because school districts tend to set different restrictions for students and staff, teachers can be frustrated by the filters because of how they complicate lesson planning.

Web filtering is ‘subjective and unchecked’

Elizabeth Laird, director of equity in civic technology for the center and lead author of the report, said The Markup’s reporting helped inspire additional survey questions to better understand how schools are using filters as a “subjective and unchecked” method of restricting students’ access to information.

“The scope of what is blocked is more pervasive and value-laden than I think we initially even knew to ask last year,” Laird said.

While past surveys have revealed how often students and teachers report disproportionate filtering of content related to reproductive health, LGBTQ+ issues and content about people of color, the center asked respondents this year if they thought content associated with or about immigrants was more likely to be blocked. About one-third of students said yes. 

Aleeza would have said yes, after her experience with Telemundo. The California teen said how often she runs into blocks depends on how much research she’s trying to do and how much of it she has to do on her school computer. When she was taking a debate class, she ran into the blocks regularly while researching controversial topics. An article in Slate magazine about LGBTQ+ rights gave her a block screen, for example, because the entire news website is blocked. She said she avoids her school Chromebook as much as possible, doing homework on her personal laptop away from school Wi-Fi whenever she can.


Nearly one-third of teachers surveyed by the Center for Democracy & Technology said their schools block content related to the LGBTQ+ community. About half said information about sexual orientation and reproductive health is blocked. And Black and Latino students were more likely to say content related to people of color is disproportionately blocked on their school devices.

For students like Aleeza, the blocking is frustrating in practice as well as principle. 

“The amount that they’re policing is actively interfering with our ability to have an education,” she said. Often, she has no idea why a website triggers the block page. Aleeza said it feels arbitrary and thinks her school should be more transparent about what it’s blocking and why.

“We should have a right to know what we’re being protected from,” she said.

Audrey Baime, Olivia Brandeis, and Samantha Yee, all members of the CalMatters Youth Journalism Initiative, contributed reporting for this story.

This was originally published on .

AI Tools and Student Privacy: 9 Tips for Teachers
/article/ai-tools-and-student-privacy-9-tips-for-teachers/
Wed, 01 Jan 2025 17:30:00 +0000

This article was originally published in

Since the release of ChatGPT to the public in November 2022, the number of AI tools has skyrocketed, and many advocates now tout the potential of AI to change education.

But districts have not been as fast in providing teachers with training. As a result, many are experimenting without any guidance, an analysis found.

To learn about how teachers and other educators can protect student data and abide by the law when using AI tools, Chalkbeat consulted documents and interviewed specialists from school districts, nonprofits, and other groups. Here are nine suggestions from experts.




Consult with your school district about AI

Navigating the details about the privacy policies in each tool can be challenging for a teacher. Some districts list tools that they have vetted or with which they have contracts.

Give preference to these tools, if possible, and check whether your district has any recommendations about how to use them. When a tool has a contract with a school or a district, the company is supposed to protect students’ data and follow federal and state law. Checking with your school’s IT or education technology department is also a good option.

It is also essential to investigate if your school or district has guidelines or policies for the general use of AI. These documents usually review privacy risks and ethical questions.

Check for reviews about AI platforms鈥 safety

Organizations like Common Sense Media review ed-tech tools and provide feedback on their safety.

Be careful when platforms say they comply with laws like the Family Educational Rights and Privacy Act, or FERPA, and the Children’s Online Privacy Protection Rule. According to the law, the school is ultimately responsible for children’s data and must be aware of any information it shares with a third party.

Study the AI platform鈥檚 privacy policy and terms

The privacy policy and the terms of use should provide some answers about how a company uses the data it collects from you. Make sure to read them carefully, and look for some of the following information:

  • What information does the platform collect?
  • How does the platform use the collected data? Is it used to determine which ads it will show you? Does it share data with any other company or platform?
  • For how long does it keep the collected data?
  • Is the data it collects used to train the AI model?

The list of questions that Common Sense Media uses for its privacy evaluations is available online.

You should avoid signing up for platforms that collect a broad range of data or that are not clear in their policies. One potential red flag: vague claims about “retaining personal information for as long as necessary” and “sharing data with third parties to provide services.”
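As a rough illustration of that kind of check (not a substitute for reading the full policy), a short script can scan a policy's text for vague phrases like the ones above. The phrase list here is a hypothetical starting point, not an authoritative set of red flags:

```python
# Toy red-flag scan for privacy-policy text. The phrases are examples of the
# vague language described above; a real review means reading the whole policy.
RED_FLAGS = [
    "as long as necessary",           # vague retention period
    "share data with third parties",  # broad disclosure language
    "tailor advertising",             # data reused for ad targeting
]

def flag_policy(policy_text: str) -> list[str]:
    """Return the red-flag phrases that appear in the policy text."""
    lowered = policy_text.lower()
    return [phrase for phrase in RED_FLAGS if phrase in lowered]

policy = ("We may retain personal information for as long as necessary "
          "and share data with third parties to provide services.")
print(flag_policy(policy))  # → ['as long as necessary', 'share data with third parties']
```

A hit does not prove a platform is unsafe, and a clean result does not prove it is safe; the scan only points a reader at passages worth a closer look.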

Bigger AI platforms can be safer

Big companies like OpenAI, Google, Meta, and others are under more scrutiny: NGOs, reporters, and politicians tend to investigate their privacy policies more frequently. They also have bigger teams and resources that allow them to invest heavily in compliance with privacy regulations. For these reasons, they tend to have better safeguards than small companies or start-ups.

You still have to be careful. Most of these platforms are not explicitly intended for educational purposes, making them less likely to create specific policies regarding student or teacher data.

Use the tools as an assistant, not a replacement

Even though these tools provide better results when you input more information, try to use them for tasks that don’t require much information about your students.

AI tools can help provide suggestions on how to ask questions about a book, set up document templates, like an Individualized Education Program plan or a behavioral assessment, or create assessment rubrics.

But even tasks that can seem mundane can increase risks. For example, providing the tool with a list of students and their grades on a specific assignment and asking it to organize it in alphabetical order could represent a violation of student privacy.

Turn on maximum privacy settings for AI platforms

Some tools allow you to adjust your privacy settings. Look online for tutorials on the best privacy settings for the tool that you are using and how to activate them. ChatGPT, for example, allows users to stop it from using their data to train AI models.

Doing this does not necessarily make AI tools completely safe or compliant with student privacy regulations.

Never input personal information into AI platforms

Even if you take all the steps above, do not input student information. Information that is restricted can include:

  • Personal information: a student’s name, Social Security number, education ID, names of parents or other relatives, address and phone number, location of birth, or any other information that can be used to identify a student.
  • Academic records: reports about absences, grades, and student behaviors in the school, student work, and teachers’ feedback on and assessments of student work.

This may be harder than it sounds.

If teachers upload student work to a platform to get help with grading, for example, they should remove all identification, including the student’s name, and replace it with an alias or random number that can’t be traced back to the student. It’s also wise to ensure the students haven’t included any personal information, like their place of birth, where they live or personal details about their families, friends, religious or political inclination, sexual orientation, and club affiliations.
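The alias step can be sketched in code. This is a hypothetical helper, assuming the teacher already has the class roster; it will not catch nicknames, misspellings, or indirect identifiers:

```python
import hashlib

def deidentify(text: str, student_names: list[str]) -> str:
    """Replace each known student name with a stable, untraceable alias.

    The alias is a truncated salted hash: the same name maps to the same
    placeholder within one assignment, but the mapping can't be reversed
    back to the student.
    """
    salt = "rotate-this-per-assignment"  # assumed secret chosen by the teacher
    for name in student_names:
        digest = hashlib.sha256((salt + name).encode()).hexdigest()[:6]
        text = text.replace(name, f"Student-{digest}")
    return text

print(deidentify("Maria Lopez argues in her essay that...", ["Maria Lopez"]))
```

Even with a helper like this, a human read-through is still needed for the personal details students mention inside their own writing.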

One exception is for platforms approved by the school or the district and holding contracts with them.

Be transparent with others about using AI

Communicate with your school supervisors, principal, parents, and students about when and how you use AI in your work. That way, everyone can ask questions and bring up concerns you may not know about.

It is also a good way to model behavior for students. If you ask students to disclose when they use AI to complete assignments, for example, being transparent in turn about how you use AI might foster a better classroom environment.

If uncertain, ask AI platforms to delete information

In some states, the law says platforms must delete users’ information if they request it. And some companies will delete it even if you aren’t in one of these states.

Deleting the data may be challenging, and it may not solve all of the problems caused by misusing AI. Some companies may take a long time to respond to deletion requests or find loopholes in order to avoid deleting the data.

The tips listed above come from the , published by the American Federation of Teachers; the report by the U.S. Department of Education’s Office of Educational Technology; and the used by Common Sense Media to carry out its privacy evaluations.

Additional help came from Calli Schroeder, senior counsel and global privacy counsel at the Electronic Privacy Information Center; Brandon Wilmart, director of educational technology at Moore Public Schools in Oklahoma; and Anjali Nambiar, education research manager at Learning Collider.

This story was originally published by Chalkbeat. Chalkbeat is a nonprofit news site covering educational change in public schools. Sign up for their newsletters at . 

Stolen Providence School District Data May Be Making Its Way Online
/article/stolen-providence-school-district-data-may-be-making-its-way-online/
Sun, 13 Oct 2024 13:00:00 +0000

This article was originally published in

Providence public school officials last Friday were about to finalize a credit monitoring agreement to provide protection for district teachers and staff after a recent ransomware attack on the district’s network.

Then over the weekend, a video preview of selected data allegedly stolen from the Providence Public School Department (PPSD) showed up on a regular website. The site is accessible via any internet browser — what’s sometimes called the “clearnet” — unlike the dark web ransom page where the cybercriminal group Medusa first took credit for the theft.

While a forensic analysis of the breach continues, the credit monitoring agreement with an unspecified vendor was finalized as of Thursday and the district was drafting a letter to go out to the staff “very soon” with information on how to access those services, spokesperson Jay G. Wégimont said in an email.




“First and foremost, the safety and security of our staff members is of utmost importance, and the District continues to make decisions with that in mind,” Wégimont said.

“We will also continue to explore any additional services we can offer to protect the security of our staff members and students.”

Meanwhile, the data breach has yet to be formally reported to the Rhode Island Attorney General’s office, said spokesperson Brian Hodge. State law requires any municipal or government agency to inform the AG’s office, credit reporting agencies, and people affected by a breach within 30 days of the breach’s confirmation.

PPSD first used the wording “unauthorized access” to describe the breach in a Sept. 25 letter from Superintendent Javier Montañez, although the Providence School Board had used the term “breach” in a public statement on Sept. 18.

Providence Mayor Brett Smiley was “encouraged” the district was advising potentially affected staff and finalizing the credit monitoring agreement, spokesperson Anthony Vega said in a statement emailed Tuesday to Rhode Island Current.

The Providence City Council declined to comment, said spokesperson Roxie Richner in an email. Gov. Dan McKee’s office did not respond to a request for comment.

‘Robert’ makes a video

Ransomware group Medusa first took public credit for the pirated PPSD data on Sept. 16, when it demanded a $1 million ransom to be paid by the morning of Sept. 25.

Rhode Island Current previously reported that the alleged ransom landing page did not provide access to files, but did show file and folder names, as well as partially obscured screenshots of the allegedly stolen data.

The clearnet-hosted leak includes a 24-minute screen recording in which someone clicks through an assortment of the allegedly leaked files and folders on an otherwise empty Windows desktop. The post sports a disclaimer that its author is “not engaged in illegal activities” and showcases leaks only for “possible information security problems.”

The author signs off: “Traditional thanks to The Providence Public School Department for the provided data. Do not skimp on information security. Always yours. Robert.”

While the uploader does not explicitly brand themself as affiliated with Medusa, the 鈥淩obert鈥 source appears to share all the same leaks Medusa does, and both sources use the same encrypted messaging address, according to threat researchers at Bitdefender.

Ransomware attacks, and Medusa’s methodology as well, have long been associated with social engineering — like getting people to click phishing links in emails. But it’s becoming more common that outdated hardware or software are to blame, said Bill Garneau, vice president of operations at CMIT Solutions in Cranston.

“What we’ve started to see in terms of ransomware is, it’s not only business email compromise,” Garneau said. “Threat actors out there are really pursuing systems that are out of compliance.”

That could mean equipment at the end of its manufacturer-supported lifespan, or software that needs to be patched. Garneau’s company uses a framework crafted by the National Institute of Standards and Technology. One of its standards is to patch devices within 30 days of the patch release, before threat actors can exploit the vulnerabilities patches are meant to fix.
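That 30-day window is the kind of rule an IT office can check mechanically. A rough sketch, with a hypothetical inventory format rather than any district's actual tooling:

```python
from datetime import date

# The patching standard described above: apply patches within 30 days of release.
PATCH_WINDOW_DAYS = 30

def overdue_patches(pending: dict[str, date], today: date) -> list[str]:
    """Return device names whose oldest pending patch is past the window.

    `pending` maps device name -> release date of its oldest unapplied patch
    (an assumed inventory format, for illustration only).
    """
    return [device for device, released in pending.items()
            if (today - released).days > PATCH_WINDOW_DAYS]

inventory = {"lab-pc-01": date(2024, 8, 1), "office-pc-02": date(2024, 9, 20)}
print(overdue_patches(inventory, date(2024, 10, 1)))  # → ['lab-pc-01']
```

Real patch management pulls release dates from vendor feeds and scan tools; the point is simply that "out of compliance" is a measurable condition, not a vague worry.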

“If there’s a patch available, it’s because there’s a bad guy out there that knows that there’s a vulnerability, and there’s somebody that’s knocking on doors trying to find it,” Garneau said.

To insure or not to insure?

Cyber insurance policies can cover some costs incurred by attacks. But they can’t prevent future threats or suddenly make insecure networks better, Garneau noted.

“Insurance is great, right? But that’s not going to solve any problem,” Garneau said.

PPSD has not responded to requests about whether the district has cyber insurance. According to Lauren Greene, a spokesperson for the Rhode Island League of Cities and Towns, no public entity would disclose that information anyway. “As you can understand, it poses a security risk for municipalities to disclose if and what type of cybersecurity insurance that they have,” Greene said in an email.

“Municipalities continue to prioritize training for their staff in order to mitigate risk and draw awareness to the constantly evolving threats,” Greene added, and noted that a community’s IT staff may work across multiple areas or departments like public safety and schools.

A survey released Monday, however, showed that state-level IT officials and security officers are not feeling confident about the budgets for their states’ IT infrastructure.

“The attack surface is expanding as state leaders’ reliance on information becomes increasingly central to the operation of government itself,” Srini Subramanian, principal of Deloitte & Touche LLP, said in an interview with States Newsroom. “And CISOs (chief information security officers) have an increasingly challenging mission to make the technology infrastructure resilient against ever-increasing cyber threats.”

Those challenges were reflected in the survey numbers, which found almost half of respondents did not know their state’s budget for cybersecurity. Roughly 40% of state IT officers said they did not have enough funds to comply with regulations or other legal requirements.

That finding echoes a report from Moody’s, which scores and analyzes municipal bonds. “While robust cybersecurity practices can help reduce exposure, initiatives that are costly and require a shift in resources away from core services are a credit challenge,” wrote Gregory Sobel, a Moody’s analyst and assistant vice president, in the report.

Moody’s also noted that one survey showed 92% of local governments had cyber insurance, a twofold increase over five years. But that popularity came with higher rates: One county in South Carolina went from paying a $70,000 premium in 2021 to a $210,000 premium in 2022. Those higher costs are also in addition to stricter stipulations on risk management practices before a policy will pay out, like better firewalls, consistent data backups and multi-factor authentication.

Douglas W. Hubbard, the CEO of consulting firm Hubbard Decision Research and coauthor of “How to Measure Anything in Cybersecurity Risk,” told Rhode Island Current in an email that schools should exhaust the low-cost, shared or free resources available to help them manage cyber risk. Examples include the Cybersecurity and Infrastructure Security Agency (CISA) or a pilot program by the Federal Communications Commission for K-12 schools.

“For specific cybersecurity recommendations … there are a few things that are so fundamental that administrators don’t really even need a risk analysis to get started,” Hubbard said. They include training staff and students on best practices including strong passwords or avoiding mysterious links. Multi-factor authentication is “probably the single most effective technology a school could implement,” even if it involves an upfront cost, Hubbard said.

“The fundamental responsibilities of the schools should include at least using the resources which have been made available to them through the programs I mentioned,” Hubbard said. “If they aren’t doing at least that, there is room for blame.”

This article was corrected to show that Rhode Island state law requires municipal agencies to notify affected parties and the state Attorney General within 30 days of a data breach. The article originally stated 45 days, which is the timeframe required for individuals to report a breach. 

Rhode Island Current is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. Rhode Island Current maintains editorial independence. Contact Editor Janine L. Weisman for questions: info@rhodeislandcurrent.com. Follow Rhode Island Current on and .

Data Privacy Advocates Raise Alarm Over NYC’s Free Teen Teletherapy Program
/article/data-privacy-advocates-raise-alarm-over-nycs-free-teen-teletherapy-program/
Thu, 12 Sep 2024 12:30:00 +0000

This article was originally published in

New York City鈥檚 free online therapy platform for teens may violate state and federal laws protecting student data privacy, lawyers from the New York Civil Liberties Union and advocates charged in a letter Tuesday to the city鈥檚 Education and Health Departments.

Teenspace, a $26 million partnership between the city Health Department and teletherapy giant Talkspace launched in late 2023, connects city residents between ages 13 and 17 with free therapists by text, phone, or video chat.

In less than a year, roughly 16,000 students have signed up, Health Department officials said. Sign-ups disproportionately came from youth who identified as Black, Latino, Asian American and female and live in some of the city’s lowest-income neighborhoods.




Information shared with a therapist is subject to stringent protections under the federal Health Insurance Portability and Accountability Act, or HIPAA. But before connecting with a therapist through Teenspace, teens go through a registration process that asks for personal information like their name, school, mental health history, and gender identity. Advocates are concerned such information is being improperly collected and could be misused.

For one, teens enter the registration information before securing parental consent — a possible violation of federal student privacy laws, the letter contends.

And families don’t get a chance to review the privacy policy — which discloses that registration information can be used to “tailor advertising” and for marketing purposes — before entering the registration information, advocates allege. There’s an option for teens to request that their data be deleted from the company’s platform, but it’s hard to find, according to advocates.

“It’s all very invasive,” said Shannon Edwards, a parent and founder of AI For Families, an organization that seeks to help families navigate artificial intelligence, who co-authored the letter along with NYCLU and the Parent Coalition for Student Privacy. “It’s also very unclear that parents understand what they’re getting themselves into.”

Advocates also pointed to the risk of a potential data breach — something the city has experienced in recent years.

Advocates say similar concerns about the company have been circulating for years and questioned whether city officials did sufficient due diligence or built in enough additional privacy safeguards before inking the contract.

“It’s the opacity of the relationship here, and the failure to make manifest what the city is doing to ensure there isn’t this data accumulation and sharing for inappropriate purposes,” said Beth Haroules, a senior attorney at the NYCLU who co-authored the letter.

Health Department spokesperson Rachel Vick said the agency has “taken additional steps to protect the data of Teenspace users and ensure information is not collected for personal gain, including stipulations that require all client data to remain confidential during and after the completion of the city’s contract and barring use of data for any purpose other than providing the services included in the contract.”

Client data is destroyed after 30 days if a teen doesn’t connect with a therapist, officials said.

A spokesperson for Talkspace referred questions to the Health Department.

The extent to which Teenspace is subject to state and federal laws governing student privacy in educational settings is somewhat murky, given that the contract is with the city鈥檚 Health Department, not its Education Department.

But NYCLU attorneys contend “the City cannot absolve itself of its responsibility to provide the protections inherent in federal and state laws … simply because the contract sits with DOHMH instead of DOE. The service is promoted on public school websites, and it is DOE’s responsibility to ensure that student data is protected, regardless of which City agency signs the contract.”

Parents may be more inclined to trust the platform because it has a “stamp of approval” from the school system, Edwards added.

A Health Department spokesperson didn’t specify whether the program is subject to education privacy laws, but said it’s “not a school based service.”

Teenspace has been the city’s highest-profile effort to address the ongoing youth mental health crisis.

“We are meeting people where they are with a front door to the mental health system that for too long has been too hard to find,” said Ashwin Vasan, the city’s health commissioner, in May.

Some teens have praised the program, noting it’s a way to bring mental health care to young people who may not otherwise have access.

But some mental health providers have argued it can’t replace the kind of intensive care a clinician provides, especially for kids with severe mental health challenges.

Company officials shared in May that they had helped 36 teens navigate serious incidents including reports of suicide attempts and abuse — cases they referred to child protective services, in-person therapists, or hospitals.

Talkspace CEO Jon Cohen previously told Chalkbeat the company uses an artificial intelligence algorithm to scan transcripts of therapy sessions to help identify teens at risk of suicide.

Even advocates critical of Teenspace’s privacy protections acknowledge the severe shortage of mental health providers and say teletherapy can play a role in filling the gap.

“We know you cannot find providers — there is such a need,” said Haroules. But advocates said the city can do more to ensure its vendors are meeting strict standards for data privacy, especially with such sensitive information.

“Everyone thinks, well, mental health is important for kids, these kinds of services are required — when on the other side is: ‘How are they getting to it?’” said Edwards. “It doesn’t matter what the app is, there has to be a standard.”

This was originally published by Chalkbeat. Chalkbeat is a nonprofit news site covering educational change in public schools. Sign up for their newsletters.

]]>
L.A. Schools Probe Charges its Hyped, Now-Defunct AI Chatbot Misused Student Data /article/chatbot-los-angeles-whistleblower-allhere-ai/ Wed, 10 Jul 2024 10:30:00 +0000 /?post_type=article&p=729622

Independent Los Angeles school district investigators have opened an inquiry into claims that its $6 million AI chatbot — an animated sun named “Ed” celebrated as an unprecedented learning acceleration tool until the company that built it collapsed and the district was forced to pull the plug — put students’ personal information in peril.

Investigators with the Los Angeles Unified School District’s inspector general’s office conducted a video interview with Chris Whiteley, the former senior director of software engineering at AllHere, after he told 蜜桃影视 his former employer’s student data security practices violated both industry standards and the district’s own policies.

Whiteley told 蜜桃影视 he had alerted the school district, the IG’s office and state education officials earlier to the data privacy problems with Ed but got no response. His meeting with investigators occurred July 2, one day after 蜜桃影视 published its story outlining Whiteley’s allegations, including that the chatbot put students’ personally identifiable information at risk of getting hacked by including it in all chatbot prompts, even in those where the data weren’t relevant; sharing it with other third-party companies unnecessarily; and processing prompts on offshore servers in violation of district student privacy rules.


In an interview with 蜜桃影视 this week, Whiteley said the officials from the district’s inspector general’s office “were definitely interested in what I had to say,” as speculation swirls about the future of Ed, its ed tech creator AllHere and broader education investments in artificial intelligence.

“It felt like they were after the truth,” Whiteley said, adding, “I’m certain that they were surprised about how bad [students’ personal information] was being handled.”

To generate responses to even mundane prompts, Whiteley said, the chatbot processed the personal information for all students in a household. If a mother with 10 children asked the chatbot a question about her youngest son’s class schedule, for example, the tool processed data about all of her children to generate a response.

“It’s just sad and crazy,” he said.

The inspector general’s office directed 蜜桃影视’s request for comment to a district spokesperson, who declined to comment or respond to questions involving the inquiry.

While the conversation centered primarily on technical aspects related to the company’s data security protocols, Whiteley said investigators probed him on his personal experiences with AllHere, which he described as being abusive, and its finances.

Whiteley was laid off from AllHere in April. Two months later, a notice posted to the company’s website said a majority of its 50 or so employees had been furloughed due to its “current financial position,” and the LAUSD spokesperson said company co-founder and CEO Joanna Smith-Griffin had left. The former Boston teacher and Harvard graduate raised $12 million in venture capital for AllHere and appeared with L.A. schools Superintendent Alberto Carvalho at ed tech conferences and other events throughout the spring touting the heavily publicized AI tool they partnered to create.

Just weeks ago, Carvalho spoke publicly about how the project had put L.A. out in front as school districts and ed tech companies nationally race to follow the lead of generative artificial intelligence pioneers like ChatGPT. But the school chief’s superlative language around what Ed could do on an individualized basis with 540,000 students had some industry observers and AI experts speculating it was destined to fail.

The chatbot was supposed to serve as a “friendly, concise customer support agent” that replied “using simple language a third grader could understand” to help students and parents supplement classroom instruction, find assistance with kids’ academic struggles and navigate attendance, grades, transportation and other key issues. What they were given, Whiteley charges, was a student privacy nightmare.

Smith-Griffin recently deactivated her LinkedIn page and has not surfaced since her company went into apparent free fall. Attempts to reach AllHere for comment were unsuccessful and parts of the company website have gone dark. LAUSD said earlier that AllHere is for sale and that several companies are interested in acquiring it.

The district has already paid AllHere $3 million to build the chatbot and “a fully-integrated portal” that gave students and parents access to information and resources in a single location, the district spokesperson said in a statement Tuesday, and “was surprised by the financial disruption to AllHere.”

AllHere’s collapse represents a stunning fall from grace for a company that was named among the world’s top education technology companies by Time Magazine just months earlier. Scrutiny of AllHere intensified when Whiteley became a whistleblower. He said he turned to the press because his concerns, which he shared first with AllHere executives and the school district, had been ignored.

Whiteley shared source code with 蜜桃影视, which showed that students’ information had been processed on offshore servers. Seven out of eight Ed chatbot requests, he said, were sent to places like Japan, Sweden, the United Kingdom, France, Switzerland, Australia and Canada.

‘How are smaller districts going to do this?’

What district leaders failed to do as they heralded their new tool, Whiteley said, is conduct sufficient audits. As L.A. — and school systems nationwide — contract with a laundry list of tech vendors, he said it’s imperative that they understand how third-party companies use students’ information.

“If the second-biggest district can’t audit their [personally identifiable information] on new or interesting products and can’t do security audits on external sources, how are smaller districts going to do this?” he asked.

Over the last several weeks, the district’s official position on Ed has appeared to shift. In late June, when the district spokesperson said that several companies were “interested in acquiring Allhere,” they also said the acquirer would “continue to provide this first-of-its-kind resource to our students and families.” In its initial response to Whiteley’s allegations published July 1, the spokesperson said that education officials would “take any steps necessary to ensure that appropriate privacy and security protections are in place in the Ed platform.”

In a report in the Los Angeles Times, a district spokesperson said the chatbot had been unplugged on June 14. 蜜桃影视 asked the spokesperson to provide documentation showing the tool was disabled last month but didn’t get a response.

Even after June 14, Carvalho continued to boast publicly about LAUSD’s foray into generative AI and its work with third-party vendors.

On Tuesday, the district spokesperson told 蜜桃影视 that the online portal — even without a chatty, animated sun — “will continue regardless of the outcome with AllHere.” In fact, the project could become a source of district revenue. Under the contract between AllHere and LAUSD, which was obtained by 蜜桃影视, the chatbot is the property of the school district, which was set to receive 2% in royalty payments from AllHere “should other school districts seek to use the tool to benefit their families and students.”

In the statement Tuesday, the district spokesperson said that officials chose to “temporarily disable the chatbot” amid AllHere’s uncertainty and that it would “only be restored when the human-in-the-loop aspect is re-established.”

Whiteley agreed that the district could maintain the student information dashboard without the chatbot and, similarly, that another firm could buy what remains of AllHere. He was skeptical, however, that Ed the chatbot would live another day because “it’s broken.”

“The name AllHere,” he said, “I think is dead.”

]]>