Federal Trade Commission – The 74, America's Education News Source

FTC: AI ‘Weapons Detection’ Co. Evolv Misled Schools About its Safety Abilities
Tue, 26 Nov 2024 19:07:43 +0000

Updated, Nov. 26

The Federal Trade Commission has accused a company that makes AI-powered security screening systems for some 800 schools across 40 states of promoting false claims about its ability to detect weapons and keep kids safe. 

Evolv Technology, which sells AI-powered “weapons detection” systems to schools and other businesses, made deceptive claims to customers about its ability to detect all weapons accurately and efficiently, the commission alleged in a complaint. 

Schools make up half of Evolv’s business, according to the complaint, even though the publicly traded company may be best known as a security staple at stadiums and for a pilot program in New York City’s subways earlier this year that yielded dismal results.

“The FTC has been clear that claims about technology — including artificial intelligence — need to be backed up, and that is especially important when these claims involve the safety of children,” Samuel Levine, director of the commission’s bureau of consumer protection, said in a media release. “If you make those claims without adequate support, you can expect to hear from the FTC.”

Evolv’s marketing materials promote its scanners as high-tech alternatives to metal detectors, but the complaint argues the company made inaccurate assurances about the product’s ability to reduce false alarms, cut labor costs, eliminate the need for people to remove innocuous items from their pockets — and its capability to detect all weapons.

The company said it had reached a settlement with the commission that did not involve any admission of wrongdoing or monetary penalties but would give certain K-12 customers a 60-day window to cancel the remainder of their current contracts.

The eligible districts account for 8% of all Evolv customers, according to a media release, and deploy 4% of its scanners. Cancelling those contracts could impact $3.9 million of its annual revenue.

“This resolution allows us to focus on a small segment of our school customers to ensure they remain satisfied with” Evolv’s scanners “and allows us to move forward without distraction,” Mike Ellenbogen, the company’s interim president and CEO, said in the statement.

Evolv has claimed that it uses AI to scan for the unique “signatures” of tens of thousands of weapons, allowing it to distinguish “all the guns, all the bombs and all the large tactical knives” out there from everyday items like keys and laptops. 

In its release, Evolv framed the FTC inquiry as focusing on past marketing materials and not the efficacy of its AI technology, but it also said the company would “refine the way it markets its technology, highlighting capabilities and limitations.”


School safety consultant Kenneth Trump, president of National School Safety and Security Services, told The 74 that school leaders have increasingly turned to weapons detection systems to signal to parents that they’re taking proactive steps to keep students safe. 

“Some may have unknowingly created the very result they hoped to prevent: A quandary, as they may have to explain to their school communities why they bought technology that may not be delivering what was implied or promised to them and their school community,” he said. 

The company — and its technology — has faced pushback for several years, particularly from IPVM, an independent security and surveillance industry research group that tests and evaluates products. Some of the schools the group researched had false alarm rates of up to 60%. 

Last year, a high school student in Utica, New York, was stabbed in a campus hallway by a classmate who brought a knife past an Evolv scanner without detection. The school district had spent $3.7 million on the scanners from Evolv, a Massachusetts-based company backed by big-name investors including Bill Gates and Peyton Manning. The company boasts its artificial intelligence-equipped devices can screen up to 1,000 students in 15 minutes — 10 times faster than traditional metal detectors. 

Members of law enforcement demonstrate an Evolv weapons detection scanner in the Fulton Transit Center, March 28 in Manhattan. (Getty Images)

New York City Mayor Eric Adams announced a pilot of Evolv’s scanners inside some New York City subway stations this year, which was met with opposition from civil rights groups who argued it was unconstitutional and impractical to screen millions of transit users daily. Over the course of a month, the scanners across 20 subway stations had 118 false positives and recovered 12 knives. They didn’t detect a single firearm. 

The company on Tuesday pointed to two incidents last month where Evolv scanners detected guns that students were attempting to bring into their schools.

As AI becomes a buzzword in education technology, the FTC in February warned companies against overstating the prowess of their artificial intelligence offerings, adding that “false or unsubstantiated claims about a product’s efficacy are our bread and butter.”

Feds Probe Marketing Push Behind AI ‘Weapons Detection’ Tool Used in Schools
Fri, 20 Oct 2023 11:15:00 +0000

Federal officials have opened an inquiry into the marketing practices of a security company that’s landed multi-million dollar school district contracts by promising its artificial intelligence-powered weapons detection scanners can ferret out threats with unrivaled speed and precision. 

Publicly traded Evolv Technology acknowledged in a securities disclosure last week that the Federal Trade Commission had “requested information about certain aspects of its marketing practices,” amid concerns the company overstated the capabilities of its technology in promotions that could give customers, including schools, a false sense of security.

Citing two anonymous sources, one news report said Evolv is the subject of an FTC investigation into whether its scanners — essentially next-generation metal detectors — employ artificial intelligence to identify weapons in the ways that it claims.




It’s unclear whether Massachusetts-based Evolv’s sales pitches to the education sector are part of the federal probe. An FTC spokesperson declined to comment Tuesday. In its Oct. 12 disclosure form with the Securities and Exchange Commission, and in a statement this week to The 74, Evolv said the company was “pleased to answer” regulators’ questions. 

“When Evolv receives inquiries from regulators, our approach is to be cooperative and educate them about our company,” the statement continued. “The company stands behind its technology’s capabilities and performance track record.”

The company has claimed that it uses AI to scan for the unique “signatures” of tens of thousands of weapons, allowing it to distinguish “all the guns, all the bombs and all the large tactical knives” out there from everyday items like keys and laptops. 

Yet the company — and its technology — has faced pushback for several years, particularly from IPVM, an independent security and surveillance industry research group that tests and evaluates products. Conor Healy, the group’s director of government research, said that false and misleading marketing claims have been “a pattern with the company” for years. Among the inaccurate assertions, he said, is that the tool “eliminates the friction” that students experience when they pass through security every day. 

“That has been shown to be just simply not true at all,” Healy told The 74 this week. “There’s quite a lot of friction.” The schools the group has looked at, he noted, have seen high rates of false alarms.

Districts have increasingly turned to “weapons detection” systems from Evolv and competing security vendors in response to fears of school shootings — anxiety that the company says “keeps both students and staff from doing their best work.”

Evolv says it “combines powerful sensor technology with proven artificial intelligence” to identify threats like guns in hundreds of U.S. schools. Capable of scanning more than 4,000 people an hour, Evolv says its devices are “10X faster than metal detectors” and “help reduce opportunities for bias” by decreasing secondary screenings by humans.

Evolv extols the benefits of its scanners well beyond schools’ physical safety. While frequent false alarms by traditional metal detectors lead to “security anxiety” and “inconvenient delays,” according to the company’s website, Evolv scanners offer “a more effective and dignified solution, fostering a safer, more inclusive environment that bolsters academic achievement and staff retention.”

IPVM has challenged the company’s claim that its technology is 10 times faster than traditional metal detectors, and has documented instances where false alarms were triggered by water bottles, binders and laptops.

In a statement to Pennsylvania-based IPVM last month, Evolv said “we understand if any of our past statements appeared to generalize our capabilities,” which may violate an FTC rule that requires company claims to be evidence-backed. 

With AI a constant, if little understood, buzzword across many sectors right now, the FTC in February warned companies against overstating the capabilities of their artificial intelligence offerings, adding that “false or unsubstantiated claims about a product’s efficacy are our bread and butter.”

“The minute you hear the word AI in marketing, alarm bells should go off in your head,” said Healy, whose group has also scrutinized other security technology routinely installed in schools. 

“As far as [Evolv’s] artificial intelligence goes, it does not appear to be very intelligent,” he said, because it routinely fails to differentiate everyday school supplies like Chromebooks from weapons like guns. “What AI is actually in the system? That is something that Evolv has not told us very much about.”

Evolv has resisted calls to disclose additional information about the ways its scanners function. While scanners’ sensitivity settings can alter their performance, a company spokesperson previously told The 74 that publicly sharing information about those settings “is irresponsible and puts people at greater risk.”

“We must assume any published information regarding details of a physical screening system will be studied and leveraged by a bad actor seeking to do harm,” the statement continued. The company declined to comment on the false alarm rates reported by its customer districts.

“Our systems are designed to detect many types of weapons and components of weapons, but there is no perfect solution that will stop 100% of threats, including ours, which is why security must include a layered approach that involves people, process and technology.”

Knives became a point of conflict last year after the school district in Utica, New York, spent nearly $4 million to install Evolv scanners across 13 of its campuses. The scanners were ultimately removed after a student was stabbed multiple times with a knife during a fight in a high school hallway. The knife-wielding student had passed through an Evolv scanner with the blade in his backpack, a later investigation revealed. 

While the detectors had false alarms, including on a student’s lunch box, an Evolv scanner failed to alarm when an off-duty police officer accidentally brought a service revolver to a Utica district open house.

Meanwhile, in Buffalo, New York, Evolv scanners were credited for keeping a high school safe. Earlier this month, a teenager pleaded guilty to a criminal weapons possession charge after he was caught trying to bring a handgun into a high school. A school security officer reportedly found the disassembled “ghost gun” in the teenager’s backpack as he passed through a weapons detector. Buffalo schools installed the Evolv scanners earlier this year. A Buffalo schools spokesperson declined to comment.

As companies increasingly market products with artificial intelligence capabilities to schools, school security consultant Kenneth Trump predicts — or at least hopes — that regulation is imminent. He pointed to new rules in New York, where the state banned facial recognition technology in schools. The ban was adopted after an upstate school district’s decision to install surveillance cameras with facial recognition capabilities prompted an outcry. 

“The marketing claims are so off the charts by many vendors that there’s really no chance for the average school administrator to know what’s true, what’s false and really the gaps and the limitations that these products have,” said Trump, president of Cleveland-based National School Safety and Security Services. Though he expects regulators to soon rein in security companies, “up until that happens, how many school districts are going to fall victim to questionable marketing and grandiose ideas that don’t come to fruition?”

After Huge Illuminate Data Breach, Ed Tech’s ‘Student Privacy Pledge’ Under Fire
Sun, 24 Jul 2022 19:00:00 +0000

A few months after education leaders at America’s largest school district announced that a technology vendor had exposed sensitive student information in a massive data breach, the company at fault — Illuminate Education — was recognized with awards billed as the ed tech equivalent of the Oscars. 

Since that disclosure in New York City schools, the scope of the breach has only grown, with districts in six states announcing that their students had become victims. Illuminate has never disclosed the full extent of the blunder, even as critics decry significant harm to kids and security experts question why the company is being handed awards instead of getting slapped with sanctions. 

Amid demands that Illuminate be held accountable for the breach — and for allegations that it misrepresented its security safeguards — the company could soon face unprecedented discipline for violating the Student Privacy Pledge, a self-regulatory effort by Big Tech to police shady business practices. In response to inquiries by The 74, the Future of Privacy Forum, a think tank and co-creator of the pledge, disclosed Tuesday that Illuminate could soon get the boot.




Forum CEO Jules Polonetsky said his group will decide within a month whether to revoke Illuminate’s status as a pledge signatory and refer the matter to state and federal regulators, including the Federal Trade Commission, for possible sanctions. 

“We have been reviewing the deeply concerning circumstances of the breach and apparent violations of Illuminate Education’s pledge commitments,” Polonetsky said in a statement to The 74. 

Illuminate did not respond to interview requests. 

In a twist, the pledge was co-created by the Software and Information Industry Association, the trade group that last month recognized Illuminate as being among “the best of the best” in education technology. The pledge, created nearly a decade ago, is designed to ensure that education technology vendors are ethical stewards of kids’ most sensitive data. Its staunchest critics have assailed the pledge as toothless — if not an outright effort to thwart meaningful government regulation. Now, they are questioning whether its response to the massive Illuminate breach will be any different. 

“I have never seen anybody get anything more than a slap on the wrist from the actual people controlling the pledge,” said Bill Fitzgerald, an independent privacy researcher. Taking action against Illuminate, he said, “would break the pledge’s pretty perfect record for not actually enforcing any kind of sanctions against bad actors.”


Through the voluntary pledge, launched in 2014, hundreds of education technology companies have agreed to a slate of safety measures to protect students’ online privacy. Pledge signatories promise they will not sell student data to third parties or use the information for targeted advertising. Companies that sign the commitment also agree to “maintain a comprehensive security program” to protect students’ personal information from data breaches. 

The privacy forum has long maintained that the pledge is enforceable and offers assurances to school districts as they shop for new technology. In the absence of a federal consumer privacy law, the forum argues the pledge grants “an important and unique means for privacy enforcement,” giving the Federal Trade Commission and state attorneys general an outlet to hold education technology companies accountable via consumer protection rules that prohibit unfair and deceptive business practices. 

For years, critics have accused the pledge of providing educators and parents false assurances that a given product is safe, arguing it amounts to little more than a pinky promise. Meanwhile, schools and technology companies have become increasingly entangled — particularly during the pandemic. As districts across the globe rushed to create digital classrooms, few governments checked to make sure the tech products officials endorsed were safe for children, according to a report by Human Rights Watch. Shoddy student data practices by leading tech vendors, the group found, were rampant. Of the 164 tools analyzed, 89 percent “engaged in data practices that put children’s rights at risk,” with a majority giving student records to advertisers.

As companies suck up a mind-boggling amount of student information, a lack of meaningful enforcement has let tech companies off the hook for violating students’ privacy rights, said Hye Jung Han, a Human Rights Watch researcher focused on children. As a result, she said, students whose schools require them to use certain digital tools are being forced to “give up their privacy in order to learn.” Paired with large-scale data breaches, like the one at Illuminate, she said students’ sensitive records could be misused for years. 

“Children, as we know, are more susceptible to manipulation based on what they see online,” she said. “So suddenly the information that’s collected about them in the classroom is being used to determine the kinds of content and the kinds of advertising that they see elsewhere on the internet. It can absolutely start influencing their worldviews.”

But the regulatory environment under the Biden administration may be entering a new, more aggressive era. The Federal Trade Commission announced in May that it would scale up enforcement on education technology companies that sell student data for targeted advertising and that “illegally surveil children when they go online to learn.” Even absent a data breach like the one at Illuminate, the commission wrote in a policy statement, education technology providers violate the law if they lack reasonable systems “to maintain the confidentiality, security and integrity of children’s personal information.”

The FTC declined to comment for this article. Jeff Joseph, president of the Software and Information Industry Association, said its recent awards were based on narrow criteria and judges “would not be expected to be aware of the breach unless the company disclosed it during the demos.” News of the breach had been widely reported. 

The trade group “takes the privacy and security of student data seriously,” Joseph said in a statement, adding that the Future of Privacy Forum “maintains the day-to-day management of the pledge.”

‘Absolutely concerning’

Concerns of a data breach at California-based Illuminate surfaced in January when several of the privately held company’s popular digital tools, including programs used in New York City to track students’ grades and attendance, went dark. 

Yet it was only months later that city leaders announced that the personal data of some 820,000 current and former students — including their eligibility for special education services and for free or reduced-price lunches — had been compromised in a data breach. In disclosing the breach, city education officials accused the company of misrepresenting its security safeguards. The Department of Education told schools to stop using the company’s tools.

A month later, officials at the New York State Education Department launched an investigation into whether the company’s data security practices ran afoul of state law, department officials said. Under the law, education vendors are required to maintain “reasonable” data security safeguards and must notify schools about data breaches “in the most expedient way possible and without unreasonable delay.”

Outside New York City, state officials said the breach affected about 174,000 additional students across the state.

Doug Levin, the national director of The K12 Security Information eXchange, said the state should issue “a significant fine” to Illuminate for misrepresenting its security protocols to educators. Sanctions, he said, would “send a strong and very important signal that not only must you ensure that you have reasonable security in place, but if you say you do and you don’t, you will be penalized.”

Meanwhile, Illuminate has since become the subject of two federal class-action lawsuits in New York and California, including one that alleges that students’ sensitive information “is now an open book in the hands of unknown crooks” and is likely being sold on the dark web “for nefarious and mischievous ends.”

Plaintiff attorney Gary Graifman said that litigation is crucial for consumers because state attorneys general are often too busy to hold companies accountable. 

“There’s got to be some avenue of interdiction that occurs so that companies adhere to policies that guarantee people their private information will be secured,” he said. “Obviously if there is strong federal legislation that occurs in the future, maybe that would be helpful, but right now that is not the case.”

School districts in California, Colorado, Connecticut, Oklahoma and Washington have since disclosed to current and former students that their personal information had been compromised in the breach. But the full extent remains unknown because “Illuminate has been the opposite of forthcoming about what has occurred,” Levin said. 

No federal law requires companies to disclose data breaches to the public. Some 5,000 schools serving 17 million students use Illuminate tools, according to the company, which was founded in 2009.


“We now know that millions of students have been affected by this incident, from coast to coast in some of the largest school districts in the nation,” including in New York City and Los Angeles, Levin said. “That is absolutely concerning, and I think it shines a light on the role of school vendors,” who are a significant source of education data breaches. 

Nobody can guarantee that their cybersecurity infrastructure will hold up against motivated hackers, Levin said, but Illuminate’s failure to disclose the extent of the breach raises a major red flag. 

“The longer that Illuminate does not come clean with what’s happened, the worse it looks,” he said. “It suggests that this was maybe leaning on the side of negligence versus them being an unfortunate victim.”

‘A public relations tool’

When Illuminate signed the pledge six years ago, it acknowledged the importance of protecting students’ data and said it offered a “secure online environment with data privacy securely in place.” Today, Illuminate touts an “unwavering commitment to student data privacy” and offers a link to the pledge. 

“By signing this pledge,” the company wrote in a 2016 blog post, “we are making a commitment to continue doing what we have already been doing from the beginning — promoting that student data be safeguarded and used for encouraging student and educator success.”

Some pledge critics have accused tech companies of using it as a marketing tool. In 2018, one analysis argued that pledge noncompliance was rampant and accused the pledge of being “a mirage” that offered comfort to consumers “while providing little actual benefit.”

“The pledge may be more valuable as a public relations tool than as a means of actually effecting — or reflecting — industry improvements,” according to the report. The gap between the pledge’s public declarations and companies’ business practices, it concluded, “is likely to mislead consumers.”

In 2015, a software researcher found a large share of pledge signatories lacked basic infrastructure to guard student data from hackers. Three years later, The New York Times published an investigation into the College Board, a nonprofit that administers the widely used SAT college admissions exam. The College Board, the report exposed, was selling student data to third parties in violation of the privacy pledge. In response, the College Board’s status as a pledge signatory was placed “under review,” but it was listed as an active signatory a year later. The College Board, it said in a press release, had committed to changing its business practices. 

Still, research in 2020 found the College Board was sending student data to major digital advertising platforms, including those operated by Microsoft and Google. 

The nonprofit is “resolute in protecting student data privacy,” a spokesperson said in a statement. “Organizations that receive data from College Board, such as high schools, districts, colleges, universities, and scholarship organizations, must adhere to strict guidelines when using that data.”

Some critics have argued the College Board should have been removed from the pledge, but the Future of Privacy Forum has held that taking such action against signatories could do more harm than good. When the forum becomes aware of a complaint against a pledge signatory, it typically works with the company to resolve issues and ensure compliance. The think tank argued it’s best to work with noncompliant companies to improve their business practices rather than exile them from the pledge outright. Removing companies “could result in fewer privacy protections for users, as a former signatory would not be bound by the Pledge’s promises for future activities.”

Attorney Amelia Vance, a former privacy forum employee and the founder and president of Public Interest Privacy Consulting, said the pledge has nudged education technology companies to change their business practices to ensure they’re following its provisions. 

“I almost always thought of it as a way to make companies better and more aware of student privacy than something to be enforced with specific teeth,” said Vance, who declined to comment on whether Illuminate should be removed. “After all, the Federal Trade Commission and state [attorneys general] are the ones who really have the enforcement powers here.”

But self-policing efforts, like the pledge, are “only as effective as the enforcement,” said Levin, the school security expert. Otherwise, it can only serve as “a nice window dressing” for Big Tech efforts to fend off stricter state and federal regulations — provisions he said must be strengthened. 

At a minimum, he said, the privacy forum should disclose companies that have been credibly accused of violating the pledge and conduct investigations. If they find a company out of compliance, he said, “it’s not clear to me that they should be allowed to re-sign the pledge.”

“If I were another signatory of the pledge, I would be quite concerned about whether or not the value of that pledge is being diminished” by including companies that violate its provisions, he said. “If it’s going to serve its purpose, there needs to be some policing.”

But to Fitzgerald, the privacy researcher, the forum’s failure to take action against bad actors has long rendered the pledge useless. 

“It’s not like the pledge finally doing what the pledge should have been doing five years ago would make a difference,” he said. “It’s never too late to start” removing companies that violate its provisions, he said, but “the fact that it hasn’t happened yet seems to indicate that it’s not going to happen.”

Disclosure: The Bill & Melinda Gates Foundation and the Chan Zuckerberg Initiative provide financial support to the Future of Privacy Forum and The 74.
