Kids Online Safety Act – The 74, America's Education News Source

Web Filter Refined: Teen Builds His Own, More Nuanced Tool
Thu, 15 Aug 2024 16:30:00 +0000

This article was originally published in

Like most kids, Aahil Valliani has been frustrated by the filters that his school uses to block inappropriate websites. Often, he has no idea why certain sites are blocked, especially when his web browsing is tied to his schoolwork.

Many students in this situation find a way around their districts’ web filters. They access the internet on their phones instead, or use proxy servers or virtual private networks to essentially access a different, unfiltered internet. Aahil, searching for a more systemic solution, teamed up with his younger brother and father to start a company called Safe Kids, raise almost $2 million in venture funding, and design a better filter.

As The Markup, which is part of CalMatters, reported in April, almost all schools filter the web to comply with the federal Children’s Internet Protection Act and qualify for discounted internet access, among other things. Most schools The Markup examined used filters that sort all websites into categories and block entire categories at once. Others scan webpages for certain off-limits keywords, blocking websites on which they appear regardless of the context. In both cases, the filters are blunt tools that result in overblocking and sometimes keep kids from information about politicized topics like sex education and LGBTQ resources.




Aahil, now 17, points out that schools’ overly strict controls disappear as soon as kids graduate. “That’s a recipe for disaster,” he said. Kids, he contends, need to learn how to make good choices about how to use the internet safely when trusted adults are nearby so they are ready to make good decisions on their own later.

The Safe Kids filter turns web blocking into a teachable moment, explaining why sites are blocked and nudging students to stay away from them of their own accord. It uses artificial intelligence to assess the intent of a student’s search, reducing the number of blocks students see while conducting legitimate academic research. One example: if a student searches for Civil War rifles for a class assignment, Safe Kids would allow it. If a student tries to shop for an AK-47, it wouldn’t. Other filters would block both.
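Safe Kids' actual model is a proprietary AI system, but the idea of blocking on the combination of topic and inferred intent, rather than on topic alone, can be sketched with a toy rule-based version. Everything below (the cue lists, function names, and thresholds) is invented for illustration, not Safe Kids' real logic.

```python
# Illustrative sketch only: a category-only filter blocks every query in a
# sensitive topic, while an intent-aware one allows academic searches.
EDUCATIONAL_CUES = {"history", "class", "assignment", "civil war", "essay"}
TRANSACTIONAL_CUES = {"buy", "shop", "price", "for sale"}

def infer_intent(query: str) -> str:
    """Crudely guess whether a search is academic or transactional."""
    q = query.lower()
    if any(cue in q for cue in TRANSACTIONAL_CUES):
        return "transactional"
    if any(cue in q for cue in EDUCATIONAL_CUES):
        return "educational"
    return "unknown"

def should_block(topic: str, query: str) -> bool:
    """Block sensitive topics only when the intent is not academic."""
    sensitive_topics = {"weapons", "drugs"}
    if topic not in sensitive_topics:
        return False
    return infer_intent(query) != "educational"

# A category-only filter would block both of these searches:
print(should_block("weapons", "civil war rifles for class assignment"))  # False
print(should_block("weapons", "buy an AK-47"))                           # True
```

The design point is that the blocking decision takes two inputs instead of one, which is what lets the same topic ("weapons") resolve differently for a class assignment and a shopping attempt.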

The filter also keeps student browsing data private, storing only categories of websites accessed, not URLs or search terms themselves. And it works through a Chrome browser extension, which means students can’t simply get around it with a proxy server or VPN while using that browser.
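One way a filter could keep browsing data private as described above is to persist only the category of a visited site, never the URL or search terms. This is a hypothetical sketch, not Safe Kids' actual storage schema; the function and field names are assumptions.

```python
# Illustrative privacy-preserving log: the URL is needed to categorize the
# site upstream, but it is deliberately never written to storage.
from datetime import datetime, timezone

def log_visit(log: list, url: str, category: str) -> None:
    """Record a visit for reporting purposes without retaining the URL."""
    log.append({
        "category": category,
        "time": datetime.now(timezone.utc).isoformat(),
    })

records: list = []
log_visit(records, "https://example.com/search?q=private+terms", "games")
# The stored record reveals the category but nothing identifying the page.
```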

Safe Kids got its start during the early COVID-19 lockdowns. Sitting around the dinner table with his father, a tech entrepreneur; his mother, a self-employed fashion designer; and his younger brother Zohran, a budding computer scientist, Aahil got his family to strategize how to help all the kids getting sucked into dark corners of the web and battling the mental health consequences of their internet use.

Their idea, building off of the invasive and ineffective filters the brothers saw in school, essentially puts better training wheels on the internet. Aahil said his father did a bit of hand-holding in these early days, helping find board members and angel investors, as well as the data scientists who would train the AI machine learning model behind the filter and psychologists who could craft and test the filter’s hallmark pop-ups directing students toward more appropriate browsing. The company also spent time and money getting its designs patented. Aahil has three patents under his name and Safe Kids has five.

As Aahil and his family were preparing to chase seed funding for Safe Kids, the ACLU of Northern California was demanding that the Fresno Unified School District stop using a product called Gaggle, which districts use to monitor students’ internet use, block potentially harmful content, and step in if student browsing patterns indicate they may need mental health supports. The problem, according to ACLU attorneys, was that Gaggle amounted to intrusive surveillance, trampling on students’ privacy and free speech rights.

The Electronic Frontier Foundation levied similar accusations against another web filter called GoGuardian after getting records from 10 school districts, including three in California, that revealed the extent of the software’s blocking, tracking and flagging of student internet use during the 2022-23 school year, when Aahil was piloting Safe Kids. Jason Kelley, a lead researcher on EFF’s GoGuardian investigation, looked into Safe Kids in response to an inquiry by The Markup. Accustomed to pointing out how bad filters are, he offered surprised praise for Safe Kids, commending its focus on privacy, its open source code that offers transparency about its model, and its context-specific blocking.

“This is, really, I think, an improved option for all the things that we are generally concerned about,” Kelley said.

So far, Safe Kids has not been able to break into the school market. Still, Aahil hopes to one day sign a contract with a school district, and he is marketing to parents in the meantime, offering them a way to put guardrails on their kids’ home internet use. While Safe Kids started out charging for its filter, Aahil said an open source, free version will be released next month.

One of the company’s patents is for a “pause, reflect, and redirect” method that leans on child psychology to teach kids healthy browsing habits when they try to access an inappropriate website.

“When kids go to a site the first time, we consider that a mistake,” Aahil said. “We tell kids why it’s not good for them and kids can make a choice.”

For example, if a student tries to play games during a lesson, a pop-up would say, “This isn’t schoolwork, is it?” Students can click a “take me back” button or “tell me more” link to get more information about why a given site is blocked. When students repeatedly try to access inappropriate content, their browsing is further restricted until they address the issue with an adult. If that content indicates a student might be in crisis, the user is advised to get help from an adult, and in a school setting, a staff member would get an automated alert.
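The escalation flow described above can be sketched as a small state machine. Safe Kids' actual "pause, reflect, and redirect" method is patented and not public, so all thresholds, messages, and names here are invented for illustration.

```python
# Hypothetical escalation: first attempt gets an explanation, repeated
# attempts get restricted, and crisis content triggers an adult alert.
from collections import Counter

class ReflectFilter:
    def __init__(self, restrict_after: int = 3):
        self.attempts = Counter()          # blocked-site attempts per student
        self.restrict_after = restrict_after

    def respond(self, student: str, crisis: bool = False) -> str:
        if crisis:
            # Crisis content: advise the student to get help and, in a
            # school setting, alert a staff member automatically.
            return "advise help; alert staff"
        self.attempts[student] += 1
        if self.attempts[student] == 1:
            # The first attempt is treated as a mistake: explain, don't punish.
            return "explain block; offer 'take me back' and 'tell me more'"
        if self.attempts[student] <= self.restrict_after:
            return "explain block again"
        # Repeated attempts: restrict browsing until an adult is involved.
        return "restrict browsing until discussed with an adult"
```

The key design choice mirrored here is that restriction is a last resort reached only after several explanations, while crisis signals bypass the counter entirely.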

The teen expects to keep building the company, even as he shifts his focus to college admissions this fall. A rising senior at the selective Thomas Jefferson High School for Science and Technology in Alexandria, Virginia, one of the nation’s best public high schools, Aahil plans to major in business or economics and make a career out of entrepreneurship.

Safe Kids stands out in a web filtering market where products’ blunt restrictions on the web have barely become more sophisticated over the last 25 years.

Nancy Willard, director of Embrace Civility LLC, has worked on issues of youth online safety since the mid-1990s. She submitted testimony for the congressional hearings that resulted in passage of the Children’s Internet Protection Act in 2000 and describes the filtering company representatives who showed up as snake oil salesmen, selling a technology that addresses a symptom, not the root of a problem.

“We need to prepare kids to manage themselves,” Willard said. When traditional filters block certain websites with no explanation, kids don’t learn anything, and they’re often tempted to just circumvent the software.

“This approach helps increase student understanding, and hopefully there’s a way also in the instructional aspects (to increase) their skills,” she said about Safe Kids.

Students on Chromebooks in particular can’t circumvent Safe Kids, and its design aims to keep them from wanting to. Now Aahil and his family just need to find buyers.

Kelley said he’s not surprised Safe Kids hasn’t been able to break into schools yet, given the “hardening” of school security and student safety efforts over the last decade. “We’ve gone from having cameras and some pretty standard filters to having metal detectors, and locked doors, and biometrics, and vape detectors in the bathrooms, and these much more strict filters and content moderating control software,” he said, “and all this is hard to undo.”


Lawmakers Duel With Tech Execs on Social Media Harms to Youth Mental Health
Wed, 31 Jan 2024 23:20:00 +0000

During a hostile Senate hearing Wednesday that sometimes devolved into bickering, lawmakers from across the political spectrum accused social media companies of failing to protect young people online and pushed rules that would hold Big Tech accountable for youth suicides and child sexual exploitation.

The Senate Judiciary Committee hearing in Washington, D.C., was the latest act in a bipartisan effort to bolster federal regulations on social media platforms like Instagram and TikTok amid a growing chorus of parents and adolescent mental health experts warning the services have harmed youth well-being and, in some cases, pushed them to suicide. 

In an unprecedented moment, Meta founder and CEO Mark Zuckerberg, at the urging of Missouri Republican Sen. Josh Hawley, stood up and turned around to face the audience, apologizing to the parents in attendance who said their children were damaged — and in some cases, died — because of his company’s algorithms. 




“I’m sorry for everything you’ve all gone through,” said Zuckerberg, whose company owns Facebook and Instagram. “It’s terrible. No one should have to go through the things that your families have suffered.”

Senators argued the companies — and tech executives themselves — should be held legally responsible for instances of abuse and exploitation under tougher regulations that would limit children’s access to social media platforms and restrict their exposure to harmful content.

“Your platforms really suck at policing themselves,” Sen. Sheldon Whitehouse, a Rhode Island Democrat, told Zuckerberg and the CEOs of X, TikTok, Discord and Snap, who were summoned to testify. Section 230 of the Communications Decency Act, which allows social media platforms to moderate content as they see fit and generally provides immunity from liability for user-generated posts, has routinely shielded tech companies from accountability. As youth harms persist, he said those legal protections are “a very significant part of that problem.” 

Whitehouse pointed to a lawsuit against X, formerly Twitter, that was filed by two men who claimed a sex trafficker manipulated them into sharing sexually explicit videos of themselves over Snapchat when they were just 13 years old. Links to the videos appeared on Twitter years later, but the company allegedly refused to take action until after they were contacted by a Department of Homeland Security agent and the posts had generated more than 160,000 views. The lawsuit was dismissed by the Ninth Circuit, which cited Section 230.

“That’s a pretty foul set of facts,” Whitehouse said. “There is nothing about that set of facts that tells me Section 230 performed any public service in that regard.”

In an opening statement, the Democratic committee chair, Sen. Dick Durbin of Illinois, offered a chilling description of the harms inflicted on young people by each of the social media platforms represented at the hearing. In addition to Zuckerberg, executives who testified were X CEO Linda Yaccarino, TikTok CEO Shou Chew, Snap co-founder and CEO Evan Spiegel and Discord CEO Jason Citron.

“Discord has been used to groom, abduct and abuse children,” Durbin said. “Meta’s Instagram helped connect and promote a network of pedophiles. Snapchat’s disappearing messages have been co-opted by criminals who financially extort young victims. TikTok has become a, quote, ‘platform of choice’ for predators to access, engage and groom children for abuse. And the prevalence of [child sexual abuse material] on X has grown as the company has gutted its trust and safety workforce.”

Citron testified that Discord has “a zero tolerance policy” for content that features sexual exploitation and that it uses filters to scan and block such materials from its service. 

“Just like all technology and tools, there are people who exploit and abuse our platforms for immoral and illegal purposes,” Citron said. “All of us here on the panel today, and throughout the tech industry, have a solemn and urgent responsibility to ensure that everyone who uses our platforms is protected from these criminals both online and off.” 

Lawmakers have introduced a slate of regulatory bills that have gained bipartisan traction but have failed to become law. Among them is the Kids Online Safety Act, which would require social media companies and other online services to take “reasonable measures” to protect children from cyberbullying, sexual exploitation and materials that promote self-harm. It would also mandate strict privacy settings when teens use the online services. Other proposals include a measure that would require the companies to report suspected drug activity to the police — some parents said their children overdosed and died after buying drugs on the platforms — and a bill that would hold them accountable for hosting child sexual abuse materials.

In their testimonies, each of the tech executives said they have taken steps to protect children who use their services, including features that restrict certain types of content, limit screen time and curtail the people they’re allowed to communicate with. But they also sought to distance their services from harms in a bid to stave off regulations. 

“With so much of our lives spent on mobile devices and social media, it’s important to look into the effects on teen mental health and well-being,” Zuckerberg said. “I take this very seriously. Mental health is a complex issue, and the existing body of scientific work has not shown a causal link between using social media and young people having worse mental health outcomes.” 

Zuckerberg cited a report by the National Academies of Sciences, Engineering and Medicine, which concluded there is a lack of evidence to confirm that social media causes changes in adolescent well-being at the population level and that the services could carry both benefits and harms for young people. While social media websites can expose children to online harassment and fringe ideas, researchers noted, the services can be used by young people to foster community.

In October, 42 state attorneys general sued Meta, alleging that the social media giant knowingly and purposely designed tools to addict children to its services. U.S. Surgeon General Vivek Murthy has issued a public advisory warning that social media sites pose a “profound risk of harm” to youth mental health, stating that the tools should come with warning labels. Among evidence of the harms is internal company research, which found that Instagram led to body-image issues among teenage girls and that many of its young users blamed the platform for increases in anxiety and depression.

Republican lawmakers devoted a significant amount of time during the hearing to criticizing TikTok for its ties to the Chinese government, calling out the app for collecting data about U.S. citizens, including in an effort to surveil American journalists. The Justice Department is reportedly investigating allegations that ByteDance, the Chinese company that owns TikTok, used the app to surveil several American journalists who report on the tech industry. 

In response, Chew said the company launched an initiative — dubbed “Project Texas” — to prevent its Chinese employees from accessing personal data about U.S. citizens. But employees claim the company has fallen short of that commitment.

YouTube and TikTok are by far the platforms where teens spend the most hours per day, according to a 2023 Gallup survey, although Neal Mohan, the CEO of Google-owned YouTube, was not called in to testify.

Mainstream social media platforms have also been exploited for domestic online extremism. Earlier this month, for example, a teenager accused of carrying out a mass shooting at his Iowa high school reportedly maintained an active presence on Discord and, shortly before the rampage, commented in a channel dedicated to such attacks that he was “gearing up” for the mayhem. Just minutes before the shooting, the suspect appeared to capture a video inside a school bathroom and uploaded it to TikTok. 

Josh Golin, the executive director of Fairplay, a nonprofit devoted to bolstering online child protections, blasted the tech executives’ testimony for being little more than “evasions and deflections.” 

“If Congress really cares about the families who packed the hearing today holding pictures of their children lost to social media harms, they will move the Kids Online Safety Act,” Golin said in a statement. “Pointed questions and sound bites won’t save lives, but KOSA will.” 

The safety act, known as KOSA, has faced pushback from civil rights advocates, who argue on First Amendment grounds that the proposal could be used to censor certain content. Sen. Marsha Blackburn, a Republican from Tennessee and KOSA co-author, said last fall the rules are important to protect “minor children from the transgender in this culture” and cited the legislation as a way to shield children from “being indoctrinated” online. The Heritage Foundation, a conservative think tank, endorsed the legislation, arguing that “keeping trans content away from children is protecting kids.”

Snap’s Evan Spiegel and X’s Linda Yaccarino both agreed to support the Kids Online Safety Act.

Aliya Bhatia, a policy analyst with the nonprofit Center for Democracy and Technology, said that although lawmakers made clear their intention to act, their directives could end up doing more harm than good. She said the platforms serve as “peer-to-peer learning and community networks” where young people can access information about reproductive health and other important topics that they might not feel comfortable receiving from adults in their lives. 

“It’s clear that this is a really tricky issue, it’s really difficult for the government and companies to decide what is harmful for young people,” Bhatia said. “What one young person finds helpful online, another might find harmful.”

South Carolina’s Sen. Lindsey Graham, the committee’s ranking Republican, said that social media companies can’t be trusted to keep kids safe online and that lawmakers have run out of patience.

“If you’re waiting on these guys to solve the problem,” he said, “we’re going to die waiting.” 

As Advocates and Parents Rally, Youth Online Privacy Bills on Life Support
Wed, 14 Sep 2022 21:07:24 +0000

Sen. Ed Markey was getting quizzed on the viability of new online privacy laws for children when he took a brief but awkward pause.

The Democrat from Massachusetts, who has long championed consumer privacy and become a key adversary of tech companies like Meta for monetizing user data, joined a Zoom call Tuesday evening to rally support for two bills he said would protect kids from being manipulated by social media algorithms. But he also brought some bad news: The legislation had “stalled” in Washington despite bipartisan support. 

Advocates this week are making a push to get the bipartisan bills — the Kids Online Safety Act and the Children’s Online Privacy Protection Act 2.0 — across the finish line. In a letter on Monday, 145 groups including Fairplay and Common Sense Media urged lawmakers to pass the legislation in the interests of protecting youth mental health, now considered at an all-time low in this country. 




But Markey seemed to lay out a path requiring Herculean effort. 

“Only the paranoid survive,” Markey said, adding that the legislation would pass if its supporters — and youth activists in particular — called their lawmakers and demanded they “pull this out of the pile of issues” and give it priority. “We’re going to try to get it over the finish line, but we need you to just have your energy level go higher and higher for these final couple of months and we will get it done.”

The legislative push comes a year after a Facebook whistleblower disclosed research showing that the social media app Instagram had a harmful effect on youth mental well-being, especially for teenage girls. The whistleblower, Frances Haugen, urged lawmakers to regulate social media companies — Meta owns Facebook and Instagram — which she accused of pursuing “astronomical profits” while knowingly putting users at risk. The internal research she disclosed revealed the company knew Instagram made “body image issues worse for one in three teen girls” who blamed the social media platform for driving “increases in the rate of anxiety and depression” and, for some, suicidal thoughts.

The Kids Online Safety Act would make tech companies liable if they expose young people to content deemed harmful, including materials that promote self-harm, eating disorders and substance abuse. It would also require parental controls that could be used to block adult content and would direct a study of systems to verify users’ age “at the device or operating system level.”

The Children’s Online Privacy Protection Act 2.0, which expands a law that Markey championed in 1998 to cover older teens, would ban targeted advertisements directed at children and require companies to offer an “eraser button” that allows children and teens to remove their personal data.

Former Facebook employee Frances Haugen (Getty Images)

But deep-pocketed tech companies, Sen. Richard Blumenthal said Tuesday, are standing in the way. 

“Our obstacles here are the big tech lobbyists,” he said. “They have armies of lobbyists. They pay them, they pay them very well. They hire them to block this legislation.”

While the legislation is designed to protect kids, some digital privacy experts say the rules could come with significant unintended consequences — and could lead to an age-verification system where all web users are made to submit documentation like a driver’s license, requiring them to hand over personal information to tech companies. 

On the Zoom call to bolster support for the bills was Vinaya Sivakumar, a high school senior from Ohio, who created her first social media profile when she was 12. What started out as harmless, she said, quickly took a toll on her health.

“It just snowballed into something that constantly perpetuated actions and thoughts like self-harm and eating disorders and it was really never let out of my sight,” said Sivakumar, referring to a stream of content she found harmful being fed to her by algorithms. “It almost encouraged me to make decisions that I didn’t necessarily feel were mine and my mental health was in the worst state ever.”

Kristin Bride, a mother and digital safety advocate from Oregon, implored lawmakers to pass the legislation for kids like her 16-year-old son Carson, who died by suicide in 2020 after he was “viciously bullied” by other kids on Snapchat who used third-party apps to conceal their identities. Last year, Bride sued Snap, the company that owns the social media app Snapchat, and accused it of lacking safeguards to protect children from harassment. In response, Snap suspended two of the apps, Yolo and LMK. But a similar anonymous messaging app, NGL, has since cropped up.

“Until social media companies are held accountable for their harmful products, they will always put profit over people,” Bride said, “and kids like Carson and so many others are just collateral damage.” 

Despite the heightened focus in Washington around digital rights and tech companies’ use of user data for targeted advertising, broader digital privacy legislation has also struggled this year. A sweeping bill that would create a national digital privacy standard and limit the personal data that tech companies can collect about users has hit roadblocks, including pushback from House Speaker Nancy Pelosi.

Earlier this month, Ireland’s Data Protection Commission fined Meta for violating European Union data privacy laws. The commission has been investigating the company for an Instagram setting that automatically sets the profiles of teenagers as public by default.

Meanwhile, Meta has begun to roll out new teen safety features, including a setting that automatically routes new users younger than 16 to a version with limits on content deemed inappropriate.

The children’s safety legislation, which would strengthen rules that haven’t been updated for decades, has received support from a broad range of groups focused on youth well-being, including the American Psychological Association and The Jed Foundation. But it has drawn criticism from digital rights advocates including the Electronic Frontier Foundation. In a report, the group argued that while lawmakers deserve credit “for attempting to improve online data privacy for young people,” the plan would ultimately “require surveillance and censorship” of children and teens “and would greatly endanger the rights, and safety, of young people online.”

“Data collection is a scourge for every internet user, regardless of age,” the report notes, but the legislation could ultimately force tech companies to further track their users. “Surveillance of young people is [harmful], even in the healthiest household, and is not a solution to helping young people navigate the internet.”

Disclosure: Campbell Brown oversees global media partnerships at Meta. Brown co-founded The 74 and sits on its board of directors.
