Big Tech Execs Face Senate Over Child Safety Concerns
By Mariana Allende | Journalist & Industry Analyst
Thu, 02/01/2024 - 15:55
CEOs of major social media platforms underwent intense scrutiny at a recent US Senate Judiciary Committee hearing on child safety on the internet. Lawmakers grilled executives from Meta, TikTok, Discord, X, and Snapchat about their efforts to safeguard children on their platforms.
Meta CEO Mark Zuckerberg and TikTok CEO Shou Zi Chew returned to Congress for the hearing, while X CEO Linda Yaccarino, Snapchat CEO Evan Spiegel, and Discord CEO Jason Citron made their first appearances.
The hearing, titled “Big Tech and the Online Child Sexual Exploitation Crisis,” revolved around concerns over online child safety, including sexual exploitation, human trafficking, and mental health issues. Lawmakers raised questions about potential legislative actions to regulate social media and the role of Section 230 of the Communications Decency Act, which shields online platforms from liability for content posted by their users: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
“You have blood on your hands,” Senator Lindsey Graham told the assembled tech executives, criticizing their platforms for hosting harmful content. Victims of online abuse, along with families affected by social media-related tragedies, attended the hearing, amplifying the urgency for stricter regulation. “I just feel like for them, our children are just casualties, pawns, in this game to make money,” said Bridgette Norring, a mother whose son died of an accidental fentanyl overdose after ordering a pill through Snapchat.
Despite increased regulatory scrutiny, discussions around artificial intelligence's (AI) role in curbing harmful content remained limited. Some senators expressed skepticism about the platforms' ability to moderate misinformation effectively.
"We are receiving reports from the generative AI companies themselves, (online) platforms, and members of the public. It's absolutely happening," said John Shehan, senior vice president at the National Center for Missing and Exploited Children (NCMEC), which serves as the national clearinghouse to report child abuse content to law enforcement.
Last year, the NCMEC reported receiving 4,700 reports concerning content generated by artificial intelligence depicting child sexual exploitation, according to Reuters. Experts and researchers have sounded alarms about the potential risks associated with generative AI technology, which can produce text and images based on prompts, potentially worsening online exploitation.
The tech company representatives outlined their commitment to enhancing platform safety but did not disclose specific measures for doing so. TikTok pledged a US$2 billion investment in trust and safety efforts; for context, the company reported US$9.4 billion in revenue in 2022, double the previous year’s figure. Meanwhile, X reported a significant increase in account suspensions and user reports after moderation was loosened following Elon Musk’s purchase of the platform.
In Mexico, the Federal Attorney for the Protection of Girls, Boys, and Adolescents of the National System for the Integral Development of the Family (SNDIF) emphasized the need to recognize children as rights holders and full-fledged digital citizens. This implies effectively protecting the exercise of their rights, upholding their constitutional guarantees, and ensuring their safety in the digital environment.
The SNDIF urged the general population, the media, and applications or companies to refrain from transmitting or disseminating images, videos, or audio that violate these rights. However, no clear legislation is in effect yet.
During the COVID-19 pandemic alone, the number of reports of sexual abuse material found online in Mexico increased by 117%, according to Forbes Mexico. Between 2022 and 2023, WhatsApp, Facebook, YouTube, and TikTok were the social networks most used by children and adolescents in the country. However, 25% of users had public profiles, 42% took no protective measures against online sexual abusers, mostly due to lack of awareness, and 60% did not know where or how to report indecent content.