Senators, experts coalesce around transparency regulations for social media companies
At a hearing on social media’s role in spreading extremism, panelists said that outside researchers should be given access to social media algorithms and other data
Hours ahead of Facebook’s announcement that it would be rebranding itself as “Meta” and shifting its company focus toward the so-called “metaverse,” the Senate’s Homeland Security and Governmental Affairs Committee contemplated strategies for cracking down on extremism on social media platforms.
Thursday’s hearing with a panel of outside experts came amid a renewed push on Capitol Hill to regulate social media companies in the wake of a series of damaging leaks about Facebook’s internal practices and its knowledge that its site funnels users toward extremist content.
Dave Sifry, vice president of the Anti-Defamation League’s Center for Technology and Society, summarized the frustrations of many of the members of the expert panel and the committee in his opening statement.
“Self-regulation is clearly not working,” Sifry said. “Without regulation and reform, they will continue to focus on generating record profits at the expense of our safety and the security of our republic.”
The experts and several of the lawmakers present appeared to be in agreement that additional regulations would be necessary to increase the companies’ transparency.
“It’s not enough for companies to simply pledge that they will get tougher on harmful content; those pledges have gone largely unfulfilled for several years now,” Committee Chair Gary Peters (D-MI) said in his opening statement. “Americans deserve answers on how the platforms themselves are designed to funnel specific content to certain users, and how that might distort users’ views and shape their behavior, online and offline.”
Several of the panelists proposed new regulations that would force social media companies to open their internal systems and data — particularly their recommendation algorithms that push content into users’ feeds — to private researchers and scholars who could conduct oversight and publish their findings, while keeping private user data out of government hands.
“They have lost their right to secrecy,” Nathaniel Persily, a Stanford University law professor who leads the school’s Cyber Policy Center, said. “We are at a critical moment when we need to know exactly what is happening on these platforms.”
Committee Ranking Member Rob Portman (R-OH) said he and Sen. Chris Coons (D-DE) are currently working on legislation that would impose such transparency requirements “so that we can all work together on solutions to these problems that all of us have identified.”
Portman added that he sees this measure as a necessary precursor to further regulatory initiatives.
“We really don’t know what we’re trying to regulate if there is a lack of transparency as to what that design is or how these algorithms are derived,” Portman said.
Persily said he believes that increasing transparency will motivate the platforms to change how they function and eliminate alleged political bias in content moderation — an issue raised by several Republicans on the committee.
“This will change their behavior if they know that they’re being watched. It’s not just about providing a subsidy to outside researchers to figure out something for their publications,” he explained. “It’s about making sure someone’s in the room to figure out what is going on.”
Several members of the expert panel argued that companies’ opaque advertising practices are the key reason for the often-unchecked proliferation of extremist content on social media.
“Core product mechanics like virality are built around keeping you, your friends and your family engaged,” Sifry said. “The problem is that misinformation, hate-filled and polarizing content is highly engaging. So algorithms promote that content… these platforms exploit people’s proclivity to interact more with incendiary content. Ultimately, these companies neglect our safety and security because it’s good for the bottom line.”
He added that, fundamentally, Congress must “[create] systems that actually bring about a change in [companies’] incentive systems.”
Multiple experts also said Congress should reform or eliminate portions of the much-discussed Section 230 of the Communications Decency Act, the legal provision that shields websites from legal liability for the content their users post. But not all seemed to be in agreement about what those reforms should entail.
Mary Anne Franks, a law professor at the University of Miami and president of the Cyber Civil Rights Initiative, argued that Section 230 protections should be limited “to speech protected by the First Amendment” and denied to platforms “who exhibit deliberate indifference to unlawful content” that causes “foreseeable” harm.
Without changes to companies’ liability for user content, Franks added, “there is no real incentive for them to do anything.”
Sifry proposed conditioning liability protections on the companies acting “responsibly.”
Lawmakers and experts raised a series of other potential reforms throughout the hearing that seemed to find less support, including pushing platforms to verify users’ identities; mandating “circuit breakers” to slow the spread of viral extremist content; creating an independent nonprofit resource center to track extremism; passing antitrust legislation targeting social media companies; and enacting further privacy legislation related to advertising.
Peters is reportedly planning to call officials from Facebook, Twitter, YouTube and TikTok to testify before the committee as well about extremism and their sites’ recommendation algorithms.