CEOs of Facebook, Twitter and Google testify before Congress about misinformation

Members of the House Energy and Commerce Committee are expected to press Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Twitter CEO Jack Dorsey on their platforms' efforts to curb unfounded election fraud claims and vaccine skepticism. The platforms' opaque algorithms, which prioritize user engagement and can amplify misinformation, may also come under examination, according to a committee memo.

Technology platforms, which already faced intense pressure to counter misinformation and foreign interference in the lead-up to the 2020 elections, came under greater scrutiny in the months that followed. While some of the companies implemented new measures to crack down on conspiracy theories, they were not enough to prevent hardline supporters of President Donald Trump from storming the U.S. Capitol.

The hearing also marks the first time the CEOs have returned to Congress since Trump was banned or suspended from their respective platforms following the Capitol riot. In their prepared remarks, some of the executives address the events of January 6th.

“The Capitol attack was a horrible assault on our values and our democracy, and Facebook is committed to helping law enforcement bring the insurgents to justice,” Zuckerberg’s testimony says. Zuckerberg also adds, “We do more to address misinformation than any other company.”

The hearings coincide with legislation under active consideration in both the House and Senate to rein in the tech industry. Some bills target the companies’ economic dominance and alleged anti-competitive practices. Others address the platforms’ approach to content moderation or data privacy. The various proposals could impose tough new requirements on technology platforms or expose them to greater legal liability in ways that could reshape the industry.

For the tech executives, Thursday’s session may also be their last chance to make their case in person to lawmakers before Congress embarks on potentially sweeping changes to federal law.

At the heart of the coming policy battle is Section 230 of the Communications Act of 1934, the civil liability shield that grants websites legal immunity for much of the content posted by their users. Members of both parties have called for updates to the law, which has been interpreted broadly by the courts and is credited with enabling the development of the open internet.


Written testimony released by the CEOs ahead of Thursday’s high-profile hearing outlines potential common ground with lawmakers, offering clues about where the companies intend to work with Congress and where Big Tech is likely to push back.

Zuckerberg plans to argue for narrowing the scope of Section 230. In his written remarks, Zuckerberg says Facebook favors a form of conditional liability, under which online platforms could be sued over user content if the companies fail to adhere to certain best practices established by an independent third party.

The other two CEOs do not wade into the Section 230 debate or discuss the role of government in such detail, but they do offer their own visions for content moderation. Pichai’s testimony calls for clearer content policies and for giving users the ability to appeal content decisions. Dorsey’s testimony reiterates his calls for user-driven content moderation and for the creation of better settings and tools that let users customize their online experience.

By now, the CEOs have ample experience testifying before Congress. Zuckerberg and Dorsey most recently appeared before the Senate in November to discuss content moderation, and before that, Zuckerberg and Pichai testified in the House last summer on antitrust issues.

In the days leading up to Thursday’s hearing, the companies have argued that they acted aggressively to counter misinformation. Facebook said Monday that it removed 1.3 billion fake accounts last fall and now has more than 35,000 people working on content moderation. Twitter said this month that it would begin applying warning labels to misinformation about the coronavirus vaccine, and that repeated violations of its Covid-19 policies could lead to permanent bans. YouTube said this month that it has removed tens of thousands of videos containing misinformation about Covid vaccines, and in January, following the Capitol riots, it announced it would restrict channels that share false claims doubting the outcome of the 2020 elections.

But these claims of progress are unlikely to appease committee members, whose memo cites several research papers indicating that misinformation and extremism continue to spread across the platforms.
