WASHINGTON—This week, Representatives Brad Schneider (D-IL-10), Tony Gonzales (R-TX-23), and Brian Higgins (D-NY-26) led a bipartisan group of 13 Members of Congress in a letter to social media companies requesting an explanation of how they moderate and flag harmful content that could indicate intentions to commit acts of violence.

As of October 10, 2022, there have been more than 530 mass shootings in the United States, including the recent shootings in Highland Park, Illinois in Rep. Schneider’s district; Uvalde, Texas in Rep. Gonzales’ district; and Buffalo, New York in Rep. Higgins’ district. In these three incidents and many others, the shooters used social media platforms to share violent content and sometimes even indicate their plans in advance.

“Our communities continue to grieve the lives lost in these tragic shootings, and we are determined to take action,” said Congressman Schneider. “Every one of us is responsible for doing what we can to keep our communities safe. After every tragedy, we learn about the shooters and often find warning signs that, if acted upon sooner, could have saved innocent lives. That’s why I implore social media companies to be transparent about the steps they are taking to moderate and flag violent threats on their platforms.”

“In the aftermath of Uvalde, it came to light that the shooter had a robust online presence where he detailed his violent inclination and plans to harm others,” said Congressman Gonzales. “That’s why I joined my colleagues in an effort to help catch these dangerous attackers before a tragic situation occurs. I will always fight to keep our communities safe and secure.”

“On May 14, a racist mass shooter targeted a neighborhood supermarket in Buffalo, killing ten of our neighbors and injuring several more. Radicalized by online platforms, he sought out our community and attempted to broadcast this violent act on social media. Despite the fact that he had previously displayed threats of violence online, his content was not removed until a mass shooting was already taking place. That is unacceptable,” said Congressman Brian Higgins. “Holding social media companies accountable is long overdue. Platforms gain new followers every day as our online and offline lives become further integrated, and effective policies must be put in place to ensure that they do not continue to contribute to violence in our communities.”

The letter requests answers from representatives at YouTube, Twitter, Meta, Discord, TikTok, Yubo, Snap, and Twitch to specific questions regarding content moderation practices and procedures for handling troubling content flagged by users.

Additional signers of the letter include Representatives Salud Carbajal (D-CA-24), Mike Doyle (D-PA-18), Bill Foster (D-IL-11), Kay Granger (R-TX-12), Jahana Hayes (D-CT-5), Doug LaMalfa (R-CA-01), Jimmy Panetta (D-CA-20), Elissa Slotkin (D-MI-08), Dina Titus (D-NV-01), and Zoe Lofgren (D-CA-19).


The full text of the letter is below:

Dear Industry Leaders,

We write to you in the wake of the recent mass shootings in our districts: Highland Park, Illinois; Uvalde, Texas; and Buffalo, New York. As you may know, on July 4th, as families—parents, grandparents, grandchildren—lined the streets of Highland Park, a single, evil man climbed a ladder to a store rooftop and monstrously opened fire into the crowd. Seven loving people were murdered, dozens were wounded, and an entire community was traumatized.

Sadly, and unacceptably, this tragedy is all too familiar in far too many American communities. On May 24th, a gunman entered Robb Elementary School in Uvalde, Texas and murdered 21 people, 19 of whom were children, and wounded 17 others. Ten days earlier, a shooter in Buffalo, New York, opened fire in a grocery store, murdering 10 people and injuring 3. To date, there have been over 350 mass shootings in the United States this year alone.

As Members of Congress, we continue to grieve the lives lost and the tragic impact on our communities, but these tragedies are also a call to action. As representatives, we must always ask whether there are ways to prevent mass shootings like these. The more we learned about the shooters, the more warning signs we found that, if properly heeded, may have prevented these tragedies.

Sources revealed that the shooter in the Highland Park incident frequently used the social media platform Discord to share violent material, including videos depicting him committing mass murders. The shooter in Buffalo also used Discord to document his plans for the shooting and live-streamed the attack on Twitch.

In Uvalde, while friends and family may have been unaware of the shooter’s deadly plans, he had a robust online presence where he detailed his violent inclinations and plans to harm others. He sent messages threatening to kidnap, rape, and kill. He was particularly active on the Yubo app, where he posted images of dead cats, joked about sexual assault, and sexually harassed young women. While several people reported him for bullying and threats on Yubo, the killer remained active on the platform.

The details from the Highland Park, Uvalde, and Buffalo shootings echo what we have heard from many prior mass shootings: shooters often make their violent intentions known online via social media. Often, this violent content is reported to your companies, to no avail.

As leaders of the social media industry in a world that is increasingly online, we respectfully ask for your responses to the following questions:

1) Content moderation

  • How many content moderators do you employ or contract?
  • How do you train content moderators to recognize harmful content?
  • Can users report multiple pieces of content as part of a single incident?
  • Do you offer real-time support for urgent, time-sensitive incidents or attacks?
  • Do you provide resources to users, such as warnings, when they click on outbound links to sites that contain content or activity that violates platform rules?
  • How do you engage users posting harmful content? Do you provide users that violate platform rules with a detailed explanation of why content was removed? Are there escalating penalties for repeat offenders?
  • What proactive steps do you take to ensure that users with a history of hateful and abusive behavior are unable to continue these practices on your site?

2) Flagged content

  • What is the average time it takes to review content flagged by employees, contractors, trusted flaggers, and platform users? Do your protocols differ depending on the source?
  • Do you have an abuse reporting portal or ticketing system for users to track your response to reported incidents?
  • When you receive user reports about troubling content, how do you determine whether to bring it to the attention of local or federal law enforcement?
  • In how many instances in the U.S. have you flagged user content for local law enforcement?

3) Process improvements

  • What actions have you taken to refine reporting mechanisms, content removal processes, or other content moderation practices in the wake of these recent attacks?

We all have a responsibility to do what we can to keep our communities safe. We would appreciate a thoughtful response on how you are doing your part and welcome the opportunity to work together on this shared goal.