How Verified Accounts on X Thrive While Spreading Misinformation About the Israel-Hamas Conflict

by Jeff Kao, ProPublica, and Priyanjana Bengani, Tow Center for Digital Journalism

ProPublica is a nonprofit newsroom that investigates abuses of power. Sign up to receive our biggest stories as soon as they’re published. This story was co-published with the Tow Center for Digital Journalism at Columbia University.

“My sisters have died,” the young boy sobbed, chest heaving, as he wailed into the sky. “Oh, my sisters.” As Israel began airstrikes on Gaza following the Oct. 7 Hamas terrorist attack, posts by verified accounts on X, the social media platform formerly called Twitter, were being transmitted around the world. The heart-wrenching video of the grieving boy, viewed more than 600,000 times, was posted by an account named “#FreePalestine 🇵🇸.” The account had received X’s “verified” badge just hours before posting the tweet that went viral.

Days later, a video posted by an account calling itself “ISRAEL MOSSAD,” another “verified” account, this time bearing the logo of Israel’s national intelligence agency, claimed to show Israel’s advanced air defense technology. The post, viewed nearly 6 million times, showed a volley of rockets exploding in the night sky with the caption: “The New Iron beam in full display.”

And following an explosion on Oct. 14 outside the Al-Ahli Hospital in Gaza where civilians were killed, the verified account of the Hamas-affiliated news organization Quds News Network posted a screenshot from Facebook claiming to show the Israel Defense Forces declaring their intent to strike the hospital before the explosion. It was seen more than half a million times.

None of these posts depicted real events from the conflict. The video of the grieving boy was from at least nine years ago and was taken in Syria, not Gaza. The clip of rockets exploding was from a military simulation video game. And the Facebook screenshot was from a now-deleted Facebook page not affiliated with Israel or the IDF.

Just days before its viral tweet, the #FreePalestine 🇵🇸 account had a blue verification check under a different name: “Taliban Public Relations Department, Commentary.” It changed its name back after the tweet and was reverified within a week. Despite their blue check badges, neither Taliban Public Relations Department, Commentary nor ISRAEL MOSSAD (now “Mossad Commentary”) has any real-life connection to either organization. Their posts were eventually annotated by Community Notes, X’s crowdsourced fact-checking system, but these clarifications garnered about 900,000 views — less than 15% of what the two viral posts totaled. ISRAEL MOSSAD deleted its post in late November. The Facebook screenshot, posted by the account of the Quds News Network, still doesn’t have a clarifying note. Mossad Commentary and the Quds News Network did not respond to direct messages seeking comment; Taliban Public Relations Department, Commentary did not respond to public mentions asking for comment.

An investigation by ProPublica and Columbia University’s Tow Center for Digital Journalism shows how false claims based on out-of-context, outdated or manipulated media have proliferated on X during the first month of the Israel-Hamas conflict. The organizations looked at over 200 distinct claims that independent fact-checks determined to be misleading and searched for posts by verified accounts that perpetuated them, identifying more than 2,000 tweets in total. The tweets, collectively viewed half a billion times, were analyzed alongside account and Community Notes data.

ProPublica and Columbia University’s Tow Center for Digital Journalism identified more than 2,000 tweets by verified accounts that contained debunked claims based on out-of-context media. Quds News Network made five of those posts and continues to post about the conflict. Some of its English-language accounts on Facebook and Instagram have been suspended. (Screenshots of X taken and annotated by ProPublica and the Tow Center for Digital Journalism.)

The ongoing conflict in Gaza is the biggest test for changes implemented by X owner Elon Musk since his acquisition of Twitter last year. After raising concerns about the power of platforms to determine what speech is appropriate, Musk instituted policies to promote “healthy” debate under the maxim “freedom of speech, not reach,” where certain types of posts that previously would have been removed for violating platform policy now have their visibility restricted.

Within 10 days of taking ownership, Musk cut 15% of Twitter’s trust and safety team. He made further cuts in the following months, including firing the election integrity team, terminating many contracted content moderators and revoking existing misinformation policies on specific topics like COVID-19. In place of these safeguards, Musk expanded Community Notes. The feature, first launched in 2021 as Birdwatch, adds crowdsourced annotations to a tweet when users with diverse perspectives rate them “helpful.”
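The mechanics of that rating step are documented in the open-source Community Notes ranking code: ratings are fit by a small matrix-factorization model, and a note is promoted based on its intercept term, the helpfulness left over once a learned polarization axis accounts for one-sided agreement. The sketch below illustrates that bridging idea; the model form follows X's public documentation, but the hyperparameters and toy data are illustrative, not X's production values.

```python
# Minimal sketch of the "bridging" model documented in X's open-source
# Community Notes ranking code: rating ~ mu + user_bias + note_bias
#                                        + user_factor * note_factor.
# A note is promoted on its intercept (note_bias), i.e. the helpfulness that
# remains once a one-dimensional polarization factor soaks up partisan
# agreement. Learning rate, regularization and epochs are illustrative.
import numpy as np

def fit_note_intercepts(ratings, n_users, n_notes, lr=0.05, reg=0.03, epochs=300):
    """ratings: iterable of (user_id, note_id, value in {0.0, 1.0})."""
    rng = np.random.default_rng(0)
    mu, bu, bn = 0.0, np.zeros(n_users), np.zeros(n_notes)
    fu, fn = rng.normal(0, 0.1, n_users), rng.normal(0, 0.1, n_notes)
    for _ in range(epochs):
        for u, n, r in ratings:
            err = r - (mu + bu[u] + bn[n] + fu[u] * fn[n])
            mu += lr * err
            bu[u] += lr * (err - reg * bu[u])
            bn[n] += lr * (err - reg * bn[n])
            # simultaneous update so each factor sees the other's old value
            fu[u], fn[n] = (fu[u] + lr * (err * fn[n] - reg * fu[u]),
                            fn[n] + lr * (err * fu[u] - reg * fn[n]))
    return bn  # high intercept ~ rated helpful across perspectives

# Toy example: users 0-1 and 2-3 sit on opposite poles; note 0 is rated
# helpful by both sides, note 1 only by one side.
ratings = [(0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),
           (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0)]
print(fit_note_intercepts(ratings, n_users=4, n_notes=2))
```

In the toy data, the note endorsed by both poles earns the higher intercept, while agreement coming from only one side is absorbed by the polarity factors.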

“The Israel-Hamas war is a classic case of an information crisis on X, in terms of the speed and volume of the misinformation and the harmful consequences of that rhetoric,” said Michael Zimmer, the director of the Center for Data, Ethics, and Society at Marquette University in Wisconsin, who has studied how social media platforms combat misinformation.

While no social media platform is free of misinformation, critics contend that Musk’s policies, along with his personal statements, have led to a proliferation of misinformation and hate speech on X. Advertisers have fled the platform — U.S. ad revenue is down roughly 60% compared to last year. Last week, Musk reinstated the account of Alex Jones, who was ordered to pay $1.1 billion in defamation damages for repeatedly lying about the 2012 Sandy Hook school shooting. Jones appealed the verdict. This week, the European Union opened a formal investigation against X for breaching multiple provisions of the Digital Services Act, including risk management and content moderation, as well as deceptive design in relation to its “so-called Blue checks.”

ProPublica and the Tow Center found that verified blue check accounts that posted misleading media saw their audience grow on X in the first month of the conflict. This included dozens of accounts that posted debunked tweets three or more times and that now have over 100,000 followers each. The false posts appear to violate X’s synthetic and manipulated media policy, which bars all users from sharing media that may deceive or confuse people. Many accounts also appear to breach the eligibility criteria for verification, which state that verified accounts must not be “misleading or deceptive” or engage in “platform manipulation and spam.” Several of the fastest-growing accounts that have posted multiple false claims about the conflict now have more followers than some regional news organizations covering it.

We also found that the Community Notes system, which has been touted by Musk as a way to improve information accuracy on the platform, hasn’t scaled sufficiently. About 80% of the 2,000 debunked posts we reviewed had no Community Note. Of the 200 debunked claims, more than 80 were never clarified with a note.

When clarifying Community Notes did appear, they typically reached a fraction of the views that the original tweet did, though views on Community Notes are significantly undercounted. We also found that in some cases, debunked images or videos were flagged by a Community Note in one tweet but not in others, despite X announcing, partway through the period covered by our dataset, that it had improved its media-matching algorithms to address this. For tweets that did receive a Community Note, the note typically didn’t become visible until hours after the post.

This last finding expands on a recent report by Bloomberg, which analyzed 400 false posts tagged by Community Notes in the first two weeks after the Oct. 7 attack and found it typically took seven hours for a Community Note to appear.

For the tweets analyzed by ProPublica and the Tow Center, the median time that elapsed before a Community Note became visible decreased to just over five hours in the first week of November after X improved its system. Outliers did exist: Sometimes it still took more than two days for a note to appear, while in other cases, a note appeared almost instantaneously because the tweet used media that the system had already encountered.

Multiple emails sent to X’s press inbox seeking comment on our findings triggered automated replies to “check back later” with no further response. Keith Coleman, who leads the Community Notes team at X, was separately provided with summary findings relevant to Community Notes as well as the dataset containing the compiled claims and tweets.

Via email, Coleman said that the tweets identified in this investigation were a small fraction of those covered by the 1,500 visible Community Notes on X about the conflict from this time period. He also said that many posts with high-visibility notes were deleted after receiving a Community Note, including ones that we did not identify. When asked about the number of claims that did not receive a single note, Coleman said that users might not have thought one was necessary, pointing to examples where images generated by artificial intelligence tools could be interpreted as artistic depictions. AI-generated images accounted for around 7% of the tweets that did not receive a note; none acknowledged that the media was AI-generated. Coleman said that the current system is an upgrade over X’s historic approaches to dealing with misinformation and that it continues to improve; “most importantly,” he said, the Community Notes program “is found helpful by people globally, across the political spectrum.”

Community Notes were initially meant to complement X’s various trust and safety initiatives, not replace them. “It still makes sense for platforms to keep their trust and safety teams in a breaking-news, viral environment. It’s not going to work to simply fling open the gates,” said Mike Ananny, an associate professor of communication and journalism at the University of Southern California, who is skeptical about leaving moderation to the community, particularly after the changes Musk has made.

“I’m not sure any community norm is going to work given all of the signals that have been given about who’s welcome here, what types of opinions are respected and what types of content is allowed,” he said.

ProPublica and the Tow Center compiled a large sample of data from multiple sources to study the effectiveness of Community Notes in labeling debunked claims. We found over 1,300 verified accounts that posted misleading or out-of-context media at least once in the first month of the conflict; 130 accounts did so three or more times. (For more details on how the posts were gathered, see the methodology section at the end of this story.)

Musk overhauled Twitter’s account verification program soon after acquiring the company. Previously, Twitter gave verified badges to politicians, celebrities, news organizations, government agencies and other vetted notable individuals or organizations. Though the legacy process was criticized as opaque and arbitrary, it provided a signal of authenticity for users. Today, accounts receive the once-coveted blue check in exchange for $8 a month and a cursory identity check. Despite well-documented impersonation and credibility issues, these “verified” accounts are prioritized in search, in replies and across X’s algorithmic feeds.

If an account continuously shares harmful or misleading narratives, X’s synthetic and manipulated media policy states that its visibility may be reduced or the account may be locked or suspended. But the investigation found that prominent verified accounts appeared to face few consequences for broadcasting misleading media to their large follower networks. Of the 40 accounts with more than 100,000 followers that posted debunked tweets three times or more in the first month of the conflict, only seven appeared to have had any action taken against them, according to account history data shared with ProPublica and the Tow Center by Travis Brown. Brown is a software developer who researches extremism and misinformation on X.

Those 40 accounts, a number of which have been identified as the most influential accounts engaging in Hamas-Israel discourse, grew their collective audience by nearly 5 million followers, to around 17 million, in the first month of the conflict alone.

A few of the smaller verified accounts in the dataset received punitive action: About 50 accounts that posted at least one false tweet were suspended. On average, these accounts had 7,000 followers. It is unclear whether the accounts were suspended for manipulated media policy violations or for other reasons, such as bot-like behavior. Around 80 accounts no longer have a blue check badge. It is unclear whether they lost their blue checks because they stopped paying, because they had recently changed their display name (which triggers a temporary removal of the verified status), or because X revoked the status. X has said it removed 3,000 accounts run by “violent entities,” including Hamas, in the region.

On Oct. 29, X announced a new policy where verified accounts would no longer be eligible to share in revenue earned from ads that appeared alongside any of their posts that had been corrected by Community Notes. In a tweet, Musk said, “the idea is to maximize the incentive for accuracy over sensationalism.” Coleman said that this policy has been implemented, but did not provide further details.

False claims that go viral are frequently repeated by multiple accounts and often take the form of decontextualized old footage. One of the most widespread false claims, that Qatar was threatening to stop supplying natural gas to the world unless Israel halted its airstrikes, was repeated by nearly 70 verified accounts. This claim, which leaned on a false description of an unrelated 2017 speech by the Qatari emir to bolster its credibility, received over 15 million views collectively, with a single post by Dominick McGee (@dom_lucre) amassing more than 9 million views. McGee, an election denier with nearly 800,000 followers who is popular in the QAnon community, was suspended from X in July 2023 for sharing child exploitation imagery; X reversed the suspension shortly after. When reached by direct message on X, McGee denied that he had shared the image, claiming instead that it was “an article touching it.”

Community Notes like this one appear alongside many false posts claiming Qatar is threatening to cut off its gas supply to the world. This note was seen more than 400,000 times across 159 posts that shared the same video clip, and it appeared on nine out of nearly 70 posts in our dataset that made this claim. (Screenshot of X taken and annotated by ProPublica and the Tow Center for Digital Journalism.)

Another account, using the pseudonym Sprinter, shared the same false claim about Qatar in a post that was viewed over 80,000 times. These were not the only false posts made by either account. McGee shared six debunked claims about the conflict in our dataset; Sprinter shared 20.

Sprinter posted an image of casualties from the Hamas attack on Oct. 7, most of whom were civilians, and claimed it showed Israeli military losses during the ground war later in the month. Another post mistranslated the words of an injured Israeli soldier. (Screenshots of X taken and annotated by ProPublica and the Tow Center for Digital Journalism.)

Sprinter has tweeted AI-generated images, digitally altered videos and the unsubstantiated claim that Ukraine is providing weapons to Hamas. Each of these posts has received hundreds of thousands of views. The account’s follower count has increased by 60% to about 500,000, rivaling the following of Haaretz and the Times of Israel on X. Sprinter’s profile — which has also used the pseudonyms SprinterTeam, SprinterX and WizardSX, according to historical account data provided by Brown — was “temporarily restricted” by X in mid-November, but it retained its “verified” status. Sprinter’s original profile linked to a backup account. That account — whose name and verification status continue to change — still posts dozens of times a day and has grown to over 25,000 followers. Sprinter did not respond to a request for comment and blocked the reporter after being contacted. The original account appears to no longer exist.

Verification badges were once a critical signal in sifting official accounts from inauthentic ones. But with X’s overhaul of the blue check program, that signal now essentially tells you whether the account pays $8 a month. ISRAEL MOSSAD, the account that posted video game footage falsely claiming it was an Israeli air defense system, had gone from fewer than 1,000 followers, when it first acquired a blue check in September 2023, to more than 230,000 today. In another debunked post, published the same day as the video game footage, the account claimed to show more of the Iron Beam system. That tweet still doesn’t have a Community Note, despite having nearly 400,000 views. The account briefly lost its blue check within a day of the two tweets being posted, but regained it days after changing its display name to Mossad Commentary. Even though it isn’t affiliated with Israel’s national intelligence agency, it continues to use Mossad’s logo in its profile picture.

“The blue check is flipped now. Instead of a sign of authenticity, it’s a sign of suspicion, at least for those of us who study this enough,” said Zimmer, the Marquette University professor.

Verified Accounts That Shared Misinformation Grew Quickly During the Israel-Hamas Conflict

Several of the fastest-growing accounts that have posted multiple false claims about the conflict now have more followers than some regional news organizations actively covering it.

(Lucas Waldron/ProPublica)

Of the verified accounts we reviewed, the one that grew the fastest during the first month of the Israel-Hamas conflict was also one of the most prolific posters of misleading claims. Jackson Hinkle, a 24-year-old political commentator and self-described “MAGA communist,” has built a large following posting highly partisan tweets. He has been suspended from various platforms in the past, pushed pro-Russian narratives and claimed that YouTube permanently suspended his account for “Ukraine misinformation.” Three days later, he tweeted that YouTube had banned him because it didn’t want him telling the truth about the Israel-Hamas conflict. He now has more than 2 million followers on X; over 1.5 million of those arrived after Oct. 7. ProPublica and the Tow Center found over 20 tweets by Hinkle using misleading or manipulated media in the first month of the conflict; more than half had been tagged with a Community Note. The tweets amassed 40 million views, while the Community Notes were collectively viewed just under 10 million times. Hinkle did not respond to a request for comment.

All told, debunked tweets with a Community Note in the ProPublica-Tow Center dataset amassed 300 million views in aggregate, about five times the total number of views on the notes, even though Community Notes can appear on multiple tweets and collect views from all of them, including from tweets that were not reviewed by the news organizations.

Hinkle misleadingly claimed that China was sending warships in the direction of Israel, even though the ships had been in routine operation in the region since May. Hinkle also posted footage claiming to show Hezbollah’s anti-ship missiles, but the video is from 2019 and not related to the current conflict. (Screenshots of X taken and annotated by ProPublica and the Tow Center for Digital Journalism.)

X continues to improve the Community Notes system. It announced updates to the feature on Oct. 24, saying notes were appearing more often on viral and high-visibility content, and appearing faster in general. But ProPublica and the Tow Center’s review found that less than a third of debunked tweets created since the update received a Community Note, though the median time for a note to become visible dropped noticeably, from seven hours to just over five hours in the first week of November. The Community Notes team said over email that its data showed a note typically took around five hours to become visible in the first few days of the conflict.

Aviv Ovadya, an affiliate at Harvard’s Berkman Klein Center for Internet & Society who has worked on social media governance and algorithms similar to the one Community Notes uses, says that any fact-checking process, whether it relies on crowdsourced notes or a third-party fact-checker, is likely to always be playing catch-up to viral claims. “You need to know if the claim is worth even fact-checking,” Ovadya said. “Is it worth my time?” Once a false post is identified, a third-party fact-check may take longer than a Community Note.

Coleman, who leads the Community Notes team, said over email that his team found Community Notes often appeared faster than posts by traditional fact-checkers, and that they are committed to making the notes visible faster.

Our review found that many viral tweets with claims that had been debunked by third-party fact-checkers did not receive a Community Note in the long run. Of the hundreds of tweets in the dataset that gained over 100,000 impressions, only about half had a note. Coleman noted that of those widely viewed tweets, the ones with visible Community Notes attached had nearly twice as many views.

To counter the instances where false claims spread quickly because many accounts post the same misleading media in a short time frame, the company announced in October that it would attach the same Community Note to all posts that share a debunked piece of media. ProPublica and the Tow Center found the system wasn’t always successful.

For example, on and after Oct. 25, multiple accounts tweeted an AI-generated image of a man with five children amid piles of rubble. Community Notes for this image appeared thousands of times on X. However, of the 22 instances we identified in which a verified account tweeted the image, only seven of those were tagged with a Community Note. (One of those tweets was later deleted after garnering more than 200,000 views.)

We found X’s media-matching system to be inconsistent for numerous other claims as well. Coleman pointed to the many automatic matches as a sign that it is working and said that its algorithm prioritizes “high precision” to avoid mistakenly finding matches between pieces of media that are meaningfully different. He also said the Community Notes team plans to further improve its media-matching system.
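X has not published how its media matching works. A generic way to see the precision tradeoff Coleman describes is perceptual hashing, in which near-duplicate images produce nearby hashes and a distance threshold decides what counts as “the same” media. The dHash sketch below, assuming Pillow is installed, illustrates that class of technique; it is not X’s system.

```python
# Generic near-duplicate image matching via difference hashing (dHash).
# Illustrates the precision/recall tradeoff in media matching; this is NOT
# X's (unpublished) algorithm. Assumes Pillow: pip install Pillow.
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    """64-bit hash: one bit per pixel, set when it is brighter than its right neighbor."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())  # row-major, rows of length size + 1
    bits = 0
    for row in range(size):
        for col in range(size):
            bits = (bits << 1) | (px[row * (size + 1) + col] > px[row * (size + 1) + col + 1])
    return bits

def is_match(h1: int, h2: int, max_dist: int = 4) -> bool:
    """Hamming-distance threshold: a stricter (lower) cutoff means higher
    precision but more missed re-posts of the same media."""
    return bin(h1 ^ h2).count("1") <= max_dist
```

Tightening `max_dist` is the “high precision” choice: fewer false matches between meaningfully different images, at the cost of missing re-posts that were cropped or re-encoded.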

According to annotations on Community Notes on X that we found, a note for this image was displayed on at least 7,200 posts. We found 22 tweets with this image, but only seven had a Community Note. The second image has been deleted, but not before it garnered more than 200,000 views. (Screenshots of X taken and annotated by ProPublica and the Tow Center for Digital Journalism.)

The false claims ProPublica and the Tow Center identified in this analysis were also posted on other platforms, including Instagram and TikTok. On X, having a Community Note added to a post does not affect how it is displayed. Other platforms deprioritize fact-checked posts in their algorithmic feeds to limit their reach. While Ovadya believes that continued investment in Community Notes is important, he says changing X’s core algorithm could be even more impactful.

“If X’s recommendation algorithms were built on the same principles as Community Notes and was actively rewarding content that bridges divides,” he said, “you would have less misinformation and sensationalist content going viral in the first place.”

Methodology

ProPublica and Columbia University’s Tow Center for Digital Journalism identified and analyzed more than 2,000 tweets by verified accounts that posted clearly debunked images or videos in the first month of the Israel-Hamas war. The posts, which encompassed more than 200 false claims, were published by more than 1,300 verified accounts and collectively received half a billion impressions. We then looked at Community Notes and account data associated with those tweets.

Since the metrics on tweets, accounts and Community Notes were viewed at various points in time, they may not be current; for example, the status of accounts or Community Notes may have changed and the number of impressions on tweets and notes might be different after the time frame of our analysis.

In this review, we focused on claims that could be unambiguously debunked, including those based on generative AI images that aren’t labeled as such, old pictures and videos presented as current, falsified social media posts and documents, footage from video games described as real events, doctored images and mistranslated videos. To compile our list of debunked claims, we reviewed fact checks from multiple news organizations, including BBC Verify, Logically Facts, two stories from The New York Times, The Associated Press, Agence France-Presse and Reuters. We also identified debunked claims by filtering Community Notes data by relevant keywords (Gaza, Palestine/Palestinian, Israel, IDF, Hamas/Hammas, Mossad, Iron Beam, Iron Dome), and verified the note using independent news organizations or reverse image searches to ensure that each was accurate. We did not include claims that could not be independently verified or that were contested under the fog of war.

We compiled tweets using X’s text search functionality and Google’s reverse image search. Reverse image search was able to identify both images and videos (using a frame from the video). The claims and tweets we compiled are a convenience sample, not an exhaustive survey of all media-based misinformation on X during the first month of the Israel-Hamas war: The dataset relies heavily on images that Google has indexed as well as tweets that use identical or very similar language, which allows X’s search functionality to surface them. Additionally, the accounts mentioned in the story might have tweeted more false claims than those we identified. Tweets deleted prior to our searches are not captured in our dataset. (In its response, X provided us with 18 examples of Community Notes and tweets that were not in our dataset and could not be located because the tweets were not yet indexed by Google or could not be easily found by X’s search function.)
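For video clips, that reverse-image step first reduces the clip to a still frame. A minimal sketch of how that can be done, assuming OpenCV is installed and “clip.mp4” is a placeholder for a downloaded video:

```python
# Sketch: pull a representative frame from a video so it can be submitted
# to a reverse image search. Assumes OpenCV (pip install opencv-python);
# "clip.mp4" and "frame.jpg" are placeholder file names.
import cv2

def extract_frame(video_path: str, out_path: str, position: float = 0.5) -> bool:
    """Save the frame at a relative position (0.0-1.0) through the clip."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if total <= 0:
        cap.release()
        return False
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(total * position))  # seek, e.g. to the midpoint
    ok, frame = cap.read()
    cap.release()
    if ok:
        cv2.imwrite(out_path, frame)  # upload this still to a reverse image search
    return ok

if __name__ == "__main__":
    extract_frame("clip.mp4", "frame.jpg")
```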

We also analyzed the accounts that were posting these tweets, using account data collected by researcher Travis Brown from July through November 2023. We used this data to determine account status, follower count, handles and usernames.

For Community Notes, we downloaded X’s open-source datasets and filtered by notes with the above-mentioned keywords. A single tweet can have multiple Community Notes and the same note can appear alongside multiple tweets. Our analysis ensured we took both relationships into account.
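A minimal sketch of that filtering and bookkeeping, assuming pandas and a local copy of the public notes file; the file name and column names (noteId, tweetId, summary) follow the public download as of late 2023 and should be re-checked against the current data:

```python
# Sketch: filter X's public Community Notes dump by conflict-related keywords
# and tally the note/tweet relationships. Assumes pandas and a downloaded
# notes TSV; file name and columns are assumptions based on the public data.
import pandas as pd

KEYWORDS = ["gaza", "palestin", "israel", "idf", "hamas", "hammas",
            "mossad", "iron beam", "iron dome"]  # "palestin" catches both variants

notes = pd.read_csv("notes-00000.tsv", sep="\t", dtype={"tweetId": str})
mask = notes["summary"].fillna("").str.lower().apply(
    lambda text: any(kw in text for kw in KEYWORDS))
relevant = notes[mask]

# A tweet can carry several notes; media matching can also surface one note
# on many tweets, which this per-row view does not fully capture.
notes_per_tweet = relevant.groupby("tweetId")["noteId"].nunique()
print(f"{len(relevant)} keyword-matched notes across {notes_per_tweet.size} tweets")
print(f"most notes on a single tweet: {notes_per_tweet.max()}")
```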

X’s Community Notes data contains the current status of a note as well as the time at which that status was set. It also includes when the Community Note was created and the note’s text. For some tweets that use repurposed media (i.e., media from a tweet that’s already been debunked by Community Notes), the note appears immediately due to improvements in X’s media-matching algorithm. This means that occasionally the time of creation or visibility of a note will be earlier than the time the tweet was posted.
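Because tweet IDs are “snowflake” values that encode a millisecond creation timestamp, the lag between a tweet and its note’s first visible status can be computed from the two datasets alone. A sketch, where the status-timestamp field name is our assumption based on the public noteStatusHistory file:

```python
# Sketch: hours from tweet creation to a note's first visible status.
# Tweet IDs are "snowflake" IDs embedding a millisecond timestamp; the
# field name timestampMillisOfFirstNonNMRStatus follows the public
# noteStatusHistory file and should be verified against the download.
TWITTER_EPOCH_MS = 1288834974657  # offset used by the platform's snowflake IDs

def tweet_created_ms(tweet_id: int) -> int:
    """Recover a tweet's creation time (Unix ms) from its ID."""
    return (tweet_id >> 22) + TWITTER_EPOCH_MS

def hours_to_visibility(tweet_id: int, first_status_ms: int) -> float:
    """Hours between the tweet and the note's first non-"Needs More Ratings"
    status; negative values occur when media matching attaches an older note."""
    return (first_status_ms - tweet_created_ms(tweet_id)) / 3_600_000

# Example with a made-up ID from around October 2023 and a note five hours on.
tid = 1711000000000000000
print(round(hours_to_visibility(tid, tweet_created_ms(tid) + 5 * 3_600_000), 1))  # 5.0
```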

Do You Have a Tip for ProPublica? Help Us Do Journalism.

Elizabeth Yaboni of the Tow Center for Digital Journalism contributed research.


This content originally appeared on Articles and Investigations - ProPublica and was authored by Jeff Kao, ProPublica, and Priyanjana Bengani, Tow Center for Digital Journalism.

