In the wake of a deadly attack by Hamas militants in Israel, there has been a significant proliferation of false claims and manipulated images, drawing attention to Elon Musk’s X platform, which has faced criticism from the European Union.
Researchers focused on combating online disinformation have encountered challenges in tracking deceptive content on X, formerly known as Twitter, following changes Musk implemented earlier this year. Those changes have made it harder to monitor the extent of deception during real-time events: X revoked access to a data tool that had been available to academics before Musk acquired the platform in October last year.
Now, researchers must manually analyze thousands of links, as stated by Ruslan Trad, a resident fellow at the Atlantic Council’s Digital Forensic Research Lab (DFRLab). In response, an X representative mentioned that over 500 unique Community Notes have been posted regarding the Israeli-Palestinian conflict, allowing users to add context to potentially misleading content.
X recently disclosed the removal of newly created accounts affiliated with Hamas, and it took action on tens of thousands of posts sharing graphic media, violent speech, and hateful conduct. However, the specifics of these actions were not disclosed.
Misinformation on X and Meta Platforms' Facebook included a manipulated U.S. government document falsely approving $8 billion in military funds to Israel, as reported by the Reuters Fact Check team. Meta stated that experts, including Hebrew and Arabic speakers, are monitoring the situation in real time.
Other instances of misinformation on X included a falsely labeled video depicting Hamas militants with a kidnapped child and a video from a Bruno Mars concert mistakenly captioned as footage from an Israeli music festival attacked by Hamas.
Notably, X has come under regulatory scrutiny, with European Union Commissioner Thierry Breton warning Musk that the platform is spreading illegal content and disinformation. Musk challenged this assertion and requested a list of the alleged violations.
Under Musk’s leadership, X has introduced features such as paid account verification and revenue-sharing programs, which could incentivize the spread of provocative or false claims. Some accounts appeared to have been created recently to gain popularity and disseminate misinformation about the conflict.
Musk himself recommended following two accounts that had previously shared false claims, as reported by the Washington Post. Misinformation has been particularly prevalent on X, according to experts.
False information has also spread on platforms such as Telegram and TikTok. Telegram said it was unable to verify information, while TikTok did not respond to a request for comment.
Social media platforms face the challenge of moderating content to protect users while allowing real-time information dissemination, a task that becomes more complex during unexpected events such as terrorist attacks. Community Notes on X have often appeared only after misleading narratives reached a wide audience, diminishing their effectiveness in correcting false information.
X emphasized the importance of real-time information access in the public interest, while YouTube allows certain violent or graphic content if it holds news or documentary value about the conflict. Snap, the owner of Snapchat, maintains its map feature and employs monitoring teams to address misinformation and incendiary content in the region.