As the U.S. approaches another high-stakes election, tech giants Google and Meta are temporarily banning political ads across their platforms in an effort to control the spread of misinformation. The move responds to the falsehoods that have repeatedly marred past election cycles, but some experts question its effectiveness, arguing it may come too late to make a meaningful impact.
Meta recently began blocking ads about U.S. social issues, elections, and politics on both Facebook and Instagram. Originally set to lift on Tuesday, the restriction was extended later into the week to minimize the spread of misinformation during the vote-counting period. Google will soon follow suit, pausing election-related ads once the last polling station closes on election day. The suspension will remain in place for an unspecified period, with the same goal of preventing false information from circulating.
The idea behind these restrictions is to prevent candidates and their supporters from manipulating public sentiment or falsely claiming victory before official results are confirmed. With an extended vote-counting process anticipated, the bans aim to head off premature declarations of victory that could fuel unrest. Even so, some experts suggest the measures may fall short, pointing to earlier decisions by social media companies to downsize their trust and safety teams.
While Meta and Google are implementing temporary ad bans, X (formerly Twitter) has taken a different approach. After Elon Musk’s acquisition of the platform, X lifted its ban on political ads last year and has announced no plans to impose restrictions during the election period. Its looser policies raise concerns about the spread of unverified information. X was once viewed as a leader in combating election misinformation, and its early efforts to curb politically charged falsehoods prompted larger platforms to follow suit. Under Musk’s leadership, however, X has been linked to increased dissemination of misleading claims about elections and immigration, with Musk’s own posts contributing to the surge.
The impact of social media on elections has been under scrutiny for years, especially after interference in the 2016 U.S. presidential election and the January 6, 2021, Capitol attack. Following these events, major platforms invested in election integrity and content moderation teams, suspending accounts and removing posts that spread misinformation. However, in recent years, these platforms have made cuts to their trust and safety teams, relaxing previous restrictions on misinformation around politics and elections. For example, in 2022, Meta and other companies said they would no longer remove false claims that the 2020 election was stolen.
The reduction in resources and policy enforcement has led to what some industry observers call a “backslide,” with misinformation spreading unchecked across social media. Over the summer, as conspiracy theories flourished, users observed a noticeable decline in moderation. From theories about Trump’s safety to exaggerated claims surrounding hurricane responses, misinformation has steadily eroded trust. Experts now fear that even with temporary political ad pauses, platforms’ relaxed stance on fact-checking will allow misinformation to continue circulating.
Artificial intelligence adds further complexity. Experts worry that AI will amplify the problem by making it easier to generate misleading content, including fake images, videos, and audio that lend apparent legitimacy to false claims. As AI tools become more accessible, their potential to distort public perception grows, introducing a new layer of risk to the election cycle.
In addition to pausing political ads, many platforms say they are taking other steps to support election integrity. Meta, Google, YouTube, X, and TikTok all highlight efforts to promote verified information about voting and candidates, including partnerships with election authorities and nonprofits that direct users to accurate sources. These efforts may be limited in reach, however. X’s Civic Integrity Policy, for instance, restricts content intended to interfere with elections or incite violence but still allows polarizing or biased posts, even factually incorrect ones, leaving room for a wide range of content that could erode public trust in the election process.
While the ad bans are a step toward safeguarding the election, experts note that platforms are designed to promote the most engaging content regardless of accuracy. Because contentious posts tend to attract more engagement, organic posts carrying disinformation may still reach large audiences, rendering the ad bans only partially effective. TikTok, for its part, says it supports election integrity through its “US Elections Integrity Hub,” labeling unverified content to limit its reach and partnering with fact-checkers. YouTube and Meta have likewise promised to reduce the visibility of posts that fact-checkers identify as false.
Despite the platforms’ pledges to safeguard election information, the question remains whether these temporary measures can reverse the “drip, drip” effect of misinformation on public trust. As platforms continue to host polarizing discussions and allow room for controversial views, the effectiveness of ad pauses in truly limiting the spread of false information appears uncertain.