Top AI Image Generators Found Vulnerable to Misinformation Creation Ahead of Elections


A recent study conducted by the Center for Countering Digital Hate (CCDH) has shed light on the susceptibility of leading artificial intelligence (AI) image generators to manipulation, particularly in the context of generating misleading election-related images.

The report, released by the tech watchdog on Wednesday, revealed that popular AI image generators, including Midjourney, Stability AI’s DreamStudio, OpenAI’s ChatGPT Plus, and Microsoft Image Creator, could be prompted to produce deceptive images related to US presidential candidates or voting security.

Despite efforts by some AI firms to address the risks associated with political misinformation, the study found gaps in the platforms' existing protections. Researchers at CCDH ran 40 prompts related to the 2024 presidential election across the AI generators, including scenarios designed to produce misleading candidate-related images as well as images depicting election fraud or voter intimidation.

The findings indicated that in 41% of the test runs, the AI image generators produced potentially misleading images that appeared realistic and contained no obvious errors. Notably, Midjourney demonstrated the highest likelihood of generating misleading results among the tested platforms.

Examples of the generated images included a photorealistic depiction of Joe Biden conversing with a lookalike and an image, created by DreamStudio, of Donald Trump appearing to be arrested by multiple police officers. ChatGPT Plus and Microsoft's Image Creator, while successful in blocking candidate-related images, produced realistic depictions of voting issues, such as ballot tampering.

In response to the report, Stability AI, the owner of DreamStudio, updated its policies to explicitly prohibit the creation or promotion of disinformation. Midjourney also indicated ongoing evolution in its moderation systems, with updates specifically addressing the upcoming US election.

The study highlights the growing concern surrounding the potential misuse of AI tools, including text and image generators, to spread misinformation and manipulate public opinion, particularly in the lead-up to elections. Lawmakers, civil society groups, and tech leaders have voiced alarm over the potential for such tools to sow confusion and undermine democratic processes.

This revelation comes amidst a broader push by tech companies to combat harmful AI content ahead of elections, with Microsoft and OpenAI among a group of firms pledging to detect and counter harmful AI content, including deepfakes of political candidates.

CCDH emphasized the need for AI companies to collaborate with researchers to prevent misuse and called on social media platforms to invest in identifying and mitigating the spread of potentially misleading AI-generated images.

As the use of AI tools for content generation continues to expand, ensuring their responsible and ethical use remains a critical challenge, particularly in safeguarding the integrity of democratic processes against the spread of misinformation.