X’s New Terms Spark Controversy Over AI and Data Use


X, formerly known as Twitter, is under fire after unveiling new terms of service that allow the platform to use user-generated content to train its artificial intelligence (AI) models. These changes, set to take effect on November 15, require users to accept that their posts, photos, and other content can be analyzed and used for machine learning purposes.  

The revised terms grant X a global, non-exclusive, royalty-free license to use all content shared on the platform for AI training. This includes not only public posts but also private content, as the updated policy no longer distinguishes between the two. As a result, the platform can freely incorporate all user activity into its AI systems, including Grok, its AI chatbot.  

This shift has raised concerns among artists, creatives, and everyday users. Many fear their work will contribute to AI technologies that could potentially replace human creators in the future. Others worry that personal information embedded in their tweets and photos may now be used without their control. Some users have already started deleting personal photos from their profiles, fearing misuse of their data.  

In addition to privacy concerns, the new terms introduce legal changes that affect how disputes with the platform will be handled. Any legal issues related to the terms will now be addressed in the US District Court for the Northern District of Texas or in state courts in Tarrant County, Texas. This choice has raised eyebrows, since Tarrant County sits more than 100 miles from X’s headquarters near Austin, which falls outside that district.

Grok, X’s AI chatbot, has already faced backlash for spreading misinformation about the 2024 election and generating graphic, misleading images of politicians. Other tech giants like Google and Microsoft have faced similar controversies with AI tools, but X’s approach is drawing particular attention for how explicit it is: unlike platforms whose policies leave room for interpretation, X’s new terms remove any ambiguity about how user content can be used.

Previously, users had the option to opt out of data sharing for AI training by adjusting their privacy settings. Under the old terms, private account posts were excluded from AI training. However, the updated policy makes no such distinction, and it remains unclear if users will still be able to opt out after November 15.  

These sweeping changes put X in line with other major platforms that use content for AI training, but they have raised new concerns about transparency and user control. Legal experts note that companies commonly grant themselves more leeway in their terms than is reflected in user-facing settings, leaving it uncertain whether users will retain any meaningful control over their data.

With the clock ticking toward the November 15 deadline, users must decide whether to accept these terms or leave the platform altogether. The new terms not only impact privacy but also reflect a broader trend of AI’s growing influence and the legal complexities that come with it. As the debate continues, many users are weighing their options in a rapidly evolving digital landscape.