Enforcement
11.09.2024

Data empowerment victory: X stops raiding consumers’ information to train its AI

X is the latest platform to stop training generative AI models on consumers’ data, following orders from data protection authorities and backed by a complaint from Euroconsumers.

In July 2024, X enabled its new AI chatbot tool ‘Grok’ to access users’ data for training. The company claimed its “state of the art AI assistant” could be used “across a wide range of tasks, whether you’re seeking answers, collaborating on writing, or solving coding tasks”. But Grok’s training data grab didn’t last long.

Just two weeks after it was switched on, X announced it would suspend feeding user data to its AI training models. By 4 September, the pause had become permanent, after X agreed to adhere to an undertaking issued by the Data Protection Commission (DPC) in Ireland.

Euroconsumers’ X complaint to the Irish DPC sums up the many ways in which X disregarded the GDPR, violating users’ privacy rights by automatically using personal data for AI training without proper consent or transparency.

Second AI data training u-turn this summer

This second u-turn by a social media giant is proof that Europe’s data rules and data enforcers can stand up to the AI data raid. Previously, Euroconsumers’ members lodged a complaint against Meta for similar infringements, which led the company to halt the rollout of its AI assistant. Meta had planned to use data from across its different services to train its GenAI tools.

The data used to train the models behind these services is the bread and butter of every social network: people’s activities, connections, location, photos, employment, family and views.

The way these tech companies want to use people’s data as training fodder for GenAI will ring alarm bells for anyone who cares about privacy, empowerment and control over our most personal information. 

Meta and X aren’t alone: LinkedIn’s AI consumer data raid is the subject of another Euroconsumers complaint, and our experts are keeping a watchful eye on all the developments.

Euroconsumers’ AI data raid files 

Experts from Euroconsumers’ member organizations have been meticulously combing through terms of use and privacy policies to check the legal bases for data use and to assess the companies’ communication and opt-out processes.

This work has led to complaints being put to authorities in Italy, Belgium, Spain and Portugal. Here are some of the problematic practices our experts found:

1 Stealthy introduction of data training for GenAI

Companies have been quietly introducing significant changes to the way they capture and use people’s data to build their new GenAI products:

X set up automatic data sharing to train its GenAI chatbot, so people’s posts were being used to train the tool without their knowledge or agreement.

On LinkedIn, users have to dig deep into the Privacy Policy to find out that this type of data processing is happening. Only after several clicks can you learn that every photo, personal detail, post, invitation, comment, and even private messages between users are being used to train its GenAI tools.

Finally, Meta relied on a notification to let people know that almost every word posted on its sites would be fed into training programmes for its generative AI projects. For data processing changes of this scale, you might expect company information campaigns, or emails to reach out and engage users.

2 Inadequate consent and objection mechanisms

The basis on which consumers’ data was processed has not always been obvious, or in line with what the law expects, although the companies disagree.

The GDPR makes it clear that companies and platforms must have a proper legal basis for processing people’s data, and must provide ways for people to challenge how their data is being captured and used.

Meaningful agreement to data use relies on people being able to make sense of how their data will be used. Instead, Meta described an AI-enabled service that “helps solve complex problems, sparks imaginations, and brings new creations to life”.

Effectively, people were asked to hand over their data for purposes that were difficult to understand. Meta also made it very hard for people to object to their data being used for GenAI training, with a clunky process spread across different forms with multiple steps and actions.

X went one step further and didn’t even inform people it was using their data to train its AI, making it impossible to understand or agree, and difficult to change the settings to stop collection.

3 Dubious legitimate interest tests

X, Meta and LinkedIn all relied on the more flexible legal basis of ‘legitimate interests’ for processing data to train their GenAI. The legitimate interest basis is designed to cover any reasonable purpose, provided commercial interests don’t override individuals’ interests.

In the case of LinkedIn, these interests are very loosely defined as “enabl[ing] economic opportunities and help[ing] our members and customers to be more productive and successful..”, among other equally vague aims.

That is frustrating enough, but LinkedIn also initially denied consumers their right to object to the processing of their data in this way. In fact, when our legal researcher made their first attempt to object, they were advised simply to close their account instead.

Uneven enforcement approach: what about LinkedIn? 

Our teams have also noticed that data protection authorities across the EU are not handling enforcement consistently across different market players.

All three companies are social networks, and all three have adopted very similar practices, committing similar breaches of European law. But so far only Meta and X have had action taken against them.

Marco Scialdone, Euroconsumers’ Head of Litigation, who launched the X and LinkedIn complaints and worked on the Meta complaint, has expressed his concern:

“LinkedIn continues to remain off the radar despite the circumstances being exactly the same as for other platforms. It is evident that intervening with one market player and not doing so (or doing so with months of delay) with respect to another distorts the normal course of competitive dynamics.”

This raises the possibility that if some large online platforms receive attention from data protection authorities and others do not, competitiveness in digital markets will be impacted.

My data is always mine

Consumers need consistency in how companies use their data in the new AI economy. 

After this summer’s line of individual complaints, it’s now time for firms to bring in proper data management processes so consumers have a real choice about whether they want to share their valuable data for others’ GenAI plans.

The Irish Data Protection Commissioner has requested that the European Data Protection Board (EDPB) create guidance for industry on how personal data is used in AI models, which could help instill more consistent Europe-wide regulation of this area.

Until then, Euroconsumers and its members will be closely watching other attempts to unlawfully raid consumers’ personal data for AI training. 

The AI market is extremely competitive at the moment: stakes are high, and the pressure is on to launch the latest AI breakthrough.

But the desire of companies to steal a march on their rivals should never override consumers’ autonomy, needs and expectations. Consumers say My Data is Mine – it is not for companies to take by stealth while denying people the right to control how it is used.