Big Tech’s divisive ‘personalization’ attracts fresh call for profiling-based content feeds to be off by default in EU


Another policy tug-of-war could be emerging around Big Tech’s content recommender systems in the European Union, where the Commission is facing a call from a number of parliamentarians to rein in profiling-based content feeds — aka “personalization” engines that process user data in order to determine what content to show them.

Mainstream platforms’ tracking and profiling of users to power “personalized” content feeds have long raised concerns about potential harms for individuals and democratic societies, with critics suggesting the tech drives social media addiction and poses mental health risks for vulnerable people. There are also concerns the tech is undermining social cohesion via a tendency to amplify divisive and polarizing content that can push individuals towards political extremes by channelling their outrage and anger.

The call comes via a letter signed by 17 MEPs from political groups including the S&D, The Left, the Greens, the EPP and Renew Europe, which advocates for tech platforms’ recommender systems to be switched off by default — an idea that was floated during negotiations over the bloc’s Digital Services Act (DSA) but which did not make it into the final regulation for lack of majority support. Instead, EU lawmakers agreed on transparency measures for recommender systems, along with a requirement that larger platforms (so-called VLOPs) must provide at least one content feed that isn’t based on profiling.

But in their letter the MEPs are pressing for a blanket off-by-default setting for the technology. “Interaction-based recommender systems, in particular hyper-personalised systems, pose a severe threat to our citizens and our society at large as they prioritize emotive and extreme content, specifically targeting individuals likely to be provoked,” they write.

“The insidious cycle exposes users to sensationalised and dangerous content, prolonging their platform engagement to maximise ad revenue. Amnesty’s experiment on TikTok revealed the algorithm exposed a simulated 13-year-old to videos glorifying suicide within just one hour. Moreover, Meta’s internal research disclosed that a significant 64% of extremist group joins result from their recommendation tools, exacerbating the spread of extremist ideologies.”

The call follows draft online safety guidance for video sharing platforms, published earlier this month by Ireland’s media commission (Coimisiún na Meán) — which will be responsible for DSA oversight locally once the regulation becomes enforceable on in-scope services next February. Coimisiún na Meán is currently consulting on guidance which proposes video sharing platforms should take “measures to ensure that recommender algorithms based on profiling are turned off by default”.

Publication of the guidance followed an episode of violent civic unrest in Dublin which the country’s police authority suggested had been whipped up by misinformation spread on social media and messaging apps by far-right “hooligans”. And, earlier this week, the Irish Council for Civil Liberties (ICCL) — which has long campaigned on digital rights issues — also called on the Commission to support the Coimisiún na Meán’s proposal and published its own report advocating for personalized feeds to be off by default, arguing that social media algorithms are tearing societies apart.

In their letter, the MEPs seize on the Irish media regulator’s proposal — suggesting it would “effectively” address issues related to recommender systems having a tendency to promote “emotive and extreme content” which they similarly argue can damage civic cohesion.

The letter also references a recently adopted report by the European Parliament on addictive design of online services and consumer protection which they say “highlighted the detrimental impact of recommender systems on online services that engage in profiling individuals, especially minors, with the intention of keeping users on the platform as long as possible, thus manipulating them through the artificial amplification of hate, suicide, self-harm, and disinformation”.

“We call upon the European Commission to follow Ireland’s lead and take decisive action by not only approving this measure under the TRIS [Technical Regulations Information System] procedure but also by recommending this measure as a mitigation measure to be taken by Very Large Online Platforms [VLOPs] as per article 35(1)(c) of the Digital Services Act [DSA] to ensure citizens have meaningful control over their data and online environment,” the MEPs write, adding: “The protection of our citizens, especially the younger generation, is of utmost importance, and we believe that the European Commission has a crucial role to play in ensuring a safe digital environment for all. We look forward to your swift and decisive action on this matter.”

Under TRIS, Member States are required to notify the Commission of draft technical regulations before they are adopted as national law in order that the EU can carry out a legal review to ensure the proposals are consistent with the bloc’s rules.

The system means national laws that seek to ‘gold-plate’ EU regulations are likely to fail the legal review. On that logic, the Irish media commission’s proposal for video platforms’ recommender systems to be off by default might not survive the TRIS process, given it appears to go further than the letter of the relevant law — in this case the DSA. However, the proposal is actually being made under a different piece of EU legislation: the AVMSD (aka the Audiovisual Media Services Directive). And since the AVMSD is an EU directive, not a regulation, it’s less clear whether there is any formal requirement for the Coimisiún na Meán to get the Commission’s sign-off on its guidance.

The AVMSD was amended in 2018 to include an article giving Member States powers to use “appropriate means” to ensure audiovisual media services provided by media service providers under their jurisdiction do not contain any incitement to violence or hatred against people based on protected characteristics. That appears to offer broad scope to regulate against the risk of video sharing platforms being used to whip up anti-immigrant hate, for instance.

Any measures applied by Coimisiún na Meán to the likes of TikTok or YouTube under the AVMSD would only have effect locally in Ireland — a single EU Member State. But such regulatory action could still make for a very interesting experiment in whether flipping feed defaults away from profiling reduces online toxicity.

Returning to the DSA, the pan-EU regulation does put a requirement on larger platforms (aka VLOPs) to assess and mitigate risks arising out of their recommender systems. So it’s at least possible platforms could decide to switch these systems off by default themselves as a compliance measure to meet their DSA systemic risk mitigation obligations. Although none have yet gone that far — and, clearly, it’s not a step any of these ad-funded, engagement-driven platforms would choose as a commercial default.

The Commission, which confirmed receipt of the MEPs’ letter, declined public comment on it (or on the ICCL’s report) when we asked.

Instead, a spokesperson pointed to what they described as “clear” obligations on VLOPs’ recommender systems set out in Article 38 of the DSA — which requires platforms to provide at least one option for each of these systems that is not based on profiling. But we were able to discuss the profiling feed debate with an EU official who was speaking on background in order to talk more freely. Our source agreed platforms could choose to turn profiling-based recommender systems off by default as part of their DSA systemic risk mitigation compliance but confirmed none have gone that far off their own bat as yet.

So far we’ve only seen instances where non-profiling feeds have been made available to users as an option — such as by TikTok and Instagram — in order to meet the aforementioned (Article 38) DSA requirement to provide users with a choice to avoid this kind of content personalization. However, this requires an active opt-out by users, whereas defaulting feeds to non-profiling would clearly be a stronger type of content regulation, as it would not require any user action to take effect.

The EU official we spoke to confirmed the Commission is looking into recommender systems in its capacity as an enforcer of the DSA on VLOPs — including via the formal proceeding that was opened on X earlier this week. Recommender systems have also been a focus for some of the formal requests for information the Commission has sent VLOPs, including one to Instagram focused on child safety risks, they told us. And they agreed the EU could force larger platforms to turn off personalized feeds by default in its role as an enforcer, i.e. by using the powers it has to uphold the law.

But they suggested the Commission would only take such a step if it determined it would be effective at mitigating specific risks. The official pointed out there are multiple types of profiling-based content feeds in play, even per platform, and emphasized the need for each to be considered in context. More generally they made a plea for “nuance” in the debate around the risks of recommender systems.

The Commission’s approach here will be to undertake case-by-case assessments of concerns, they suggested — speaking up for data-driven policy interventions on VLOPs, rather than blanket measures. After all, this is a clutch of platforms that’s diverse enough to span video sharing and social media giants but also retail and information services — and even (most recently) porn sites.

The risk of enforcement decisions being unpicked by legal challenges if there’s a lack of robust evidence to back them up is clearly a Commission concern.

The official also argued there is a need to gather more data to understand even basic facets relevant to the recommender systems debate — such as whether personalization being defaulted to off would be effective as a risk mitigation measure. Behavioral aspects also need more study, they suggested.

Children especially may be highly motivated to circumvent such a limitation by simply reversing the setting, they argued, just as kids have shown themselves able to do when it comes to escaping parental controls. On that basis, the official claimed, it’s not clear that defaulting profiling-based recommender systems to off would actually be effective as a child protection measure.

Overall the message from our EU source was a plea that the regulation — and the Commission — be given time to work. The DSA only started applying to the first set of VLOPs towards the end of August. And just this week we saw the first formal investigation opened (on X), which includes a recommender system component (related to concerns around X’s system of crowdsourced content moderation, known as Community Notes).

We’ve also seen a flurry of formal requests for information sent by the Commission to platforms in recent weeks, after they submitted their first set of risk assessment reports — which indicates EU staffers tasked with oversight of VLOPs are unhappy with the level of detail provided so far. That implies firmer action could soon follow as the bloc settles into its new role of regional Internet sheriff. So — bottom line — 2024 is shaping up to be a significant year for the EU’s policy response to Big Tech to start biting. And for assessing whether or not the Commission’s enforcement delivers the kind of needle-moving results digital rights campaigners are hungry for.

“These are issues that we are questioning platforms on under our legal powers — but Instagram’s algorithm is different from X’s, is different from TikTok’s — we’ll need to be nuanced in this,” the official told TechCrunch, suggesting the Commission’s approach will spin up a patchwork of interventions, which might include mandating different defaults for VLOPs depending on the contexts and risks across different feeds. “We would prefer to take an approach which really takes the specifics of the platforms into account each time.”

“We are now starting this enforcement action. And this is actually one more reason not to dilute our energy into kind of competing legal frameworks or something,” they added, making a plea for digital rights advocates to get with the regulatory program. “I would rather we work in the framework of the DSA — which can address the issues that [the MEPs’ letter and ICCL report] is raising on recommender systems and amplifying illegal content.”

There is another EU law in play around recommender systems too, of course: The General Data Protection Regulation (GDPR). And, as the ICCL report points out, recommender systems that are based on profiling individuals by processing sensitive data about them — such as political or religious views — raise other legal flags, as individuals must provide explicit consent for the use of their special category data. This means consent cannot be bundled into general T&Cs; platform users must be asked specifically if their political views or other sensitive data can be used to determine what content to show them.

None of the major social media platforms make such an ask — yet it looks clear users’ sensitive data is being processed by content sorting algorithms. Years of glacial GDPR enforcement against Big Tech by another Irish regulator, the Data Protection Commission, are in the frame there.

The ICCL argues profiling-based recommender systems should be off by default in order to comply with both Article 9 of the GDPR and Article 6a(1) of the AVMSD. “People – not algorithms – should decide what they see on digital platforms,” it urges.

This report was updated with additional context about the AVMSD and the GDPR.
