
EU dials up scrutiny of major platforms over GenAI risks ahead of elections

The European Commission has sent a series of formal requests for information (RFI) to Google, Meta, Microsoft, Snap, TikTok and X about how they’re handling risks related to the use of generative AI.

The asks, which relate to Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X, are being made under the Digital Services Act (DSA), the bloc’s rebooted e-commerce and online governance rules. The eight platforms are designated as very large online platforms (VLOPs) under the regulation, meaning they’re required to assess and mitigate systemic risks, in addition to complying with the rest of the rulebook.

In a press release Thursday, the Commission said it’s asking them to provide more information on their respective mitigation measures for risks linked to generative AI on their services, including in relation to so-called “hallucinations”, where AI technologies generate false information; the viral dissemination of deepfakes; and the automated manipulation of services that can mislead voters.

“The Commission is also requesting information and internal documents on the risk assessments and mitigation measures linked to the impact of generative AI on electoral processes, dissemination of illegal content, protection of fundamental rights, gender-based violence, protection of minors and mental well-being,” the Commission added, emphasizing that the questions relate to “both the dissemination and the creation of Generative AI content”.

In a briefing with journalists, the EU also said it’s planning a series of stress tests, slated to take place after Easter. These will test platforms’ readiness to deal with generative AI risks, such as the possibility of a flood of political deepfakes ahead of the June European Parliament elections.

“We want to push the platforms to tell us whatever they’re doing to be as best prepared as possible… for all incidents that we might be able to detect and that we will have to react to in the run up to the elections,” said a senior Commission official, speaking on condition of anonymity.

The EU, which oversees VLOPs’ compliance with these Big Tech-specific DSA rules, has named election security as one of the priority areas for enforcement. It’s recently been consulting on election security rules for VLOPs, as it works on producing formal guidance.

Today’s asks are partly aimed at supporting that guidance, per the Commission. The platforms have been given until April 3 to provide information related to the protection of elections, which is being labelled an “urgent” request, though the EU said it hopes to finalize the election security guidelines sooner, by March 27.

The Commission noted that the cost of producing synthetic content is going down dramatically, amping up the risk of misleading deepfakes being churned out during elections, which is why it’s dialing up attention on major platforms with the scale to disseminate political deepfakes widely.

A tech industry accord to combat the deceptive use of AI during elections, which came out of the Munich Security Conference last month with backing from a number of the same platforms the Commission is now sending RFIs to, does not go far enough in the EU’s view.

A Commission official said its forthcoming election security guidance will go “much further”, pointing to a triple whammy of safeguards it plans to leverage: the DSA’s “clear due diligence rules”, which give it powers to target specific “risk situations”; more than five years’ experience of working with platforms via the (non-legally binding) Code of Practice Against Disinformation, which the EU intends to convert into a Code of Conduct under the DSA; and, on the horizon, transparency labelling/AI model marking rules under the incoming AI Act.

The EU’s goal is to build “an ecosystem of enforcement structures” that can be tapped into in the run-up to elections, the official added.

The Commission’s RFIs today also aim to address a broader spectrum of generative AI risks than voter manipulation — such as harms related to deepfake porn or other types of malicious synthetic content generation, whether the content produced is imagery/video or audio. These asks reflect other priority areas for the EU’s DSA enforcement on VLOPs, which include risks related to illegal content (such as hate speech) and child protection.

The platforms have been given until April 24 to provide responses to these other generative AI RFIs.

Smaller platforms where misleading, malicious or otherwise harmful deepfakes may be distributed, and smaller AI tool makers that can enable generation of synthetic media at lower cost, are also on the EU’s risk mitigation radar.

Such platforms and tools won’t fall under the Commission’s explicit DSA oversight of VLOPs, as they are not designated. But its strategy for broadening the regulatory impact is to apply pressure indirectly: through larger platforms, which may act as amplifiers and/or distribution channels in this context, and via self-regulatory mechanisms, such as the aforementioned Disinformation Code and the AI Pact, which is due to get up and running shortly, once the (hard law) AI Act is adopted (expected within months).


