Elections without manipulation: Google, Meta, Microsoft and others pledge to fight deepfakes

20 tech giants are forming a united front against AI propaganda.

At least 20 major tech companies, including Google, Meta, Microsoft, and OpenAI, have signed a new agreement on regulating generative AI. The signatories pledge to prevent the spread of deepfakes during the 2024 election campaigns. How do they plan to do it?

The new accord, the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections," was presented on Friday at the 60th Munich Security Conference.

According to Microsoft vice chair and president Brad Smith, the purpose of the agreement is to guarantee voters the right to choose those who, in their view, will govern the country fairly, free of deception and manipulation.

The agreement focuses on combating "deceptive AI content": fake audio, video, and images of politicians, as well as false information about when, where, and how to vote.

The accord contains eight specific commitments, which fall into three main categories:
  1. The companies pledge to limit the risk of legitimate tools being used to create election deepfakes.
  2. Common methods for detecting and responding to deepfakes will be developed.
  3. The companies intend to raise public awareness of, and resilience to, disinformation in the run-up to the elections.

In line with these commitments, Microsoft has already launched a dedicated online resource, Microsoft-2024 Elections, through which candidates can report fakes and propaganda materials.

By preliminary estimates, more than 4 billion people across various countries will vote in elections this year. Generative AI is already being used to influence political processes and voter opinion, and in some cases even to discourage people from voting at all.

A recent study by European Union experts, published last month, identified more than 750 instances of targeted propaganda distributed by foreign agents.

Particular attention was drawn to an incident on January 23, when New Hampshire residents received robocalls impersonating Joe Biden. In the message, the fake president urged voters to stay home and not participate in the primaries.

The case prompted the Federal Communications Commission to ban AI-generated robocalls nationwide. The White House, in turn, promised to use cryptographic verification methods to prevent such schemes.

An important step in combating fake content created with generative technologies is the introduction of watermarks and metadata, which help the public recognize artificially created material and verify its source.
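To illustrate the idea behind cryptographic provenance metadata, here is a minimal sketch in Python. It is not the actual scheme any signatory uses (real systems such as C2PA rely on public-key signatures and embedded manifests); it simply shows the core principle: a publisher attaches a hash of the content plus an authentication tag, and any later edit to the bytes makes verification fail. The key and names here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key held by the publishing tool (illustration only;
# real provenance schemes use public-key signatures, not a shared secret).
SECRET_KEY = b"demo-provenance-key"

def make_manifest(content: bytes, source: str) -> dict:
    """Build provenance metadata: a content hash plus an HMAC tag over it."""
    digest = hashlib.sha256(content).hexdigest()
    tag = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"source": source, "sha256": digest, "tag": tag}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Recompute the hash and check the tag; any change to the bytes fails."""
    digest = hashlib.sha256(content).hexdigest()
    if digest != manifest["sha256"]:
        return False
    expected = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["tag"])

original = b"official campaign video bytes"
manifest = make_manifest(original, source="campaign.example.org")
print(verify_manifest(original, manifest))           # True
print(verify_manifest(b"tampered bytes", manifest))  # False
```

The design point the accord leans on is exactly this asymmetry: verifying provenance is cheap for platforms, while forging it without the signing key is computationally infeasible.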

Nick Clegg, President of Global Affairs at Meta Platforms, stressed the importance of the joint effort: "Of course, each platform can develop its own mechanisms for detecting forgeries, identifying sources, labeling and other things. But without a common strategy and interaction, we risk finding ourselves in a situation where everyone acts alone, creating a mosaic of disparate approaches, which will significantly reduce the effectiveness of the overall effort."

Participants plan to focus especially on fake audio and video, which people tend to trust more readily; the public is often far more skeptical of text.

Full list of signatories: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X.