Tech Giants Join Forces to Combat Election-Related Deepfakes

In a landmark move, major tech companies, including Microsoft, Meta, Google, Amazon, Adobe, and IBM, have united to combat the growing threat of election-related deepfakes. 

The accord, signed at the Munich Security Conference, establishes a common framework for responding to AI-generated misinformation.

Thirteen additional companies, spanning AI startups and social media platforms such as TikTok and Snap, have also pledged to join the initiative.

The signatories will focus on detecting and labeling misleading political deepfakes to safeguard the integrity of information on their platforms.

While the accord’s measures are voluntary, critics argue they may prove ineffective, dismissing them as potential virtue signaling.

Brad Smith, vice chair and president of Microsoft, stressed the importance of collaboration and multistakeholder action to protect elections from potential abuse.

In the absence of a federal law addressing deepfakes in the U.S., some states, including Minnesota, have taken steps to criminalize them, particularly in the context of political campaigning.

The Federal Trade Commission (FTC) is seeking to modify a rule covering the impersonation of politicians, and the Federal Communications Commission (FCC) is working to outlaw AI-voiced robocalls.

In the European Union, regulations like the AI Act and Digital Services Act are being leveraged to address deepfake concerns, mandating clear labeling of AI-generated content.

Deepfake creation has surged 900% year over year, according to Clarity, a deepfake-detection firm.

Polls reflect widespread concern about the impact of misleading deepfakes on elections, with many Americans expressing worry and anticipating a rise in false information during the 2024 U.S. election cycle.