The United States is one of many countries around the world preparing for a major election cycle in 2024. With the emergence of publicly available artificial intelligence tools, there has been a significant increase in political deepfake content, requiring voters to acquire new skills to distinguish what is real. On February 27, Senate Intelligence Chairman Mark Warner stated that America is less prepared for election interference in the upcoming 2024 elections than it was ahead of the previous election in 2020.
Deepfake Content Continues to Hit the Headlines
This situation is largely due to a surge in AI-generated deepfake content in the US over the past year. According to data from identity verification service SumSub, deepfake content in North America increased by 1,740% in 2023, while the number of deepfakes detected worldwide rose tenfold.
On January 20 and 21, citizens of New Hampshire reported receiving robocalls imitating the voice of US President Joe Biden and urging them not to vote in the primary on January 23. A week later, the incident prompted US regulators to ban AI-generated voices in robocalls, making them illegal under US telemarketing law.
However, as with any type of fraud, where there is a will, there is a way. As the US prepares for Super Tuesday on March 5, when the largest number of US states hold their primaries and caucuses, concerns about AI-generated misinformation and fraud remain high.
Expert Comments on the Issue
Pavel Goldman Kalaydin, head of artificial intelligence and machine learning at SumSub, stated that despite the current tenfold increase in the number of deepfakes worldwide, he expects the figure to climb even further as the election season approaches. He emphasized that there are two types of deepfakes to be aware of. The first comes from tech-savvy teams using advanced hardware, such as high-end GPUs, and generative AI models; these are generally harder to detect. The second comes from lower-level fraudsters using tools commonly available on consumer computers. Kalaydin commented on the issue:
“It is important for voters to carefully examine the content in their feeds and to be cautious about video or audio content. Individuals should prioritize verifying the source of information and differentiate between content from reliable media and content from unknown users.”
According to the AI expert, there are several criteria to watch out for in deepfake content:
“If the content shows unnatural hand or lip movements, an artificial background, irregular motion, changes in lighting, inconsistencies in skin tone, unusual blinking patterns, or poor synchronization of lip movements with speech, it was likely produced with deepfake technology.”
However, Kalaydin warned that the technology will continue to advance rapidly and that soon it will be impossible for the human eye to detect deepfake content without specialized detection technologies.