According to one tech writer, the 2024 presidential election cycle will be the first in which AI-generated content is nearly convincing enough to pass for a real candidate undetected. Will the tech giants and the establishment press develop a robust enough screening mechanism in time?
During a recent episode of Pod Save America, cohost and former Obama administration staffer Tommy Vietor said that he had met “a very significant buddy over the holiday,” before playing an “exclusive” audio clip of President Joseph Biden. The clip was replete with Bidenisms (“malarkey,” “folks,” and “create an economy from the bottom up and the centre out”) and gags (“Did Joe Biden find it amusing when Favreau claimed his South Carolina gathering sounded like a celebration at a bargain funeral service? No, he did not.”)
“It was obviously a hoax,” Vietor stated afterwards. “But this will be extremely harmful in future elections.” Deepfakes, AI-generated recordings of politicians, are already posing a new challenge in political journalism, as fellow Pod Save America host Jon Favreau remarked. On the night of Chicago’s mayoral race, “Chicago Lakefront News,” a Twitter account posing as a legitimate news outlet, published an AI voiceover of candidate Paul Vallas discussing public safety, a phoney video that apparently drew thousands of views before being removed.
PolitiFact, an independent fact-checking organisation run by the Poynter Institute, previously debunked a doctored video of Senator Elizabeth Warren that drew on an MSNBC interview in which Warren appeared to say Republicans should not be permitted to cast ballots. Fake Biden speeches have recently proliferated on social media, with imitations of the president discussing everything from rap music to drugs and video games. Some uses of the technology are more malicious, such as a deepfake video of Biden denigrating transgender women.
It’s easy to see how this trend could pose a significant challenge for news organisations, particularly in the run-up to a presidential election, when reporters must routinely evaluate opposition research and rely on tips involving recorded phone audio or video, often at a rapid pace, in an environment where confusion can easily spread. It’s unclear how, or whether, major news outlets intend to adjust their vetting processes; some major media institutions, including The Wall Street Journal and The Washington Post, have declined to comment on how they expect to handle AI-generated political material in 2024.