The effects of AI, even in the infancy of its entry into pop culture, are being sharply felt: AI can mimic or undress your favorite celebrities, take over repetitive intellectual work like reviewing documents in bulk, and even snitch on those who are less than honest at self-checkouts. Much as at the dawn of the internet, some warn that AI will make the sky fall, while others believe the purpose of humanity is to bring forth a Skynet. With an election nearing, politicians and journalists are bracing for AI-generated misinformation aimed at an increasingly tribal populace for whom facts are easily shaded, if not outright denied.
The AI Arms Race Has Arrived
First, we need to make our peace with it: AI isn't just coming, some of it is already here. AI development is currently an arms race, and while few would argue that being able to nuke the world ten times over is good for society, that is where we ended up with nuclear weapons. Several companies have already effectively downloaded the entire internet as a data set to train AI, and those that lag behind will eventually face legal hurdles once legislation or court cases develop to stop or slow access to the same data, handing the first movers a drastic advantage.
Google, Microsoft, Meta, Amazon, and other companies not known for being shy about opening Pandora's box (in fact, I suspect Silicon Valley developed a special crowbar for it) are all competing directly on AI research and development. Search data shows that younger internet users are starting to ask ChatGPT instead of the Google search bar. There will still be oldies like me using ChatGPT as a supplement to their "Googling," but much as Google won out as the search bar of choice and redefined the user experience of the internet, the dominant AI search tool will likely become the next generation's go-to source for information (as well as advertising space).
The usual guardrails of regulation will take years to catch up, and countries are arguably incentivized to give these companies broad latitude: beyond the profitable research and development that will flow to whichever country wins the regulatory race to the bottom, the US would not wish to cede AI supremacy to China or Russia.
Right now, this competition is a juggernaut, with companies inevitably slamming down on the gas before anyone compels them to install brakes. Consider the consequences social media companies have faced for bad behavior: their executives were dragged before Congress and shamed into a couple of one-minute clips for re-election ads, and even that came years after those fairly transparent problems were identified. Notably, nothing Congress or regulators have done has changed who won the fight for social media market share, and any fine or lawsuit would be a drop in the bucket at an investor meeting.
Analogizing AI to social media, only the survivors of this back-alley technological knife fight will face any consequences, and they will likely soothe themselves by buying a couple of Hawaiian islands, if they are even still the relevant executive officers by then. These would be the same executives, e.g. Mark Zuckerberg and Elon Musk, who have repeatedly and knowingly understaffed the teams responsible for safety and for policing false claims, such as those made about the Covid vaccine during the pandemic. And even once such laws and regulations are developed, bad actors outside US jurisdiction will likely remain unaffected.
While several social media and other companies have made public statements about labeling AI-generated material, X/Twitter has made no such promise, and even those that have are yet to develop tools that can reliably do so.
What Can AI Currently Do?
Recent news has highlighted the fake Biden robocall ahead of the New Hampshire primary, along with AI-generated imitations of Donald Trump's and Ron DeSantis's voices used in ads. AI-generated images depicting a potential dystopian future have been used in a campaign against Joe Biden, and a fake voice can be synced to video of a candidate moving their lips, putting fabricated quotes in a candidate's mouth.
While fully fabricated deepfake videos are still a bit off and can often be told apart from the authentic, they are rapidly improving. A recent deepfake account of Tom Cruise (@DeepTomCruise), built on a lookalike who needed only minor alterations to the video, has become a viral TikTok phenomenon. How long until a Super PAC, or a new Fancy Bear operation in Russia, adds that to its playbook?
A Broken Media Landscape
When looking at how the Trump Administration treated facts, the quote "who you gonna believe, me or your lying eyes?" immediately comes to mind. That Administration did not create the mistrust of news media; rather, it stomped on the fault lines to deepen and exploit them – "fake news" indeed.
Currently, AI-generated media is still maturing, but the tools to create it have already been widely disseminated. In this environment, a small amount of disinformation, like a phone call in the voice of a candidate, could sway opinion.
Brandolini’s law, a.k.a. the “bullshit asymmetry principle,” holds that the effort required to debunk misinformation is far greater than the effort required to create it. Considering that we may still be years away from any reliable mechanism to detect AI as the creator or editor of a piece of media, when that first politically relevant, AI-generated bombshell lands, how many will eagerly believe it? Even if a piece of media is objectively disproven by mainstream news, how many will draw their news only from fellow “true believers” on 4chan, Parler, or Truth Social, layering new misinformation atop QAnon, Pizzagate, and already-existing election lies?
The Takeaway
AI, like every other technology, is morally neutral: it is a tool that enables individuals to act, for better or worse. The “Red Queen” hypothesis will likely win out; that is, the environment (in our case, society and how it absorbs information) will evolve along with the technology. However, there will be a lag between the development of this new technology and the methods society uses to address it, and this election will take place within that lag.
AI has already been used in robocalls, and the FCC has only recently declared such calls illegal – how effective will it be at preventing further fraudulent calls? With billions of dollars changing hands in the run-up to the election, how much monetary incentive will there be to skirt such rules, or to venture into a grey area and use AI-generated media to malign candidates?
Even where AI is not weaponized, the mere existence of deepfakes will undermine trust in reporting, as “fake news” becomes an ever more believable label. This will allow bad actors to cast further doubt in an environment where large groups still distrust the most thoroughly vetted election results in US history.
This article is an opinion piece by Ryan Campbell, Esq.