(National Sentinel) Scary: What would happen if a hostile foreign power could harness technology to produce high-quality audio and video of people saying things they never said or doing things they never did? What kind of impact could that have on, say, our democratic and electoral processes?
It would be devastating and could perhaps even destroy the tenuous civil society that remains after years of Democrats attempting to sow distrust in our systems by claiming elections were “stolen” either by Russians or sinister Republicans.
And yet, thanks to artificial intelligence, the day when an adversary can create such false scenarios using so-called “deep fakes” technology is fast approaching, the Washington Times reported Sunday:
U.S. leaders say Vladimir Putin used a familiar cyber playbook to “muck around” in the midterm elections last month, but intelligence officials and key lawmakers believe a much more sinister, potentially devastating threat lies just down the road — one that represents an attack on reality itself.
Policy insiders and senators of both parties believe the Russian president or other actors hostile to the U.S. will rely on “deep fakes” to throw the 2020 presidential election cycle into chaos, taking their campaign to influence American voters and destabilize society to a new level.
Phony AI-created video will be nearly indistinguishable from real footage, according to analysts, and will be capable of mimicking voices, speaking patterns, facial expressions, and surroundings to a degree so realistic it will be hard to refute.
“We are heading to an era where deep fakes technology is going to cause real chaos,” said Sen. Ben Sasse, R-Neb., in addressing military, intelligence, and national security officials Friday during the annual Texas National Security Forum.
Without question, this technology is the next major threat to American elections and, very likely, our democratic processes because the possibilities are limitless.
“It’s going to destroy human lives, it’s going to roil financial markets, and it might well spur military conflicts around the world,” Sasse said.
Privately, military and intelligence officials are saying the same thing.
“When deep fakes technology produces audio or video of a global leader saying something or ordering some attack that didn’t happen,” Sasse continued, according to the Times, “you’re going to have to actually have flesh-and-blood humans who have a little bit of a reservoir of public trust who can step to a camera together and say, ‘I know that looked really real on your TV screen. But it wasn’t real.’”
Without question, the pool of people a majority of Americans trust is shrinking. In fact, in today’s hyper-partisan political climate, it would be difficult — and perhaps impossible — to find a single U.S. official outside of the military whom most Americans would trust enough to believe if they were being told ‘what you just saw was fake.’
Our adversaries are well aware of our political divide, and they will use our freedoms and liberties to infiltrate our system.
Analysts say that in 2016, Russia launched “influence operations” via U.S. social media platforms like Facebook, Twitter, and Google, using a host of bots and planted news stories. Moscow undertook similar efforts during the 2018 midterms — though clearly, the bigger threat last month was Democrat vote tampering.
That’s all about to change, say intelligence analysts in the know regarding AI and its potential.
“Our adversaries don’t conduct information warfare as much as a war on information, undercutting legitimacy of all comers, including governments,” Gen. Raymond A. Thomas, head of U.S. Special Operations Command, told the national security conference last week.
In fact, the Times reported, the technology has already been used. Phony sex videos of “Wonder Woman” actress Gal Gadot have been produced, for example.
And again, the possibilities to undermine our political processes are limitless.
“What comes next? We can expect to see deep fakes used in other abusive, individually targeted ways, such as undermining a rival’s relationship with fake evidence of an affair or an enemy’s career with fake evidence of a racist comment,” University of Texas law professor Robert Chesney and University of Maryland law professor Danielle Citron wrote in a February post for Lawfareblog.com.
“Blackmailers might use fake videos to extract money or confidential information from individuals who have reason to believe that disproving the videos would be hard,” they added. “All of this will be awful. But there’s more to the problem than these individual harms. Deep fakes also have potential to cause harm on a much broader scale — including harms that will impact national security and the very fabric of our democracy.”
In October, the Council on Foreign Relations called deep fakes “disinformation on steroids.”
“Deep fakes are a profoundly serious problem for democratic governments and the world order. A combination of technology, education, and public policy can reduce their effectiveness,” CFR noted, adding:
Rapid advances in deep-learning algorithms to synthesize video and audio content have made possible the production of “deep fakes” — highly realistic and difficult-to-detect depictions of real people doing or saying things they never said or did. As this technology spreads, the ability to produce bogus yet credible video and audio content will come within the reach of an ever-larger array of governments, nonstate actors, and individuals. As a result, the ability to advance lies using hyperrealistic, fake evidence is poised for a great leap forward.
For example, a credible deep fake audio file could emerge purporting to be a recording of President Donald J. Trump speaking privately with Russian President Vladimir Putin during their last meeting in Helsinki, with Trump promising Putin that the United States would not defend certain North Atlantic Treaty Organization (NATO) allies in the event of Russian subversion. Other examples could include deep fake videos depicting an Israeli soldier committing an atrocity against a Palestinian child, a European Commission official offering to end agricultural subsidies on the eve of an important trade negotiation, or a Rohingya leader advocating violence against security forces in Myanmar.
What’s the good news?
The U.S. military’s secretive research and development institute — theÂ Defense Advanced Research Projects Agency (DARPA) — has spent $68 million over the past few years researching ways to identify deep fakes and counter them. But obviously, the technology is evolving, as is the threat.