
Disinformation in the Age of AI

CISA tackles election security against AI threats with training, policy and international cooperation to combat misinformation.
By Diego Laje and Nuray Taylor

Technology empowers malicious actors with the tools necessary to avoid the protective measures cyber warriors put in place.

The Cybersecurity and Infrastructure Security Agency (CISA) recently published a guide offering steps to combat foreign malicious actors in U.S. elections. According to the toolkit, the season sees the usual suspects: China, Russia and Iran.

Along with known online threats, such as creating networks of fake accounts to pose as Americans and influencing narratives through online platforms, the latest threat involves the use of generative artificial intelligence (AI) to enable these efforts.

“The most powerful toolset right now is probably the open models, which are available [online],” said Daniel Miessler, founder of Unsupervised Learning, a technology company.

Miessler spoke about a site that offers hundreds of AI tools without guardrails against deception or hate speech.

The point: create an atmosphere of distrust in the American election process.

Examples of malign tactics listed in the guidance include disguising proxy media—that is, a media outlet run by a malicious actor but posing as a balanced journalism outfit—voice-cloning public figures, manufacturing false evidence of an alleged security incident and leveraging social media platforms.

“To address risks to the 2024 election cycle from the misuse of generative AI, CISA has developed a no-cost and voluntary training for election officials,” shared Cait Conley, senior advisor, CISA.

Most recently, the agency hired election security advisors in each of its 10 regions around the country, Conley told SIGNAL Media.

Still, as technology empowers malicious actors, the odds are stacked against the defense.

“The Russian Internet Research Agency was basically a full office building of 500 people who spoke fluent English, understood American culture well enough to masquerade on social media like Americans,” said Jim Richberg, former Office of the Director of National Intelligence cyber chief and current head of cyber policy and global field chief information security officer at Fortinet.

“Now, thanks to gen AI, two to three people who just speak enough English to be able to detect AI hallucinations can have that same level of effort,” Richberg said.

Similarly, these tactics have affected and will continue to affect smaller elections, according to Richberg.

Studies reveal that effective false content spreads faster than real information online.

“False news was 70% more likely to be retweeted than the truth,” concluded a Massachusetts Institute of Technology paper. A similar lesson applies to all social media. The same work concludes that online bots have minimal effect on the viral spread of a false narrative; people—and their emotions—are what make a story grow in relevance.

Miessler identified the need for a service that leverages AI to trace the source and context of potential online aggressors and their narratives to help counter false news. But this service should be trusted by all political actors.

“This would be like a massive national infrastructure improvement because if there’s nothing in the center to trust, things can go really sideways in this upcoming election,” Miessler said.

While technology can act as a firewall against malicious activity, smart human users must act as the first line of defense.

“You can generate near infinite content at very low cost that is pushing any agenda you have, and it’s going to sound really compelling and intelligent,” said Joseph Thacker, principal AI engineer at AppOmni. “The barrier to entry to do that is so much lower now.”

However, as AI evolves, videos have begun to look too realistic for many human beings to discern what’s real. “The technology has gotten good enough that the human says, ‘It’s a coin toss whether it’s real or not,’” Richberg added.

Realistically, it is now very easy to create bots that follow prompts with malicious agendas and spread misinformation all over the internet, including search engine-optimized content.

AI can also now solve CAPTCHAs, making stronger identity verification necessary, Thacker told SIGNAL Media.

As the world progresses, Thacker sees humans becoming more skeptical of many online sources, needing further verification. “Or are these models going to get so good that they’re basically always right?” he asked rhetorically.

At some point, government and other actors may need to step in.

“Just like with any other technology, I think AI is definitely one of those where there needs to be policy, laws that need to keep pace with both the good things and potentially bad things that can happen with this technology,” added Gaurav “GP” Pal, CEO and founder of stackArmor.

However, the policy response has been effective, Pal explained. The AI Bill of Rights and White House Executive Order 14110, which outlines the safe, secure and trustworthy development and use of AI, in addition to foundational pillars such as the National Institute of Standards and Technology AI Safety Institute, are evidence of this. “We obviously need to do more, but the good news is everybody recognizes what those threats are, and people are working towards solutions.”

Every major state in the United States has passed or is working to pass an AI safety bill, which is a step in the right direction, Pal told SIGNAL Media.

“Industry has a very big role to play,” he went on. “We need to allow for us to try and see if existing mechanisms can be repurposed and reused to meet some of the emerging threats.” One answer could be closer than expected. Commercial news media, such as major newspapers, have existed for centuries and those same profit-seeking organizations gave their audiences the tools to be informed citizens.

“We need to be reframing the capital discussion as well, where there is a lot of money that we can be making by investing in solutions, which we refer to as IIQT. How do we invest in solutions which strengthen information integrity, quality and transparency? Those are the ingredients for trust. We did it 100 years ago in relation to food, drugs, electricity, finance and accounting,” said Matt Abrams, founding partner of Democracy Capital Corporation, a boutique lending firm.

According to Abrams, investing in ideas that improve information quality and rebuild trust will result in a bigger economy.

“Healthier discourse and dialogue that creates healthier economic and business capital environments are better economic returns,” Abrams explained.

New solutions, however, may not be the most successful or efficient way to meet rapidly evolving threats.

The Ministry of Electronics and Information Technology in India, for example, recently announced that foundational models must obtain the ministry's approval to operate, a measure meant to guard against deliberately or mistakenly false information.

“That’s one way to do it, where the regulator, the government in this case, has a role to play,” Pal stated.

A lesser-known threat is data colonialism, said Silvia DalBen Furtado, Knight Center graduate research associate at the University of Texas.

“Most of these data sets have much more content in English-based languages than other languages, much more content produced in the U.S. in the global North and not from the global South,” she said, explaining bias formed through generated content.

This is a double-edged sword: AI models work better in English, aiding content creation in the language regardless of whether the intent is malicious.

There is inequality in information due to most AI technologies being developed in the United States and China, Furtado explained. “We need more AI developed from other spaces and countries,” she stated.

Still, reliable content is where eyeballs will turn to search for news, and producing it has always been a dependable source of income for those who invest in content and brand.

“We can immediately direct ad dollars to one of the stickiest places that there is for consumers, and that’s in news,” Abrams told SIGNAL Media in an interview.