
Information Warfare Requires Personalized Weaponry

A digital alter ego could protect individuals in the never-ending cyber war.

Up until the digital age, wars involved a limited number of combatants with clear identities battling within distinct boundaries visible on a map. These conflicts ended either with a victor or as a stalemate. But today’s information warfare does not fit this traditional model. Instead, it comprises an unlimited number of potential combatants, many with hidden identities and agendas.

Cyberspace is a theater of operations that is nowhere and everywhere. Within this domain, information warfare will not and in fact cannot come to any conclusion. This conflict closely resembles an incurable disease that can be managed so the patient can lead a productive life but is never completely cured.

Current weapons in the disinformation battle have not shown significant results. Consequently, it is necessary to take a fresh look at the problem in a way that recognizes that influence operations, domestic and foreign, have become a permanent part of the information landscape.

It also is time to recognize this situation as a war on democracy unlike any before it. Up until now, the U.S. government has largely ignored it, offering little beyond talk and congressional hearings that have not led to substantial action and leaving Americans essentially unprotected.

This situation leads to three questions: What actions are needed? What legal, policy and cultural limitations constrain those possible actions? Who is, or should be, responsible to take action?

One approach would be to create a digital alter ego (DAE). This personalized artificial intelligence (AI) program would exclusively serve its owner. It would travel with that individual throughout the owner’s lifetime to turn the tables on cyber interlopers. Transfer of information ownership would be under the person’s control.

Creating a DAE would start with a PC, tablet or mobile phone that an individual would use for all online transactions; however, rather than sharing that data with outside organizations, the device would record the user’s activities exclusively for private use. Users would have all of their online behavior at their fingertips. Furthermore, the DAE could apply the same kinds of data analytics capabilities that businesses, organizations and governments already use. Unlike today’s environment, in which companies routinely turn this analysis into a behind-the-scenes sales tool, no information would be shared without the owner’s consent.
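To make the recording idea concrete, the following is a minimal Python sketch of such a local-only activity store. Every name here (ActivityEvent, LocalActivityLog, export_with_consent) is hypothetical and illustrative, not an interface of any existing DAE implementation:

```python
# Hypothetical local-only activity store: events live on the owner's device
# and leave it only through an explicit, consent-gated export.
import json
import time
from dataclasses import asdict, dataclass, field
from typing import List

@dataclass
class ActivityEvent:
    timestamp: float  # Unix time the event was recorded
    kind: str         # e.g., "page_view", "search", "purchase"
    source: str       # site or app where the activity occurred
    detail: str       # free-form description of the activity

@dataclass
class LocalActivityLog:
    events: List[ActivityEvent] = field(default_factory=list)

    def record(self, kind: str, source: str, detail: str) -> None:
        """Append an event to the on-device log; no network calls are made."""
        self.events.append(ActivityEvent(time.time(), kind, source, detail))

    def export_with_consent(self, owner_confirmed: bool) -> str:
        """Serialize the log only when the owner has explicitly agreed."""
        if not owner_confirmed:
            raise PermissionError("Owner consent is required to share data.")
        return json.dumps([asdict(e) for e in self.events])

log = LocalActivityLog()
log.record("page_view", "example-news.com", "read an article on election security")
```

The essential design choice is that consent is enforced in code: there is no path by which the data leaves the device without the owner’s explicit approval.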

For example, a DAE could analyze the private data to determine the user’s vulnerabilities or the individual’s susceptibility to certain types of information manipulation. It also could observe what outside sources the user is being exposed to and who has recorded user activity, then warn the owner when an outside source is trying to manipulate behavior.
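As an illustration of how such a warning might be computed, here is a toy heuristic, not a production detector: it flags any outside source that repeatedly hits topics the owner is known to react strongly to. The data shapes, weights and threshold are all assumptions made for the sketch:

```python
# Toy manipulation-warning heuristic: accumulate "pressure" per source,
# weighted by the owner's privately inferred susceptibility to each topic.
from collections import Counter
from typing import Dict, List

def manipulation_warnings(
    exposures: List[Dict[str, str]],   # items like {"source": ..., "topic": ...}
    susceptibility: Dict[str, float],  # owner's inferred sensitivity per topic
    threshold: float = 3.0,            # arbitrary cutoff for this sketch
) -> List[str]:
    """Return the sources that repeatedly target the owner's sensitive topics."""
    pressure: Counter = Counter()
    for item in exposures:
        pressure[item["source"]] += susceptibility.get(item["topic"], 0.0)
    return [source for source, score in pressure.items() if score >= threshold]

exposures = [
    {"source": "feed-A", "topic": "immigration"},
    {"source": "feed-A", "topic": "immigration"},
    {"source": "feed-A", "topic": "crime"},
    {"source": "feed-B", "topic": "sports"},
]
susceptibility = {"immigration": 1.5, "crime": 1.0}  # learned privately by the DAE
print(manipulation_warnings(exposures, susceptibility))  # ['feed-A']
```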

An important function of a DAE would be to turn the current model of distributing disinformation warnings and debunking on its head. For example, fact-checking sites such as Snopes and the Poynter network exist, but getting accurate information into the hands of the people who need it is difficult because so many are unaware of the sites.

The DAE addresses the disinformation distribution problem with a hunter/gatherer/interpreter solution. It would know about these tools and hunt for relevant alerts, then bring them back in a form to which the user can relate.
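A minimal sketch of that hunter/gatherer/interpreter loop follows, assuming a simplified alert format; real fact-checking services such as Snopes expose their findings differently, so the feed structure here is purely illustrative:

```python
# Hypothetical hunter/gatherer/interpreter loop: gather alerts, keep the ones
# that match what the user has actually encountered, restate them plainly.
from typing import Dict, List

def hunt(alert_feed: List[Dict[str, str]],
         seen_topics: List[str]) -> List[Dict[str, str]]:
    """Hunter/gatherer: keep only alerts about topics the user has been exposed to."""
    return [alert for alert in alert_feed if alert["topic"] in seen_topics]

def interpret(alert: Dict[str, str]) -> str:
    """Interpreter: restate an alert in plain, user-facing language."""
    return (f"Heads up: a claim about {alert['topic']} that you saw "
            f"was rated '{alert['rating']}' by fact-checkers.")

alert_feed = [
    {"topic": "vaccine side effects", "rating": "false"},
    {"topic": "tax policy", "rating": "mostly true"},
]
seen_topics = ["vaccine side effects"]  # drawn from the private activity log
for alert in hunt(alert_feed, seen_topics):
    print(interpret(alert))
```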

As a user interacts with the personal DAE, the alter ego learns more about the individual and continues improving its ability to locate and deliver relevant information to its user. It builds and continually improves a model of the user.

A DAE would be able to tell what information is most relevant to the user in three ways. First, users can directly tell their personal DAEs what is pertinent to them. Second, DAEs can infer interests based on users’ online activity. Third, DAEs can infer topics users should be interested in even though they might not know it. All of these interactions remain between the individual and the DAE. Most significantly, if a user decides to sell some of his or her personal data, the DAE could help estimate the potential vulnerabilities a sale would introduce.
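One way to combine those three channels is a simple weighted score, as in the sketch below; the weights are arbitrary assumptions, and a real DAE would refine them through interaction with its owner:

```python
# Illustrative relevance score combining the three channels described above.
from collections import Counter
from typing import Dict, List

def relevance(
    topic: str,
    declared: List[str],            # 1) interests the user stated directly
    activity: Counter,              # 2) topic counts inferred from online activity
    should_know: Dict[str, float],  # 3) topics the DAE infers the user ought to see
) -> float:
    score = 0.0
    if topic in declared:
        score += 2.0                       # explicit signals weigh most
    score += 0.5 * activity.get(topic, 0)  # behavioral signal
    score += should_know.get(topic, 0.0)   # inferred-importance signal
    return score

activity = Counter({"election security": 4, "gardening": 1})
print(relevance("election security", ["privacy"], activity,
                {"election security": 1.0}))
# 0.0 + 0.5 * 4 + 1.0 = 3.0
```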

One of the most important features of a DAE would be its transparency. It would be able to explain to its owner every recommendation it makes, including why to trust one story and why to be suspicious of another.
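One plausible way to implement that, sketched below under assumed names, is to have every recommendation carry the evidence and weights that produced it, so the DAE can always answer the question “why?”:

```python
# Sketch of an explainable recommendation: the verdict travels with the
# (reason, weight) pairs that produced it. Structure and wording are assumed.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Recommendation:
    verdict: str                       # e.g., "trust" or "be suspicious"
    story: str                         # headline or claim being judged
    evidence: List[Tuple[str, float]]  # (reason, weight) pairs behind the verdict

    def explain(self) -> str:
        reasons = "; ".join(f"{reason} (weight {weight:+.1f})"
                            for reason, weight in self.evidence)
        return f"{self.verdict.upper()}: '{self.story}' because: {reasons}"

rec = Recommendation(
    verdict="be suspicious",
    story="Candidate X secretly funded by foreign agents",
    evidence=[("source has no editorial track record", -0.8),
              ("claim contradicts two fact-check alerts", -1.2)],
)
print(rec.explain())
```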

A DAE would have a great deal of knowledge built into its base version before it becomes personalized to its owner. It could be created by a nonprofit organization that is supported by some combination of government, foundation and private donors. The base version must be built in a way that is completely open and transparent to anybody who wants to examine it. That applies to the technology as well as the knowledge that comes built into it before personalization.

The concept of a DAE is based on a number of AI technologies developed over the years, including extensive work on intelligent agents, natural language processing, machine learning and user modeling. While a comprehensive DAE is not available today, limited and useful versions are achievable in the near term. As AI technology improves, DAE technology can be improved. The use of a limited version now would help point the way to additional research that needs to be done.

This unconventional tactic may seem drastic, but fighting for a functioning democracy today calls for radical thinking because the potent weapon of disinformation poses widespread problems.

For example, Park Geun-hye was elected South Korea’s president in 2012. However, in 2017, Won Sei-hoon, former director of the South Korean National Intelligence Service (NIS), was sentenced to four years in prison for using NIS resources to help Park win the election. Won allegedly employed as many as 30 teams of psychological warfare experts to generate and spread disinformation in the form of 1.2 million posts on Twitter and in political discussion groups on the Internet. In addition, the Defense Ministry’s Cyberwarfare Command posted 23 million tweets to tip the scale in Park’s favor. The operation sought to manipulate voters by promoting the hard-liner Park as a superior, nationalist candidate while running a smear campaign that painted her liberal opponent as pro-North Korean.

Park won the election by a narrow margin of 1 million out of 30 million votes, but she was impeached in 2017 and, in 2018, sentenced to 24 years in prison for bribery, extortion, abuse of power and other criminal charges.

Disinformation distribution at this scale would seem to pay, except that in this case someone blew the whistle on the operation after the election. The action also had real-world consequences: not only did Park and Won go to jail, but hundreds of thousands of voters also turned out for protests to demand justice. In South Korea, citizens had been disinformed, and democracy had failed.

This situation was long in the making and not just in South Korea. Several legal, policy and cultural features of society have helped bring about today’s environment. Social media and the Internet are powerful tools for disseminating disinformation and manipulating the public into taking actions that work against its own good.

Unfortunately, the U.S. government cannot effectively take advantage of social media and the Internet because of poorly conceived U.S. policies and antiquated laws. For example, 50 U.S. Code Section 3093(f) effectively prohibits the intelligence community from any action “intended to influence United States political processes, public opinion, policies, or media.”

However, social media and the Internet make it impossible to guarantee that U.S. citizens will not be inadvertently exposed to information operations, and this rule is broadly applied as a basis for banning any type of useful action. While the code made sense when influence operations were limited to print, radio and television, it is unenforceable in today’s global, instantaneous information environment.

To address the ubiquitous information-sharing environment, a fundamental change is needed in how and what data is communicated between the government and the people. Today, with every online interaction, a variety of businesses and organizations collect volumes of personal information. They use this data to calculate how to sell products or how to convince the public to adopt ideas and take certain actions.

All of this information collection is not necessarily for the public good and can, in fact, be harmful. Organizations use analytics of all kinds to determine how best to manipulate customers into helping a company meet its financial goals. Consumers do not even know what businesses know about them, which puts them at a distinct disadvantage.

Outside of individual media literacy training, most suggestions about how to fight disinformation and public manipulation have focused on placing the burden of regulating and safeguarding users’ data on social media platform owners. While platform-centric solutions have merit, other alternatives are possible. Many details need to be worked out regarding the development of a DAE, but it is a fresh idea to defend the democratic system by providing a highly individualized weapon in the endless information war.

Rand Waltzman is the deputy chief technology officer and a senior information scientist at the RAND Corporation. From 2011-2015, he created and managed the Social Media in Strategic Communications program as a program manager at the Defense Advanced Research Projects Agency. The views expressed here are his own.
