
Credit: Azret Ayubov/Martial Red/Le_Mon/Shutterstock

Understanding Russian Information Operations

September 1, 2018
By Timur Chabuk and Adam Jonas


The tools needed to fight this war are available, but what’s required is the will.


Russia’s ability to evolve its use of information operations to leverage social media and the cyber domain continues to shock and challenge the world community. The country’s actions, especially during the 2016 U.S. elections, have brought cyber information operations out of the shadows and into the limelight. Now, state and nonstate actors are frequently using similar techniques to influence the public and achieve political goals once only attainable through armed conflict.

Both Russia and the United States actively engaged in information operations (IO) during the Cold War, each depicting the other in unflattering ways through print or traditional broadcasting. Today, well-placed posters have given way to more technologically enabled methods, including microtargeting social media posts and engineering online echo chambers, to sway public opinion.

These innovative applications of the latest technologies aren’t a fad, and because they are relatively easy to implement, they will not fade. Nations can now influence thinking more quickly and precisely on a personal handheld device anywhere at any time. An elaborate mixture of human and autonomous online actors called bots can spread information and disinformation quickly and target it to desired audiences. The weaponization of these bots allowed Russia, in many cases anonymously, to use information to influence Americans’ participation in democracy and sow confusion by bombarding citizens with often-divisive false information.

Russia has long been an innovator in the use of information and misinformation by its military. The former Soviet Union used propaganda at home to strengthen domestic views of the state and information operations abroad to foster unrest among its adversaries and create favorable conditions on the battlefield.

Current Russian tactics aren’t much of a departure from the military and intelligence tradecraft of the past; they are an application of new tools to old methods. For example, in 1984 the KGB posed as Ku Klux Klan members in Los Angeles and published inflammatory material in an attempt to exacerbate existing racial tensions. Today, Russian bad actors use Facebook ads both to promote the social movement Black Lives Matter and simultaneously label the organization as a dangerous threat.

To understand Russian IO, it is important to understand the worldview Russia promotes. The country views itself as part of a direct conflict between two major civilizations: the Atlantic civilization, which stands for liberalism and the relentless pursuit of global dominance and comprises the United States, NATO and other Western allies, and the Eurasian civilization, which stands for “a multipolar world” and promotes “true liberty” through ethnic nationalism. This contrived ideology serves Russia’s strategic interests by enhancing its influence over the Russian diaspora in nations of the former Soviet Union and by creating division in multicultural Western democracies.

Whether it is trying to influence nations such as the United States, England and France or regional neighbors such as the Baltic States, Russia has tended to use online tools to cause confusion, inflame ethnic tensions and erode trust in democratic institutions.

For example, Russia employed Facebook advertising for both the Black Lives Matter and Blue Lives Matter groups in the United States. In this case, Russia wasn’t meddling to achieve pro-Russian policy or even policy that directly hurt the United States domestically. Instead, it was acting to exploit existing social divisions and create general distrust. Recent hearings by the House Intelligence Committee found the Kremlin-linked Internet Research Agency responsible for 3,519 known paid ads on Facebook that directly reached more than 11.4 million U.S. users.

Despite congressional reports to the contrary, this new craft of hybrid warfare has largely not been doctrinal and is still being developed. While some major IO campaigns are coordinated, many are not. Instead, they are carried out by a wide range of separate government and nongovernment actors who opportunistically attempt to sway public opinion using a variety of tactics and narratives.

The public within Russia, former Soviet nations and Western states also are important actors in the social media information environment. These populations are each made up of numerous organizations and groups, which continually reshape the information environment through a churn of posting, sharing, liking and directly engaging with one another.

One widely recognized way of understanding Russian disinformation operations is by categorizing them into four classes of tactics: dismiss, distort, dismay and distract. Dismiss tactics undermine the belief in facts that are contrary to Russian interests by simply denying their truth, even in the face of clear supporting evidence. Distort tactics seek to modify information by cherry-picking facts and adding lies to the otherwise true information. Dismay tactics are used to intimidate and scare the public by activating emotionally charged fears and anxieties through overblown threats and rhetoric. When these tactics fail, distraction techniques focus on changing the subject by promoting wildly sensational fake news stories and attention-grabbing headlines.

Because near-peer states such as Russia have demonstrated how relatively small but well-coordinated capital investments can have disproportionate effects on an adversary, it is imperative the U.S. government rise to the occasion and utilize existing, often open-source, tools and methodologies to tackle this threat.

Social network analysis (SNA), a discipline the U.S. Army has implemented to fight improvised explosive device networks and aid in the targeting of terrorists, may offer some solutions for combating online influence campaigns.

SNA is a social science method that maps and quantifies the relationships between human actors. Social media platforms are rich in data that can be analyzed to determine how people communicate and what words, memes or hashtags they share. SNA can quantitatively help identify who has direct influence online and who’s influencing the influencers. It can detect subgroups within a broader network that may represent social/ideological divides and discern if certain members of the network are disproportionately targeted or elevated by bot activity.

These methods can be employed using widely available open-source coding packages. SNA tools also are contained in free or relatively inexpensive software packages, many of which have been approved for Army systems.
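As a minimal sketch of such methods, the following uses the open-source networkx package to compute two influence measures on a small mention network. The account names and edges are invented sample data, not drawn from any real platform.

```python
# Influence analysis on a small mention graph, using the open-source
# networkx package. Accounts and edges are invented sample data.
import networkx as nx

# A directed edge A -> B means account A mentioned or retweeted account B.
mentions = [
    ("alice", "carol"), ("bob", "carol"), ("dave", "carol"),
    ("carol", "erin"), ("frank", "erin"), ("grace", "erin"),
    ("erin", "carol"),
]
G = nx.DiGraph(mentions)

# In-degree centrality surfaces accounts with direct influence; PageRank
# also credits accounts amplified by other influential accounts --
# "influencing the influencers."
direct = nx.in_degree_centrality(G)
pagerank = nx.pagerank(G)

top_direct = max(direct, key=direct.get)
print("most directly influential:", top_direct)
```

Community-detection routines in the same package can then surface subgroups that may represent the social or ideological divides described above.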

SNA techniques also can help identify which human voices are being promoted or trolled most by bots, enabling analysts to clarify the bots’ intentions and identify which actors may be operating them. By monitoring upticks and patterns in online behavior, analysts can better understand how actors such as Russia tie their online activity to tactical activities off the electromagnetic spectrum.

In addition, the actual content of social media posts contains a wealth of often-overlooked data. Multimedia content that integrates video, audio, text overlays and even special effects is becoming increasingly ubiquitous as the technology for producing it becomes more widespread. The volume of social data is often too large to examine content manually, but automated algorithms can provide support.

Topic detection algorithms can analyze large sets of social media posts and automatically collate posts with similar content into topics and then characterize those topics. In this way, topic detection breaks down the problem of understanding a mass of social media data into the more manageable task of understanding a smaller number of topics. By reviewing the results, analysts can monitor large swathes of social media content and optionally drill down on any particular topics that are of interest.

In addition, sentiment analysis algorithms, which characterize posts as exhibiting positive versus negative sentiment, can be applied to social media posts to assess the overall tone or disposition expressed in the data. Newer algorithms take deeper approaches, such as characterizing the intensity of the emotion expressed as well as a wider range of emotions.

Sentiment identification is a challenging task; many studies show that even humans are often only 70 percent accurate. However, machine learning-based approaches, in which sentiment analysis engines are trained within specific domains rather than performing general sentiment analysis, have been shown to be effective.
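A domain-trained approach of this kind might be sketched as follows: instead of a general-purpose engine, a small bag-of-words classifier is fit on in-domain labeled posts. The training posts and labels are invented examples, and a real system would need far more data.

```python
# Domain-trained sentiment sketch: fit a small bag-of-words classifier
# on in-domain labeled posts rather than using a general-purpose
# sentiment engine. Training posts and labels are invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_posts = [
    "great news for the community, proud of this",
    "wonderful turnout, inspiring speech",
    "this is a disgrace, total failure",
    "corrupt officials, terrible outcome",
]
train_labels = ["positive", "positive", "negative", "negative"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_posts, train_labels)

print(clf.predict(["inspiring news, proud turnout"])[0])
```

Training within a specific domain lets the model learn that domain’s vocabulary, which is what the studies cited above suggest drives the accuracy gains.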

To organize and track information operations, analysts will need to arrange collected content into various themes or narratives. For example, an analyst may want to track a particular news story to identify trends and spikes in activity. Machine learning algorithms can assist in organizing posts or topics into themes by learning from small sets of analyst-labeled data. The commercial sector is working quickly to fill these capabilities gaps, but leveraging these innovations in time to counter future attacks will prove difficult.
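Learning themes from a small analyst-labeled set could be as simple as nearest-neighbor matching over TF-IDF vectors, sketched below. The theme names and posts are invented examples; production systems would use richer models and much larger labeled sets.

```python
# Narrative-tracking sketch: assign new posts to analyst-labeled
# themes by nearest neighbor over TF-IDF vectors, learning from a
# small labeled set. Theme names and posts are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

labeled_posts = [
    "vaccine microchips conspiracy secret plot",
    "secret vaccine plot exposed in leak",
    "border invasion migrant caravan approaching",
    "caravan crossing the border tonight",
]
themes = ["health-disinfo", "health-disinfo", "migration", "migration"]

vec = TfidfVectorizer()
X = vec.fit_transform(labeled_posts)
knn = KNeighborsClassifier(n_neighbors=1).fit(X, themes)

new_posts = ["new vaccine conspiracy claim spreads"]
print(knn.predict(vec.transform(new_posts))[0])
```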

In addition to content, useful information can be gleaned from the behavior of social media users. By examining activity patterns, analysts can assess where user accounts may be located. When conducting outbound messaging, understanding when target users are likely to be online, based on past activity, enables analysts to schedule outbound messages more effectively.
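A toy version of this activity-pattern analysis, using only the Python standard library, buckets an account’s post timestamps by hour to find its peak posting window. The timestamps are invented sample data.

```python
# Activity-pattern sketch: bucket an account's post timestamps by hour
# (UTC) to find its peak posting window, which hints at both a likely
# time zone and the best schedule for outbound messaging.
# Timestamps are invented sample data.
from collections import Counter
from datetime import datetime

timestamps = [
    "2018-03-01T09:15:00", "2018-03-01T09:40:00", "2018-03-02T09:55:00",
    "2018-03-02T10:05:00", "2018-03-03T14:30:00",
]
hours = Counter(datetime.fromisoformat(t).hour for t in timestamps)

peak_hour, count = hours.most_common(1)[0]
print(f"peak hour: {peak_hour:02d}:00 UTC ({count} posts)")
```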

Machine learning techniques also can be used to analyze the behavioral dynamics of social media users and identify similar users. This can be helpful in identifying bot accounts that are coordinating messaging, and even multiple accounts that are being operated by the same information operations agents under false identities.
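The behavioral-similarity idea can be illustrated with a minimal sketch: represent each account as a vector of behavioral features and flag pairs whose cosine similarity is suspiciously high, a possible sign of coordination or a shared operator. The accounts and feature values are invented.

```python
# Behavioral-similarity sketch: accounts as feature vectors; pairs with
# near-identical behavior are flagged as possibly coordinated.
# Accounts and feature values are invented.
import math

# Features: [activity rate, fraction retweets, fraction of posts 02:00-05:00]
accounts = {
    "user_a": [0.10, 0.20, 0.05],
    "bot_x":  [0.90, 0.95, 0.40],
    "bot_y":  [0.88, 0.94, 0.41],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

for a, b in [("user_a", "bot_x"), ("bot_x", "bot_y")]:
    sim = cosine(accounts[a], accounts[b])
    flag = "  <- possibly coordinated" if sim > 0.999 else ""
    print(f"{a} vs {b}: {sim:.4f}{flag}")
```

Real systems would use many more features and clustering rather than pairwise thresholds, but the principle is the same: accounts run by one operator tend to behave alike.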

Russia has been quick to evolve its IO use to leverage social media and the cyber domain in largely unforeseen ways. It will likely continue to use these tactics to maintain domestic power, create political discord among adversaries and support more traditional military maneuver.

Much like the bots and other tools employed in Russia’s evolved IO, the capabilities to identify and potentially counter such attacks are fairly inexpensive and available now to the Army. Methodologies such as SNA, content analysis and behavioral analysis must be quickly leveraged within the Army to adequately counter Russia and others who weaponize information against the United States.

Despite America’s military might, Russia’s use of online information operations against the United States and its allies has widely been regarded as a success. Without mastering new online detection methods and thinking about maneuver more broadly, it will be difficult, if not impossible, for United States forces to match Russia or others who employ similar online information operations in the near future.

Timur Chabuk, Ph.D., director, intelligent information systems, Perceptronics Solutions Inc., leads a portfolio of research and development projects focused on developing novel capabilities for social media monitoring and analysis. Adam Jonas is an intelligence analyst at Threat Tec LLC. He focuses on social network analysis and is a subject matter expert in support of the U.S. Army Training and Doctrine Command Operational Environment Center’s Network Engagement team.


