
Sarcasm Detection Enters the Information Warfare Fight

Artificial intelligence takes a different slant against adversarial narratives.

With social media platforms representing one of the main conduits for adversarial propaganda, researchers are examining how information spreads across the digital environment and how it spills over into offline action. A recent breakthrough in sentiment analysis, an algorithm that can detect sarcasm, developed at the University of Central Florida as part of the Defense Advanced Research Projects Agency’s SocialSim project, aids both that understanding and the defense against such campaigns.

Researchers are employing neural networks, machine learning and social media data sets to conduct basic research on developing possible solutions to counter information warfare. The proliferation of information warfare against the United States has created a pressing need for counter capabilities to protect against adversarial interference.

The sarcasm detector is an important first step, says Brian Kettler, program manager, Defense Advanced Research Projects Agency’s (DARPA’s) Information Innovation Office (I2O), who is currently leading the Computational Simulation of Online Social Behavior program, the formal name for the SocialSim effort.

“Sarcasm can really trip up sentiment analysis,” the program manager states. “If you’re trying to understand how people are engaging with a particular narrative, if they are outraged by it or they are accepting of it, sarcasm could make it appear one way, but it really is a different way. Understanding sarcasm, I think it’s a basic capability, so that you’re not tripped up by it when you are doing things like sentiment analysis on a particular text.”
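The effect is easy to demonstrate with a toy example. The sketch below is purely illustrative and is not the SocialSim or UCF software: it scores a post by counting sentiment keywords, and a sarcastic complaint fools it; the keyword lists and the example post are invented for this illustration.

```python
# Purely illustrative toy example (not the SocialSim or UCF software):
# a keyword-counting sentiment scorer that a sarcastic post easily fools.
POSITIVE = {"great", "love", "wonderful", "perfect"}
NEGATIVE = {"terrible", "hate", "awful", "broken"}

def naive_sentiment(text: str) -> str:
    """Label text by counting positive versus negative keywords."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# A sarcastic complaint reads as praise to the keyword counter.
post = "Oh great, another outage. I just love waiting an hour to log in."
print(naive_sentiment(post))  # -> "positive", though the intent is negative
```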

Two years ago, Kettler took over running the project that began in 2017 and immediately pivoted SocialSim to address information warfare, given the urgent need.

“It started out mostly looking at cybersecurity information, information about vulnerabilities and how that would spread on social media,” he explains. “[Instead], I wanted to focus more on the information war that we are in, the information influence, the information that our adversaries may be pushing and counterinfluence operations. I wanted to look at how various narratives spread online and understand how misinformation might be moving.”

DARPA, University of Central Florida (UCF), three other academic partners, and one commercial company responsible for curating social media data are building solutions to model how information spreads online. “It is how we model an online population with publicly available data,” Kettler notes. “We are not really interested in modeling specific users; we’re looking at an aggregate, and how we can model how much a piece of information is going to be engaged with online, and how many people are likely to retweet something, that sort of thing.”

The researchers examined Twitter postings, YouTube comments and VK, essentially a Russian Facebook-like platform. At first, the researchers applied different algorithmic models across the various social media platforms to generalize the models, Kettler says.

“And now, we’re looking at how we can take these models and bring in other signals, besides just looking at the online information. We’re not just trying to predict the online behavior by looking online, because we all know that [behavior] online has a big impact offline and vice versa. We are trying to make these models much more robust. And it is about how we can really understand enough about human behavior online, and then build simulations that will say how various kinds of information will move over time, which could have a lot of useful applications.”

As part of the effort, DARPA issues a new challenge every six months to the research teams to develop and test different capabilities with new data sets, explains Ramya Akula, a graduate research assistant at the Complex Adaptive Systems Laboratory in UCF’s Department of Computer Science. She is working on the SocialSim team led by Ivan Garibay, associate professor of industrial engineering and management systems at UCF and director of the lab.

The latest challenge question has the teams delving more into the information operations that happen on social networks, Akula says. “It is basically about getting to the information propagation and what kind of information is exchanged or the sensitivity of the information,” she notes.

Kettler points out that UCF’s research team has achieved “a couple of very interesting things” as part of its efforts. First, the team structured its deep neural network in a novel way, as a multiheaded architecture.

UCF’s algorithm begins by converting text from social media platforms into numerical representations the algorithm can work with, she clarifies. “That’s the first phase,” Akula says. “Then the heart of our architecture is basically the self-attention method. We take the different projections of the same input and then we try to learn all the different combinations of these inputs that are coming in together. The reason why we do that is to understand the relationship between two words in a sentence. Especially when you have a really long sentence, understanding the context or understanding the semantics behind it is easy for humans, but it’s hard for a machine to understand that. That’s why we need to look at each and every possibility of those combinations.”

She cites an example of the word apple. If text indicates someone is “‘taking an apple to work,’ that could mean an Apple gadget or an apple fruit,” Akula suggests. “Then these self-attention layers are made through different heads and different layers, so that’s like a bunch of layers, a bunch of heads that are all trying to understand all of the probable combinations of ‘apple.’”
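A minimal sketch of that idea might look like the following PyTorch snippet; the vocabulary size, embedding dimension and number of attention heads are assumptions made for illustration, not UCF’s actual configuration. The embedding layer is the “first phase” of turning text into numbers, and the multiheaded self-attention layer compares every word with every other word so that an ambiguous word such as apple is represented in context.

```python
# A minimal sketch, assuming PyTorch and invented hyperparameters
# (vocabulary size, embedding dimension, number of heads); not UCF's actual code.
import torch
import torch.nn as nn

vocab_size, embed_dim, num_heads = 10_000, 64, 4

embedding = nn.Embedding(vocab_size, embed_dim)   # first phase: text as numbers
attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

# One sentence encoded as token ids (dummy ids for illustration).
token_ids = torch.tensor([[12, 845, 3, 1290, 77, 6031]])  # shape: (1, 6)
x = embedding(token_ids)                                   # (1, 6, 64)

# Query, key and value are all projections of the same input: self-attention.
context, attn_weights = attention(x, x, x)
print(context.shape)       # (1, 6, 64): each word re-represented in context
print(attn_weights.shape)  # (1, 6, 6): how strongly each word attends to the others
```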

The system then computes an attention rate for each word and sends the data through the gated recurrent unit (GRU), a special kind of neural network that UCF is using to learn the relationship between two words, Akula offers.

“When a word has a higher attention rate, that means it has some emphasis,” the researcher clarifies. “And using this attention rate, the patterns are learned, and then that computed information is sent to the GRU neural networks, which is then connected to a fully connected layer, or a basic neural network and then it computes the probability score.”
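Putting the pieces Akula describes together, a rough end-to-end sketch would wire an embedding layer, multiheaded self-attention, a GRU and a fully connected layer into one classifier that outputs a probability score. The layer sizes and exact wiring below are assumptions for illustration and may differ from UCF’s published model.

```python
# A rough end-to-end sketch of the described pipeline; layer sizes and wiring
# are assumptions for illustration and may differ from UCF's published model.
import torch
import torch.nn as nn

class SarcasmClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=64, num_heads=4, hidden=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)              # fully connected layer

    def forward(self, token_ids):
        x = self.embedding(token_ids)               # (batch, seq, embed_dim)
        x, _ = self.attention(x, x, x)              # attention-weighted word features
        _, h = self.gru(x)                          # final hidden state: (1, batch, hidden)
        return torch.sigmoid(self.fc(h[-1]))        # probability the text is sarcastic

model = SarcasmClassifier()
dummy_batch = torch.randint(0, 10_000, (2, 12))    # two sentences of 12 token ids each
print(model(dummy_batch))                          # e.g. tensor([[0.49], [0.52]])
```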

The probability score indicates the likelihood that a sentence contains sarcasm. Initially successful in detecting sarcasm in words, the algorithm then had to be applied to symbols or emojis. “In our case, we got really good results in detecting most sarcasm and nonsarcasm when we have words, but there are some situations where there are question marks, the algorithm was a little confused to detect what it means,” Akula acknowledges. “But otherwise, when there is pure text or when there are hash symbols or when there are emojis, those kinds of things, it was able to detect really well.”

In fact, UCF’s artificial intelligence tool has proven to be accurate 99 percent of the time, Kettler says.

“The accuracy was much better than a lot of the different other attempts to build classifiers that pick up sarcasm, which on a single sentence, that is probably comparable to most humans or maybe even better,” the DARPA program manager notes.
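For context on how such a figure is produced, classification accuracy is simply the share of held-out, human-labeled test examples the model gets right; the labels below are invented for illustration.

```python
# How a reported accuracy figure is typically computed: the share of
# human-labeled test examples the classifier gets right. Labels are invented.
ground_truth = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # 1 = sarcastic, 0 = not sarcastic
predictions  = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]   # labels produced by the classifier

correct = sum(p == t for p, t in zip(predictions, ground_truth))
print(f"accuracy = {correct / len(ground_truth):.0%}")  # -> accuracy = 90%
```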

The next steps, Kettler says, are to increase the accuracy of the models when applied to other signal sources. “To validate a model, we need to say given a particular narrative how the model says the information is going to move over time, and then we compare that to how it actually moves over time,” he states. “So, we have a very well-defined ground truth in the actual spread of information. And in addition to these models, it is to help you have a better understanding of how people are engaging with information.”
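A hedged sketch of that validation idea: the model forecasts how engagement with a narrative grows over time, and the forecast is scored against the spread that actually occurred. The time series and the error metric below are placeholders, not SocialSim’s data or scoring method.

```python
# Placeholder sketch of validating a spread model against ground truth:
# compare the forecast engagement curve with the spread that actually occurred.
# The numbers and the error metric are invented for illustration.
import math

predicted_engagement = [120, 340, 800, 1500, 2100, 2400]  # model forecast, per day
observed_engagement  = [100, 310, 900, 1700, 2000, 2300]  # ground truth, per day

rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted_engagement, observed_engagement))
                 / len(observed_engagement))
print(f"root-mean-square error: {rmse:.1f} engagements per day")
```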

Kettler expects the SocialSim research efforts to finish by the end of the calendar year. After that, the program manager will look to blaze a trail toward other information warfare solutions for the nation.