
Taking Video Security to the Next Level

MITRE researchers explore technology that will help spot patterns in crowds.

Someone’s always watching. In malls, stadiums, train stations, parking garages, airports—security cameras are everywhere. But with so much information flowing in, it can be challenging for the people in the control rooms monitoring activity to catch every little detail. And surprisingly, most mainstream video security technology lacks sound, color or both. That’s where Chongeun Lee, a MITRE engineer specializing in biometrics, comes in.

Lee is the principal investigator on the LinkBioMan technology project, part of MITRE's internal research program. A team of researchers, which includes Monica Carley-Spencer, Chris Pike, Benjamin Skerritt-Davis, Haluk Tokgozoglu and Amanda Vu, is contributing its expertise in video analytics, biometrics, machine learning, human language technology and computational auditory perception to create sensors that can spot irregularities in videos.

“The goal of the research is to create a decision framework that can process audio and video in real time and recommend a timely action—alert or no-alert—based upon the fusion of multimodal data,” Lee says. “The outcome is intended to be used for alerting operators ... monitoring multiple feeds of the one that requires attention and action.”
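The article does not detail how the framework weighs its inputs, but the alert/no-alert decision Lee describes is a late-fusion problem: each modality contributes a confidence score, and the combined score drives the recommendation. The following is a minimal, hypothetical sketch of that idea in Python; the class names, weights, and threshold are illustrative assumptions, not MITRE's actual design.

```python
from dataclasses import dataclass

@dataclass
class ModalityScore:
    """Confidence from a single classifier (e.g., audio or video), in [0, 1]."""
    modality: str
    score: float
    weight: float = 1.0

def fuse_and_decide(scores: list[ModalityScore], threshold: float = 0.7) -> str:
    """Weighted late fusion of per-modality scores into an alert/no-alert decision.

    Hypothetical illustration only; not the LinkBioMan decision framework itself.
    """
    total_weight = sum(s.weight for s in scores)
    fused = sum(s.score * s.weight for s in scores) / total_weight
    return "alert" if fused >= threshold else "no-alert"

# Example: the video classifier flags agitated crowd motion while the audio
# classifier hears shouting; the fused score crosses the alert threshold.
decision = fuse_and_decide([
    ModalityScore("video", 0.82, weight=1.0),
    ModalityScore("audio", 0.64, weight=0.8),
])
print(decision)  # -> "alert"
```

In a live system, a fused decision like this would be what surfaces one feed out of many to the operator, matching Lee's description of directing attention to "the one that requires attention and action."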

To begin its work, the research team selected compelling test cases. Researchers decided to compare public protests and riots with concerts to determine the differences in alert context. They then designed and implemented a decision framework, trained audio and video classifiers using existing open-source tools, and developed an ontology for the selected test cases.
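To see why an ontology matters for the protest-versus-concert comparison, consider that the same detected event can be routine in one setting and alarming in another. The sketch below is a hypothetical, simplified illustration of that idea; the labels, scene categories, and alert rules are assumptions for demonstration, not the project's actual ontology.

```python
# Hypothetical event ontology distinguishing the two test cases
# (protest/riot vs. concert). Labels and rules are illustrative only.
EVENT_ONTOLOGY = {
    "chanting":       {"contexts": {"protest", "riot", "concert"}, "alert": False},
    "crowd_surge":    {"contexts": {"protest", "riot", "concert"}, "alert": True},
    "glass_breaking": {"contexts": {"riot"},                       "alert": True},
    "applause":       {"contexts": {"concert"},                    "alert": False},
}

def contextualize(label: str, scene: str) -> str:
    """Decide whether a detected event warrants an alert in the given scene context."""
    entry = EVENT_ONTOLOGY.get(label)
    if entry is None:
        return "unknown event: defer to operator"
    if scene not in entry["contexts"]:
        return "out-of-context event: alert"
    return "alert" if entry["alert"] else "no-alert"

print(contextualize("crowd_surge", "concert"))     # -> alert
print(contextualize("chanting", "protest"))        # -> no-alert
print(contextualize("glass_breaking", "concert"))  # -> out-of-context event: alert
```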

A lot of work still needs to be done. Having focused mainly on nonperson entities thus far, next fiscal year the team plans to add soft biometrics such as gender, clothes and hair color to crowd behaviors; implement user error feedback and correction; and augment the decision framework with temporal tracking of events, Lee says. With continued funding, LinkBioMan should be fully developed by the end of fiscal year 2019.
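The article does not say how temporal tracking would be implemented, but one common pattern is to suppress one-off detections and escalate only when an anomaly persists across consecutive frames. The sketch below is a hypothetical illustration of that pattern; the window size and threshold are assumed values.

```python
from collections import deque

class TemporalEventTracker:
    """Hypothetical sketch of temporal tracking: fire an alert only when an
    anomaly persists across a window of recent frames, rather than on a
    single noisy detection."""

    def __init__(self, window: int = 30, min_hits: int = 20):
        self.history = deque(maxlen=window)  # rolling record of per-frame flags
        self.min_hits = min_hits

    def update(self, anomaly_detected: bool) -> bool:
        """Record the latest frame's flag and return whether to escalate."""
        self.history.append(anomaly_detected)
        return sum(self.history) >= self.min_hits

# Feed per-frame detections; an alert fires only after sustained anomalies.
tracker = TemporalEventTracker(window=30, min_hits=20)
fired = False
for frame_flag in [True] * 25:
    fired = tracker.update(frame_flag)
print(fired)  # -> True after 25 consecutive anomalous frames
```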

Of course, there have been challenges along the way, and two in particular stick out in Lee's mind. The first was finding and collecting datasets free of restrictions that would require legal review or impose copyright and sharing limitations. The second challenge was hardware. "Our research requires usage of many algorithms for real-time results, and we had to switch to graphic processing units to meet the hardware demand," Lee adds.

The benefits of the LinkBioMan program, whose formal project title is "Linking Soft Biometrics to Semantic Description of an Event," will be immense. Users will be able to identify and prevent hazards related to public safety based upon real-time surveillance feeds and respond more rapidly and accurately to natural or man-made emergencies. The technology also could be used to conduct analysis for city planning, such as deciding the locations of crosswalks and streetlights.

Transitioning LinkBioMan to commercial and government customers like the U.S. Defense Department and financial institutions won’t be a stretch. “We are intentionally building our system to be modular, flexible and tailorable for individual needs,” Lee emphasizes.

If all goes according to plan, the research will help detect, alert on and react to unusual activities in crowded, open public areas that are difficult to monitor; warn and help intelligence agents or soldiers on the battlefield in real time to anticipate threatening events by associating people's traits with a particular set of circumstances; and serve as an investigative tool to pinpoint evidence of fraudulent activities more quickly within an overwhelming amount of data, Lee concludes.