
Will Security Concerns Break Open Source Containers?

The hot new tech comes with many security questions.

Open source containers, which isolate applications from the host system, appear to be gaining traction with IT professionals in the U.S. defense community. But for all their benefits, security remains a notable Achilles’ heel for a couple of reasons.

First, containers are still a relatively new technology, and many administrators are not yet fully familiar with their capabilities. It’s difficult to secure something you don’t completely understand. Second, containers are designed in a way that hampers visibility, and that lack of visibility can make securing them extremely taxing.

Layers upon layers

Containers are built from a number of technical abstraction layers that are necessary for auto-scaling and the development of distributed applications. Those layers allow developers to scale application development up or down as needed and ship applications at a faster rate, but they also make gaining true visibility into a container’s workings a challenge. This becomes particularly problematic when an orchestration tool such as Docker Swarm or Kubernetes is used to manage the connections between containers. Without proper visibility, it can prove difficult to tell what is happening across those connections.
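
To make the visibility challenge concrete, the sketch below uses the official Kubernetes Python client to list each service in a cluster alongside the pods currently backing it, which is one basic view of the connections an orchestrator is managing. It assumes a reachable cluster and a local kubeconfig with read access, and it is an illustration rather than a monitoring solution.

```python
# A minimal visibility sketch, assuming a reachable Kubernetes cluster and a
# local kubeconfig with read access. It prints each Service and the pods
# currently backing it. Illustration only, not a monitoring solution.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()          # reuse the operator's existing credentials
core = client.CoreV1Api()

for svc in core.list_service_for_all_namespaces().items:
    name, ns = svc.metadata.name, svc.metadata.namespace
    try:
        endpoints = core.read_namespaced_endpoints(name, ns)
    except ApiException:
        print(f"{ns}/{name} -> no endpoints recorded")
        continue
    backends = [
        addr.target_ref.name
        for subset in (endpoints.subsets or [])
        for addr in (subset.addresses or [])
        if addr.target_ref is not None
    ]
    print(f"{ns}/{name} -> {backends or 'no ready pods'}")
```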

Containers can also house different types of applications, from microservices to service-oriented applications. Some of these may contain vulnerabilities, but that can be impossible to know without proper insight into what is actually going on within the container.

In other words, containers have a lot of potential security blind spots—and IT administrators hate blind spots.

Protecting from the outside in

While blind spots are fairly common in traditional network environments, administrators have become adept at getting around them through the use of security information and event management (SIEM) and network performance monitoring tools. These allow for continuous network analysis and monitoring, scouring for potential security issues and alerting managers whenever a potential red flag begins to wave.

Such solutions work well for network security tasks such as identifying software vulnerabilities and detecting and mitigating phishing attacks, but they are insufficient for container monitoring. Containers require a form of software development life-cycle monitoring on steroids, and we are not quite there yet.

Security needs to start outside the container to keep threats from getting inside. There are a few ways to do this.

Scan for vulnerabilities

The most important thing administrators can do to secure their containers is to scan for vulnerabilities in their applications. Fortunately, this can be done with network and application monitoring tools that administrators likely already have in their arsenal. For example, server and application monitoring solutions can be used as security blankets to ensure applications developed within containers are free of defects prior to deployment.
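
As one illustration of what such scanning can look like in practice, the sketch below queries the public OSV.dev vulnerability database for a short list of dependency versions. The component list here is hypothetical; in a real pipeline it would be drawn from the container image’s actual manifest or produced by a dedicated scanner.

```python
# A minimal sketch of dependency vulnerability checking against the public
# OSV.dev database. The component list is hypothetical; in practice it would
# come from the container image's actual manifest or a dedicated scanner.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

# Hypothetical inventory of components shipped inside the container image.
components = [
    {"ecosystem": "PyPI", "name": "flask", "version": "0.12.2"},
    {"ecosystem": "PyPI", "name": "requests", "version": "2.19.0"},
]

for comp in components:
    query = {
        "version": comp["version"],
        "package": {"name": comp["name"], "ecosystem": comp["ecosystem"]},
    }
    req = urllib.request.Request(
        OSV_QUERY_URL,
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        vulns = json.load(resp).get("vulns", [])
    ids = [v["id"] for v in vulns]
    status = ", ".join(ids) if ids else "no known advisories"
    print(f'{comp["name"]} {comp["version"]}: {status}')
```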

Properly train employees

Agencies can also ensure their employees are properly trained and that appropriate security policies have been created and implemented. The days of freely developing applications without a care for security are long gone, and everyone needs to be cognizant of how their actions affect the security of their organizations. Thus, developers working with containers need to be as acutely aware of their agencies’ security policies as their IT counterparts. They need to understand those policies and take the precautions necessary to adhere to and enforce them.

Containers also require security and accreditation teams to examine security in new ways. Security is commonly viewed at the physical, network or operating system level; the components of software applications are seldom considered, especially in government off-the-shelf products. Today, agencies should train these teams to know which versions of the components inside an application are approved and which are not.
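
A minimal sketch of that kind of check appears below. The approved-versions policy and the component manifest are hypothetical stand-ins for an agency’s real accreditation data, such as a software bill of materials compared against an approved-product list.

```python
# A minimal sketch of an approved-versions check. The policy and manifest are
# hypothetical stand-ins for an agency's real accreditation data (for example,
# a software bill of materials checked against an approved-product list).
approved_versions = {
    "openssl": {"3.0.13", "3.0.14"},
    "log4j-core": {"2.17.2"},
}

# Components reported inside the application (hypothetical manifest).
manifest = [
    ("openssl", "3.0.13"),
    ("log4j-core", "2.14.1"),
]

for name, version in manifest:
    allowed = approved_versions.get(name)
    if allowed is None:
        print(f"{name} {version}: component not covered by policy")
    elif version not in allowed:
        print(f"{name} {version}: unapproved version (approved: {sorted(allowed)})")
    else:
        print(f"{name} {version}: approved")
```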

Get CIOs on board

Education and enforcement must start at the top. Federal chief information officers (CIOs) must be involved to ensure their organizations’ policies and strategies are aligned. This will prove especially critical as containers become more mainstream and adoption continues to rise. The standards in place today may no longer apply as the technology evolves, and new standards and policies will need to be developed and implemented, either by a committee of experts or through the initiative of a single agency (as the Department of Homeland Security has done with the Continuous Diagnostics and Mitigation program). CIOs will need to be at the forefront to make sure everyone in their organizations applies whatever policies are put in place.

Open source containers are the hot new thing, and like every hot new thing, they come with just as many questions as they do benefits. Those benefits are real, but so are the security concerns. Agencies that can address those concerns today will be able to arm themselves with a development platform that will serve them well, now and in the future.

Paul Parker is chief technologist, Federal & National Government, SolarWinds.