The next step in the transformation of the U.S. Defense Department's systems architecture will be networks defined by software instead of by hardware. Software-based network controls will extend across the enterprise capabilities that currently are confined to data center operations.
Traditionally, switches and routers were managed separately from the computing inside the data center. Special-purpose devices were installed to solve specific network-management problems, which resulted in complexity and inflexibility. For example, to change networking in data centers, operators had to reconfigure switches, routers, firewalls or Web authentication portals. This required updating virtual local area networks, quality-of-service settings and protocol-based tables with dedicated software tools, while taking into account network topology as well as different software versions. Consequently, networks remained relatively static because operators sought to minimize the risk of service disruption from hardware changes.
Enterprises today operate multiple Internet protocol networks for voice, data, sensor inputs and video. While existing networks can provide tailored service levels for individual applications, the provisioning of network resources remains largely manual. Operators configure each vendor's equipment and adjust parameters, such as bandwidth, on a per-session, per-application basis. Because of this static nature, networks cannot adapt to changing traffic, application and user demands. With an estimated 15,000 networks in place, the Defense Department has difficulty managing such a proliferation of options.
The explosion of mobile devices, server virtualization and the advent of cloud services now are driving networking firms to re-examine how to make communications control more flexible. Hierarchical networks built from hard-wired Ethernet devices arranged in multiple tree structures cannot sustain the new workloads. Such a static architecture is ill-suited to the computing and storage needs of current enterprise and carrier environments.
Within an enterprise environment, communications traffic patterns are changing. In contrast to client-server applications, where the bulk of the communication occurred between one client and one server, today's virtualized applications access different databases, creating geographically diverse machine-to-machine traffic before returning data to the end user. Users are changing network traffic patterns instantly as they push for access to dispersed content. Applications require access from any type of device, connecting from anywhere, at any time, by any access method.
Managers of enterprise data centers are adopting a utility computing model that includes private, public and hybrid clouds. The result is traffic distributed over a dispersed area. Managing such highly adaptable networks now requires the ability to change configurations without delay.
Enterprises have embraced a wide range of cloud services, resulting in unprecedented growth of these services. Enterprise business units want to access applications, infrastructure and diverse resources from multiple locations. Adding to the complexity, planning for cloud services must be done in an environment of heightened security and auditing requirements, along with business reorganizations, consolidations and mergers that can demand switching changes without delay. Instant access is necessary to support rapid scaling of computing, storage and network resources with a common suite of configuration tools. At present, even a small change in the pattern of communication may take many weeks to implement: new equipment must be bought, relocated and tested before wide-area traffic can be allowed to flow. Response time in seconds is the new requirement for software-defined networking (SDN) changes.
Handling today’s “big data” datasets requires massive parallel processing on thousands of servers, all of which need direct connections to each other. The rise of huge datasets is fueling a constant demand for additional network capacity in interconnected data centers. Operators then face the task of instantly scaling networks to previously unimaginable sizes while maintaining any-to-any connectivity amid increased demands for improved uptime and faster responses.
The Open Networking Foundation (ONF), through SDN, is transforming networking architecture by relocating switching and routing functions from hardware to software in five ways. First is to centralize management and control of networking devices from multiple vendors in network control centers. Second is to improve automation and management of applications by using common application programming interfaces (APIs) to abstract the underlying networking details. Third is to deliver new network capabilities without the need to reconfigure individual devices or to wait for vendor releases. Fourth is to program applications using common programming environments. Fifth is to increase network reliability and security through centralized, automated management of network devices that applies uniform policies and operates with fewer errors.
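The first two of these ideas — one control point and one common API hiding vendor details — can be illustrated with a minimal sketch. The class and method names below are illustrative only, not the API of any real SDN controller; a production system would speak a protocol such as OpenFlow to the devices.

```python
# A minimal sketch of centralized SDN control: one controller object programs
# switches from different vendors through a single common interface, instead
# of operators configuring each device with vendor-specific tools.
# All names here are hypothetical, for illustration only.

class Switch:
    """Abstracts a network device; vendor details stay behind this interface."""
    def __init__(self, name, vendor):
        self.name = name
        self.vendor = vendor
        self.flow_table = []

    def install_flow(self, match, action):
        # In a real device this call would translate into OpenFlow messages
        # or a vendor-specific API; here it just records the rule.
        self.flow_table.append({"match": match, "action": action})


class Controller:
    """Central point of management for all devices, regardless of vendor."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def apply_policy(self, match, action):
        # One uniform policy pushed to every device in a single operation —
        # the centralized, fewer-errors property the ONF describes.
        for sw in self.switches:
            sw.install_flow(match, action)


controller = Controller()
controller.register(Switch("edge-1", "VendorA"))
controller.register(Switch("core-1", "VendorB"))

# Block telnet network-wide with one call, no per-device reconfiguration.
controller.apply_policy(match={"tcp_dst": 23}, action="drop")

for sw in controller.switches:
    print(sw.name, sw.flow_table)
```

The point of the sketch is the shape of the design, not the details: policy lives in one place, and device diversity is hidden behind a uniform programming surface.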
Networking technologies so far have operated with a discrete set of protocols designed to connect individual servers through routers and switches at particular distances, link speeds and topologies. That will have to change. The static nature of the old networks stands in stark contrast to the dynamic nature of the SDN environment. Applications will be distributed across multiple virtual machines that exchange traffic flows directly with one another. Traffic will migrate continually to optimize and rebalance workloads, causing the physical endpoints of existing flows to change. Such migration challenges traditional networking at every level, from addressing schemes and namespace designations to the shift toward software-based design.
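The endpoint-migration problem above can be sketched in a few lines. This is a toy model under stated assumptions, not a real controller API: when a workload moves to a new physical host, software rewrites the flow's endpoint, rather than operators re-addressing hardware.

```python
# Illustrative sketch of why SDN suits continual workload migration:
# the flow's logical identity (the virtual machine) stays fixed, while its
# physical endpoint is just a software mapping the controller can rewrite.
# Names are hypothetical, for illustration only.

class FlowTable:
    def __init__(self):
        # Maps each virtual machine to the physical host carrying its traffic.
        self.endpoints = {}

    def place(self, vm, host):
        self.endpoints[vm] = host

    def migrate(self, vm, new_host):
        # Rebalancing moves the VM; the network follows with a table update
        # instead of a hardware reconfiguration.
        self.endpoints[vm] = new_host


flows = FlowTable()
flows.place("app-vm", "host-12")
flows.migrate("app-vm", "host-47")   # workload rebalanced to another host
print(flows.endpoints["app-vm"])
```

In a traditional network the same move would force changes to static addressing and device configuration; in the software-defined model it is a single state update at the control layer.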
Paul A. Strassmann is the distinguished professor of information sciences at George Mason University. The views expressed are his own and not necessarily those of SIGNAL Magazine.