
Joint Force Digital Interoperability Remains Elusive

A departmentwide approach is necessary to achieve the long-sought goal.

Despite substantial research and investments, widespread interoperability continues to elude the Defense Department, the joint force and their partners. Some of the hurdles are inherent in the current acquisition and budgeting process. Others loom because of long-standing approaches to operation and training. Progress has been made, but the goal has not yet been attained.

Ultimately, interoperability is not the responsibility of any one organization or community—it is a responsibility shared across almost all communities, including requirements, acquisition, testing and training. The joint force can achieve digital interoperability, but only if leadership recognizes its importance and demands it from every community and organization.

Almost three decades of Government Accountability Office (GAO) reports document the interoperability challenge and its impact on mission performance. In 1986, Richard Davis of the GAO stated, “[The Defense Department’s] inability to achieve interoperability is primarily related to its decentralized management structure, which permits each service a large degree of autonomy over its programs.” This is as true today as it was 28 years ago, because no single organization is responsible for defining and funding joint and coalition interoperability requirements.

When the responsibility for solving and funding interoperability is shared across a multitude of stakeholders with competing goals and objectives, achieving and maintaining interoperability becomes a significant challenge. Because current Defense Department acquisition and budgeting processes are program-centric by statute, program managers often are fiscally constrained to addressing their individual program’s immediate interoperability requirements. As a result, they coordinate point-to-point digital communication solutions with other programs rather than coordinating with a broader, multiprogram, multinational partnership to design a system of systems (SoS) solution that would provide more comprehensive benefits to the end users.
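As a rough illustration of why point-to-point coordination does not scale, consider that pairwise interfaces grow roughly with the square of the number of programs, while a shared SoS design requires only one agreed interface per program. The sketch below uses illustrative program counts only.

```python
# Illustrative comparison of pairwise (point-to-point) interfaces versus a
# shared system-of-systems interface, for a growing number of programs.
for n_programs in (5, 10, 20):
    point_to_point = n_programs * (n_programs - 1) // 2  # every pair coordinates separately
    shared_sos = n_programs                               # each program implements one common interface
    print(f"{n_programs} programs: {point_to_point} pairwise interfaces "
          f"vs. {shared_sos} shared-interface implementations")
```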

Today, the United States rarely fights as a single nation. More often, the military operates side by side with mission partners, which drives those partners to design systems that are interoperable with their U.S. counterparts. Coalition interoperability requirements are addressed only within the context of open foreign military sales (FMS) cases, which neither reflects an SoS context nor accounts for U.S. interoperability with foreign systems that are not purchased from U.S. vendors. Additionally, in today's fiscally constrained environment, when a program actually identifies and documents broad joint and coalition interoperability requirements, these often fall below the cut line or funding threshold during prioritization.

Even when interoperability requirements and funding are in place, the Defense Department faces the very difficult challenge of synchronizing life cycles. Multiple systems that are expected to operate as an SoS to provide a capability or execute a mission all have independent program and upgrade timelines. True interoperability requires a constant effort to integrate new, emerging technologies into networks with existing legacy communications and systems. Often when a new version of a system is fielded, interoperability across the entire SoS becomes degraded or broken. This is complicated further when the multiple systems span different domains—such as air-ground or land-maritime—because this normally involves gateways between networks that are optimized for use in single domains.
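A minimal sketch of the cross-domain gateway problem just described: two notional networks report the same track in different formats and units, and a translator must bridge them. The message fields, names and conversion are invented for illustration and do not represent any fielded system.

```python
# Hypothetical illustration: a gateway translating track reports between two
# single-domain networks that use different field names and units.

def air_to_ground(air_msg: dict) -> dict:
    """Translate a notional air-network track report into the ground-network format.

    Assumes the air network reports altitude in feet and the ground network
    expects meters; both formats are illustrative only.
    """
    return {
        "track_id": air_msg["id"],
        "lat": air_msg["latitude"],
        "lon": air_msg["longitude"],
        "alt_m": round(air_msg["altitude_ft"] * 0.3048, 1),
    }

if __name__ == "__main__":
    report = {"id": "A123", "latitude": 34.05, "longitude": -117.6, "altitude_ft": 25000}
    print(air_to_ground(report))
    # When either network upgrades its format, the gateway must change too:
    # the life-cycle synchronization problem the article describes.
```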

If Moore’s Law continues to hold and significant technology advancements are realized every two years, that pace is significantly out of step with the typical air or maritime platform upgrade life cycle of only four to eight years. The potential result is a missed opportunity to put the most capable technologies into the hands of the warfighter by failing to leverage the natural cycle of technological advances.
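A simple back-of-the-envelope check of that mismatch, using the article's rough figures, suggests each platform upgrade skips roughly two to four technology generations:

```python
# Back-of-the-envelope estimate of technology generations skipped between
# platform upgrades; the cycle lengths are the article's rough figures.
TECH_CYCLE_YEARS = 2          # approximate pace of significant technology advances
UPGRADE_CYCLE_YEARS = (4, 8)  # typical air or maritime platform upgrade interval

for upgrade in UPGRADE_CYCLE_YEARS:
    skipped = upgrade // TECH_CYCLE_YEARS
    print(f"{upgrade}-year upgrade cycle: about {skipped} technology generations per upgrade")
```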

Several processes, tools and mandated policies within the Defense Department aim to promote and improve interoperability. These include requirements validation, standards compliance and interoperability certification.

With requirements validation, all aspects of system development—including funding, schedule, design, fielding and training—are driven by validated requirements. So, interoperability relies on effective requirements definitions. However, most requirements documents are program-centric and focus on individual service context with little consideration or funding for the joint and coalition operational environments in which the solution will likely be used. Leadership must encourage requirements officers to consider not only their own service operational and doctrinal processes but also the implications of joint and coalition doctrinal processes and operating environments.

In standards compliance, the use of military and industry standards, rather than proprietary interfaces, is mandatory for digital interoperability. The Defense Department Information Technology Standards and Profile Registry (DISR) provides a list of the department's currently approved standards. However, standards provide only a point of departure for interoperability. Compliance with a standard does not always ensure interoperability, because two systems can independently be certified as compliant with the same standard yet fail to interoperate because of varied implementation choices and optional features. Leadership must encourage system designers to ensure not only standards compliance but also that the choices and optional features implemented correspond with those of applicable joint and coalition systems.
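A hedged illustration of how two standards-compliant systems can still fail to interoperate: both produce messages that are valid against the same notional standard, but one relies on an optional field the other never populates. The message format and field names are invented for this example.

```python
# Two notional implementations of the same (invented) message standard.
# The standard makes "unit_ref" optional; System A relies on it, System B omits it.

def system_a_build_msg() -> dict:
    return {"msg_type": "POSREP", "lat": 33.9, "lon": -118.4, "unit_ref": "TF-71"}

def system_b_build_msg() -> dict:
    return {"msg_type": "POSREP", "lat": 33.9, "lon": -118.4}  # optional field omitted

def system_a_consume(msg: dict) -> str:
    # System A assumes the optional field is always present.
    return f"Plotting {msg['unit_ref']} at {msg['lat']}, {msg['lon']}"

if __name__ == "__main__":
    print(system_a_consume(system_a_build_msg()))   # works
    try:
        print(system_a_consume(system_b_build_msg()))
    except KeyError as err:
        # Both messages are "compliant," yet the exchange fails.
        print(f"Interoperability failure on optional field: {err}")
```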

For interoperability certification, the Chairman of the Joint Chiefs of Staff Instruction (CJCSI) 6212.01F, Net-Ready Key Performance Parameter (NR-KPP), defines a comprehensive approach to the acquisition of information technology systems to achieve interoperability. Most of the instruction centers on the required Department of Defense Architecture Framework (DoDAF) views and interface documentation a program or system must complete to receive NR-KPP certification. Additionally, the instruction requires the Joint Interoperability Test Command (JITC) to conduct testing to validate that a program implemented what was documented.

However, this process, which provides programs autonomy to select standards implementation options independently, does not ensure digital interoperability. Leadership must ensure that interoperability certification takes into account the entire SoS context to be an accurate indicator of interoperability.

Requirements validation, standards compliance and NR-KPP documentation sometimes are considered a time sink for program managers rather than what they could be: a collective set of tools that, when applied together, can provide a rigorous methodology to identify interoperability problems at the beginning of a life cycle. If technical requirements, architectures and interoperability documentation are not completed until very late in the life-cycle development process, they do not serve their intended purposes.

So, in addition to changing the approach to existing processes, the Defense Department could emphasize three additional areas to improve interoperability further: SoS coordination; mission-based test and evaluation; and system training.

A common overarching SoS architecture that includes joint and coalition environments can enable program representatives, who know their individual systems' architectures, capabilities, limitations and programmatics, to collaborate and effectively design an end-to-end communications solution that maximizes emerging technologies while considering legacy constraints. Interoperability within an SoS always will be constrained by the least common denominator. For example, a program that cannot afford, or does not have room for, a new radio will not be able to migrate to the latest waveform; to maintain existing interoperability, the SoS is constrained to the existing waveform. This type of collaboration is as much a technical effort as a programmatic one. While contracting rules and International Traffic in Arms Regulations (ITAR) can make SoS coordination difficult, if joint and coalition coordination is accounted for at the beginning of a life cycle rather than toward the end, then the benefits and return on investment will be significant.
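A minimal sketch of the least-common-denominator constraint: the waveforms usable across an SoS are the intersection of what every participating program supports. The program names and waveform sets below are hypothetical.

```python
# Hypothetical SoS participants and the waveforms each can support.
sos_participants = {
    "Program A (new radio)": {"Link 16", "Waveform X", "Waveform Y"},
    "Program B (new radio)": {"Link 16", "Waveform X"},
    "Program C (legacy, no room for new radio)": {"Link 16"},
}

# Interoperability across the whole SoS is limited to what every participant shares.
common = set.intersection(*sos_participants.values())
print(f"Waveforms usable across the entire SoS: {sorted(common)}")
# Only the waveform the least capable participant supports survives the intersection.
```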

The Defense Department has several successful examples of this type of coordination, such as the Air Operations Community of Interest (AO COI) and the Digitally Aided Close Air Support (DACAS) Working Group. However, these mission-focused efforts are the exception rather than the rule. Defense leadership must consider ways to incentivize services and programs to institutionalize mission-focused collaboration with joint and coalition partners.

In the current constrained fiscal environment, services and programs increasingly are forced to minimize their external coordination and focus only on meeting their internal requirements within schedule and budget. However, if interoperability is viewed through a departmentwide lens, then constrained fiscal times are exactly when leadership must encourage external coordination to reduce overall defense spending and maximize capability delivered to the warfighters.

For mission-based test and evaluation, interoperability testing should include opportunities for low-risk assessments in realistic joint and coalition environments throughout a program’s life cycle. Program managers (PMs) typically are risk-averse when it comes to test and evaluation; no one wants to fail a test. Therefore, assessments that include external participants often occur only after the PM has a very high confidence of meeting all measures.

SoS interoperability, however, is not the result of a single program’s performance and requires a different approach. Regularly scheduled, mission-based risk-reduction events—which are nonattributable—as well as mission-based interoperability test tools that assess the end-to-end interoperability, have proved to be very effective in achieving interoperability across an SoS. These events and tools provide programs with early feedback on their progress toward SoS interoperability at a point in the life cycle when it still is relatively inexpensive to make fixes or code changes. Leadership must encourage participation in low-risk interoperability assessment events and invest in test tools that are more comprehensive than current standards compliance tools.
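One way to picture a mission-based, end-to-end check, as distinct from a single-program compliance test, is to walk a mission thread across every system in the chain and flag the hop where the exchange breaks. The mission thread, systems and data below are invented for illustration.

```python
# Invented mission thread: each hop names a producer, a consumer, and a check
# on whether the exchanged data survives the hop intact.

def check_hop(producer: str, consumer: str, sent: dict, received: dict) -> bool:
    ok = sent == received
    print(f"{'PASS' if ok else 'FAIL'}: {producer} -> {consumer}")
    return ok

if __name__ == "__main__":
    sent = {"target_id": "T-042", "lat": 31.2, "lon": 65.7}
    # Simulated hops; in a real event each hop would involve live systems and gateways.
    hops = [
        ("Sensor platform", "C2 node", sent, sent),
        ("C2 node", "Strike platform", sent, {"target_id": "T-042", "lat": 31.2}),  # field dropped
    ]
    results = [check_hop(*hop) for hop in hops]
    print("Mission thread interoperable end to end:", all(results))
```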

System training is important because proficiency is gained through familiarization and training. The individuals joining the military today have grown up in the digital age and are comfortable using digital communications and technology. However, anecdotally, endless examples persist of fielded communications systems failing because of planning shortfalls or configuration errors. Providing operators with increased hands-on time during training and rehearsal to become proficient at the proper and intended usage of a system will reduce the number of operator errors experienced during operations. With the scheduled drawdown in overseas operations—as well as the widespread prevalence of fielded technology and digital communications—now is a perfect time for leadership to invest in increased system training.

Marsha Mullins is a systems engineer in the Joint Fires Division of the Joint Staff J-6 Deputy Directorate for C5 Integration.

Comments

Ms. Mullins has written a very good article that emphasizes the need for cross-service interoperability, but I think there is a need to point out what I feel are some oversights in Ms. Mullins' article. From a systems engineering perspective, one has to acknowledge the critical role that requirements definition plays in program execution. Key among the steps necessary to establish initial requirements is the need to define the mission, the environment and the performance criteria that the system will operate in and to. Citing DACAS as an example of cross-service interoperability could be misleading, because in that instance you have two services executing the same mission in practically the same environment. In the photograph below, if it weren't for the camouflage pattern, you couldn't tell the Soldiers from the Marines. Therefore the solution set is going to be nearly, if not exactly, identical. One of the most cited reasons for program failure is improper definition of mission, environment or both during the requirements phase. After all, even though the Marines and Soldiers have different management structures, they were able to arrive at an interoperable system because they developed it using the same requirements derived from the mission, environment and performance criteria.

After one has established a set of requirements, one develops an architectural framework to satisfy them. Then one defines a preliminary design from that architecture, which is a set of models representing various aspects of the system that satisfy the requirements. The StdV-1 (the list of standards derived from the architectural models within the architectural process) comes at the end of the system architecture development process. Key models preceding it are the OV-2 (which system node needs to talk to which other system node and why), the OV-3 (a compilation of OV-2s in tabular format, plus information germane to different aspects of information exchange between nodes), the OV-4 (system organizational structure), the SV-2 (how inter-node communication should be accomplished per mission and environment requirements) and the SV-6 (the OV-3 plus all the SV-2 material, which should leave you with a picture of information transport within the system). If one has established an unvalidated and unverified technical solution prior to developing a requirements and architectural baseline from the environment and mission, then one is breaking the continuity of the systems engineering process and increasing the risk associated with the program.
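A rough sketch of the traceability described above, with each OV-2 needline traced through the SV-2 interface that realizes it down to the StdV-1 standards it depends on. The needlines, interfaces and standards listed are placeholders, not drawn from any real architecture.

```python
# Placeholder traceability from OV-2 needlines through SV-2 interfaces to StdV-1 standards.
needlines = [
    {
        "ov2_needline": "Forward observer -> Fires cell: target data",
        "sv2_interface": "Tactical radio net (data)",
        "stdv1_standards": ["VMF (example)", "MIL-STD-188-220 (example)"],
    },
    {
        "ov2_needline": "Fires cell -> Strike aircraft: mission assignment",
        "sv2_interface": "Tactical data link",
        "stdv1_standards": ["Link 16 / MIL-STD-6016 (example)"],
    },
]

for n in needlines:
    print(f"{n['ov2_needline']}\n  realized by: {n['sv2_interface']}"
          f"\n  standards:   {', '.join(n['stdv1_standards'])}")
```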

If systems are intended to be integrated, they must have a common reference point or architecture to trace validated technical characteristics and functions to verified requirements. If one establishes a design from the architecture, then, from the relationships established in the architecture, one can find the best way to bridge the differences between systems so they exchange information in a manner most consistent with the requirements for interoperability. As Ms. Mullins points out in her article, even systems with the same standards and protocols can differ greatly in performance depending on how they're used in the environment and mission. However, if the interface between systems is completely understood and quantified (via the SV-2 and OV-2), then transferring information between the systems should be a simple matter of bridging the differences between connection interfaces. We must trace how and why the standards and protocols are used with respect to the architectural models (SV-6, SV-2, OV-4, OV-3 and OV-2). If, however, one tries to pound a square peg into a round hole, then each time the square peg impacts the edges of the round hole it will distort in an infinite number of variations depending on the applied force and initial conditions. Likewise, if exposed to an endless number of non-validated or non-verified environments, missions and performance requirements, technical solutions will distort into an unending number of unique, non-standard, non-interoperable, probably proprietary and marginally performing solution sets.

I couldn't agree more, Mike. In the paragraph where I briefly discuss requirements validation, I didn't go into detail about the Coordinated Implementation process. The CI process starts with and relies on an initial "Architecture Driven Analysis" of the mission area where the architectures you describe above are developed, validated, and analyzed to capture the Joint/Coalition environment requirements (as opposed to system-centric architecture). Thanks for taking the time to read my article and comment.

I would challenge the DoD not only to think in terms of mission-based testing but also to think in terms of what many in the cloud community refer to as "brutal standardization." We have seen this type of philosophy begin to manifest in things like Defense Enterprise Email (DEE) or the Defense Enterprise Portal Service (DEPS), in which all the Services receive the same capabilities, greatly reducing interoperability risk and saving the Department huge sums of money in servers, software and support. With the Joint Information Environment (JIE) set to tackle brutal standardization at the enterprise level for infrastructure, and the National Information Exchange Model (NIEM) providing a solution for data (at Federal, State, Local and Tribal levels, no less!), we are approaching the ability to develop on-the-fly missions that will keep up with an ever-changing adversary. Ms. Mullins has it right about today's military growing up in the digital age. By the time someone has developed an SV-6 to show the exchanges to make an online purchase, these guys have already bought their Apple TV and are watching the latest episode of Big Bang Theory.
