Review of “Select Controls for the Information Security of the Ground-Based Midcourse Defense Communications Network,” Defense Department, Office of Inspector General, February 24, 2006.
In testimony to the Senate on May 10, 2006, Lt. Gen. Henry “Trey” Obering, head of the Missile Defense Agency (MDA), spoke glowingly about the communications network being established for the system tasked with protecting the U.S. mainland against an intercontinental ballistic missile attack. According to Obering, “The global command and control foundation that we’ve established is unmatched in the world.” But the Defense Department’s own Office of Inspector General (IG) would probably disagree. Just three months before Obering’s boasts, the IG took the defense system’s command and control network to task.
The ground-based midcourse defense (GMD) system is an ambitious long-term project that consists of interceptors in Alaska and California; sensors in California and the Pacific Ocean (and soon in Fylingdales, Britain, and Thule, Greenland); and several command centers across the continental United States, as well as Alaska and Hawaii. Eventually, it will have a dedicated satellite network. The system spans 11 time zones, cuts across three combatant commands, and involves three branches of the military. The GMD Communications Network (GCN) must link all these elements together–an incredibly complex, and essential, task.
Given that the GCN controls the Bush administration’s missile defense system, the flagship of its national security plan, one might think that the network itself would be secure. But just the opposite appears to be true. In its audit, the inspector general revealed that MDA officials “had not fully implemented information assurance controls required to protect the integrity, availability, and confidentiality of the information in the GCN.” As a result, “Missile Defense Agency officials may not be able to reduce the risk and extent of harm resulting from misuse or unauthorized access to or modification of information of the GCN and ensure the continuity of the network in case of an interruption.” In other words, the system could be hacked–outsiders could enter the network, change or delete data, or leak classified information–and MDA would not know about it, be able to respond effectively, or, apparently, prevent it from happening again.
The report attributes these failings to a cascade of human errors. The GCN was officially intended to be built to meet information security standards dating from 1985. As if aiming for standards created years before the information revolution took place wasn’t bad enough, MDA implemented a set of standards from an entirely different directive. Contractors for the GCN told auditors that it would have been too costly to go back and modify the system. To this, the report rather acidly noted, “Security requirements cannot simply be waived based on cost.”
Further degrading the stability and security of the network, the GCN’s two types of equipment–encrypted and unencrypted–were built by two different contractors who apparently worked at cross-purposes and did not follow a common set of security procedures. “Information assurance” (IA) officers–charged with making sure that users of the system have the correct level of clearance, that those accessing the system actually have a need to do so, and that users are aware of network security standards–were often unaware of their responsibilities or even that they had special duties. Curiously, many of the officers did not learn of those IA responsibilities until MDA began developing IA policies in June 2005, after the National Security Agency had completed its own audit of the system, and well after the GCN’s creation in January 2001.
The GCN is supposed to have an automated audit of its network–a security feature that most basic office networks have. However, MDA officials told the investigation team that their equipment was incapable of supporting an automated audit. Instead, they claimed that their contractors did weekly manual exams. But the contractors complained that manual audits were so “cumbersome and time-consuming” that they rarely did them–and even then, the contractors acknowledged that such audits were not guaranteed to detect all security violations.
An undated draft version of the IG’s audit was far more scathing than the final report, noting that the system had category I deficiencies (defined as problems which “must be corrected before the system can become operational or continue to operate”) and category II deficiencies (those which “must be corrected within a specified time period in order to continue system operations”). “MDA officials should immediately cease operations until all category I and category II issues are mitigated,” the draft report advised, and prepare a plan of action “to identify the solution, schedule, security actions, and milestones necessary to correct the security weaknesses.”
Overall, the two reports came to the same conclusions, but the draft version was more specific in its criticisms and more drastic in its suggested plan of action to deal with the network security vulnerabilities. By contrast, the final version of the report simply warns that hackers could defeat the GCN and that the MDA cannot ensure the sanctity of the GMD information and systems. This is not unexpected, as the draft version may have been deemed a little too sensitive for public consumption. Or perhaps there are those in the Pentagon who would prefer softer criticism of a program already plagued by technical delays and cost overruns. Even so, the final watered-down assessment raised some eyebrows. Federal Computer Week ran a story on the report on Thursday, March 16, 2006. By the following Monday, the IG issued a statement: “The Missile Defense Agency requested that we remove this report from our website pending a security review.” The report is now back on the IG’s website, but its temporary absence speaks to the gravity of the network’s security vulnerabilities.
The IG’s report, while perhaps embarrassing to the MDA, could not have been much of a surprise. As early as April 2003, the MDA recognized that there were weaknesses in its software network. In a presentation at the MDA Southeastern Software Engineering Conference, then-Brigadier General Obering briefed the audience on the MDA’s experience with excessive schedule pressure, changing requirements, inadequate test specifications, and insufficient engineering. Obering spoke specifically about a limited understanding of the software and the absence of a software architect. He even presented ways in which he said the MDA was fixing the problems. If the MDA had followed through with those fixes, the IG’s office might very well have come to a different set of conclusions.
But in the problem-plagued quest for national missile defense, securing the GCN from external meddling is not the sole issue–or even the most troublesome one–facing the MDA. The final IG report underlines the importance of password control in noting that MDA officials believed “the greatest risk to the GCN system was the insider threat.” Unfortunately, if the MDA’s track record in network security is anything to judge by, it’s far from certain that the GCN will be secure from either the inside or the outside.