Hearing Of The Defense Acquisition Reform Panel Of The House Armed Services Committee - Measuring Performance: Developing Good Acquisition Metrics

Statement

Location: Washington, DC

Hearing Of The Defense Acquisition Reform Panel Of The House Armed Services Committee

Subject: Measuring Performance: Developing Good Acquisition Metrics

Witnesses: J. David Patterson, Executive Director, National Defense Business Institute, University of Tennessee; David Fitch, Director, AT&L Leadership Learning Center of Excellence, Defense Acquisition University; Daniel A. Nussbaum, Visiting Professor, Department of Operations Research, Naval Postgraduate School

Chaired By: Rep. Robert Andrews (D-NJ)

Copyright ©2009 by Federal News Service, Inc., Ste. 500, 1000 Vermont Ave, Washington, DC 20005 USA. Federal News Service is a private firm not affiliated with the federal government. No portion of this transcript may be copied, sold or retransmitted without the written authority of Federal News Service, Inc. Copyright is not claimed as to any part of the original work prepared by a United States government officer or employee as a part of that person's official duties. For information on subscribing to the FNS Internet Service at www.fednews.com, please email Carina Nyberg at cnyberg@fednews.com or call 1-202-216-2706.

REP. ANDREWS: (In progress) -- very diligent preparation, we've had the opportunity to review the testimony ahead of time. It looks like we're going to have a very engaging and meaningful discussion this morning.

This is the last of our series of hearings looking at the first question that we're going to be looking at in our inquiry. The members will recall that we're proceeding on a series of questions, the first of which is: Can we design a series of metrics that accurately measure the difference, if any, between the price paid by the taxpayers and the value received by the taxpayers and the war-fighters for the systems and services that we are buying?

We're going to proceed after today's hearing with our second category of questions, which is really hypotheses as to what's gone wrong. We're going to have a series of panels talk about their theories and analyses of why we have a difference between the price paid and the value received.

The first of our two panels on this metrics question dealt with measuring the sort of orthodox algorithms that are used. On the major weapon system side, we had a panel on that. And on the services side, we had a panel on that.

The purpose of today's panel is to bring in some people who we think perhaps look at the whole question through a different prism. They're willing to give us some new perspectives through which we can analyze the difference between the price we pay and the value that we receive.

We have three witnesses with a wealth and breadth of experience in the acquisition field, but also who I think benefit from what I would call a healthy distance from the daily responsibilities for that function so they can give us a perspective that marries experience with a fresh perspective. And hopefully we'll be able to use the testimony of these witnesses to go forward, draw some conclusions about the metrics to use in our work, and then proceed with our work in analyzing the various hypotheses given for why we have suffered these cost overruns.

I also wanted to take a moment and thank Mr. Conaway, who can't be with us this morning but who's ably represented by Mr. Coffman, and all the members of the panel for their excellent contribution to the work on the weapon systems acquisition reform bill, which has passed the House. We'll be meeting with the Senate in formal conference at 4:30 this afternoon to, I believe, put the finishing touches on that conference report, so we'll be able to proceed to the floor in short order on that and get the bill on the president's desk. Each member of the panel has made a very significant and welcome contribution to that effort.

Having said that, as the chairman and Mr. McHugh have said, we think that that effort covers about 20 percent of the landscape. It looks at major weapons acquisition, which is certainly needed to be examined, but there are so many other areas that fall outside of that that the panel has to pursue, as well as, frankly, reviewing the early-stage implementation of the law that we believe the president will sign this week.

So we are, by no means, at the conclusion of our work. We're really at the outset of it. And one thing we'd ask the witnesses to do this morning is to think about the fact that although the Congress is about to pass major legislation dealing with major weapon systems, we've not yet addressed hardware that doesn't fall into the major weapon systems category, or the 55 to 60 percent of acquisition that is services, which we have to look at on behalf of those who wear the uniform and on behalf of the taxpayers.

We're glad that you're here.

And at this time I want to turn to Mr. Coffman for his opening statement, and we'll proceed with the witnesses after that.

REP. MIKE COFFMAN (R-CO): Good morning, Mr. Chairman, ladies and gentlemen.

I would like to extend a welcome on behalf of Ranking Member Conaway as well. He is sorry he could not be with us today.

I would like to thank the chairman for allowing me to take a few introductory remarks in Representative Conaway's place.

Today's hearing is an appropriate follow-on to the panel's two hearings which focused on how DOD and GAO currently assess performance on major weapon systems and service contracts. The purpose of this hearing is to think outside the box about how we should measure performance and about what we should be measuring, and less about how we are doing it today.

Our previous hearings revealed that current measures of performance tend to break down if the program baseline is unrealistic. We want to know if the program can be corrected -- if the problem can be corrected. Furthermore, are there metrics beyond cost and schedule performance that are of value, such as how closely does the delivery system meet actual war-fighter needs? Does the time of delivery of operational capability satisfy war-fighter needs? How do we determine whether optional or tradable capabilities requested by the war-fighter are affordable?

Mr. Chairman, we have a distinguished panel of witnesses in front of us today. I have looked at their written testimony and I look forward to their testimony.

There are a couple of points I would like to highlight. Although Mr. Dillard is not testifying today, he did submit a written statement that I would like to briefly comment on. He states that, quote, "The proliferation of autonomous fiefdoms within the department continues to increase, with each being a stovepipe of oversight expertise, imposing unique reporting requirements, assessments and reviews."

He goes on to state that in regards to adding additional acquisition workforce professionals that, quote, "These new people should not be housed in the Pentagon, but instead where the execution of programs occur," end quote. Mr. Chairman, I think this is an important point that I believe we need to follow closely.

Another point or observation I would like to make is in regards to requirements in joint programs. We have many programs out there that have the word, quote-unquote, "joint" in front of them, but they are joint in name only. I understand that the current requirements generation process is called the Joint Capabilities Integration and Development System, or JCIDS. It seems to me that in order to have a truly joint program, the requirements for that program must be born joint. Yet I do not believe our current system or, as Mr. Dillard describes, current fiefdoms, foster such an environment. So I would be interested in hearing from our witnesses in this regard.

I also encourage our witnesses to share their views on existing laws and regulations that are particularly helpful or not helpful to the department's efforts to obtain the best value and capability for our war-fighters. We have heard from the department in two separate hearings about how they currently measure performance and value on contracts. What we haven't heard enough about is how they should be measuring value. Consequently, your input will be greatly appreciated.

With that, I conclude and again thank you, fellow members, and thank you, Mr. Chairman. I look forward to the witnesses' testimony.

REP. ANDREWS: Mr. Coffman, thank you.

Without objection, the statement of any other member of the panel will be included in the record. And without objection, the statement of each of the three witnesses plus the supplemental fourth statement will be included in the record of the hearing.

I think each of you are veterans at this process. You know that we generally have a five-minute rule where we ask for succinct summaries of your written testimony. We're going to be a little bit more liberal with that this morning. But then we're going to turn as quickly as we can to the questions from the panel and get into the give and take.

I'm going to read a brief biography of each witness, and then, Mr. Patterson, we're going to begin with you.

Mr. Dave Patterson is the executive director of the National Defense Business Institute, which he's establishing at the University of Tennessee in the College of Business Administration. It's an institution inspiring business innovation for both government and industry by providing practical, sound assistance and creating economically efficient and effective defense business and acquisition programs.

Prior to taking his current duties, he was the principal deputy undersecretary of Defense comptroller. As the principal deputy, he was directly responsible for advising and assisting the undersecretary of Defense with development, execution and oversight of the DOD budget, exceeding $515 billion, with annual supplemental requests of more than $160 billion.

From August 2003 to June 2005, Mr. Patterson held duties as the special assistant to the deputy secretary of Defense. In that capacity, he was responsible for managing the deputy secretary of Defense's personal staff, as well as providing direction and advice to the Office of the Secretary of Defense staff on a wide range of national security operations and policy subjects.

He served in the Air Force from 1970 to 1993, retiring at the rank of colonel. During that time, he held responsible leadership and management positions with assignments at the air wing level as a C-5A aircraft commander and deputy operations group commander, at major command headquarters, U.S. Air Force, the Office of the Chairman, Joint Chiefs of Staff, the Office of the Secretary of Defense and the Inspector General. In 1986 he was the Air Force fellow at the American Enterprise Institute, and he served in Vietnam, flying O-2As as a forward air controller.

Thank you for your service to our country, Mr. Patterson; glad to have you with us.

Mr. Fitch, Mr. David P. Fitch, enlisted in the Navy in 1966. He was commissioned in 1968 after graduation from San Jose State College and Aviator Officer Candidate School in Pensacola, Florida. In 1969, after fixed and rotary wing flight training, he was designated a naval aviator and a distinguished naval graduate. He retired from the Navy in 1998 as a captain following a career that included three operational and major acquisition command assignments.

Mr. Fitch culminated his tour in the Navy as the major program manager for the International and Joint Multifunctional Information Distribution System. He was a 1997 recipient of both the David Packard Excellence in Acquisition award and the Department of Defense value engineering award.

In 2001, after nearly three years in the defense industry, Mr. Fitch joined the faculty of the Defense Acquisition University, teaching and consulting with acquisition executives in program management and other acquisition disciplines.

In 2006, he led a major independent study of the Coast Guard deepwater program. In July of 2008, he assumed the position of director, AT&L Leadership Learning Center of Excellence, after nearly seven years as the dean of the Defense Systems Management College.

Mr. Fitch holds degrees in business and industrial management from San Jose State College, an M.S. in education from the University of Southern California, and he is a graduate with highest distinction from the Naval War College in Newport, Rhode Island.

Thank you, Mr. Fitch, for your service to our country, and we're glad you're with us this morning.

Dr. Daniel Nussbaum, from 2004 to the present, has been a visiting professor at the Naval Postgraduate School Operations Research Department. From 1999 to 2004, he was a principal at Booz Allen Hamilton, responsible for a broad range of cost, financial and economic analyses with clients across the government and commercial spectrum.

From 1996 to 1999, he was the director of the Naval Center for Cost Analysis at the Office of the Assistant Secretary of the Navy for Financial Management and Comptroller here in D.C. From 1987 to 1996, he was division head of the Naval Center for Cost Analysis. From 1982 to 1986, he was deputy director and acting director for operations research and cost analysis divisions of the Naval Air Systems Command in Washington, D.C.

Again, a very distinguished servant of our country; three excellent people with experience and fresh insight. We're happy to have you with us.

Mr. Patterson, we'd like you to begin with your testimony.

MR. PATTERSON: Thank you, Mr. Chairman, members of the Defense Acquisition Reform Panel. I'm very pleased to be here this morning to participate in a discussion of a question that has clearly captured the attention of the current administration and Congress: How should Congress assist the Department of Defense in improving its acquisition of weapons and services so that it can meet the needs of the war-fighter in the field while still being a good steward of the taxpayers' dollars?

The first consideration for judging the success of an acquisition program is whether it fielded a weapon system or information system or service in time to make a positive impact for the war-fighter. A system or service fielded too late to need may as well not have been bought at all. The phrase "too little, too late" can mean lost lives.

But before we look at the measures of the acquisition system's merit, there is another consideration central to this discussion. When Secretary Gates made his budget announcement on April 6, 2009, I believe he was speaking from a frustration that was as much about the persistent problem of an acquisition system that is simply not responsive to immediate war-fighter needs as it was about winnowing bloated, failed and unnecessary programs.

Implicit in that expression of frustration is a clear lack of confidence in a system that actually produces programs with uncertainty and instability. The most dramatic improvement metric will be when the senior leadership in the administration, Congress and the Department of Defense have seen such improvements -- results, not words -- that they can say they have renewed confidence in the stability, predictability and effectiveness of the defense acquisition system.

The Defense Acquisition Performance Assessments report contended that program stability and predictability were singularly and uniquely crucial to managing programs that were on cost, on schedule and performing. To that end, in the time I have, allow me to describe two areas of improvement for measuring program effectiveness worthy of attention.

First, Major Defense Acquisition Programs, or MDAPs, often start at milestone B, the beginning of engineering and manufacturing development, with critical staff positions vacant. Percentage of critical staff positions filled at milestone B is an easy and important metric to be observed. It makes little difference to implement programs to raise the level of skills of the program staff if they're missing in action.

Second, the Acquisition Strategy Document -- that is, to lay out how the weapon system is to be acquired -- the initial road map, if you will -- is often flawed in that it focuses more on presenting a case for required capabilities and quantities than on laying out the reasoning for acquisition competition methodologies.

For example, how the prime contractor participants in an MDAP competition will select subcontractors and how the winner of the competition will manage the subcontractors to gain improved efficiencies and effectiveness are generally given little consideration.

Creation of the Acquisition Strategy Document is one of, if not the most, important tasks that government acquisition program management can undertake. The strategy should establish the template for all the activities that will take place throughout the source selection process -- engineering, manufacturing, development and follow-on production and fielding.

More important, it establishes how the program management team is thinking about the numerous events and activities that a program will encounter. The defense acquisition executive should establish a common set of strategy elements that all the military department service acquisition executives must include in MDAP acquisition strategy documents. Additionally, a set of standards or metrics by which the strategy elements can be evaluated as effective must be part of that process.

In closing, I'd be remiss if I didn't acknowledge the progress that has been made by the department in improving the acquisition system over the last four years. Though it is the Government Accountability Office's headline that the 96 major acquisition programs have grown in cost by $296 billion that gets attention, those numbers belie an equally worthy but overlooked statistic published in the very same report: The average increase in unit cost of the 20 MDAP programs with less than five years since development started is only 1 percent, compared with an average 55 percent increase in acquisition unit cost of the 25 programs in the group with five to nine years since program development start. There has been important improvement that should be recognized.

With that, it's my privilege to be here. Thank you very much. And I'd be happy to take any questions that you have.

REP. ANDREWS: Thank you, Mr. Patterson.

Mr. Fitch.

MR. FITCH: Chairman Andrews, Congressman Coffman, members of the panel, thank you for the opportunity to appear here today. I will address the subject of acquisition performance metrics and your questions about how to increase the realism of program baselines, how to make trades between affordability and performance, and how to assess the value of systems. Please recognize that my opinions do not necessarily reflect the views of the Defense Acquisition University, the Department of Defense or the administration.

REP. ANDREWS: That's why we're interested. (Laughter.)

MR. FITCH: I suspected as much.

Measurement of acquisition performance must encompass both strategic and tactical elements. As emphasized in a recent Defense Science Board report titled "Creating a DOD Strategic Acquisition Platform," the management, execution and oversight of acquisition programs is moot if we aren't spending taxpayer dollars to buy the right capabilities, if we aren't demonstrating strategic choice. It is as important to decide what capabilities we won't buy as what we will buy.

I believe one of the root causes of funding instability is what I described as too many programs chasing too few dollars. Too many programs chasing too few dollars is one of the root causes, I believe, of overly optimistic cost estimates.

The recently implemented Materiel Development Decision process provides a framework for making strategic investment decisions. The MDD is the formal point of entry into the acquisition process. MDD will increase integration of the three major decision support systems -- requirements, resources and acquisition.

Improving requirements management, an initiative supported by the Congress, includes providing training to requirements writers and managers to ensure that they have a sufficient understanding of the critical elements of acquisition, such as systems engineering and testing. The goal is to improve collaboration between acquisition and requirements communities to exploit cost and performance trades and improve acquisition outcomes.

Having a formal requirements and acquisition process is not unique to the DOD. We can learn important lessons from commercial industry. If you compare the DOD acquisition system with the process to develop electronic games, there are marked similarities and differences.

Notably, the process to get games on the shelves for the December holiday season starts with a precise clarity of what will be developed by when and includes a corporate commitment to the resources required for the project. Precise clarity is a result of intense interaction between the people that define the capabilities of the game and the people that will design and test the software.

Turning to the subject of tactical acquisition metrics, the most effective tools and templates incorporate metrics, both quantitative and qualitative. The question was raised, are there metrics beyond cost and schedule performance that are of value? The answer is yes. And an ongoing example is the probability of program success metrics, POPS, that are currently being developed and deployed across the services and other federal agencies, such as the Department of Homeland Security.

Starting with a blank sheet of paper, a group of DAU faculty comprised of experienced program managers and other functional experts asked themselves questions, including "What conditions facilitate the success of programs? What metrics are leading indicators of derailment?"

The resulting tool, POPS, uses a structured process to continually assess and display key elements of planning, resourcing, execution and external influences that promote or negatively impact program success. Still evolving, POPS is being used in the Army, Air Force, Navy, Marine Corps and Coast Guard. Timely, accurate and transparent metrics, integrated in the management and oversight process, will produce better program outcomes.
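[The POPS roll-up Mr. Fitch describes can be illustrated with a hypothetical scorecard sketch. The factor names, weights and scores below are invented for illustration only; the actual POPS model used by the services is more detailed.]

```python
# A hypothetical sketch of a POPS-style scorecard: weight a handful of
# factor scores (0-100) in the four areas named in the testimony --
# planning, resourcing, execution and external influences -- and roll
# them up into a single probability-of-success figure. The factor names
# and weights here are invented, not the actual POPS model.

# (factor, weight, score 0-100) -- weights must sum to 1.0
factors = [
    ("planning: requirements stability",  0.30, 80),
    ("resourcing: budget sufficiency",    0.30, 65),
    ("execution: cost/schedule variance", 0.25, 70),
    ("external: stakeholder advocacy",    0.15, 90),
]

# Sanity-check the weights before rolling up the score.
assert abs(sum(w for _, w, _ in factors) - 1.0) < 1e-9

pops_score = sum(w * s for _, w, s in factors)
print(f"probability of program success: {pops_score:.1f}/100")  # 74.5/100
```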

Another question has been asked: "Can we ensure improved, realistic baselines?" Again, I believe the answer is yes, and there are ongoing initiatives that will yield more realistic baselines. These include increased emphasis on technology readiness before starting major development, emphasis on improved cost estimating, which I'm sure we'll talk about today, and competitive prototyping, which is included in the new DOD 5000.02 and pending legislation.

Prototyping increases the opportunity to identify and assess affordability and capability -- (inaudible). Competitive prototyping also allows the government to observe the performance of competing industry teams before making a down-select for engineering and manufacturing development.

No matter how thoughtfully we plan and discipline source selection, a paper-only source selection process is never as good as demonstrated performance. The ultimate assessment of whether we have delivered value and needed capability to the war-fighter is feedback from the field, from the war-fighter.

At various times I have seen photos of messages written with felt-tip pens on trucks and personnel carriers that have been returned to the government or industry depots. One of those signed messages -- a picture is in my written testimony -- reads, "This truck saved my life as well as five others, 2 April '08, at 2300 (Lima ?), Basra, Iraq."

Those kinds of testimonies, those kinds of results, are what those of us in the acquisition community aim to achieve on a daily basis by our efforts and commitment.

Before equipment is fielded, it undergoes rigorous levels of development and operational testing, and the new DOD 5000 has increased emphasis on earlier testing.

Mr. Chairman, thank you very much for the opportunity to participate in today's discussion. As the secretary of Defense has said, there is no silver bullet. It will be a combination of initiatives and collaboration among all the stakeholders to create an acquisition system that successfully produces successful acquisition programs.

I look forward to your questions.

REP. ANDREWS: Mr. Fitch, thank you very much for your testimony.

Dr. Nussbaum.

MR. NUSSBAUM: Mr. Chairman, distinguished members of the panel, I would like to thank you for this opportunity to discuss my thoughts on how to improve the acquisition and cost estimating processes in the Department of Defense. These ideas are mine alone.

As the chairman said, I'm a member of the faculty at the Naval Postgraduate School in Monterey, California and I've spent the last thirty years mainly doing and more recently teaching and researching in the defense acquisition management system, with a focus on cost estimating. I was a previous director of the Naval Center for Cost Analysis and past president of the Society of Cost Estimating and Analysis.

All my experiences in cost estimating confirm that three things are necessary for sound cost estimating: Acceptance of the underlying uncertainties in predicting the future; accurate and plentiful historical data; and professionally trained and certified personnel.

On uncertainty: There's intrinsic uncertainty in all estimates. It derives from several sources, mainly that we're usually designing or building or operating something that is substantively different from what we did before. The difference could be in the product or the economic conditions or the programmatic conditions.

An estimate reflects our knowledge at a point in time when we freeze the problem and base the cost on the configuration and programmatics as they're understood at that time. From that baseline, many things can change that could also change the cost estimate, including labor rates, overhead rates, schedules, enhancements to the capabilities or quantities, and changes when a particular technical solution to a problem doesn't work as planned and we need an alternative technical solution.

On data: A hallmark and necessary characteristic of a sound cost estimate is that it is based on historical program performance from similar or related on-going or past programs. Historical data is variable -- not every aircraft costs the same as every other aircraft -- and the measurement of this variability is accomplished through statistical constructs. Things like "standard error of the estimate"; things like "confidence intervals".

We assume in our community of cost estimating that the patterns of the past will repeat in the future. But these patterns are almost always statistically grounded patterns, modeled with the powerful and subtle techniques known collectively as "regression". Further, we know of no alternative approach to using the past as a guide to the future, if we want a scientific -- that is a reproducible and auditable -- approach.
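[The regression-based approach Dr. Nussbaum describes can be sketched in a few lines. The data points below are invented for illustration; a real estimate would draw on actual historical program data and more sophisticated models, often fit in log space.]

```python
# A minimal sketch of regression-based cost estimating: fit a simple
# ordinary-least-squares line to notional historical programs, then
# report the "standard error of the estimate" and a rough interval
# around a new prediction. All numbers here are invented.
import math

# Notional history: (empty weight in thousands of lb, unit cost in $M)
history = [(20, 48), (35, 90), (50, 118), (65, 160), (80, 185)]

n = len(history)
xs = [x for x, _ in history]
ys = [y for _, y in history]
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares: cost = a + b * weight
b = sum((x - mean_x) * (y - mean_y) for x, y in history) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

# Standard error of the estimate: residual spread with n - 2 degrees
# of freedom, the variability measure the testimony refers to.
sse = sum((y - (a + b * x)) ** 2 for x, y in history)
see = math.sqrt(sse / (n - 2))

# Point estimate and a crude +/- 1.96-sigma band for a 60-klb design
x_new = 60
point = a + b * x_new
print(f"estimate: ${point:.0f}M, +/- roughly ${1.96 * see:.0f}M")
```

A proper confidence interval would also widen with distance from the historical mean; this sketch only shows the residual-spread term.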

Not all estimating is done by government employees. There really are three subcommunities. One is the government in-house estimators; one is employees of the large vendors who also design, develop and build what we buy -- Boeing, Northrop, Lockheed Martin, for example; and thirdly, there are support contractors or consultants to the government. Those are the three communities. And surely, we need to increase the capacity and quality, the numbers and training, of the government estimators, but we also need to enhance the professionalism of the other two communities.

There are currently no undergraduate curricula in cost estimating. And there are only four educational institutions that I am aware of that teach at least one course in cost estimating and those are The Naval Postgraduate School in Monterey, where I am; The Air Force Institute of Technology at Wright Patterson in Ohio; Defense Acquisition University -- diverse locations, with the capital campus at Fort Belvoir; and the Massachusetts Institute of Technology, which offers one elective course in cost estimating within its engineering curriculum.

The recent separation of the business/cost estimating and financial management career field into two separate tracks -- cost estimating and financial management -- is a very welcome development and should be supported. But note that DAU support is largely limited to military and DOD personnel, not the other two communities.

The Society of Cost Estimating and Analysis -- we say SCEA -- whose membership includes approximately one-third of all cost estimators supporting DOD -- is a central and indispensable player in the training, initial certification and periodic recertification of cost estimators.

I note with pleasure that the executive director of SCEA, Mr. Elmer Clegg, is in the room.

SCEA has collected a body of cost estimating knowledge that it provides to members of the cost estimating community, provides training in cost estimating, has developed and offers an examination and experienced-based certification program. Lockheed Martin, Boeing, Northrop Grumman -- other vendors -- use SCEA's training and certification as their standard.

I appreciate very much what this committee seeks to accomplish, Mr. Chairman. This concludes my prepared statement and I would be pleased to answer your questions.

REP. ANDREWS: Well, thank you, Dr. Nussbaum.

And I thank each of the three witnesses for their testimony. As I say, we've had the chance to review the written testimony. We're now going to get to the questioning phase.

Mr. Patterson, I was intrigued by your reference in your written testimony to two kinds of requirements costs, which you express as customer requirements and derived requirements. What's the difference between the two?

MR. PATTERSON: Customer requirements are established in key performance parameters, which are requirements that are established for a particular weapons system that the weapons system must then perform against.

Derived requirements are those requirements that the customer did not ask for, but that -- as the name would suggest -- derive as a consequence of the design process -- oh, look, we could do this better if only we -- and of course, that's taken to the program manager. The program manager will say, okay, we can do that. Just bring money. And they do.

REP. ANDREWS: Now, I notice on page seven of your testimony that you say that a recent study prepared by the Monitor Company Group, based on selected acquisition report data, estimates that approximately 33 percent of the cost growth from 2000 to 2007 -- I think I read this -- is attributable to this second category of requirements that you're talking about, right? The derived?

MR. PATTERSON: That's correct. Yes, sir.

REP. ANDREWS: Now, how was that number reached? Where did that 33 percent come from?

MR. PATTERSON: That comes from the selected acquisition reports. In fact, they lay that out in the SARs and explain in each of the SARs where the --

REP. ANDREWS: Well, one of the things that we want to do is to make sure that we discriminate between derived changes that are beneficial and those that may be superfluous. How would you suggest that we might do that?

In other words, I don't want to leave the impression that we're saying or the report's saying that oh, that 33 percent was waste.

MR. PATTERSON: Oh, absolutely not!

REP. ANDREWS: How do we draw the line between beneficial cost- effective derived requirements and not so beneficial derived requirements?

MR. PATTERSON: Well, let me give you a real-life example.

REP. ANDREWS: Sure.

MR. PATTERSON: In 1993, I came to the C-17 program. It was a troubled, problem-plagued program.

The two program managers decided that much of the problem was that requirements were growing in the airplane. They established a set of rules. The set of rules was very simple: If you have a requirement or an engineering change, which is effectively a derived requirement, then it must go to an engineering change board. The engineering change board will evaluate the change for its intrinsic merit. But if it doesn't -- (inaudible) -- the safety of flight or other driven requirements, then it has to have a three-to-one payback in savings and not perturbate the schedule.

REP. ANDREWS: Interesting.

MR. PATTERSON: Real simple.

REP. ANDREWS: Now, in terms of the scope of that C-17 program, how much was the derived requirements overrun? In other words, how much was attributable to derived requirements?

MR. PATTERSON: I would only be guessing. I could certainly get that for the record.

REP. ANDREWS: And with this method that you just described, in your judgment, what percent of the derived requirements were beneficial and which failed to make that three-to-one cut and didn't happen?

MR. PATTERSON: Again, let me give you an example.

At Edwards Air Force Base they were testing the airplane. One of the requirements was that one airplane needed to start another airplane, but it started it with two hoses from the pneumatic system. Somebody said, I wonder if it'll start with one hose. It didn't. Oh my gosh! Well, now we have to go and figure out why that was the case. And so that took a considerable amount of test time and money. And as it turned out, they really wanted to have it start with one hose.

I'm sorry -- that would have taken a huge --

REP. ANDREWS: But who was the someone who said that -- not the person's name, but where were they?

MR. PATTERSON: That was the test community out on the ramp at Edwards.

REP. ANDREWS: All right. I'll just -- I'm going to come back for a second round.

But Mr. Fitch mentioned this probability of success metrics program. Has there been built into that a litmus test -- a probability such that if you fall below it, things stop? In other words, does that program have a built-in go or no-go line?

MR. FITCH: It does not have a go or no-go line. It is information that is updated and provided --

REP. ANDREWS: Do you think it should?

MR. FITCH: I think that there needs to be an informed review, and decisions that look at each of the things that occur that are negative -- because some of the things that are in the probability of success metrics and reporting are positive.

So I think it needs to go to the program manager. When that information is put together on a monthly basis, he needs to look at it. It needs to be reviewed by someone in his PEO -- and potentially by someone even at higher authority when there are negative occurrences.

Could I add something about --

REP. ANDREWS: Sure. And then I'm going to go to Mr. Coffman. Sure.

MR. FITCH: Yeah. I just wanted to explain that the derived requirement process is actually part of the systems engineering process. So if, for instance, the warfighter says in an aircraft: I need a display that has color; I need a display that has this amount of resolution, et cetera, the first pass -- even by the warfighter or the cost estimators -- may say, this amount of processing may be sufficient for that.

When you get into, well, by the way, you're going to have these other software capabilities -- you put those together and they start to build on one another -- you can find all of a sudden that the processor that was planned, maybe a commercial off-the-shelf item with X amount of memory, throughput, et cetera, is insufficient.

That point is the point where it would be very useful to have the requirements community have a real dialogue with the acquisition community to say, do I now take away some of the requirements so I can continue to use that commercial, off-the-shelf processor that is less expensive, or are those requirements really important now that we have figured out what they are?

So the point is that derived requirements are a very important process -- it is actually the process where you allocate whether features will go to hardware or software.

REP. ANDREWS: Thank you very much.

Mr. Coffman.

REP. MIKE COFFMAN (R-CO): Thank you, Mr. Chairman.

This is to all of you: It seems like part of the problem is sometimes we're dealing with immature technologies. And I think we've had previous testimony to that effect. That sometimes we're asking the contractors to develop something.

Should we bifurcate the process? In other words, you contract with one entity to develop the technology -- I don't know if you'd go as far as the prototype -- and then, where you can, go into fixed-cost production with another -- maybe that first entity would be allowed to bid on it as well. But is it better to bifurcate the process?

MR. NUSSBAUM: You know, I would say we do that. Now, we have different competitions: we certainly have an R&D contract followed by an acquisition contract, with certainly no guarantee that the R&D contractor will have a follow-on. So in some sense we do that.

But in another sense, the department has now mandated technology readiness levels -- TRLs -- of certain levels before the program can get beyond a milestone. They've narrowed that cone of uncertainty by saying, you have to have a TRL of 6 -- they prefer 7, and GAO prefers 7 -- but you have to pass a TRL of 6 before you go into the next milestone.

MR. FITCH: As a part of the analysis of alternatives -- the process that immediately follows it -- the technologies that would be appropriate for that system, that capability, are assessed for technology readiness. When you get to the point of saying who should develop it, I prefer the concept of competitive prototyping, because you're going to get proposals from industry. They're not all going to have the same strengths. Some are going to want to do this more as a hardware function; others may be more software intensive. There are different technologies involved.

Their proposal, and then watching them deliver on what they promised and said was possible in the competitive prototyping, is key to being able to transition and make an award to the contractor who has proven that they did what they said they could do.

So having a different contractor, if I understood your premise correctly, to develop technology and then award it to somebody differently to develop the system, I think that's not the intent of the competitive prototyping system.

MR. NUSSBAUM: I think too there's a problem that arises when we talk about spiral development. Originally, spiral development and evolutionary development were designed to have a weapons system that effectively was fielded and then have block upgrades to improve the weapons system. In many cases that is not how it works, because you have a parallel R&D program that is working on improvements. And before the weapon is actually fielded, you have the attempt to integrate improvements. Well, the consequence of that often is that you have a stretched-out program, costs escalate, and in the end performance is degraded. And it takes a while, much longer than had been anticipated, for that weapons system to be fielded.

So I think we need to have very specific and clear understood standards by which we will integrate or put in upgrades or technology advances into weapons systems. It certainly can't be during engineering, manufacturing and development.

REP. COFFMAN: I think it's been mentioned repeatedly that the changes drive a lot of the costs as programs go forward. Where do the changes emanate from? I mean, are some of the changes broader in scope -- where the military, the affected branch that will receive and utilize the weapons system, changes its doctrine -- or is it mostly at a very technical level, with just engineering requirements?

MR. NUSSBAUM: I think that changes come from all sorts of places including changes in labor, overhead rates, schedules, requirements, quantity, absolutely everything, just like building a house. Everything that changes has the contractor saying, cement has gone up, brick has gone up, my subs have gone up, or down, but they are always changing. And so there's a great churn. Some of it is part of life and some of it we try to control by saying, Tell us what you're building and we'll cost that program.

But I think it's just intrinsic in the cost estimating process.

REP. COFFMAN: Is there any way to just have more discipline? I mean what would be some of the methods for having more discipline over it? I think some of it is having a change board?

MR. NUSSBAUM: There is something called a CARD, a cost analysis requirements document, which is, to the chagrin of all program managers, how we say three or six months before we go into a milestone: you tell us what you're building, how many, what the technologies are, and we'll cost that program. It may be that things change after that, but that's the program we're going to cost, because otherwise we're chasing our tail and we don't know until the very last moment what we're costing. And in fact it takes time to do a cost estimate. Part of the discipline at OSD is that a cost analysis requirements document will be prepared six months before the milestone, and the lack of availability of that CARD results in a day-for-day slip in the milestone.

So that's real discipline, but it has its obvious downside too.

MR. PATTERSON: I think too that in dealing with the discipline and structure, I submit that it requires rules, and the rules simply have to say that after Milestone B you don't have any more requirements. It's somewhat draconian -- unless, of course, it's a safety-of-flight issue or an obvious design failure that needs to be corrected. But those kinds of things are few and far between, quite frankly. And then this idea that a requirement or a change would have to give you a 3-to-1 or 4-to-1 payback in savings, while not perturbating the schedule, is not a bad idea either.

And as technology moves forward, that's entirely possible over the course of a program.

MR. NUSSBAUM: There are times with the requirements—I recall when I was developing a black box that was going to go in 17 different platforms, I would have this platform come to me and say, "It would really be easier if you changed it this way; it will make me less expensive for my integration, save my costs and schedule."

What I came to understand is, the program manager has to be prepared to say no during the development phase. Now, the configuration steering boards we have in place anticipate that there may be compelling reasons to change a requirement, or moderate a requirement, during development. But that's the purpose of a configuration steering board: to raise, if you will, visibility to the pressures of changing requirements, so that a senior-level decision is made about the requirement once the milestone has been approved.

REP. ANDREWS: Thank you.

Mr. Cooper.

REP. COOPER: Thank you, Mr. Chairman. I'd especially like to welcome a fellow Tennessean, David Patterson, and congratulate him on starting the National Defense Business Institute at the great University of Tennessee in Knoxville. I thank you not only for your past service but for what you're doing right now.

Can anyone provide me with enough historical perspective to help me understand how, during WWII, men like Henry Kaiser were able to produce destroyers -- and I forget exactly who produced airplanes -- but it was an amazingly productive period. And I don't know how. Was there less bureaucracy then? How was so much able to be accomplished so quickly? Anybody know?

MR. NUSSBAUM: My reading of history is that there were a lot of failures, and that we tend to remember the successes, which were terrific, but in fact there were false starts. The P-51 was wonderfully successful, but because it had a bigger gas tank, it gave us greater range so it could keep up with the other aircraft. We didn't build it for that reason; we sort of lucked into that, if you will.

I think that things were much simpler. We were able to turn technology generations around much faster and therefore absorb the failures. Today's systems are very complex, they take a long time, and we just won't accept failure.

MR. FITCH: I think what we remember a lot is those ships going down the ways daily. That was in the production phase. Even the P-51 that was just mentioned by Mr. Nussbaum -- when it was first fielded, it didn't have the engine that ended up being what everybody remembers about the performance of that aircraft. There were various modifications made after it was fielded, when it was in the field and found to be not exactly what was needed.

MR. PATTERSON: I think too we had a tremendous industrial base that was able to accommodate the level of technology that it was asked to accommodate. Today I'm not sure that we would be able to do that again in that amount of time. Dr. Ron Sega did a study not long ago when he was at DDR&E, director of Defense Research and Engineering, in which he looked at the industrial base and what it could do, and found that 62 percent of all of the Ph.D. candidates enrolled in disciplines critical to national security have temporary visas.

And that tends to put us at a disadvantage, because they generally don't stay around and work for Lockheed or Northrop or Boeing -- the Skunk Works particularly, or Phantom Works. And if you look at the Aerospace Industries Association data, you'll find that in 1990, 1991 you had 1.3 million touch-labor workers, highly skilled workers. Today we have something less than 700,000. Those are statistics that should give us pause to consider what we're going to do in the future.

REP. COOPER: To help me understand -- I know that we've had acquisition failures throughout our history. And Mr. Fitch mentions a success when he cites the note that was on the vehicle that saved lives. But on the front page of the Washington Post recently was a statement by Secretary of Defense Gates when he attended the return of the remains of some of our troops. He asked how they died, and he was told, "They were in an inadequate vehicle." And he cursed, because that was symbolic of the fact that we've had difficulty fielding relatively simple platforms like MRAPs or up-armored Humvees.

And at least for some period of time there was only one manufacturer of up-armored Humvees in America. And yet at the same time we have automobile companies going bankrupt and looking for vehicles to build. So there seems to be a mismatch somehow between pretty basic demands for troops and our ability to field and source those even when we have able and willing automobile companies who are looking for ways to keep their plants busy.

Why the mismatch?

MR. FITCH: Oh, I think it's a very fundamental problem in that you have a Department of Defense that understands fully that we are at war with terrorists, and you have a country that doesn't.

REP. COOPER: So Chrysler or GM or Ford didn't want to bid on the up-armored Humvee, or we couldn't --

MR. FITCH: Actually, the subsidiaries of those folks did, but they really were not in the business of putting out those kinds of vehicles. The people who were -- International Harvester, obviously -- but the designs, fortunately or unfortunately for us, that were available were foreign designs of up-armored vehicles, designed to sustain the kinds of things that an MRAP would have to sustain.

MR. PATTERSON: And the number of companies that have as a core competency the technologies for armor is not the same as we have for the auto industry. In fact, we're doing a lot of investment at the Department of Defense today to try to find better armor, cheaper armor, especially lighter armor for the vehicles, because when you put the armor on, it puts a demand on the engine, it puts a demand on the drive train. If you double the weight of the vehicle, whatever the percentage is, it takes additional tolls on the reliability of the vehicle.

So we don't have the same industrial base. In other words, the auto industry isn't the industrial base for our armor either.

MR. NUSSBAUM: In a sense the Humvee is the wrong vehicle to up-armor, but it's the only vehicle to up-armor. It was built as a replacement for the Jeep, which didn't go in harm's way. So it was designed to optimize all the functions that it was going to perform, and we missed the fact that it was going to go in harm's way. So it wasn't suitable -- it wasn't optimal for up-armoring. But it's what we have, so we're going to up-armor it.

REP. COOPER: I see that my time has expired, Mr. Chairman. I look forward to another round.

REP. ANDREWS: Thank you. With the consent of my colleagues, we are going to do another round if that's okay with the panel as well, if it fits your schedule. Thank you.

Mr. Patterson just made a suggestion that we might have a rule that says no new requirements after Milestone B. And that prompted this question. Again, this piece of data that you cite about 33 percent of the program cost growth being attributable to these requirements changes, how many of those requirements changes happened after Milestone B? Do you know?

MR. PATTERSON: That, as I recall, is after Milestone B.

REP. ANDREWS: It's all after Milestone B.

MR. PATTERSON: That's where the majority of the growth generally takes place.

REP. ANDREWS: A related question: In your oral testimony you talk about the fact that, if I read this correctly, the average increase in unit cost for the 28 MDAP programs less than five years into development is only 1 percent.

MR. PATTERSON: That's correct. That's gained from the --

REP. ANDREWS: But it's 55 percent from years 5 to 9. How many of the programs in that 55 percent category, the years 5 to 9, got a waiver through Milestone B -- didn't meet the requirements to get to Milestone B, but got waived past it? Do you know?

MR. PATTERSON: No, I don't. I don't know exactly how many.

REP. ANDREWS: But I mean, would it be accurate to say that most of the 55 percent cost overrun probably comes from that?

MR. PATTERSON: I would say a significant portion of it, yes, sir.

REP. ANDREWS: One of the things that's in the conference report that we'll be looking at in the waste bill today is what we call "intensive care" where if a program is permitted to go forward even though it didn't achieve the Milestone B criteria, if it's waived past it, there's a whole set of intensive requirements that are imposed upon that program to try to get it back under control.

I wanted to come back again to this bifurcation that you create between derived and customer requirements. Describe for us the process that you think ought to be instituted to determine whether a derived requirement is added to the package or not. Let's say we're at a point -- assuming for a moment that we accept your proposition that there are none after Milestone B, to which I assume you would allow some exceptions, as you said, for true safety or emergency purposes. But let's assume we're living in a world where, except for those narrow situations, there are going to be no changes in requirements after we hit Milestone B. We're now in pre-Milestone B and an "oh, by the way" comes up, as you said earlier -- "Oh, by the way, this can do this." Who should make the decision as to whether that gets added to the package, and by what criteria?

MR. PATTERSON: I think that the program manager should have the initial cut at whether or not they are going to include that into the program. But program managers generally are colonels or, in very large programs, brigadier generals who have significant oversight within the department. I think they have to have a set of rules -- let's say, first of all, if it's not a safety-of-flight or some sort of safety issue, or if it doesn't give me a return on my investment, then I'm going to give it a thumbs down initially.

REP. ANDREWS: How do you measure the concept of "return?"

MR. PATTERSON: Well, let's take an example -- again, I'm going to return to the C-17. There were parts of that airplane that were originally designed with aluminum lithium, for example. Well, aluminum lithium is strong, but it's brittle. We were breaking aluminum lithium cargo floor guides at a regular pace, so an engineering change was made to change that to a different alloy, and that eliminated the problem of having to constantly replace them. That was a savings. And those are the kinds of things that I would suggest are 3-to-1, and even the suppliers were given the opportunity to do that.

REP. ANDREWS: It strikes me that this is really the essence of the 20 percent the secretary talks about in his 80 percent solution, that what he really is aiming to get at here is to give us an adjudicatory mechanism that draws a line between the 80 and the 20 when you get to this point. And it's your suggestion the program manager should be the first person to weigh in on this. Who should evaluate his or her recommendation?

MR. PATTERSON: Well, then you have an engineering change board that would provide a corporate view of it. If it's a particularly expensive change, you're going to go to your service acquisition executive; or if it's an ACAT 1-D, acquisition category 1-D, then you have the Defense Department's acquisition executive, who would have a cut at that.

REP. ANDREWS: And my final question: Are you confident that we can quantify this concept of value sufficiently to hit the 3-or-4-to-1? In other words, do all of the values that we want to promote -- you gave a great example of a saving, replacing a piece of a plane that's going to go wrong -- but do all of the value concepts lend themselves to that kind of quantification that would let us say, well, this fails to meet 3-to-1, so it's out?

MR. PATTERSON: No. It's much more difficult than that. That's why it takes a lot of research and study to set up standards and conditions whereby you can evaluate these.

REP. ANDREWS: Frankly, I'll just conclude with this, it's one of the reasons why we're glad we have the three of you and the institutions that you represent because we really do turn to institutions like yours to assemble those data, analyze them, and give us a factual basis to draw the lines that my questions imply. Thank you.

Mr. Coffman, your turn.

MR. NUSSBAUM: Mr. Chairman, is it appropriate for me to make a remark?

REP. ANDREWS: Sure. That's okay.

MR. NUSSBAUM: I think it's not hard to measure the value of ideas that replace current capabilities. It's always harder to measure the value of things that represent new capabilities, because they don't represent a savings to the operating and support tail that you were already going to incur. But if you are replacing a current capability, it's pretty easy to do an estimate -- though it's still an estimate -- of what it costs to invest to make this thing happen, and what you save over time in the operating --

REP. ANDREWS: I agree the much more difficult proposition is where you have a new function that could be added by something that you discover. How do you measure that? And that requires trade-off analysis, it requires opportunity cost analysis, it requires a lot of broader inquiries.

MR. NUSSBAUM: And if you have that long tail, the question then is, do you do any discounting on it, the technical issue of net present value and at what rates. OMB helps us there.

REP. ANDREWS: Not to be hyper-technical, but one of the problems is then matching up the federal credit scoring and net-present-value rules with what the real world wants. A classic example here is in energy. The department has guidance to hit 25 percent alternative fuels by 2020. And in order to do that, they need to do multi-year contracts. But for multi-year contracts, the CBO scores that as putting the whole net present value into one year, which makes it almost impossible to do, which means we don't do much of it, which means we're falling backwards.

So marrying the CBO criteria with the real-world criteria is a bit of a challenge too.

MR. NUSSBAUM: So the devil is really in the details on this.

REP. ANDREWS: No, the devil is in the CBO in this case. Don't tell Mr. Elmendorf I said that, okay?

MR. NUSSBAUM: One other comment. When you go to the configuration control board or your SAE you're proposing to spend investment dollars to make this thing happen.

REP. ANDREWS: Right.

MR. NUSSBAUM: And the promise is that you will return O&S dollars later on. That's a nice conversation but it doesn't accord with the budgeting realities.

REP. ANDREWS: And it also doesn't score, nor should it. Mr. Fitch, do you want to add one thing? Then I'm going to go to my friend from Colorado.

MR. FITCH: If I could quickly. I just wanted to say again that I think it's useful to talk about the operational requirements and the derived requirements, but to note that when we get the requirement from the user, it's stated usually in operational terms -- the results they want to see. To deal with industry, and for industry to build something, those need to be translated into technical terms. We also use the term "derived requirements" for that process. So there are really a couple of types of derived requirements we're talking about here, including those which are part of the normal systems process that you have to do.

The other thing I'd just say, to your question of how we know we get value: As a program manager, I had an acquisition program baseline. That was my contract. The way I viewed it, it was my contract with my milestone decision authority and with the American taxpayer to produce a capability at such a cost with certain milestones. And I think that's what most of us take and go back to.

And as we work through these questions about the derived requirements and everything else, I think we keep that framework in mind.

REP. ANDREWS: Yeah, I think the panel clearly understands that some subset of derived requirements are quite legitimate, necessary and desirable. And I think your testimony -- the three of you have given us some interesting tools to discriminate between undesirable derived requirements and desirable ones, which is what we're about.

Mr. Coffman?

REP. MIKE COFFMAN (R-CO): Thank you, Mr. Chairman.

We talked today about having discipline in terms of changed requirements, but do we also have to have discipline when programs get so far out of line that they become questionable? And I'm thinking about the president's -- I can't remember the nomenclature of the follow-on helicopter -- and the littoral combat ship is in question. I wonder if you all could reflect on when a program gets so far out of line.

MR. FITCH: Well, it would be helpful, I think, in the total scheme of things, if we never let them get out of line, but nonetheless you're exactly right, it does happen. And one of the things that I think is difficult -- take, for example, the VH-71, the president's helicopter. That is a perfect example of where you came in with one set of criteria as requirements, and over the course of time it changed dramatically.

I think the problem is when you don't have a set of standards or conditions that raise a budget flag, that say, wait a second, I'm sorry, you're red here, and you've been red three reviews in a row, and we're canceling your program. What we do instead is put ourselves in a position where it's the only game in town. You have a president's helicopter that is arguably old, and we don't have an alternative. You chose a manufacturer and a helicopter, and there's no off-ramp, there's no Plan B. And we do this rather consistently -- I'm almost of the opinion that we do it by design.

And I would offer the Marines' EFV, for example. We don't have a way to walk the dog back down the path to get an amphibious vehicle that would replace the EFV, so we need to make that work.

MR. PATTERSON: I don't have the particulars on either of the programs you asked about. I was the systems engineer for the VH-3D, one of the two current presidential helicopters. It's a unique mission; I think there's a desire, usually, if the White House or WHCA or whoever it is says I need a capability, to find a way to do it.

Going back to it, most of the changes that I saw, and the pressures and the surprises that I got, occurred in that first -- traditionally the old first 12 to 15 months of a program, as you got toward a PDR. Because the contractor's off doing a lot of things, and the purpose of a lot of the deliverables is to figure out: does he get it or doesn't he get it. I think that the competitive prototyping system, the focus on doing PDRs earlier, and effective communications with and oversight of the contractor teams during that period of time will do much more to get to a stable baseline at Milestone B, which is the actual program initiation.

MR. NUSSBAUM: I'm going to sound like a professor: on one hand, on the other hand. There are some historical examples of things which failed, failed, failed and then were terrific. AEGIS was one and Tomahawk was another; they just took a long time to bring aboard, and for some reason we had the fortitude to stick with them and not say, this is an A-12 or a Gama Goat, and get rid of it. And now we come to the category of LCS. Is it a Tomahawk or is it a Gama Goat? (Laughter.) We don't know.

But I'm taken with my colleague's remark that once you get past Milestone B, if you have three reds in a row, you have some serious explaining to do, with a presumptive answer: you're out. It's a rule. (Laughs.) But the problem is knowing the future, and that's always the problem. I don't know whether LCS is a Tomahawk, which is going to be absolutely terrific after a long incubation period -- and the same for the V-22. You just don't know. So we make decisions as people.

REP. COFFMAN: Thank you. Thank you, Mr. Chairman.

REP. ANDREWS: Thank you. Well, gentlemen, thank you very much for your expertise. Your reward for doing such a good job is we'll have to call on you again. As the committee goes forward our intention is to try to make legislative proposals for the FY 2011 authorization bill that will deal with the area of the problem that the waste legislation that we're dealing with today does not deal with. And I think you've given us some very intriguing ways to measure the gap between what we pay and what we get.

It's also heartening to hear what I've heard this morning, a consensus that this panel's contribution to the waste bill was essentially two concepts: The first was to add a whole series of reviews and a lot of scrutiny pre-milestone B, with particular emphasis on the requirements process. And that did make it into the conference report and that will become the law this week, we think.

And second, the panel was very interested in much more rigorous review -- what we call intensive care -- of systems that pass milestone B by waiver, that have not met the requirements, or that fail Nunn-McCurdy standards and yet are exempted from the penalties there and go forward anyway. And I think if you look at the cost overruns, a huge majority of them fall into one of those two categories.

So what we wanted to do was to take the best practices that you very ably described this morning and engage them as intensely as we can in the systems that, again, never met the criteria to get past milestone B but got past it anyway, and those that fail Nunn-McCurdy but continue to live on. And as Mr. Patterson said a few minutes ago -- I think it was Mr. Patterson -- our ultimate goal is not to have any of those cases in the future, by unraveling the requirements process and looking at it more intensely, to intensify that pre-milestone B analysis of what is going on.

And the other point that I'd make -- you know, this is more on our side of the table than yours -- I think the principal reason that we get these cost overruns is that once something passes milestone B, an enormous political constituency develops around it. Now there are tens of thousands of people deriving their paychecks from a project, hundreds or thousands of subcontractors, dozens or hundreds of congressional districts, and as Secretary Gates can attest, making changes in those programs is very politically difficult.

If you get to these flawed programs earlier, when their political constituencies are smaller and weaker, the chance to do the right thing is a lot higher. So not just for analytical reasons, but given the dynamic of the way these decisions are made in the political world, the more precise we are in our measurements and the more focused we are on our evaluation in the requirements phase and a little bit beyond, the better job we think we'll do.

So I would say to each of the three of you we welcome your continued participation and input. We're certainly going to call upon you for your feedback as we go forward in our drafting process, and thank you very, very much for your time and attention this morning.

Members will have a period of time by contacting either majority or minority staff to supplement the record with written questions and we invite the witnesses to do the same thing.

With that -- (sounds gavel) -- the hearing is adjourned.