NIST Advanced Technology Program
NIST IR 7319 - Toward a Standard Benefit-Cost Methodology for Publicly Funded Science and Technology Programs

II. RATIONALE AND HISTORY

Economic Literature

Benefit-cost methodologies applied to S&T projects derive from Edwin Mansfield's work in the 1970s (Mansfield et al. 1977), aptly captured in the recent Journal of Technology Transfer issue commemorating Dr. Mansfield (Scherer 2005). Mansfield adapted his private-sector analysis specifically to ATP-funded projects in a report for ATP (Mansfield 1996). His approach extends traditional business cash-flow analysis with basic concepts of public finance to measure benefits not only to the innovators but also to industry users of technology products and to end users.

The return on investment in developing new technology is thus expanded: it includes not only innovators' profits from technology products and cost-reducing new processes, but also consumer surplus in the form of cost reductions and other societal benefits experienced by technology users, all measured relative to a hypothetical, counterfactual situation in which the new technology does not exist.

Mansfield's work provides a foundation for public sector-funded S&T case studies following two themes:

  1. His evidence that social rates of return on industry-led projects (combined public and private returns) exceed private rates of return indicates the potential for market failure in the form of underinvestment in R&D relative to the socially optimal level, and thus possible justification for federal subsidies to R&D. His pioneering work in the 1970s and subsequently in two studies sponsored by the National Science Foundation (NSF) showed private rates of return averaging 25%-36% and social rates of return averaging 50%-70%. (The lower rates were reported in the 1977 study.) Several other studies, for example, by Scherer and Piekarz, confirmed and extended these ideas and results and discussed some of the policy issues (Teece 2005). Macroeconomic studies of returns to R&D using broad statistical databases, for example, by Griliches and Nadiri, showed similar results, although differences across industries were substantial (Bernstein and Nadiri 1988; Nadiri 1993).
             
    Further comparison of private rates of return on such projects with industry and company hurdle rates provides a better indication of whether the private sector is likely to undertake such investments in the absence of federal funding. If company hurdle rates (defined here as the rate of return required to undertake an investment with a given level of risk) are high relative to expected private rates of return on projects of comparable risk, companies either will not make the investments at all or will invest only at a minimal exploratory level, and benefits become longer term and more problematic without federal funding. If hurdle rates are comparable with private rates of return, companies are more likely to undertake a significant investment without federal funding, and federal funding may be displacing private funding (Jaffe 1996, 1998).
             
    Quantification of the difference between social and private rates of return, called the "spillover gap," and evidence that private rates fall below industry or company hurdle rates thus become program measurement objectives. The theoretical basis is that a spillover gap justifies public funding: to the extent that private rates of return fall below social rates, R&D is unlikely to be funded at the socially optimal level, and projects merit consideration for public funding. The case is even clearer if private rates of return are below the investment hurdle rate for risky R&D, because the investment most likely would not be made at all without federal funding (Jaffe 1996, 1998).
  2. Mansfield establishes a case study method supported by customized data collection as an appropriate methodological tool of economic analysis. Scherer quotes Mansfield's statement: "If you want to know something, ask the people who know" (Scherer 2005, p. 5). Mansfield described the need to interview personally not just the "doers," but also the beneficiaries among customers and users (ATP 1996). Data often do not exist for large statistical analysis of impacts but can be created on a smaller scale. Teece described his instructions to his students "to collect data in the field" (Teece 2005, p. 18) and quotes Mansfield's introduction to his two volumes of collected papers: "in general, my approach has been to try to get a reasonably solid empirical footing before attempting to model complex phenomena about which very little is known; to keep the theoretical apparatus as simple, transparent and robust as possible; to collect data directly from firms (and other economic units) carefully tailored to shed light on the problem at hand (rather than to try to adapt readily available general-purpose data, which is often hazardous), and to check the results as thoroughly as possible with technologists, executives, government officials and others who are close to whatever phenomenon is being studied" (Mansfield 1995, p. ix). The advantage is that the questions and resulting data are targeted to what the analyst really needs to know.
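The decision logic in the first theme above can be sketched in a few lines of code. The rates below are hypothetical illustrations (loosely echoing the ranges Mansfield reported); the hurdle rate and the function names are assumptions for this sketch, not part of the report.

```python
# Hypothetical sketch of the "spillover gap" decision logic described above.

def spillover_gap(social_rate: float, private_rate: float) -> float:
    """Gap between the social and private rates of return."""
    return social_rate - private_rate

def funding_rationale(social_rate: float, private_rate: float,
                      hurdle_rate: float) -> str:
    """Classify the case for public funding per the logic described above."""
    if spillover_gap(social_rate, private_rate) <= 0:
        return "no spillover gap: weak case for public funding"
    if private_rate < hurdle_rate:
        return "spillover gap and private return below hurdle: strongest case"
    return "spillover gap but privately attractive: risk of displacing private funding"

# Hypothetical values: 56% social return, 25% private return, 30% hurdle rate.
gap = spillover_gap(0.56, 0.25)
case = funding_rationale(0.56, 0.25, 0.30)
```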

OMB Circular A-94

OMB's mandate that federal programs use benefit-cost analysis or cost-effectiveness analysis goes back several decades before Circular A-94 was revised in 1992. Most of the Circular A-94 presentation is geared to a prospective analysis of an entire federal program and provides little guidance for evaluating project and program outcomes and impacts, in general, or basic and applied S&T programs, in particular. It predates government-wide mandates for program performance metrics and is more comprehensive and technically challenging than could easily be imposed or implemented by all agencies. Nevertheless, Circular A-94 is a methodology tool for program evaluation efforts stimulated by the Government Performance and Results Act (GPRA) (1993). Its guidelines are consistent with good practice for both retrospective and prospective analyses, at the program or project level, and for any program area.

OMB Circular A-94 establishes the following guidelines:

Outcome Measures

  1. Use net present value as the standard criterion for deciding whether a government program can be justified on economic principles.
  2. When an analysis of competing alternatives is done, consider in addition the present value of benefits relative to a given amount of cost, or vice versa, the cost relative to a given amount of benefits, all measured in present value terms.
  3. Be explicit about underlying assumptions.
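The first two criteria can be illustrated with a minimal sketch. The cash flows and the 7% real discount rate used here are hypothetical, chosen only to show the mechanics of net present value and the benefit-cost ratio.

```python
# Hypothetical sketch: NPV and the ratio of discounted benefits to costs.

def present_value(flows, rate):
    """Discount a list of annual flows (year 0 first) to present value."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

rate = 0.07                          # assumed real discount rate
costs = [100, 0, 0, 0, 0, 0]         # $100 program cost up front
benefits = [0, 30, 30, 30, 30, 30]   # $30/year in benefits for 5 years

pv_costs = present_value(costs, rate)
pv_benefits = present_value(benefits, rate)
npv = pv_benefits - pv_costs         # net present value criterion
bcr = pv_benefits / pv_costs         # discounted benefits per dollar of cost
```

With these assumed numbers the project passes both tests: the NPV is positive (about 23) and the benefit-cost ratio exceeds 1 (about 1.23).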

Net Benefits Measurement

  1. Use the net benefits to society, not to the federal program, as the basis for evaluating government programs.
  2. Identify incremental benefits and costs relative to the situation in which the government program does not operate. Record displaced activities as a cost.
  3. Include effects on U.S. citizens, and not on others.
  4. Use consumer surplus and "willingness to pay" as measures of value beyond what is captured in market prices.
  5. Assume full employment of resources. Avoid multiplier-based estimates of effects of government spending on the economy.

Treatment of Inflation

  1. Use real or constant dollar values to perform analyses.
  2. Use the administration's GDP deflator if an inflation adjustment is needed.
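A constant-dollar conversion is simple arithmetic; the sketch below uses hypothetical deflator index values (base year 2000 = 100.0), not actual GDP deflator data.

```python
# Hypothetical sketch: re-expressing nominal dollars in constant base-year dollars.

def to_constant_dollars(nominal, deflator, base_deflator):
    """Deflate a nominal amount to base-year (real) dollars."""
    return nominal * base_deflator / deflator

deflators = {2000: 100.0, 2001: 102.4, 2002: 104.1}  # assumed index values
real_2002 = to_constant_dollars(150.0, deflators[2002], deflators[2000])
# 150.0 nominal 2002 dollars is roughly 144.09 in constant 2000 dollars
```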

Use of Discount Rate

  1. Report net present value and other outcomes using a real discount rate of 7% in performing constant-dollar benefit-cost analyses of proposed public investments and regulations. This rate is intended to approximate a marginal pretax rate of return on an average investment in the private sector.
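The rate's practical effect is easiest to see in the discount factor itself: at 7% real, benefits realized decades out count for little in present-value terms, which matters for long-horizon S&T projects. The horizons chosen in this sketch are arbitrary.

```python
# Sketch: present value of $1 of benefits received t years out at a 7% real rate.

RATE = 0.07

def discount_factor(t: int, rate: float = RATE) -> float:
    return 1.0 / (1 + rate) ** t

factors = {t: round(discount_factor(t), 3) for t in (1, 10, 20, 30)}
# A dollar of benefits 10 years out is worth about 51 cents today;
# 30 years out, about 13 cents.
```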

Treatment of Uncertainty

Typically, estimates of benefits and costs are uncertain.

  1. Identify sources of uncertainty, provide expected value estimates, and test sensitivity of estimates to important sources of uncertainty.
  2. Where possible, derive a probability distribution for benefits, costs, and net benefits.
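Both guidelines can be implemented with a simple Monte Carlo simulation. The distributions below are entirely hypothetical assumptions chosen for illustration, not estimates from any ATP study.

```python
# Hypothetical sketch: expected value and distribution of net benefits
# via Monte Carlo sampling over assumed benefit and cost distributions.
import random

random.seed(0)  # reproducible draws

def simulate_net_benefits(n: int = 10_000) -> list:
    draws = []
    for _ in range(n):
        benefits = random.triangular(50, 300, 120)  # assumed: skewed, uncertain
        costs = random.gauss(100, 5)                # assumed: nearly certain
        draws.append(benefits - costs)
    return draws

draws = simulate_net_benefits()
expected_net = sum(draws) / len(draws)                  # expected value estimate
prob_positive = sum(d > 0 for d in draws) / len(draws)  # P(net benefits > 0)
```

Sorting the draws also yields percentiles of the net-benefit distribution, supporting the sensitivity tests the guideline calls for.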

The Circular A-94 guidance is applicable and appropriate either to prospective program-level analyses or to retrospective and prospective project-level analyses. It provides minimum requirements for benefit-cost analysis of an individual project; however, it does not address problems of aggregation of results for projects and studies performed at different times, and it does not provide guidance specific to S&T programs. This report seeks to fill in some of those gaps.

NIST and ATP

NIST's economists have been actively engaged for nearly 40 years in cash flow-based benefit-cost analysis in microeconomic case studies of R&D projects. Economists in NIST's Building and Fire Research Laboratory (BFRL) are leaders in the international standards community. Their work in building and energy economics includes the development and application of life-cycle cost models for energy conservation investments and the development and implementation of training programs. Their work in modeling and building economics standards has produced a series of American Society for Testing and Materials (ASTM) standards for building economics (ASTM 2004), and their work for NIST and other agencies has sometimes entailed program evaluation. Several members of the ATP economics staff participated in these BFRL activities before joining ATP.

Benefit-cost case studies became a cornerstone of the evaluation of long-term outcomes of NIST's laboratory programs in the 1980s and in planning studies for new programs. More than 30 separate studies have been commissioned by NIST's Program Office. For more information see the NIST Web site: http://www.nist.gov/director/planning/impact_assessment.htm.

ATP's founding legislation required the evaluation of program outcomes before that practice was common or required by GPRA. ATP naturally drew on the BFRL and NIST experience. It also drew on Albert Link's growing involvement in program evaluation, including his benefit-cost studies of the outcomes of a number of NIST laboratory projects for the NIST Program Office during the 1980s. Link helped ATP draft a preliminary plan for program evaluation, and in its early years ATP funded him to undertake several benefit-cost studies that focused on the research cost savings to industry that result from avoiding redundant research through federal sponsorship and the cost sharing of technology development by industry consortia. ATP also engaged Ed Mansfield and others to develop appropriate methodologies for assessing longer-term economic impacts using benefit-cost (among other) techniques. Since that time, more than a dozen benefit-cost studies have been published, several of which analyze groups of related ATP-funded projects.

Twelve of these studies, covering 28 ATP projects, are listed in Table 1. The cost of conducting these studies ranged from $25,000 to $357,000 (not adjusted for inflation). More than half of the studies focused on one ATP project; the rest covered from two to eight ATP projects. All are available on ATP's Web site: http://www.atp.nist.gov/eao/eao_pubs.htm.

Table 1.   ATP's Benefit-Cost Studies to Date

Short Title (# of projects)                  Number        Contractor                      Publication   Study Cost
                                                                                           Date          (dollars)
Photonics cluster (2)                        GCR 05-879    Delta Research                  2005          228,000
Composites cluster (2)                       GCR 04-863    Delta Research                  2004          200,000
2mm project—retrospective (1)                GCR 03-856    MIT                             2004          357,000
HDTV joint venture (1)                       GCR 03-859    RTI International               2004           75,000
A-Si Detectors for digital mammography (1)   GCR 03-844    Delta Research                  2003          136,000
Component-based software (8)                 GCR 02-834    RTI International               2002          154,000
Closed-cycle refrigeration (1)               GCR 01-819    Delta Research                  2001           77,000
Digital data storage (2)                     GCR 00-790    RFF                             2000          138,000
Flow-control machining in auto industry (1)  NISTIR 6373   NIST-Building & Fire Research   1999           90,000
Tissue engineering (7)                       GCR 97-737    RTI International               1998          122,000
2mm—Early assessment (1)                     GCR 97-709    CONSAD                          1997           25,000
Printed wiring board (1)                     GCR 97-722    Albert N. Link                  1997           22,500

Other Agencies

Although a regular tool of public finance, benefit-cost analysis has not been common for evaluating the program impacts of federal S&T programs outside NIST and ATP. As the Government Performance and Results Act (GPRA)1 percolated through agencies in the 1990s, information sharing increased, and interagency evaluation networks and workshops evolved with ATP participation. ATP staff engaged in training programs and participated in a variety of evaluation forums. The OMB Program Assessment Rating Tool (PART) was introduced in 2002.2 Throughout their interactions with other agencies, ATP staff members encountered few examples of benefit-cost analysis outside NIST. In the R&D-S&T arena, agencies had been relying largely on anecdotal information and peer review. Data difficulties and outcome uncertainties abound for all programs.

Nevertheless, benefit-cost analysis offers considerable promise for bridging economic analysis and the traditional financial analysis tools and metrics that agency stakeholders understand, and it provides an approach for directly measuring program impact on the economy. ATP's authorizing legislation required economic evaluation and such metrics, giving ATP a head start. The GPRA and PART requirements now facing all U.S. government-funded programs are somewhat broader but have the same goal: to demonstrate outcomes and impacts affecting society broadly, beyond the immediate program participants.

ATP-Sponsored Workshop on Selected Methodological Issues in Benefit-Cost Analysis

ATP's Economic Assessment Office conducted a one-day workshop in June 2004 to address a small number of issues that had become apparent in the course of sponsoring benefit-cost studies. Evaluation practitioners who had conducted studies for ATP or who had expressed interest in benefit-cost analysis were invited to the workshop at NIST. NIST experts were invited also. The workshop had 15 participants, with varying experience in these types of studies. Through structured questioning, workshop participants addressed a number of issues arising in the course of ATP's studies.

Topic 1: Measures of Performance

  • Social (public plus private) returns or public returns
  • Advantages and disadvantages of each approach
  • Approaches to sensitivity analysis
  • Comparison/aggregation across multiple project cases

Topic 2: Attribution to ATP

  • What is the appropriate counterfactual? Is it different for different types of measures?
  • What effect do multiple funding sources have?
  • What effect does joint venture participation have?

Topic 3: When is the best time to study project performance?

  • After technology hurdles have been overcome? After commercialization has begun? Later?
  • What are best approaches to dealing with uncertainties about project outcomes in conducting prospective and/or partially prospective analyses?

Topic 4: Bridging the gap from microeconomic case study to macroeconomic effects

  • What are the basic requirements for implementing macro models successfully?
  • Is it feasible to convert existing micro case studies to the macro level?
  • Is it feasible to aggregate existing case studies to generate input to a macro model?

The discussions cited the overarching role of OMB Circular A-94 and highlighted similarities, consistency, and good practice in existing studies. They also demonstrated the need to pursue approaches to thorny issues on which experienced practitioners either disagreed about best practice or had not considered the issue at all. Practitioners clearly were focused on providing quality studies that followed established principles and good practice; however, they disagreed somewhat on which metrics to present among the economically appropriate choices. They focused on delivering a high-quality product for their own contracted work and had thought little about how federal programs should present the results of different studies performed at different times in project lives by different contractors.

____________________
1. Enacted in 1993, the GPRA provides for the establishment of strategic planning and performance measurement in the federal government.

2. In 2002 the Director of the Office of Management and Budget (OMB) announced development of the PART for formally evaluating the effectiveness of federal programs. He described the PART's purposes as follows: "The PART is a systematic method of assessing the performance of program activities across the Federal Government. The PART is a diagnostic tool; the main objective is to improve program performance. The PART assessments help link performance to budget decisions and provide a basis for making recommendations to improve results."


Date created: July 11, 2006
Last updated: July 12, 2006
