NIST Advanced Technology Program

Measuring ATP Impact
2004 Report on Economic Progress


Does the Program Measure Up?

The Role of Evaluation at ATP

The nature of the Advanced Technology Program—combining federal tax dollars with private-sector ingenuity and cost sharing to develop new technologies and refine manufacturing processes—demands that such a program be built on a foundation of evaluation. At any time, ATP must be prepared to show how the program benefits the U.S. economy. An effective measurement system for ATP must be sophisticated enough to answer a crucial question for Congress, the Office of Management and Budget, the General Accounting Office, and the American people: What does America gain by investing in high-risk technologies that industry would not fund on its own?

The ATP Economic Assessment Office (EAO) uses a battery of analytical tools to measure program effectiveness, including statistical analyses, case studies, surveys, stories, and more. These metrics address the design, conceptualization, implementation, and impacts of the program. They can examine selected features, or focus on measuring particular outputs or outcomes expected from the program's mission. They can be rigorous in the sense of searching for the most comprehensive and systematic set of causal linkages between and among variables, employing carefully constructed and sifted data. Or they can be more general and descriptive, offering a defensible answer to a particular question given constraints on time, budget, and access to data.

ATP also attempts to measure the program's counterfactual impact — evaluating what would not have happened in the absence of ATP funding. What differences did the program funding make in the scope of research, collaborations, attraction of additional capital, and acceleration of technology development? ATP benchmarks by scanning industries, patents, papers, and commercialization rates of companies that received ATP funding versus companies or industries that have not been funded through the program.
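In its simplest form, this benchmarking amounts to comparing the commercialization rate of funded firms with that of comparable unfunded firms. The sketch below illustrates the arithmetic only; the firm records and the commercialized flag are invented for illustration, and the sketch is not ATP's actual methodology.

    # Illustrative sketch of ATP-style benchmarking: compare the
    # commercialization rate of funded firms against a comparison
    # group of similar, unfunded firms. All data here are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Firm:
        name: str
        atp_funded: bool
        commercialized: bool  # reached market within the observation window

    firms = [
        Firm("A", atp_funded=True,  commercialized=True),
        Firm("B", atp_funded=True,  commercialized=False),
        Firm("C", atp_funded=False, commercialized=False),
        Firm("D", atp_funded=False, commercialized=True),
        Firm("E", atp_funded=True,  commercialized=True),
    ]

    def rate(group):
        return sum(f.commercialized for f in group) / len(group)

    funded   = [f for f in firms if f.atp_funded]
    unfunded = [f for f in firms if not f.atp_funded]

    # The difference in rates is a rough, descriptive benchmark; a real
    # counterfactual estimate would control for firm and industry traits.
    print(f"Funded rate:   {rate(funded):.2f}")
    print(f"Unfunded rate: {rate(unfunded):.2f}")
    print(f"Difference:    {rate(funded) - rate(unfunded):+.2f}")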

Figure 6 depicts the progress of an idea from proposal through dispersal of knowledge and commercialization of a technology. It also shows the measures employed in the short, mid, and long term to compile a three-dimensional snapshot of the project and its impact. As shown, technologies that attract ATP investment tend to deliver a rather flat return for the developer(s), but a more significant return to the nation through absorption and use of the innovation by other firms and by society as a whole.

Figure 6. Timeline: What EAO Measures and When

In the EAO timeline, economic impacts are depicted on the vertical scale and time on the horizontal scale. A Conceptual Benefits curve starts above zero at the time of competition announcement, implying that there will be benefits from the technology project planning, and from the formation of collaborations stimulated by the announcement. The curve then splits at about mid-project. The lower curve, Benefits to Awardees, shows returns to the project innovators increasing over time as they commercialize or license their technology. This curve remains relatively flat, however, due to such factors as appropriability, or the degree to which firms are able to protect the profitability of their inventions (see page 25 for more on appropriability). The upper curve, Total Economic Benefits, shows returns to the economy at large increasing as the technology diffuses to wider use and generates spillovers. The Total Economic Benefits curve veers more steeply upward from the Benefits to Awardees curve as the project nears completion, signifying an expectation of increasing spillover effects over time.

Sources: Ruegg, Assessment of the ATP, 1999, p. 19; Cohen and Walsh, R&D Spillovers, Appropriability and R&D Intensity, 2000.
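Read notationally (a sketch of the figure's logic, not an equation from the report), the two curves can be summarized as

\[ B_{\text{total}}(t) = B_{\text{awardee}}(t) + S(t), \]

where \(B_{\text{awardee}}(t)\) remains relatively flat because limited appropriability caps the returns the developers capture, while the spillover term \(S(t)\) grows as the technology diffuses; the widening gap between the two curves after project completion is the growth of \(S(t)\).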

Short- and Long-term Measurement

How are benefits measured? The ATP evaluation program involves four categories of measurement:

  • Program inputs, derived from Congressional appropriations and industry cost sharing, which provide budgets for making awards, staffing to carry out the research, and equipment, facilities, and other direct costs.
  • Principal outputs, including the funded projects; collaborative relationships formed as a result of the program; and publications, patents, models and algorithms, and prototype products and processes.
  • Principal outcomes, including sales of new and improved products, processes, and related services; productivity effects on firms; changes in firm size and industry size; changes in the inclination of firms and other organizations to collaborate; the spread of resulting knowledge through publications, presentations, patents, and other means; and the adoption of the funded innovations—and various adaptations—by the market.
  • Longer-term impacts related to the broad societal goal that drove the program’s creation, including increased GDP, employment gains, improved international competitiveness of U.S. industry, and quality-of-life improvements to the nation’s health, safety, and environment.  Impacts may also include an effect on the nation’s capacity to innovate.

Evaluation objectives include tracking progress of funded projects; estimating benefits and costs of projects and of the program overall; identifying the more difficult-to-measure effects, such as adaptations of the knowledge by others; relating findings back to the program’s mission; and applying tests of success.  Additional objectives include disseminating evaluation results and feeding them back to program administrators (to improve the program) and to policy makers (to inform them and meet reporting requirements).  Not all projects progress at the same rate.

Recent results from ATP's Business Reporting System (BRS) examined the rate of development of innovative technologies by industrial sector. The study found that information technologies and electronics enter the market quickly, with commercialization soon after the ATP funding period. Manufacturing and materials/chemical projects tend to commercialize at a slower rate because they typically involve new process technologies in mature industries.

Because of regulatory requirements for many health care applications, biotechnologies also enter the market at a slower rate, and major applications often cannot be implemented until more than five years after ATP funding ends.8
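A BRS-style timeline comparison boils down to grouping project records by sector and summarizing time to market. The sketch below illustrates that grouping; the project records are invented for illustration, not actual BRS data.

    # Illustrative sketch of a BRS-style timeline comparison: group
    # hypothetical project records by industrial sector and summarize
    # years from the end of ATP funding to first commercialization.
    # All records below are invented, not actual BRS data.

    from collections import defaultdict
    from statistics import median

    # (sector, years from end of funding to first commercial product)
    projects = [
        ("IT/electronics", 0.5), ("IT/electronics", 1.0),
        ("manufacturing", 3.0), ("materials/chemicals", 4.0),
        ("biotechnology", 5.5), ("biotechnology", 6.0),
    ]

    by_sector = defaultdict(list)
    for sector, years in projects:
        by_sector[sector].append(years)

    for sector, years in sorted(by_sector.items()):
        print(f"{sector}: median {median(years):.1f} years to market")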

ATP funding helped Osiris Therapeutics, Inc., of Baltimore to research the regeneration of damaged heart tissue using adult stem cells derived from bone marrow. In this image from animal testing, human stem cells are seen in an adult mouse heart 60 days after implantation. Osiris worked with researchers at Johns Hopkins University, the University of Florida, and Emory University on the project. Fifty percent of ATP awards include a university researcher among the principals, which speeds the dissemination of new technologies.

How Does ATP Measure?

Programs such as ATP use a variety of evaluation methods to “measure against mission.” These methods can range from early surveys used to generate immediate information to detailed case studies, statistical analyses, tracking of knowledge created and disseminated through patents and citation of patents, and informed judgments. Table 1 shows the full range of evaluation methods available to ATP.

Table 1. Overview of Evaluation Methods*
Method | Brief description | Example of use
Analytical/Conceptual modeling | Investigating underlying concepts and developing models to better understand a program, project, or phenomenon | To describe conceptually the paths through which spillover effects may occur
Survey | Asking multiple parties a uniform set of questions for statistical analysis | To find out how many companies have licensed their newly developed technology to others
Case study—descriptive | Investigating in depth a program, project, technology, or facility | To recount how a particular joint venture was formed, how the collaboration worked, and reasons for success—or lack thereof
Case study—economic estimation | Adding quantification of economic effects to a descriptive case study, using, for example, benefit-cost analysis | To estimate whether, and by how much, the benefits of a project exceed its cost (see the sketch following this table)
Econometric and statistical analysis | Using statistics, mathematical economics, and econometrics to analyze links between economic and social phenomena, and to forecast economic effects | To determine how public funding affects private funding of research
Sociometric and social network analysis | Identifying and studying the structure of relationships to increase the understanding of social/organizational behavior and related economic outcomes | To learn how projects can be structured so that the diffusion of resulting knowledge can be increased
Bibliometrics—counts | Tracking the quantity of research outputs | To find how many publications per research dollar a program generated
Bibliometrics—citations | Assessing the frequency with which others cite publications or patents, and noting who is doing the citing | To learn the extent and pattern of dissemination of a project's publications and patents
Bibliometrics—content analysis | Pulling information from text using co-word analysis, database tomography, and textual data mining, as well as visualization techniques | To identify a project's contribution, and its timing relative to the evolution of a technology
Historical tracing | Tracing forward from research to a future outcome, or backward from an outcome to contributing developments | To identify linkages between a public research project and significant later occurrences
Expert judgment | Using informed judgments to make assessments | To hypothesize the most likely first use of a new technology
* Rosalie Ruegg and Irwin Feller, A Toolkit for Evaluating Public R&D Investment: Models, Methods, and Findings from ATP's First Decade, NIST GCR 03-857, July 2003, pp. 30-31.
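As an illustration of the economic-estimation arithmetic named in the table, the sketch below computes present values and a benefit-cost ratio. The cash flows, years, and the 7 percent real discount rate are assumptions for illustration, not figures from any ATP study.

    # Minimal sketch of the benefit-cost arithmetic behind an economic
    # estimation case study. Cash flows and the discount rate are
    # hypothetical, not drawn from any ATP project.

    def present_value(flows, rate):
        """Discount a list of (year, amount) flows back to year 0."""
        return sum(amount / (1 + rate) ** year for year, amount in flows)

    discount_rate = 0.07  # real discount rate, assumed here

    costs    = [(0, 2_000_000), (1, 1_000_000)]                      # project outlays
    benefits = [(3, 1_500_000), (4, 2_500_000), (5, 3_000_000)]      # later returns

    pv_costs    = present_value(costs, discount_rate)
    pv_benefits = present_value(benefits, discount_rate)

    print(f"PV of costs:        ${pv_costs:,.0f}")
    print(f"PV of benefits:     ${pv_benefits:,.0f}")
    print(f"Net benefits:       ${pv_benefits - pv_costs:,.0f}")
    print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")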

Figure 7 shows the actual use of these methods by ATP since its inception in 1990.

Figure 7. Intensity of ATP’s Use of Evaluation Methods
Since 1990, ATP has employed a growing number of evaluation methods to gauge the program's success in fulfilling its mission of accelerating U.S. technology development and increasing research partnerships.

____________________
8 Jeanne M. Powell and Francisco Moris, Different Timelines for Different Technologies: Evidence from the Advanced Technology Program, NISTIR 6917, November 2002.


Date created:  March 15, 2005
Last updated: August 15, 2005
