A Toolkit for Evaluating Public R&D Investment

CHAPTER 10: Conclusions and Recommendations

Program Evaluation: An Essential Tool for ATP

Evaluation has been a core part of the operations of ATP over its first decade. Evaluation has served de facto as the basis for empirical tests of key propositions contained within ATP's founding legislation, such as the private sector's underinvestment in high-risk, general-purpose technologies. It has helped define the dynamics of collaborative R&D ventures and the form and magnitude of spillovers from the resulting research and longer-term commercialization activities. Evaluation has permitted ATP program managers to monitor the program's evolution and to identify the extent to which specific program elements were producing intended results while uncovering new relationships between ATP and awardees, and among awardees. It has provided descriptive and analytical evaluative information on program recipients and program outputs to ATP and National Institute of Standards and Technology officials, and to key executive and congressional decision makers. Evaluation has provided an objective, empirical basis for considering ATP's operations and impacts in the face of the politically contentious debate surrounding ATP's establishment and its first decade of operations. The cumulative impact of these evaluations has been to demonstrate that ATP is achieving the program's objectives.

The body of studies can be grouped, according to their main objectives, into six major categories: (1) program rationale and justification, (2) program impacts on participants, (3) program spillover impacts on others, (4) collaboration, (5) interactions and relationships of ATP with state and foreign counterpart programs, and (6) advancement of methods, techniques, and databases. Taken as a body of work, these studies have also contributed to enhanced understanding of the dynamics of the U.S.
innovation system, particularly the characteristics of productive R&D relationships between the public and private sectors.

Third-party reviewers have recognized ATP's evaluation program as being well designed and well executed. The program has combined the activities of an in-house staff unit, the Economic Assessment Office, and a number of external researchers drawn from academic institutions, consulting firms, and independent research organizations. Reports from all sources have been subject to considerable independent review. One such reviewer, the National Research Council, reached the same conclusion in a recent study of ATP.
Deliberate Use of Multiple Methods in Evaluation Approach

ATP's evaluation program has been noteworthy for its planned diversity. The program has made use of many standard evaluation methods—surveys, descriptive case studies, benefit-cost analysis, econometrics, bibliometrics, expert judgment—as well as new, experimental approaches, such as network analysis of knowledge spillovers, and variants of cutting-edge techniques, such as hedonic price indices. Two dominant characteristics of this multi-faceted approach have been the care with which methods and techniques have been matched to the evaluative questions being posed and the evolution toward more rigorous tests of causal relationships between ATP activities and observed awardee activities.

Evolutionary Progress in ATP's Use of Evaluation Methods

Figure 10–1 depicts the evolutionary progress of ATP's evaluation. Evident are the increasing number of studies, the growing mix of methods, and the addition of econometric and emerging methods as the decade progressed. The increasing technical sophistication of the evaluation program has mirrored the maturation of the program and its generation of market-based data that can be used to test for program impacts. The passage of time now allows ATP's evaluation to progress from heavy reliance on prospective studies based on forecasts to retrospective studies based on empirical evidence.

Figure 10–1. ATP's Evolving Use of Methods Over its First Decade. *These 81 methods are employed in the 45 ATP studies commissioned between 1990 and 2000 that are examined in this report.

Focus of Evaluation on Mission-Related Questions

The evaluation program has succeeded in maintaining a close correspondence between the questions addressed by the diverse evaluation studies and ATP's mission. The studies addressed issues relevant to the program and its stakeholders, and served to advance knowledge and understanding of the program.
While the studies vary in rigor, in their ability to achieve their intended outcomes, and in their ability to form specific or convincing conclusions, in the aggregate they comprise an impressive body of work. They constitute a body of methodological, empirical, and policy-relevant research that is important not only for ATP but also more generally for the fields of evaluation and technology policy, and for other technology programs. Questions addressed in the studies range from the very detailed, such as how flow-control machining technology is likely to move into commercial use, to the very broad, such as whether ATP is an effective program.

Evidence of ATP's Accomplishments

ATP has made substantial progress toward developing a comprehensive evaluation program and applying it to produce empirical evidence that ATP has contributed to an increase in private, and thus total, national investment in high-risk, enabling technologies; accelerated technology development and commercialization; enlarged individual project scope and scale; increased R&D efficiency; and increased the probability of research success. Study findings document rich networks of collaborative relationships, many newly formed specifically to participate in ATP. Economic benefit-cost studies, though based largely on projected data, showed plausible pathways through which the technologies could be implemented and pointed to a greater magnitude of potential benefits for others from market spillovers than for the award recipients. Results also show accumulating evidence of knowledge spillovers.

Opportunities and Future Directions

The report to this point has been based on grouping, recounting, distilling, relating, and interpreting findings produced by other researchers over ATP's first decade. The sections that follow present our recommendations of promising directions for ATP's evaluation program, taking into account the evaluation program's accomplishments, gaps in coverage, new opportunities, and emerging needs.
Increase Retrospective, Market-Data–Based Analyses

We recommend that increased attention be directed toward identifying and estimating economic benefits from market-generated data rather than, or in addition to, projections of estimated benefits derived from surveys of ATP awardees, expert panels, or econometric scenarios. Sufficient time has elapsed for some of the ATP-funded projects to enter at least an initial phase of commercialization. Data on technology performance as reduced to practice, market penetration, diffusion patterns, prices, quality characteristics, sales, and value added should now be available for some projects, either in public datasets or obtainable through new studies. We further recommend that several of these technology-focused, market-based studies be conducted for projects already subjected to prospective economic studies. For example, it may be possible to revisit the economic benefits of the tissue engineering projects featured in RTI's benefit-cost study or to reexamine the adoption paths of the Ehlen study of flow-control machining. We recommend that others be drawn from the completed project set, since these also provide a basis for comparison.

Incorporate Both Direct- and Indirect-Path Analysis in Benefit-Cost Case Studies

In conjunction with the above, we recommend that at least one retrospective economic case study attempt to trace benefits both along the direct path, entailing commercialization activities of awardees and their partners, and along the indirect path, entailing knowledge spillovers. None of the studies examined attempted to estimate the economic benefits of knowledge gains along the indirect path, and none attempted to combine assessment of market and knowledge spillovers in the same study. It may now be feasible to incorporate at least a partial estimate of knowledge spillovers in a retrospective benefit-cost study, sufficient to provide some indication of the economic importance of these effects.
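To make the arithmetic of such a combined direct- and indirect-path study concrete, the sketch below shows a generic discounted benefit-cost calculation. All figures, the function names, and the 7% discount rate are illustrative assumptions, not values from any ATP study; the point is only that direct and spillover benefit streams can be discounted and combined into one benefit-cost ratio.

```python
# Hypothetical sketch of a retrospective benefit-cost calculation that
# combines direct-path (commercialization) and indirect-path (knowledge
# spillover) benefit streams. All figures and names are invented.

def npv(cash_flows, rate):
    """Net present value of (year, amount) pairs, discounted to year 0."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

def benefit_cost_summary(direct_benefits, spillover_benefits, costs, rate=0.07):
    """Return NPVs for each path and a combined benefit-cost ratio.

    rate: real discount rate; 7% is a common choice in U.S. federal
    benefit-cost guidance, used here purely as an illustrative default.
    """
    b_direct = npv(direct_benefits, rate)
    b_spill = npv(spillover_benefits, rate)
    c = npv(costs, rate)
    return {
        "npv_direct": b_direct,
        "npv_spillover": b_spill,
        "bc_ratio": (b_direct + b_spill) / c,
    }

# Invented project: public outlay in year 0, direct benefits to the
# awardee in years 3-5, estimated spillover benefits to others later.
summary = benefit_cost_summary(
    direct_benefits=[(3, 2.0e6), (4, 3.0e6), (5, 4.0e6)],
    spillover_benefits=[(4, 1.5e6), (5, 2.5e6)],
    costs=[(0, 3.0e6)],
)
print(summary)
```

Reporting the direct and spillover NPVs separately, as here, is what would let a retrospective study indicate the relative economic importance of the two paths.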
Continue and Extend Status Reports

We recommend that ATP continue to provide Status Reports systematically on all completed projects several years after completion. These are important in providing publicly available, complete coverage of all non-terminated projects. The volume of status reports serves as a handy reference for program officials and other stakeholders—a project directory that is much more informative than the initially provided project abstracts. We recommend a continuation of the uniform collection of data that has accompanied the descriptive cases. The resulting database is an invaluable and unique evaluation resource for characterizing the portfolio of projects, their short-term outputs, their intermediate outcomes, and their overall performance to date via the Composite Performance Rating System (CPRS) constructed from the status report database. We further recommend that a new set of Status Reports be added to the existing set to provide a longer-range look at project developments. We recommend that this be done using a stratified sampling of the previously assessed completed projects, with a random sample of projects drawn from each performance category based on their CPRS ratings (provided such a sampling approach will yield statistically significant results). We recommend that the new analysis be performed four to six years into the post-project period, using Business Reporting System data, supplemented by interviews and additional outcomes data, including market-based data to the greatest extent possible, and providing at least rough estimates of economic returns where feasible. This extension of the status reports will serve several purposes. First, it will provide more information about how the ATP projects are performing. Second, it could lead to a modified version of the CPRS for rating longer-term project performance and for comparison with the shorter-term CPRS performance ratings.
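The stratified draw recommended above can be sketched in a few lines. The project identifiers, the category labels, and the sample size per stratum are all hypothetical placeholders; the sketch only illustrates the mechanics of drawing a random sample from each CPRS performance category.

```python
# Illustrative stratified sampling of completed projects by CPRS
# performance category. All IDs, labels, and sizes are invented.
import random

def stratified_sample(projects_by_category, n_per_category, seed=42):
    """Draw up to n_per_category projects at random from each stratum.

    A fixed seed makes the draw reproducible, which matters if the
    resulting sample is to be documented in an evaluation report.
    """
    rng = random.Random(seed)
    sample = {}
    for category, projects in projects_by_category.items():
        k = min(n_per_category, len(projects))
        sample[category] = rng.sample(projects, k)
    return sample

# Hypothetical completed-project portfolio grouped by rating category.
completed = {
    "high performers": ["P101", "P115", "P142", "P160"],
    "moderate performers": ["P103", "P118", "P127", "P133", "P151"],
    "low performers": ["P105", "P121", "P139"],
    "poor performers": ["P108", "P124"],
}
picked = stratified_sample(completed, n_per_category=2)
for category, ids in picked.items():
    print(category, ids)
```

Whether two projects per stratum suffices is exactly the statistical-significance caveat noted above; in practice the per-stratum sample size would be set from the desired precision and the stratum sizes.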
Update Information on State and Foreign Counterpart Programs

We recommend updating earlier reviews of the technology development and evaluation programs of the states and of the major industrialized nations, which continue to experiment with and, in general, enlarge the scope of their non-defense technology development programs. The European Union launched its Sixth Framework Program in November 2002 to fund technology development projects through 2006. The individual European countries continue to support technology development programs. Finland's Tekes program, for example, commits approximately $200 million annually to technology development. Japan, likewise, has continued its major science and technology programs. Considerable ferment is evident in evaluation techniques throughout Europe and Japan, and, as with program design features, these techniques should be periodically surveyed by ATP for constructive features worth adopting. We recommend that ATP monitor the evolution of state technology programs to stay in touch with complementary efforts there. Several state governments have undertaken major technology development initiatives positioned toward more fundamental, upstream research. Michigan, for example, is making a major new commitment to biosciences. California is considering establishing several new Centers of Excellence in selected technologies, such as nanotechnology. Even in the midst of the current shortfall in state revenues, New York has recently increased funding for its centers of excellence program. These events, if continued, suggest that some states are moving "upstream" into support of high-risk, generic technologies that may overlap with those considered for ATP awards.
If this observation is correct, then we recommend that ATP reexamine how its selection of technologies, specific awards, and program design features fit with those of the states, with a view toward continuing to orchestrate complementary, reinforcing programs rather than competitive or duplicative ones.

Further Develop Promising New Evaluation Techniques

We see the Austin and Macauley cost-index approach to estimating market spillover benefits from embodied technological innovation, and the Fogarty et al. use of network analysis and fuzzy logic to estimate the extent of knowledge spillovers, as worthy of further development. The Austin and Macauley approach provides a technique for estimating the economic benefits of technologies that (1) are likely to be embodied in other products rather than being final products themselves and (2) are likely to emerge as quality-enhancing rather than cost-reducing changes. As presented in their study, the Austin and Macauley technique depended on a complex set of simulations based on projected estimates. This complexity reduces the appeal of the approach to decision makers, who might hesitate to accept its findings because of its opaque procedures. We recommend that efforts to validate its estimates through increased use of market data be coupled with efforts to simplify its procedures. As the authors acknowledge, the Fogarty et al. method needs further development if it is to be used for either impact assessment or project selection. It also needs to be further demonstrated in additional applications. In general, a continuing scan of evaluation practices and consideration of new approaches is advised to identify and develop new techniques that might be useful to ATP.

Deepen Analysis of Knowledge Spillovers Beyond Patent-Only–Based Studies

We recommend mounting studies of knowledge spillovers that broaden the proxy for knowledge spillovers beyond patents.
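As context for this recommendation, the patent-statistics proxy that the studies to date have relied on can be sketched roughly as follows. The sketch simply tallies forward citations to funded patents, separating citations by award participants from citations by outside organizations (a rough spillover signal); the patent numbers, organization names, and the two-way split are all invented for illustration and do not reproduce any particular study's method.

```python
# Minimal sketch of a patent-citation spillover proxy: count forward
# citations to funded patents, split by whether the citing organization
# was an award participant. All data and names are invented.

def spillover_citation_counts(citations, participant_assignees):
    """citations: iterable of (cited_patent, citing_assignee) pairs.

    Returns, per cited patent, counts of citations from participants
    ("internal") versus outside organizations ("external"); the external
    count serves as the crude knowledge-spillover signal.
    """
    counts = {}
    for cited_patent, citing_assignee in citations:
        tally = counts.setdefault(cited_patent, {"internal": 0, "external": 0})
        key = "internal" if citing_assignee in participant_assignees else "external"
        tally[key] += 1
    return counts

# Invented example: two funded patents cited by a mix of organizations.
citations = [
    ("US5000001", "Awardee Corp"),
    ("US5000001", "Outside Labs"),
    ("US5000001", "Other Firm"),
    ("US5000002", "Awardee Corp"),
]
counts = spillover_citation_counts(citations, {"Awardee Corp"})
print(counts)
```

Broadening beyond this proxy, as the recommendation urges, would mean adding measures such as publications, personnel mobility, or estimates of the economic value of the knowledge transferred, none of which a citation tally captures.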
Knowledge spillovers appear to represent one of ATP's major contributions to the overall pace of technological innovation in the United States, and an area of growing evaluation interest. The studies cited in this report have provided new evidence on the magnitude of knowledge spillovers. Thus far, however, the studies of knowledge spillovers have all been based on the use of patent statistics. It should now be possible to improve on this by introducing additional measures, including estimates of economic value.

Identify and Address New Questions as ATP is Modified

We recommend that ATP identify and explore new questions from stakeholders, such as those that arise from modifications to the program, and that it continue its tradition of probing underlying program principles and theory. For example, if universities are allowed to lead projects, then a relevant evaluative question is the effect of university leadership on commercialization progress. If universities are allowed to hold a project's intellectual property, then how does that affect the propensity of firms to participate in ATP, the types of projects proposed, and firms' willingness to invest in commercialization? If recoupment is to be implemented, then we recommend exploring the experience of other programs with cost recoupment, and investigating the impact of recoupment on the risk level of the R&D projects funded. Even without program modifications, new questions arise that can best be addressed through evaluation. For example, how does the magnitude of indirect benefits through knowledge spillovers compare with that of direct benefits through the commercial efforts of award recipients and their partners? What are the relative impacts of the direct and indirect paths on U.S. firms? How should the technology area condition expectations about the rate of commercial progress and the magnitude of benefits?
Pursue Analysis of Failures and Successes

Results reported in chapter 9 indicated that ATP had terminated 5–6% of its projects prior to completion and was tracking the reasons for termination. Further, results indicated that about a quarter of completed projects were rated poor performers. These actions point to continual project monitoring and management oversight, and therein lies an opportunity for further program improvement through evaluation. The program may benefit from further analysis of terminated projects and poor performers. There may be opportunities to learn from these identified groups how to avoid, in future projects, the systematic problems that caused the earlier ones to fail or perform poorly. ATP has already taken steps aimed at overcoming the problems of one type of terminated project: the would-be joint venture that terminates before starting because its members are unable to reach final agreement among themselves, usually because of issues related to intellectual property rights. By providing more information to applicants about intellectual property requirements, and by requiring that joint ventures reach an agreement before an award is made and a project can begin, ATP appears to have reduced the problems associated with joint-venture agreements. Assessment of the effectiveness of the steps taken would be useful in determining whether additional attention is needed to avert this type of failure. Analysis of several of the other categories of terminated projects may also reveal systemic problems that could be averted or lessened. Where problems are identified and steps taken to overcome them, follow-up assessment of effectiveness is recommended. It may also be instructive to examine the outcomes of "close-call" projects that shared characteristics of those that were stopped by ATP but were allowed to continue. Do they become the low performers of the completed group? Should more be stopped?
Can definable "termination triggers" be developed that would increase the efficiency of the program and avoid waste of public resources, while avoiding the corollary risk of stopping projects that might ultimately succeed? Similarly, analysis of the most successful projects as a group may allow identification of replicable factors influencing project success, much as Dyer and Powell identified success factors in joint-venture performance. An analytical comparison of the top and bottom performers may reveal factors that can be systematically influenced to advantage across the portfolio, either by ATP or by project participants. Success and failure analyses are potentially valuable components of program evaluation.

Continue to Balance Evaluation by In-House Staff and External Contractors

Supplementing ATP's core in-house evaluation capability with outside contractor evaluation appears to be a solid approach for focusing on issues central to stakeholders and providing a feedback path into the program, while achieving evaluation efficiencies and credibility.

Take Greater Advantage of Evaluation Results in Decision Processes

We echo the National Research Council's recommendation that ATP "enhance current efforts to integrate assessment results into the decision process."430 For example, study findings that identified factors critical to collaborative success could serve as a helpful reminder to members of selection boards, who could look for the presence of those factors in proposed collaborative projects, and to ATP project managers, who could encourage the enhancement of those factors in ongoing projects. As another example, members of selection boards and ATP project managers might benefit from study findings that emphasized the importance, for a project's knowledge-spillover potential, of the network within which participating organizations are embedded.
Bringing such information to the people who make decisions about project selection and management should help increase the effectiveness of the program. In general, we recommend the pursuit of additional opportunities to increase the flow of evaluation information to ATP staff and the outside reviewers who advise on project selection.

Closing Note

We designed this report to provide a retrospective analysis of ATP's large body of evaluation work that will serve as an evaluation toolkit for ATP and be useful to others who operate public technology programs. The toolkit consists of (1) an evaluation framework integrating ATP's mission, operational goals, and evaluation activities; (2) a directory of evaluation methods, tools, techniques, principles, explanatory information, and best practices central to implementing an evaluation program; (3) illustrations of the models and methods used for ATP's evaluation over its first decade; (4) a compilation, condensation, and integration of the 45 evaluation studies commissioned by ATP over this period; (5) a crosscutting compendium of study findings related to ATP's mission; (6) recommendations for future work conditioned by an overall assessment of accomplishments, gaps, and opportunities; and (7) a quick-reference guide to assist the reader who wishes to jump quickly among different subject tracks. ATP's multi-faceted approach to evaluation is a sound and effective program evaluation strategy. It has produced a mosaic of studies that employ multiple methods and highly specialized techniques, often combining multiple topics within studies. We hope that the report's crosscutting analysis of the study findings will help make this large body of work more accessible to ATP staff and the external community.
It is our hope that this assembly, with its guide highlighting the main topics, will provide a clear understanding of ATP's impact; assist ATP staff and others in gaining faster access to methods and to illustrations of how they have been used; and provide strategic direction to future work. The expected benefits are fuller utilization of past work and increased efficiency and effectiveness in evaluation planning. Based on our review of 45 studies selected from ATP's first decade, and on the specific findings of these studies, we conclude that substantial progress has been made toward developing a comprehensive evaluation program and using it to address major empirical and policy questions asked of the program. This body of evaluation studies has strengthened the ability of analysts to investigate further the effects of ATP and of other science and technology programs.

Date created: July 15,
2004