CHAPTER 5: Survey Method
As discussed in Chapter 2, a survey, if carefully designed, administered, and collected, can reveal useful information about a program’s development long before impacts can be discerned in other ways. A survey can provide information about the characteristics and activities of performers, early effects, and long-term expectations. This information can help program managers detect early accomplishments and early problem areas. It can also be of value in communicating with Congressional committees and other key stakeholders who want to know if a program is on track to achieve its defined goals.
This chapter describes how ATP’s use of the survey method evolved, allowing evaluators to track the progress of ATP projects, to present aggregate statistics in meaningful ways, to gather data for case studies, and to shed light on questions of critical importance. The surveys described here range from simple to elaborate, from relatively inexpensive to costly, and from those performed by outside contractors to those performed in-house. They range from those that are relatively straightforward to analyze to those that are intricate and complex and contain many levels of data. Further, the surveys presented here demonstrate the varied purposes served by this useful tool of analysis, and show how a program can build its expertise in the use of a method over time. Table 5–1 lists eight ATP-commissioned studies that used the survey as a research method. The table indicates the main purposes of each survey, its approach, and any unique features.
*Note: Three additional studies by Powell, listed in Tables 3–2, 3–3, and 3–4 and detailing the development and use of the BRS survey, are not separately listed here, as the material is covered adequately by the studies cited.
Gaining Early and Broad Perspective of a Program’s Effects
At the earliest practicable time—as the program neared the end of its first year of operation in 1991—ATP commissioned a survey study of projects. The objective was to learn if the ATP awards were having identifiable effects and, if so, what they were. The impetus was two-fold: to aid internal program management and to answer stakeholder questions. Program administrators were eager for evidence that the new program was on track, and both supporters and detractors in the public policy community were looking for quick answers to their questions.
The ATP engaged Solomon Associates, a small consulting firm, to work with ATP staff in designing and carrying out this first survey. 119 In selecting the target population, the contractor and ATP staff concluded that the multi-year projects funded by ATP should be up and running a minimum of 6–12 months before being surveyed to allow sufficient time for short-run effects to manifest themselves. Thus, ATP decided to proceed with the survey at the end of the first year of its first-round projects.
At the time of this first survey, the newly formed program was in an experimental stage. It had a small staff, an overall budget of approximately $10 million, and a budget for external evaluation studies of $25,000. In addition to the survey, during the first year of program operation, ATP used administrative funds to support development of a database of program applicants and awardees. This database helped ATP answer stakeholder questions about the characteristics of its projects: the number, size, and affiliation of applicants; number of single company projects and joint venture projects proposed and awarded; geographical extent of the program as indicated by the location of applicants and awardees; technologies proposed and funded; amounts of funding requested and offered in industry cost sharing; and other characteristics of applicants and awardees. The descriptive profiling of applicants and awardees, together with the descriptive survey statistics of awardees after their projects were underway, constituted ATP’s initial evaluation effort.
Because only 11 projects were funded in the first year, it was an easy decision to include all of them in the survey—5 single company projects and 6 joint venture projects. Of the 35 organizations participating in the 11 projects, the survey was restricted to the 26 firms receiving ATP funding because, at that time, the program’s focus centered on firms rather than universities and other organizations. 120 The sample included a range of firms in terms of the number of employees, sales volume, capitalization, and location.
Identification of a suitable respondent at each of the 26 companies in the survey was an iterative process involving ATP staff, the companies, and the contractor. The discussions often led to designating the CEO as the respondent in smaller firms, and the business contact or the project’s technical program manager in larger firms. Early in ATP’s evaluation program, then, the importance of correctly identifying the individuals to whom a survey is targeted was highlighted.
Because of the exploratory nature of this first survey, the survey designers decided to administer it by in-depth telephone interview, using an informal discussion style that followed a semi-structured interview guide and featured open-ended questions. The contractor, Samantha Solomon, conducted all of the interviews to preserve consistency and to provide the professional experience needed to pursue an open discussion that could accommodate unanticipated lines of inquiry while still covering the planned topics of the interview guide. While the topics covered by the interview guide were influenced by hypotheses about ATP’s intended effects (see Chapters 3 and 4), the open-ended nature of the line of questioning allowed respondents to bring up other topics and effects freely and to dismiss topics they thought unimportant for them. The objective was to discover any effects that the project participants might have experienced from ATP participation. ATP asked the contractor to allow the respondents free rein in identifying and discussing effects, and to code the responses after the fact, rather than to use a pre-coded format.
After a brief discussion in which the interviewer indicated general familiarity with the project, respondents were asked if they had experienced any changes as a result of the project and, if so, to discuss the single most important effect from their standpoint. Many respondents mentioned effects anticipated by ATP, but they also identified two additional effects. In fact, respondents often listed one of those unanticipated effects, a “halo effect,” as the single most important effect of the award. The majority of the respondents—100% of the single company applicants—said receiving an ATP award bestowed a “halo” of enhanced credibility on the company (or words to that effect). This result heightened ATP staff’s awareness of the prestige value of the award, apart from the funding itself. It also had implications for the method of announcing awards and other operational aspects of the program. The results of this open-ended line of questioning clearly demonstrated the advantage of taking an exploratory approach to the survey method in the early stages of a program’s evaluation.
According to the contractor, respondents to the telephone survey tended to be emphatic in indicating when they thought a line of questioning was premature. They elaborated on areas of particular interest to them. Overall, they were described as willing and forthcoming in their discussions with the interviewer. There was 100% participation by the targeted respondents. 121
The resulting interview data showed some of the anticipated effects to be strong factors, others weak, and some too early to tell. Table 5–2 lists effects of the program identified by this first survey. The last two effects listed were those raised by respondents but not anticipated by the researcher. This first survey produced findings of considerable value to ATP managers. It showed that the projects were headed in the intended direction. It helped program evaluators and administrators better understand the types of impact the program was having and the relative timing of these effects. It identified some unexpected effects of a special nature, including the “halo effect.” The first survey also indicated which data were relatively easy or difficult to obtain, and areas of inquiry to which the companies were particularly sensitive. The feedback was invaluable in structuring ATP’s nascent evaluation program and, more specifically, in designing future surveys.
Source: Solomon Associates, The Advanced Technology Program, An Assessment of Short-Term Impacts: First Competition Participants, 1993.
Extending and Deepening Survey Data on Program Effects
In 1993, three years after the program began, ATP initiated a much larger project survey that covered all 125 companies and consortia participating in the 60 projects funded from 1990 through 1992. The background obtained from the first survey, described in the previous section, offered insights for designing the larger one.
Several options were considered in planning the second survey study. These options represented a tradeoff between survey continuity and adapting the survey to changing opportunities and issues. One option was to take a panel approach, whereby the original interview guide would have been used to re-interview the original group of respondents later in the project life cycles. This approach would have facilitated tracking emerging effects of the group of 26 companies in the original survey over the subsequent two years, but it would have been limited to this small, original group and the identical questions. Another option was to apply the original interview guide to a different group of respondents after one year of funding for comparability. This approach would have increased the size of the sample and allowed the comparison of progress for two different groups of funded companies at the same time in their funding histories. The results would have helped determine how representative the results of the first survey were, but it would not have allowed ATP to enlarge the scope of its inquiry.
Yet another option—the one taken—was to develop a new survey instrument and administer it to all projects that had received funding for at least six months. This approach had the advantages of allowing more extensive and in-depth coverage of a much larger sample. In addition to probing in greater depth each area of potential impact identified by the first survey, the second survey would explicitly ask about possible negative impacts. It would explore business goals, plans to commercialize new technologies, and progress towards commercialization. It would allow a comparison of progress over time, and would support analysis by size of company and by type of project (single company applicant versus joint venture). This latter survey approach was taken because the demand for quantitative information about the program’s performance as a whole had intensified by late 1993, particularly in light of discussions about possible plans to increase its size substantially. 122 Furthermore, ATP was approaching the due date for a mandatory report to Congress on its achievements to date, and needed additional material for the report. 123
The average length of the projects in the population to be surveyed was three years. A few of the first-funded projects had reached the end of ATP funding. Nearly half of the 60 projects surveyed had completed at least 50% of their research goals, while another 40% had completed 75% or more. Yet nearly all of the companies were still in the process of carrying out research to develop their technologies. Hence, the focus of the survey was still on early effects and not on ultimate, longer-term economic impacts. NIST statisticians were consulted on sampling strategy. Their advice to ATP was to survey 100% of the projects, and to include as many participating organizations as possible. This approach was recommended in order to allow the data to be analyzed in a variety of ways while maintaining sufficient responses for statistical significance.
ATP engaged an independent contractor to carry out the second survey. Again, it was decided that the survey would be conducted by telephone interview with a single, qualified analyst, Bohne Silber, conducting all of the interviews. 124 This second survey used a detailed questionnaire, consisting mainly of closed-ended questions in contrast to the emphasis on open-ended questions in the first survey. The contractor worked closely with ATP staff to obtain background on the program and to develop the questionnaire, which was pre-tested and revised several times before it became final. Two versions of it were created to decrease the time burden: a “long form” to be administered to all single company applicants and to joint venture participants who indicated progress in commercializing results from their ATP projects, and a “short form,” without the questions on commercialization and business, to be administered to joint venture participants who indicated no current involvement in commercialization efforts. Wherever possible, conditional questions were used that would screen out a number of subsequent questions and branch to another part of the questionnaire. For example, if the interviewer received a negative response to the question “Have you engaged in collaborative relationships?” then none of the subsequent questions on collaboration would be asked. The questionnaire in its long form contained 134 questions and took about 90 minutes to administer if all the questions were applicable. The short form took 70–75 minutes if all the questions were applicable. In most cases not all the questions were applicable.
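The conditional screening described above is ordinary questionnaire skip logic: a negative answer to a gateway question screens out the entire dependent block. A minimal sketch of that mechanism follows; the question ids, wording, and blocks here are hypothetical illustrations, not the actual Silber instrument.

```python
# Sketch of questionnaire skip logic: a "No" answer to a gateway
# question screens out the block of follow-up questions it governs.
# Question ids, wording, and block names are hypothetical.

QUESTIONNAIRE = [
    {"id": "Q10", "text": "Have you engaged in collaborative relationships?",
     "gateway_for": "collaboration"},
    {"id": "Q11", "text": "How well is the collaboration working?",
     "block": "collaboration"},
    {"id": "Q12", "text": "What role did ATP play in forming it?",
     "block": "collaboration"},
    {"id": "Q20", "text": "Have you begun commercialization planning?",
     "gateway_for": "commercialization"},
    {"id": "Q21", "text": "Which market segments are targeted?",
     "block": "commercialization"},
]

def administer(answers):
    """Walk the questionnaire in order, skipping any block whose
    gateway question was answered 'No'. Returns the ids actually asked."""
    skipped_blocks = set()
    asked = []
    for q in QUESTIONNAIRE:
        if q.get("block") in skipped_blocks:
            continue  # screened out by an earlier gateway answer
        asked.append(q["id"])
        gateway = q.get("gateway_for")
        if gateway and answers.get(q["id"]) == "No":
            skipped_blocks.add(gateway)
    return asked

# A respondent with no collaborations skips Q11 and Q12 entirely.
print(administer({"Q10": "No", "Q20": "Yes"}))   # → ['Q10', 'Q20', 'Q21']
```

Branching of this kind is why the 134-question long form rarely took the full 90 minutes: in practice, each respondent saw only the subset of questions applicable to their situation.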
Respondents were informed in advance that their responses were confidential and would not be revealed to ATP staff. The purpose was to encourage the company representatives, most of whose companies were still receiving funding from ATP, to be candid in their responses. A disadvantage of this approach was that ATP did not receive the data files and had to go back to the contractor for subsequent analysis of the data.
The survey collected background information on each organization and project. It asked if any positive or negative effects on the environment, health, or safety were expected, since the detailed questions focused on direct economic effects. 125 It asked the respondent to classify the long-run expected technical outcome of the project in terms of a new or improved product, service, or process. It also asked the respondent to classify the level of progress toward accomplishing their R&D plan. Because assessment of projects’ technical achievements was otherwise disaggregated in ATP and held in the files of individual ATP technical program managers, survey statistics provided the only consolidated measure of technical progress for most of the program’s first decade. 126
The survey focused, in part, on respondents’ plans to commercialize their developing technology. The inquiry attempted to identify any emergent commercial applications of the technology that were not in the original proposal. New applications are significant because they indicate that a funded technology is enabling a variety of commercial activities. The inquiry also probed the companies’ progress in bringing their technology to market in a specific application. To illustrate the level of detail, Table 5–3 gives an example of a series of commercialization questions taken from the Silber survey questionnaire. As would be expected, companies farther along in their research projects were more likely to exhibit commercial activity.
Another focus was on capturing information that would suggest the generation of knowledge spillover benefits. Table 5–4 illustrates several questions relating to that topic.
Collaborative relationships were another featured topic on the questionnaire. The intention was to discover the extent and purpose of collaboration, the role played by ATP, how well the collaborative relationships were working, and who was collaborating with whom. Table 5–5 gives examples of questions related to collaboration.
In addition to the topics of Tables 5–3 to 5–5, the survey covered employment effects, competitive standing, attraction of additional funding, and leveraging of research funds. This survey remained a primary source of performance data for ATP for two years. The survey questionnaire, which is available to the public, also provided the basis for developing ATP’s next major survey tool.
Establishing Routine Project Reporting by Electronic Survey
By the mid-1990s, ATP decided to survey all organizations participating in its funded projects on a regular basis. It decided to replace the occasional telephone survey administered by an outside contractor with an electronic survey administered by ATP staff. ATP staff further refined and substantially extended the previously used questionnaire to better track the progress of individual projects towards delivering economic benefits, and to collect data for reporting under the Government Performance and Results Act of 1993 (GPRA).
Source: Silber & Associates, Survey of Advanced Technology Program 1990–1992 Awardees: Company Opinion About the ATP and its Early Effects, 1996, Survey Questionnaire, Appendix A.
QUESTION: Do you disseminate or plan to disseminate non-proprietary information—again, we’re speaking about your work that’s not confidential—about the ATP-funded technology through any of the following:
Source: Silber & Associates, Survey of Advanced Technology Program 1990–1992 Awardees: Company Opinion About the ATP and its Early Effects, 1996, Survey Questionnaire, Appendix A.
Several advantages were seen in having ATP administer and manage these next surveys of project participants. ATP staff are under strong legal requirements to protect the proprietary and confidential information of applicants and award recipients and to abide by nondisclosure rules. The companies are accustomed to interacting with ATP staff, and direct administration would eliminate the staff time otherwise needed to coordinate interactions between an outside contractor and the companies. Most important, maintaining internal control of the survey would make it easier for ATP to construct an integrated set of databases supporting comprehensive statistical analyses of all participants in all projects—an important tool of project and program management. Internal control over project data would also give ATP more flexibility in generating a variety of analytical reports on a fast turnaround basis to respond to specific stakeholders’ requests.
QUESTION: For each of the following types of collaborative relationships you have had, tell me how well overall the collaboration is working:
Source: Silber & Associates, Survey of Advanced Technology Program 1990–1992 Awardees: Company Opinion About the ATP and its Early Effects, 1996, Survey Questionnaire, Appendix A.
The ATP-administered, electronic survey of project participants is known as the “Business Reporting System” (BRS). 127 It covers all projects from 1993 forward, and pushed beyond the Silber survey in gathering data on technical and economic progress. Its coverage extends not only to the lead companies, but also to other joint venture participants, universities, and not-for-profit organizations. Until recently, the BRS was administered using customized diskettes mailed from ATP to the respondents, completed by them, and returned to ATP for downloading into the BRS database. As its first decade ended, the BRS was largely converted to a web-based system.
To help ATP establish a baseline, project participants are required to report at the outset of their project their planned application areas for the technology and their strategies for commercialization. At the end of each year, participants report on progress toward implementing their commercialization strategies, short-term economic impacts of their projects, and any changes in plans. At the conclusion of their projects, they report on their overall accomplishments. During the post-project period, they report three additional times—every other year over six years, according to the “Terms and Conditions” of their cooperative agreement. Extending the survey six years beyond project end is ambitious because the difficulty in tracking project effects tends to increase over time. Difficulties may stem from personnel changes within the firm, dimming memories about the ATP-funded part of the effort, transfer of developing technology to other parts of a company or to collaborators who may not know or care about the ATP-source of research funding, other sources of funding becoming predominant, mergers and acquisitions, and a blending of the ATP-supported technology with other technologies that blurs its ATP origins.
Over time, the emphasis in the BRS reports shifts increasingly from early indicators of progress to economic impacts. Table 5–6 summarizes the several parts of the BRS.
The resulting database is a unique management, policy, and evaluation research tool. It captures the linkage of technologies under development to applications in numerous industry sectors. It allows ATP to see major tendencies and emerging trends across its portfolio of projects. The data can support varied analyses by industry sector, technology area, geographical location, funding year, collaborative relationships, type of ATP competition, organizational size and type, and other characteristics.
Since the beginning of the program, for example, small business advocates have worried that small companies might not fare well in the program relative to larger companies. BRS data allow evaluators to compare small and larger firms with respect to their participation, strategies, and commercial progress. 128
Source: Powell, “The ATP’s Business Reporting System: A Tool for Economic Evaluation,” 1996.
Table 5–7 lists some of the dimensions on which Powell has compared small and large firms. She computed Z-test statistics to determine the significance of the statistical difference between the two groups. Findings, discussed in Chapter 9, suggest that small companies are thriving in the program.
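A Z-test of this kind typically compares the proportion of firms in each size class reporting a given outcome. The sketch below shows the standard pooled two-proportion form; the counts are hypothetical, chosen only to illustrate the calculation, and are not Powell’s data.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z statistic for the difference between two independent proportions,
    using the pooled-variance form of the two-sample test."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 40 of 60 small firms vs. 30 of 60 large firms
# report commercialization progress. |z| > 1.96 would indicate a
# difference significant at the 5% level (two-tailed).
z = two_proportion_z(40, 60, 30, 60)
print(round(z, 2))   # → 1.85
```

With these illustrative counts the difference falls just short of the conventional 5% threshold, which shows why sample size matters for the subgroup comparisons the NIST statisticians had in mind when recommending a 100% survey.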
Because of the confidential and proprietary nature of much of these data, ATP publishes results in aggregate form only.
Source: Powell, “Business Planning and Progress of Small Firms Engaged in Technology Development Through the Advanced Technology Program,” 1999.
Soliciting Feedback by Survey on Customer Satisfaction and Marketing Issues
In addition to providing statistics that describe program effects, surveys can be used to assess customer satisfaction and to address marketing issues. Applied to a public program such as ATP, customer satisfaction means determining how well the relationship is working between the program and the organizations with which it directly interfaces in carrying out its mission. In ATP’s case, the organizations it funds may be thought of as counterparts to a business firm’s customers.
The use of the term “customer satisfaction” in this context does not imply that the funded businesses are ATP’s ultimate customer. As a public program, U.S. citizens are ATP’s ultimate customers. But as a counterpart to business use, the funded organizations are those with whom ATP deals directly, and the success of those interactions affects the success of the program. In the case of a public program, which by definition is operating outside private markets, “marketing issues” refers to questions about how the services offered by the program become known by, and are perceived by, the public.
Soliciting early feedback from customers is particularly important when a new endeavor creates many new sets of relationships. It is preferable to find out as soon as possible how well the relationships are working, rather than simply to assume they are working well. But even though program administrators want to know how they are doing in these new relationships, they are not immune to disliking criticism and reacting defensively to it. For this reason a third-party assessment, with full confidentiality for respondents, is essential.
ATP’s published customer satisfaction survey met these tests. It was conducted as part of the second contractor survey, and the results were included in the larger report. 129 Questions were asked about the resources and personnel of NIST technical support, the ATP professional staff, and all aspects of the program over which ATP has control, including the solicitation of proposals, review and evaluation of proposals, selection of award recipients, and project monitoring.
Respondents were also encouraged to give specific comments regarding their views about working with ATP. 130
A marketing issue was also addressed by this survey. Respondents were asked how they learned about ATP. The objective was to find out which outreach efforts were most effective. Several other marketing issues have been addressed by other surveys.
After its first competition, ATP received complaints from several companies that the cost of writing a winning proposal was too high. In its first survey ATP included a question about cost of proposal preparation in order to examine this complaint. 131 Survey findings indicated that winning proposals ranged in preparation costs from a low of several thousand dollars to a high of several hundred thousand dollars. The fact that proposals with low preparation costs were able to win awards supported ATP’s contention that proposal content—not preparation cost—mattered most. At the same time, the perception of prospective applicants about preparation costs is likely an important factor in shaping their decisions on whether or not to submit proposals.
The question of proposal cost was re-surveyed nearly 10 years later by Feldman and Kelley, 132 with much the same result: Reported preparation costs were extremely variable across winning proposals, ranging again from several thousand dollars to several hundred thousand dollars. 133
The survey by Feldman and Kelley addressed two other issues important from a program marketing perspective: (1) Do the applicants, regardless of the outcome, consider the ATP review and selection process fair; and (2) do they plan on applying to ATP again in the future. 134 The majority, whether they received an award or not, viewed the process as fair, and the majority planned on applying to the program again.
Using Survey for Case Study and to Address Research Questions
The previous examples of the survey method focused on providing aggregate descriptive statistics for a program overall. But the survey method can also be used to collect data for an individual project case or to investigate particular research questions. This section illustrates the varied use of the survey method in evaluation research, using examples from ATP.
Surveying Joint Venture Members to Compile Case Study Data
The survey method can be used to gather data in support of other studies. In fact, in the first case study of an ATP joint venture project, Professor Albert Link of the University of North Carolina at Greensboro used a survey to collect economic data from participants. 135 The survey targeted participants in the Printed Wiring Board (PWB) research joint venture, a five-year effort aimed at a turnaround in an industry sector in sharp decline. To establish a lower-bound estimate of the economic value achieved by the joint venture, 136 the survey examined a subset of project tasks that participants said they would have started even in the absence of ATP support, thereby introducing a counterfactual element. All members of the joint venture completed the survey for each of five major research groups.
The survey had three parts. In the first part of the survey, the counterfactual part, Link asked joint venture members to quantify, by project task, a number of related metrics comparing the actual project technological state to a hypothesized technological state that would have existed at the same time in the absence of ATP’s financial support of the joint venture. From the results, he identified cost and time savings based on those research tasks the members thought they would eventually have done without ATP, and he separated out the tasks they otherwise would not have done at all.
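The lower-bound logic of the counterfactual part can be illustrated numerically. In the sketch below, savings are credited only for tasks that members said they would eventually have done without ATP; tasks they would never have undertaken are excluded from the lower bound. The task names and figures are entirely hypothetical, not Link’s data.

```python
# Sketch of a lower-bound counterfactual calculation: only tasks the
# joint venture members said they would eventually have done without
# ATP contribute cost and schedule savings to the lower bound.
# All task names and figures below are hypothetical.

tasks = [
    # (task, would_have_done_anyway, cost_savings_usd, months_accelerated)
    ("surface finishes",  True,  1_200_000, 9),
    ("imaging materials", True,    800_000, 6),
    ("novel substrates",  False, 2_500_000, 0),  # would not have been done at all
]

# Cost savings counted toward the lower bound: anyway-tasks only.
lower_bound_cost = sum(cost for _, anyway, cost, _ in tasks if anyway)

# Largest schedule acceleration among the anyway-tasks.
max_acceleration = max((m for _, anyway, _, m in tasks if anyway), default=0)

print(lower_bound_cost)   # → 2000000
print(max_acceleration)   # → 9
```

The exclusion of "would not have done at all" tasks is what makes the estimate conservative: any value those tasks generated is real but is deliberately left out of the lower bound.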
In the second part of the survey, Link collected technology diffusion information: number of papers presented; number of conferences attended in which joint venture members talked about the project’s activities; and percentage of PWB supplier industry with which members interfaced in conjunction with the project. In the third part of the survey, Link collected information about changes in international competitiveness that members believed were linked to the PWB joint venture. He looked at changes in the companies’ share of each market segment due to their participation in the project, and changes in the United States PWB industry’s share in world markets due to accomplishments of the PWB joint venture as a whole. Survey questions regarding the competitive position of member companies are shown in Table 5–8.
As a result of my company’s involvement in the PWB program, my company’s share of each of the following segments of the PWB market has (choose one for each applicable market segment): increased; stayed the same; decreased; no opinion.
Source: Link, Advanced Technology Program: Early Stage Impacts of the Printed Wiring Board Research Joint Venture, Assessed at Project End, 1997, p. 43.
Using Survey to Explore Research Questions
The survey method may be applied to individual research questions. For ATP, a question of central importance is what difference ATP makes for the projects it funds. Or, expressed counterfactually, what would have happened had there been no ATP. These questions are fundamental to both the politics and economics of the program.
As indicated previously, establishing the fact that the larger portfolio of ATP-funded projects has had a substantial positive impact is a necessary but insufficient argument for the program’s existence. Both economists and politicians want to know how much of the impact is attributable to ATP. Multiple evaluation methods have been used to answer this question, and prominent among them was the survey method. An account of how the survey method has been used to tackle this difficult question is instructive to the field of program evaluation. 137 It is a story that demonstrates increasing sophistication in both the questions asked and the efforts to answer them.
Early Surveys Address Counterfactual Question
In accordance with good evaluation practice, both of the early contractor surveys of ATP project participants attempted to isolate the effects attributable to ATP. The first survey 138 simply asked participants “with what likelihood would your organization have pursued the development of your technology, without the ATP award.” If the response indicated they likely would have pursued the technology development, a follow-on question was asked: “Without the ATP award, would you have pursued it at about the same level of effort, with the same ultimate goal?” Almost all the program participants responded either that they would not have pursued their technology development projects at all without ATP, or that they would not have done so with the same level of effort or with the same goal. Anecdotal information was also solicited from respondents about how the projects would have differed without ATP. These findings conform to ATP’s expectations, since ATP’s selection process was geared not to fund projects that were expected to proceed in the same way without ATP funding. 139
The second survey 140 probed deeper to determine the effects attributable to ATP. Table 5–9 lists some of the companies, with their sequence numbers in the survey for reference. A combination of open-ended and closed-ended questions was used, and similar questions were approached in several different ways. Results from both of these earlier surveys supported the conclusion that ATP made an important difference in the timing, resources, and level of risk exposure of participating companies. The effects of ATP as indicated by the responses to these counterfactual questions appeared totally consistent with its mission. But opponents continued to charge that the effects would have happened anyway. To better respond to stakeholders, particularly skeptics, ATP’s evaluation efforts continued to pursue the counterfactual question with more diligence.
BRS Survey Regularly Probes Counterfactual Question
When ATP implemented the BRS electronic survey system in 1993, the question of ATP’s impact—apart from the impact of the projects themselves—was high on the list of evaluation study topics. The BRS survey adopted many of the questions in the Silber survey and added more on timing, resource commitment, and risk acceptance. Indeed, counterfactual elements were incorporated throughout the BRS survey. The objective was to test in multiple ways whether ATP funds were leveraging private investment dollars, or substituting for them; whether ATP was encouraging companies to take on more technically challenging projects than they otherwise would; and whether it was accelerating technology development.
The ongoing BRS survey substantiated and extended the results of the earlier surveys, providing evidence that ATP accelerated the participants’ technology development, enabled them to take on high-risk R&D, and stimulated them to spend more of their own funds on R&D than they would otherwise have invested.
Still, opponents of the program continued to assert that ATP simply substituted public dollars for private R&D dollars and did not cause the effects found by the evaluation studies. This view persisted in some quarters despite a 1996 survey by the General Accounting Office (GAO) of “near winners,” 141 which found that about half had discontinued their projects altogether after failing to win an ATP award, while nearly all that continued did so at a reduced level of activity.
This finding was consistent with the finding of the earlier surveys that ATP was either enabling projects to start that otherwise would not have started, or accelerating or expanding the scale or scope of those that would have started—effects fully in keeping with its legislated mission. Feldman and Kelley confirmed these findings in a later survey. 142
Focused Surveys Seek Details on ATP’s Effect
In 1996, a study was launched that used the survey method to explore the details of how and why ATP might accelerate technology development and commercialization, what this acceleration might be worth to companies, and whether there were effects on timing that extended beyond the ATP project. The survey, carried out as a doctoral dissertation, 143 used structured telephone interviews with company participants in 28 projects funded by ATP in 1991. All of the questions centered on the companies’ applied research and technology development cycle time. The purpose was to see if more light could be shed on ATP’s purported effect on technology development time.
The survey found that reducing applied research cycle time was important to the participating companies. Participants gave concrete reasons why reducing research time mattered; the median time savings attributed to ATP participation was 50%, or three years; and for most of the companies, savings in research time translated into faster time to market. Most of the respondents were able to give a quantitative or qualitative ballpark estimate of the value of acceleration, with a median value in the millions of dollars for each year saved.
The responses were compatible with ATP having a leveraging rather than a substitutive effect. Respondents listed five major ways participating in the program helped them reduce cycle time, one of which was ATP funding. They explained how some of these effects carried over to benefit other non-ATP technology development projects. The findings confirmed and extended previous survey findings and provided further evidence that ATP was making a difference. However, these findings remained insufficient proof for some.
A later survey focused on universities as research partners in ATP projects. 144 The study used a random stratified sampling process to identify 54 companies for the survey. Category-specific survey instruments were faxed to each respondent, with multiple follow-ups by telephone to increase the response rate. The sample included joint ventures and single applicants, each with and without university involvement.
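The stratified sampling design described above can be illustrated with a minimal sketch. The four strata follow the text (joint ventures and single applicants, each with and without university involvement), but the population frame, stratum allocations, and field names below are invented for illustration; only the total of 54 companies comes from the study.

```python
import random

def stratified_sample(population, stratum_of, sizes, seed=0):
    """Draw a simple random sample from each stratum.

    population: list of units (e.g., company records)
    stratum_of: function mapping a unit to its stratum label
    sizes: dict mapping stratum label -> number of units to draw
    """
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    strata = {}
    for unit in population:
        strata.setdefault(stratum_of(unit), []).append(unit)
    sample = []
    for stratum, n in sizes.items():
        sample.extend(rng.sample(strata[stratum], n))
    return sample

# Hypothetical sampling frame: 200 companies, classified as joint
# venture vs. single applicant, with or without a university partner.
companies = [
    {"id": i, "jv": i % 2 == 0, "univ": i % 3 == 0}
    for i in range(200)
]
label = lambda c: ("JV" if c["jv"] else "single",
                   "univ" if c["univ"] else "no-univ")
# Illustrative per-stratum allocations summing to the study's 54.
sizes = {("JV", "univ"): 14, ("JV", "no-univ"): 13,
         ("single", "univ"): 14, ("single", "no-univ"): 13}
picked = stratified_sample(companies, label, sizes)
print(len(picked))  # 54 companies in total
```

Stratifying before drawing guarantees each of the four applicant categories is represented in the sample, which a simple random draw of 54 from the whole frame would not.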
Recent Surveys Use Randomly Drawn Control Groups to Strengthen Tests of ATP’s Effect
The Feldman and Kelley survey of large random samples of winners and non-winners from ATP’s 1998 competition, 145 discussed earlier in the chapter, also investigated the question, “Does ATP funding make any difference?” 146 Rather than ask winners what they would have done without the ATP award, this survey asked a randomly selected control group of non-winners, one year after they had proposed to ATP, whether they had proceeded with their proposed projects and, if so, how the scale of work compared with what they had proposed to ATP. The sample groups included 119 award winners and 122 non-winners. The investigators used independent sources to verify survey responses concerning employment, financing, and the founding date of the company, and matched survey records with data from the proposals and each firm’s prior history of applications and awards. The survey found that most non-winners did not proceed with their R&D plans and that most of those who did proceed pursued the project at a smaller scale than they had proposed to ATP. In addition, the survey found differences in the behavior of winners and non-winners in their propensity to share knowledge and in their ability to raise funding from other sources. It confirmed the existence of, and provided a measure of, the “halo effect” identified by participants 10 years earlier in ATP’s first survey as one of the most important effects of ATP.
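In its simplest form, a winner/non-winner comparison of this kind reduces to a test on the difference between two proportions. The sketch below uses the survey's group sizes (119 winners, 122 non-winners) but entirely invented outcome counts; it shows the mechanics of the comparison, not the study's actual results.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z statistic for the difference between two sample proportions,
    using the pooled estimate for the standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented counts of projects that "proceeded at or near the proposed
# scale": 110 of 119 winners vs. 45 of 122 non-winners (hypothetical).
z = two_proportion_z(110, 119, 45, 122)
print(round(z, 2))
# A |z| above 2.58 would indicate a difference significant at the
# 1% level, i.e., behavior of winners and non-winners truly differs.
```

Because the control group was randomly drawn from the same applicant pool, a significant difference here can be read as an effect of the award itself rather than of pre-existing differences between the groups.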
ATP’s most recent survey, under development at the time of this study, takes the investigation of differences between winners and non-winners yet a step further. 147 It is to focus on the 2000 competition and to capture what is different between ATP projects and other R&D projects in the company. Like the previous survey, it also seeks to capture differences between winners and non-winners in R&D project characteristics, R&D financing, and ATP’s role. By sampling the level and sources of funding support for the proposed technology before and after submittal of the ATP proposal, and by examining how winning and non-winning projects differ from other R&D projects in the proposing companies, this new survey is expected to take another major step toward defining ATP’s effect.
Summary of ATP’s Use of the Survey Method
From the program’s beginning, the survey method has played a vital role in ATP’s evaluation activities, helping to define the program’s impact and value. Surveys have been used to describe features of the entire portfolio of projects, to analyze the progress and effects of funded projects, and to identify the effects of ATP. The survey method has been used as an adjunct to other methods to compile needed data. It has served as a tool to gather feedback from ATP’s clients, and thereby shaped the marketing and operation of the program. It has been used in increasingly sophisticated ways to expand knowledge about the program and its effects, and to investigate key questions of interest to the evaluation and policy communities.
The examples presented in this chapter illustrate many characteristics of sound survey practice. To mention only a few: considerable attention has been given to the formulation of questions, the sample populations surveyed, the method of administering the survey, the choice of survey administrator, and the presentation and interpretation of results to diverse audiences. Attention has been paid to identifying appropriate individual respondents within organizations, and to the timing and sensitivity of the questions asked. Anonymity has been provided when needed to encourage greater candor. The data have been protected and used in ways that do not compromise the confidentiality of participant information. Key issues have been pursued in increasingly sophisticated ways, refining and extending previous approaches and results. Important issues have been pursued in different ways, at different times, and by different surveys to confirm and extend the knowledge base. A feedback loop has ensured that each survey builds on what was learned from previous ones, resulting in a growing expertise within the organization in the use of the survey method in support of evaluation.
The continuing evolution of ATP’s survey development illustrates how a program can broaden and deepen its use of an evaluation method over time, building on previous work as it adapts to changing issues, challenges, and emerging opportunities. The breadth of ATP’s experience suggests the survey method is a tool that has a place in every evaluation program.
120 In the early years of the program, the focus was on capturing how ATP, which featured leadership of high-risk research by for-profit companies to develop broadly enabling new commercial technologies, differed from other government research funding programs that generally featured either university research of a more basic nature or development of technologies to serve a specific mission—primarily defense and energy. Since then, the role of universities, government laboratories, and other non-profit organizations in ATP-funded projects has emerged as substantial, and attention has been devoted to them in ATP’s evaluation effort.
121 A question at the beginning of the survey was the willingness of companies to participate. ATP included in its cooperative agreements with the companies a provision that they would be expected to cooperate with ATP in evaluation studies. Factors favorable to their willing cooperation may have included the following: (1) The cooperative agreements had been negotiated in the previous year, and memory of the provisions may have been relatively fresh. (2) The agreement was for conditional cost sharing, and companies may have been more eager to please ATP than if the funding had been in the form of an outright grant. (3) Many of the companies were small, with the CEO as respondent, and the CEOs may have had a sense of public duty to respond as well as a special enthusiasm for discussing their innovations. A major survey conducted in 2000 (see Maryann Feldman and Maryellen Kelley, Winning an Award from the Advanced Technology Program: Pursuing R&D Strategies in the Public Interest and Benefiting from a Halo Effect, NISTIR 6577 (Gaithersburg, MD: National Institute of Standards and Technology, 2001)) had a lower response rate among awardees, despite researcher efforts to increase it. The difference in response rates probably reflects in part the much larger sample size of the later survey (240 projects versus 11 projects), its greater use of closed-ended questions, and the fact that it largely lacked the nurturing effect of one-on-one discussions between the researchers and the companies.
122 The Clinton Administration released a plan in 1993, titled “A Vision of Change for America,” to make ATP the flagship of an economic program that emphasized economic prosperity driven by technological advance.
123 Advanced Technology Program, Report to Congress: The Advanced Technology Program: A Progress Report on the Impacts of an Industry-Government Technology Partnership, NIST–ATP–96–2 (Gaithersburg, MD: National Institute of Standards and Technology, 1996), drew heavily on survey results as well as the results of case studies and site visits.
124 See Silber & Associates, Survey of Advanced Technology Program 1990–1992 Awardees: Company Opinion About the ATP and its Early Effects (Gaithersburg, MD: National Institute of Standards and Technology, 1996).
125 ATP defined its mission in terms of delivering broadly based net economic benefits, taking into account any related environmental, health, and safety effects, positive or negative. For example, a project that developed process technology with reduced toxicity to a large population of workers was considered to have broad-based economic benefits. In short, ATP sought to take into account, to the extent possible, all of its effects on the economy and the quality of life of U.S. citizens.
126 The economic evaluation surveys did not attempt to measure in detail the achievement of technical milestones; this duty was assigned to ATP’s technical program managers who managed the process in a distributed way. Toward the end of the program’s first decade, a database to track technical milestones of the portfolio of projects was introduced in ATP as an additional management tool.
127 Multiple reports based on BRS data have been published by Jeanne W. Powell, project manager for development of the BRS, and various coauthors. The BRS was first described by Powell in “The ATP’s Business Reporting System: A Tool for Economic Evaluation,” paper presented at the Conference on Comparative Analysis of Enterprise Data in Helsinki, Finland, 1996.
128 See Jeanne W. Powell, Business Planning and Progress of Small Firms Engaged in Technology Development through the Advanced Technology Program, NISTIR 6375 (Gaithersburg, MD: National Institute of Standards and Technology, 1999).
129 See Section 4, “Satisfaction with NIST and ATP,” in Silber & Associates, Survey of Advanced Technology Program, 1990–1992 Awardees: Company Opinion About the ATP and its Early Effects, 1996.
130 Care was taken to distinguish in the survey between those matters over which ATP has control and can change, and those outside its control, such as legal requirements for government audits and limits on the time allowed for a project to be carried out.
131 The survey conducted by Solomon Associates (published in 1993) included a few customer satisfaction and marketing questions that were delivered to ATP informally in a memo and not included in the published report on impact assessment.
132 See Feldman and Kelley, Winning an Award from the Advanced Technology Program: Pursuing R&D Strategies in the Public Interest and Benefiting from a Halo Effect, 2001, p. 23.
133 There were no data available on decisions of prospective applicants not to submit because of perceptions about the level of preparation costs needed to win an award, or on how proposal quality is affected by costs.
134 Feldman and Kelley, Winning an Award from the Advanced Technology Program: Pursuing R&D Strategies in the Public Interest and Benefiting from a Halo Effect, 2001, pp. 34–35. The main purpose of the survey was not to gather marketing data. Rather, data of relevance to program marketing were compiled in conjunction with testing the proposition that an ATP award certifies quality, resulting in a reputational or halo effect.
135 See A. N. Link, Advanced Technology Program: Early Stage Impacts of the Printed Wiring Board Research Joint Venture, Assessed at Project End, NIST GCR 97–722 (Gaithersburg, MD: National Institute of Standards and Technology, 1997).
136 No attempt was made to estimate the aggregate value of impacts from adoption of the new technology because the project was only just concluding and the technology just beginning to be adopted. However, early adopters provided anecdotal evidence of benefit that is included in the case study report.
137 Other methods of evaluation also investigated the “with and without ATP” question. These efforts are treated in other chapters in Part II.
138 Solomon Associates, The Advanced Technology Program, An Assessment of Short-Term Impacts: First Competition Participants, 1993, p. 13.
139 From the outset of the program, applicants were asked to explain how their work would be different with and without ATP funding. Later, this requirement was supplemented with a provision in the application kit that required applicants to document their specific search for funding prior to applying to ATP and the reasons they were turned down.
140 Silber & Associates, Survey of Advanced Technology Program 1990–1992 Awardees: Company Opinion About the ATP and its Early Effects, 1996.
141 The “near winners” group was not necessarily a sound control group because GAO did not adjust for the reason the sample projects were not selected as winners. For example, ATP may have discovered evidence during oral reviews of the semi-finalist projects that the projects would likely go forward without ATP funding.
142 See Feldman and Kelley, Winning an Award from the Advanced Technology Program: Pursuing R&D Strategies in the Public Interest and Benefiting from a Halo Effect, 2001.
143 Frances Laidlaw prepared the dissertation in partial fulfillment of a doctoral degree at George Washington University. Laidlaw also published a condensed version of her research as a NIST publication, where she served in a part-time capacity as an Industry Consultant to ATP. See Frances Jean Laidlaw, Acceleration of Technology Development by the Advanced Technology Program: The Experience of 28 Projects Funded in 1991, NISTIR 6047 (Gaithersburg, MD: National Institute of Standards and Technology, 1997).
144 Hall et al., Universities as Research Partners, 2002.
145 The 1998 group of applicants was chosen for the survey in order to time the administering of the questions effectively: soon enough that applicants would not have forgotten their plans, yet late enough to learn what they subsequently did.
146 Feldman and Kelley, Winning an Award from the Advanced Technology Program: Pursuing R&D Strategies in the Public Interest and Benefiting from a Halo Effect, 2001.
147 At the time of this report, plans were in place for the survey questionnaire to be administered by Westat, an independent research firm. The planned survey, “Survey of ATP Applicants, Year 2000,” received OMB approval to proceed. Andrew Wang, staff economist at ATP, is project manager for the study.