NIST Advanced Technology Program
NISTIR 7323 - The Determinants of Success in R&D Alliances

Part 3 - Data and Sample

The Advanced Technology Program (ATP) at the National Institute of Standards and Technology (NIST), U.S. Department of Commerce, supports technology innovation in the United States through competitive funding awards to companies pursuing high-risk R&D. A key mission of the ATP is to promote collaborative R&D in U.S. industry. Since 1990, ATP has funded over 200 research joint ventures involving almost 1,000 companies, universities, and other organizations. An ATP research "joint venture" must include at least two for-profit companies and may also include universities and other nonprofit organizations. ATP joint venture awards have no maximum amount, but project participants must provide at least 50 percent of the total project cost. ATP joint venture projects typically last three to five years. The ATP targets high-risk R&D projects with potential for broad economic impact and supports innovative technologies in any field, including biotechnology, chemistry and advanced materials, electronics, information technology, and manufacturing.

We note that ATP research "joint venture" projects are not equity joint ventures, in which partner firms create a separate legal entity (jointly owned by the parent companies) to pursue collaboration objectives. Instead, they are contractual joint ventures, or alliances, organized under a contractual agreement. For the purposes of this paper, we use the terms "R&D alliance" and "research joint venture" interchangeably to refer to these R&D project collaborations that receive funding support from the ATP.

This study of R&D alliances began with interviews of ATP program managers responsible for supervising ATP-funded research joint venture projects, followed by interviews with company participants in ATP-funded R&D alliances. The interviews investigated the determinants of success in R&D alliances in order to build internal validity for the subsequent research (Dyer and Powell, 2001).

To build external validity, we used the interview findings to inform the development of a special survey, the Survey of ATP Joint Ventures, fielded in 2003. The survey collected information at the firm level (e.g., project benefits to the respondent's company) and at the firm-dyad level (e.g., characteristics of the relationship between the respondent's company and specific partner companies participating in the same R&D project). All companies in ATP research joint venture projects funded between 1990 and 2001, with project completion by 2004, were included in the survey. Altogether, 486 companies were eligible to respond.

The survey used a mixed-mode methodology that combined an internet web survey with a follow-up telephone interview for those who did not respond to the web survey. Survey design and data collection were carried out by a leading survey research firm. Following standard survey procedures, multiple contact attempts were made to maximize response rates. An advance letter describing the purpose of the survey was mailed to each company representative. An email then provided the representative with the web survey link and login information. Reminder emails were sent to non-respondents over the course of several weeks. After eight weeks, remaining non-respondents were contacted by telephone to collect the survey data. While these telephone contacts were intended to give respondents the opportunity to complete the interview by telephone, the vast majority of the calls instead served to prompt the respondent to complete the web survey. Thus, although the survey administration was technically mixed-mode, only fourteen telephone interviews were conducted, while 383 respondents completed the web survey. The overall response rate (the proportion of survey responses relative to the within-scope population) was 89%.

The survey yielded a dataset covering 397 companies and 142 R&D alliances: at least one firm is represented from each of the 142 alliances, and for 121 of them two or more firms are represented. The survey data were supplemented by firm-level and project-level archival data, including the amount of project award funding provided by the ATP, the project participants' cost-share contributions, and other project and company descriptive information. Table 1 shows the distribution of projects and companies in the survey dataset by year of project completion. Table 2 shows the distribution by the technology area of the project.

TABLE 1 - Survey Respondents: Projects and Companies, by Year of Project Completion

Project Completion      Projects             Companies
Year                  Number   Percent    Number   Percent
1995                       4        3%        11        3%
1996                       6        4%        11        3%
1997                      13        9%        29        7%
1998                      14       10%        25        6%
1999                      17       12%        53       13%
2000                      30       21%       101       25%
2001                      10        7%        24        6%
2002                      18       13%        43       11%
2003                      18       13%        62       16%
2004                      12        8%        38       10%
Total                    142      100%       397      100%

Source: Advanced Technology Program, Survey of ATP Joint Ventures

TABLE 2 - Survey Respondents: Projects and Companies, by Technology Area of Project

Technology Area           Projects             Companies
                        Number   Percent    Number   Percent
Biotechnology               15       11%        27        7%
Chemistry/Materials         43       30%       102       26%
Electronics                 41       29%       106       27%
Information Technology      17       12%        38        9%
Manufacturing (Discrete)    26       18%       124       31%
Total                      142      100%       397      100%

Source: Advanced Technology Program, Survey of ATP Joint Ventures

Analysis of Outcomes: Firm-level or Alliance-level

For 121 R&D alliances, we have survey responses from two or more companies participating in the alliance. We can therefore address an issue that remains unresolved in the alliance literature: should alliance outcomes be assessed at the firm level or at the alliance level? Anderson (1990) argues that joint ventures should be evaluated as independent entities seeking to maximize their own rather than their parents' performance, in order to minimize parent politics and parochial viewpoints, foster harmony among parents, and facilitate learning and innovation. Anderson's argument suggests that the appropriate level of analysis for studying alliance performance is the alliance level. Glaister and Buckley (1998) criticize Anderson's perspective as naive and impractical because alliances are embedded within their parents' alliance networks and thus politically inseparable from the power structure of those networks. These opposing views reflect a broader disagreement in the alliance literature over whether the appropriate level of analysis is the alliance or the participating companies.

The typical approach to operationalizing alliance performance is to equate the company and alliance levels of analysis by viewing alliance performance from the vantage point of a single participating company. For example, Arino (2003) collected responses from 83 Spanish companies participating in alliances in order to evaluate the construct validity of measures of alliance performance. Although responses from more than one participating company were received for four alliances, Arino (2003) randomly dropped company responses to keep only one response per alliance, thereby equating the company and alliance levels of analysis. Geringer and Hebert (1991) justify their use of a single response per alliance by arguing that participants are aware of their collaborators' assessments of the alliance, so those assessments are in effect incorporated into their own. While a single response per alliance is generally accepted in alliance studies, in part because of the difficulty of obtaining multiple responses, there are clearly potential problems with viewing alliance performance from the perspective of a single participating company. In particular, this approach is problematic wherever Geringer and Hebert's (1991) assertion about the interdependence of alliance partners' performance assessments does not hold.

Our data enable us to assess empirically whether companies participating in the same alliance make similar assessments of alliance performance. If responses are sufficiently similar, they can be aggregated to form an alliance-level construct; if not, the appropriate level of analysis is that of the participating companies. Procedures used to assess whether individual data can be aggregated into group measures (Ostroff, 1993; James, 1982) can be applied to responses from companies participating in the same alliance. Intraclass correlation coefficients (ICC) provide a means of making this assessment (Bliese, 2000). ICC(1) estimates the proportion of variance accounted for by membership in a particular group, in our analysis a particular alliance. An ICC(1) of 1 would indicate perfect agreement among firms in an alliance. Our data yield an ICC(1) of 0.12. While there is no standard cutoff for ICC(1), a value of 0.12 indicates a low level of agreement. ICC(2) estimates the reliability of group (alliance) means; the ICC(2) for our dataset is 0.28, well below the commonly accepted cutoff of 0.70. These low values indicate that aggregation would be inappropriate for our data and that the appropriate unit of analysis for our study is the participating companies rather than the alliance.
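Both statistics can be derived from a one-way ANOVA over per-alliance responses. The sketch below follows the standard formulas summarized in Bliese (2000), using the average group size for unbalanced groups; the ratings and the function name are purely illustrative and are not drawn from the study's data.

```python
import numpy as np

def icc1_icc2(groups):
    """One-way ANOVA intraclass correlations (per Bliese, 2000).

    groups: list of 1-D arrays, each holding the performance ratings
    given by the firms in one alliance (hypothetical data).
    """
    n_groups = len(groups)
    n_total = sum(len(g) for g in groups)
    k = np.mean([len(g) for g in groups])          # average group size
    grand = np.mean(np.concatenate(groups))
    # Between-group and within-group mean squares
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    ms_between = ss_between / (n_groups - 1)
    ms_within = ss_within / (n_total - n_groups)
    # ICC(1): proportion of variance due to group membership
    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    # ICC(2): reliability of the group means
    icc2 = (ms_between - ms_within) / ms_between
    return icc1, icc2

# Hypothetical ratings from firms in three alliances
alliances = [np.array([4.0, 5.0]),
             np.array([2.0, 3.0, 2.5]),
             np.array([4.5, 4.0])]
icc1, icc2 = icc1_icc2(alliances)
print(round(icc1, 2), round(icc2, 2))  # high agreement in this toy data
```

With real survey responses in place of the toy arrays, values such as the reported ICC(1) = 0.12 and ICC(2) = 0.28 would fall out of the same computation.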


Date created: August 29, 2006
Last updated: September 11, 2006
