
MEASURING THE EFFECTIVENESS OF R&D

R&D metrics continue to be an important topic for measuring the effectiveness of R&D. Practitioners share their issues and recommendations.

Lawrence Schwartz, Roger Miller, Daniel Plummer, and Alan R. Fusfeld


Lawrence Schwartz is a vice president and principal of Intellectual Assets, Inc., a California-based professional services company linking business decisions and intellectual property. His areas of technical expertise are in materials and sustainability. Previously he was vice president of strategic development for Aurigin Systems. At Raychem (Tyco), he worked for 25 years in all phases of technology management. He holds a PhD in chemistry from the University of Arizona, an MBA from San Jose State University and a BS in chemistry from San Diego State University. larryschwartz333@aol.com

Roger Miller is a founding partner of Secor, a strategy consulting firm with offices in Montreal, Toronto, New York, and Paris. He is presently a Distinguished Research Fellow at the Said Business School at Oxford University. He was previously the Jarislowsky Professor of Innovation and Project Management at Ecole Polytechnique in Montreal, Canada. rmiller@groupesecor.com

Dan Plummer is the manager of R&D for Sasol North America in Lake Charles, Louisiana, a manufacturer of surfactants, surfactant intermediates, and specialty chemicals. He has 27 years of industrial experience with Sasol North America and its predecessor companies. He has filled roles in product management, sales and marketing, quality development, and global R&D management. Dan received a BA in chemistry from Kenyon College and a PhD in inorganic chemistry from Iowa State University. dan.plummer@us.sasol.com

Alan R. Fusfeld is president and CEO of The Fusfeld Group, Inc., a consultancy practice specializing in strategic development and technology management. Formerly, he was the founder of the technology management group of Pugh-Roberts Associates, Inc., where he was also senior vice president and director. His current interests include R&D leadership, strategy for designing the future organization, R&D metrics, and portfolios. He received his B.E.S. degree in mechanics from the Johns Hopkins University and was a member of the Massachusetts Institute of Technology’s PhD program in the management of technology. www.fusfeldgroup.com ; afusfeld@aol.com

OVERVIEW: Measuring the effectiveness of R&D has been a perennial challenge. IRI’s Research-on-Research working group Measuring the Effectiveness of R&D sought to investigate how managers define R&D effectiveness and what metrics they use to measure it. Surveys and questionnaires administered to attendees at IRI meetings revealed that while the top three metrics have remained unchanged over the past 15 years, there are significant differences in the metrics used depending on industry type. The study also revealed issues with metrics in general and the need for new metrics to meet the changing R&D environment.

KEY CONCEPTS: Metrics, Technology value pyramid, Innovation games, R&D effectiveness, Research-on-research groups

The creation of a set of metrics to measure the effectiveness of R&D has been a major need for research managers for some time. In recent surveys of Industrial Research Institute (IRI) participants, the need for metrics has ranked in the top three for the past three years (Cosner 2010). The enhanced importance of reliable metrics is being driven by several forces: the need to justify the investment in R&D to senior management, the desire to improve efficiency in the use of R&D resources, and the need for a means to estimate the value of the R&D investment for the future growth of the company.

Because R&D tends to be both longer term and more subjective than a sales or manufacturing target, effective metrics must encompass the broad influence R&D has over an organization, including such diverse elements as new products, cost savings, customer support, competitive evaluations, and IP protection. The difficulty in creating metrics is further complicated by the need for changing measures of R&D effectiveness as business conditions and technology strategies change. Over the years, there have been many attempts to create metrics to quantify the success or failure of R&D organizations. The result has been a proliferation of metrics, ranging from financial measures, such as R&D spending as a percent of sales (Andrew et al. 2008, Hauser 1996), to more complex measures, such as strategic alignment (Roussel, Saad, and Erickson 1991). A 2004 study by the European Industrial Research Management Association (EIRMA) lists over 250 potential R&D metrics (EIRMA 2004).

While metrics clearly draw a high level of interest from R&D leaders, it isn’t clear whether this continued focus is because of the inadequacy of existing choices for metrics or a lack of knowledge of the topic. To address these questions, the IRI formed the Measuring the Effectiveness of R&D Research-on-Research (ROR) Working Group in 2005 as a follow-up to previous ROR groups on the topic. After a review of the literature on R&D metrics, the ROR group polled members on the metrics they currently used. The subcommittee further attempted to evaluate whether metric choices were dependent on the type or industry of the organization. Using the Technology Value Pyramid (TVP), a rediscovered and modernized model created by a 1994 ROR group, the group classified and organized existing metrics, providing a guide for choosing relevant, useful metrics, and identified gaps in available metrics.

The Technology Value Pyramid

One of the key challenges of implementing R&D metrics is matching metrics to the various levels and functions of the R&D organization, so that metrics are meaningful to the appropriate personnel. A bench scientist’s metrics, for instance, should be related to his or her accomplishments, while the CEO or general manager should have more overarching metrics related to the performance of the organization under his or her responsibility. A metric for financial return makes sense for a CEO or CTO but has little direct meaning for a bench scientist. Conversely, the number of patents issued may make sense as a measure of performance for the bench scientist but is not directly meaningful for the CEO. Categorizing metrics by their relevance to the particular components that make up the eventual value of the R&D investment addresses this by allowing metrics to be targeted to the most relevant levels of the organization. In this way, bench scientists can access relevant measures of their group’s performance, while managers get the quantitative measures they need—not just to rate the return on the R&D investment but to answer the key question, “Are we doing the right things?”

An earlier ROR group developed the TVP (Figure 1), a model that takes this approach to categorizing metrics. The TVP provides a hierarchy of metrics based on the fundamental elements of R&D value and the relationships of those elements to business results in the long and short term (Tipping, Zeffren, and Fusfeld 1995). Value creation metrics at the apex of the TVP investigate the financial returns arising from the investment in R&D. The returns are impacted by the pyramid’s next segments, portfolio assessment and integration with the business. Portfolio assessment metrics examine the distribution of the R&D investment in terms of risk, timing, and potential return. Metrics addressing integration with the business focus on the interaction of R&D and the business groups in terms of process, teamwork, and organization. The foundations of the TVP are the asset value of the technology and the practices of R&D. Asset value of technology metrics address the development of core competencies that are essential for growth and competitiveness. Metrics assessing the practices of R&D investigate the procedures, culture, and operations of the R&D organization and their ability to contribute to enhancing technology development.
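The TVP's five-level classification lends itself to a simple lookup structure. As an illustrative sketch only (the level names follow the description above; the example metrics are placeholders, not the group's official metrics dictionary):

```python
# Illustrative sketch of the TVP hierarchy as a lookup structure.
# Level names follow the text; the example metrics are placeholders,
# not entries from the official TVP metrics dictionary.
TVP_LEVELS = {
    "value_creation": ["financial return to the business"],
    "portfolio_assessment": ["projected value of the R&D pipeline"],
    "integration_with_business": ["strategic alignment with the business"],
    "asset_value_of_technology": ["strength of core competencies"],
    "practices_of_rd": ["number of technical reports produced"],
}

def level_of(metric: str) -> str:
    """Return the TVP level under which a metric is classified."""
    for level, metrics in TVP_LEVELS.items():
        if metric in metrics:
            return level
    raise KeyError(f"unclassified metric: {metric}")

print(level_of("strategic alignment with the business"))
# -> integration_with_business
```

A structure like this makes it easy to roll survey results up by level, which is how Tables 2 and 3 are organized.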




Figure 1.—The Technology Value Pyramid (TVP)



The TVP classified 33 R&D measures identified by the original ROR group. For example, financial metrics fall in the outcome and strategy tier of the pyramid, while a measure of the number of technical reports produced falls in the foundations level.1 The group then surveyed 161 IRI members about the R&D metrics they used, asking participants to rank the 33 identified metrics in order of importance. Over the years, some companies have expressed a sense that the metrics available did not adequately quantify their own corporate needs. This resulted in the creation of newer metrics (Donnelly and Fink 2000; Germeraad 2003). Some of these new metrics were eventually added to the TVP dictionary, so that the list now incorporated into the TVP includes a total of 50 metrics.

Fifteen years after the publication of the original TVP survey, a new ROR subcommittee convened to revitalize the TVP, determine whether there has been a shift in the metrics used to measure R&D, and identify any new metrics that should be added to the TVP. The group also sought to determine whether the importance of particular metrics varies by industry or innovation strategy.

New Surveys on R&D Metrics

In an effort to answer these questions, the current group, Measuring the Effectiveness of R&D, administered two surveys. The first, Survey A, was largely a repeat of the original 1994 survey; the intent of this instrument was to provide data to facilitate comparison in metrics usage between the original survey and this one and to measure changes in practice with regard to R&D metrics. Survey A was administered at IRI’s 2009 Annual Meeting; attendees were typically high-level R&D managers, vice presidents of R&D, and chief technical officers (CTOs). The survey listed the 33 metrics included in the original 1994 survey and asked respondents to rank them by order of importance in their organizations. We did not ask respondents to identify their organizations, but we did ask if they represented for-profit or not-for-profit corporations.

The second survey, Survey B, was a broader instrument that asked participants to rate the expanded list of 50 metrics on their importance to the respondents’ organizations, using a scale of 1 (no importance) to 5 (top importance). Again, organizations were not identified, but we did ask respondents to identify the industry in which they worked. The group used this data to distinguish relationships between preferred metrics and industry, company type, and innovation game. Additionally, the survey solicited open-ended comments on the value and usage of metrics and methods used to collect data. Respondents were also asked to describe metrics that they felt were needed and list metrics that their corporation was using but that were not on the list. Survey B was administered to attendees at IRI’s 2008 Member Summit; again, this group comprised high-level R&D managers, vice presidents of R&D, and CTOs.
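Turning 1-to-5 importance ratings into a per-metric ranking, overall or filtered by industry, is a simple aggregation. A minimal sketch (the responses below are invented for illustration, not actual Survey B data):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical Survey B responses: (metric, industry, rating on 1-5 scale).
responses = [
    ("financial return", "chemicals", 5),
    ("financial return", "consumer products", 4),
    ("product quality", "consumer products", 5),
    ("product quality", "chemicals", 3),
    ("strategic alignment", "chemicals", 5),
    ("strategic alignment", "consumer products", 5),
]

def mean_importance(responses, industry=None):
    """Mean rating per metric, highest first; optionally filter by industry."""
    buckets = defaultdict(list)
    for metric, ind, rating in responses:
        if industry is None or ind == industry:
            buckets[metric].append(rating)
    return sorted(((mean(r), m) for m, r in buckets.items()), reverse=True)

for score, metric in mean_importance(responses, "consumer products"):
    print(f"{metric}: {score:.1f}")
```

The same aggregation, grouped by innovation game instead of industry, underlies the comparisons reported in Table 3.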




Results

Survey A. Survey A was completed by 56 respondents from both corporate and not-for-profit organizations; the 1994 survey did not ask about for-profit versus not-for-profit status, but given that IRI’s membership did not include not-for-profit organizations at that time, the original sample most likely included only for-profit corporations.

Data from Survey A were compared to data from the 1994 administration of the same survey (Table 1). The 2009 data are sorted by for-profit or not-for-profit to allow a better comparison of the data from 1994 to 2009.



Table 1.—Summary of Survey A results: Top ten metrics in 1994 and 2009




Interestingly, the top three metrics from 1994—financial return to the business, strategic alignment with the business, and projected value of the R&D pipeline—maintained their importance for for-profit corporations in the 2009 survey, but only the strategic alignment metric was judged of high value by the not-for-profit group.

A number of metrics ranked higher in 2009 rankings than they did in 1994, including achievement of R&D pipeline objectives, quality of R&D personnel, level of business approval of projects, comparative manufacturing costs, and effectiveness of transfer to manufacturing; none of these made the top 12 list in 1994, and all did in 2009. Other metrics lost ground, including portfolio distribution of R&D projects, market share, current spending level for technology, and customer satisfaction surveys; all were in the top 10 in 1994, and none were in 2009. Metrics ranked highly by for-profit respondents to the 2009 survey revealed a focus on financial returns, strategic business alignment, and quality, while not-for-profit respondents were focused on strategic alignment, accomplishment of milestones, quality of people, portfolio distribution, and clarity of project goals. The common thread in both groups was strategic alignment.

Survey B. Survey B asked about companies’ practices with regard to R&D metrics; the survey also asked participants to rate the metrics currently included in the TVP structure. There were 52 respondents.

Questions about company practices with regard to collecting and using metrics were revealing. While a minority of respondents indicated that their companies did not collect any metrics, most companies apparently feel compelled to measure the effectiveness of their R&D efforts. The most-reported uses for R&D metrics included:

  • Providing data for project justification and decision-making processes;
  • Enabling portfolio analysis, balancing, and tracking;
  • Calculating ROI for R&D;
  • Enhancing efficiency in product development;
  • Driving performance and defining goals;
  • Ensuring that R&D is aligned to the business strategy; and
  • Benchmarking against intracompany units as well as outside companies.


About half of participants reported that their companies collect data for metrics on a quarterly basis; a quarter collect data either semiannually or annually. Most data must be collected by hand; only a small percentage of participants reported that their companies had automated data collection and metric generation processes. ROI calculations, according to respondents, require some sort of project time tracking, in addition to other data collection.
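Respondents noted that ROI calculations require project time tracking in addition to other data. A minimal sketch of how tracked hours might feed such a calculation (the hourly rate, hours, and profit figures are invented illustrations, not survey data):

```python
# Hedged sketch: ROI for a single R&D project from tracked hours.
# All inputs here are hypothetical; real calculations would also need
# overhead allocations and a defensible attribution of returns to R&D.

def rd_roi(tracked_hours: float, hourly_rate: float,
           other_costs: float, incremental_profit: float) -> float:
    """ROI = (returns - investment) / investment."""
    investment = tracked_hours * hourly_rate + other_costs
    return (incremental_profit - investment) / investment

# e.g., 2,000 tracked hours at $120/hr plus $60,000 of materials,
# against $450,000 of incremental profit attributed to the project:
roi = rd_roi(2000, 120.0, 60_000, 450_000)
print(f"ROI: {roi:.0%}")  # $300,000 invested -> prints "ROI: 50%"
```

The need for the `tracked_hours` input is exactly why respondents reported that ROI metrics cannot be produced without some form of project time tracking.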

Participants also rated the importance to their companies of each entry in the expanded list of TVP metrics (including those added after the original 1994 study), sorted by TVP level (Table 2). As in Survey A, financial return is a dominant theme, especially in the top two levels of the TVP. The top metrics for the value creation level are all financial, as are most of the top metrics at the strategy level. Other important strategy metrics include strategic alignment and probability of success. Foundation metrics are associated with quantitative measures, such as number of patents, as well as more abstract metrics associated with people development and creativity.



Table 2.—Survey B results: Top five metrics by TVP level




In an attempt to distinguish how innovation strategy influenced the choice of metrics, we analyzed results in the context of innovation strategy, using Roger Miller’s concept of “innovation games” (Miller and Floricel 2004). Miller and his colleagues identified eight distinct approaches to innovation strategy, each most likely to be used by firms in particular industrial contexts; for instance, pharmaceutical firms are likely to engage in “science-based journeys” while biotechnology firms engage in “technology races” and cellphone manufacturers engage in “battles of architectures” (Miller and Floricel 2004; Tirpak et al. 2006; Miller and Olleros 2008). Clearly, the structure of the innovation game should influence the shape and objective of the R&D effort and, hence, the metrics an organization uses to measure the effectiveness of its R&D effort.

Because of our small sample size, the original list of eight innovation games yielded data sets too small to provide meaningful information. As a result, for the purposes of analysis, the eight innovation games were consolidated into four: “new and improved” (standalone) industries, such as chemicals or petroleum; “pushing the envelope” (integrated systems) industries, such as computers, machines, aircraft, or computer chips; consumer products industries, such as food, soap, or cosmetics; and services to innovator firms, such as financial, technical, and governmental services. Representatives taking the survey from the services sector were typically from government labs, universities, and research institutes; these were largely not-for-profit organizations.




Once overall data were analyzed for responses from each of the four defined innovation games, the results revealed some interesting variations (Table 3). Financial return, for instance, is much less important in services than it is in the other groups, perhaps a reflection of the preponderance of not-for-profit organizations in this category. With reduced emphasis on financials, the foundation metrics focusing on intellectual property, R&D processes, and people development rank much higher. Product quality is much more important in consumer products than in the other groups, and the metric profile for integrated systems industries differs from the others in that pathways for exploiting technology, environmental management, process efficiency, and management systems are significantly more important than they are in other sectors.



Table 3.—Survey B results: Top metrics by innovation game




Respondents to Survey B also identified a number of metrics that they felt should be added to the 50 currently included in the TVP; these fell into four broad groups:

  • Open innovation metrics, including external vs. internal R&D, percent of projects outsourced, and external innovation;
  • Metrics to measure the effectiveness of R&D processes beyond financial, including, for instance, measures of efficiency in project management;
  • Value creation metrics for technical service and support for existing products; and
  • Not-for-profit metrics to measure the development of new capabilities, awards, professional activities, and bibliometrics.


Of particular interest were calls for additional metrics in R&D effectiveness and value creation. While the literature does describe some metrics in the area of R&D effectiveness beyond financial measures (e.g., McGrath and Romeri 1994), most companies found these existing measures inadequate. Similarly, many participants reported a need for metrics to measure the effectiveness of technical service and support groups, which are often associated with R&D budgets, in order to show their value to the company. The closest metrics currently available in this area are related to cost savings, but these do not adequately address the issue.

Respondents also identified a number of issues with existing metrics, including the difficulty of collecting data and calculating and quantifying metrics, the lack of uniform standards for different business units or global sites, and inconsistency in the application of business-specific metrics across the company. Respondents were sometimes skeptical of the value of metrics, attitudes reflected in comments that the easiest-to-measure metrics generally have the lowest value, that metrics are trailing measures that tend to focus on the short term, and that metrics tend to focus on inputs rather than outcomes. Respondents also worried that a reliance on metrics can lead to gaming the numbers to make a group look good. Respondents generally commented on the danger that metrics can influence behavior, for good or bad, and that poor choices of metrics can lead to bad business decisions. Finally, they commented on the difficulty of creating metrics that are meaningful to shareholders. And they lamented that, even with perfect metrics, doing all the right things does not guarantee success.

Discussion

The comparative analysis enabled by Survey A reveals some interesting trends, but also suggests areas where R&D focus has remained the same. Financial return, strategic alignment, and projected pipeline value were the leading metrics in 1994, and they remained so in 2009, in spite of dramatic changes in the context and understanding of R&D in the intervening decade and a half. These three metrics remain important because they reflect core sources of value. The quantitative financial measures (return and value of pipeline) indicate the absolute value of R&D in generally accepted accounting terms. Strategic alignment measures the likelihood that the R&D effort will have a significant impact on the core business. Taken together, these three measures also offer a wide perspective of measurement: financial return is a retrospective measure, while projected pipeline value and strategic alignment are forward-looking measures. Not-for-profit organizations, which do not have the direct financial criteria for success of most corporations, tend to place more importance on strategy and foundation than on value creation metrics.

Shifts in ranking may reflect a number of changes in corporate culture and the context of innovation: the effects of the recession that began in 2008, which has driven a more inward and short-term focus and increased worries about finances, cash flow, and excess spending; the shift away from total quality management (TQM), with its emphasis on quality and the voice of the customer; the increased attention to corporate governance; and the rise of global competition. Recessionary anxieties are reflected in the increased importance of financial measures, such as the value of the R&D pipeline, manufacturing costs, and transfers to manufacturing, and in the decreased significance accorded to such metrics as portfolio distribution, market share, spending on technology, and customer satisfaction. The increased importance of business approvals and transfer to manufacturing may be a reflection of a closer focus on corporate governance. The shift away from TQM as a management paradigm may have driven down the importance of customer satisfaction as an R&D metric, allowing other measures to take its place.

A third explanation for the emphasis on costs is global competition. In 1994, China was just beginning to replace some U.S. manufacturing. In today’s worldwide marketplace, low price has become one of the most important criteria for product sales. This is reflected in the increased importance of manufacturing costs and transfers to manufacturing as R&D metrics.

There were also some differences noted in the metrics preferred depending on a company’s innovation game and between corporations and not-for-profit organizations. For instance, consumer product companies tend to rely more heavily on quality metrics than other companies do, possibly because these industries lack the well-defined specifications and standards that generally guide companies working in the standalone or integrated systems areas. Companies in those areas are more likely to emphasize metrics such as number of defects, efficiency, and systems and R&D management. Consumer products companies, on the other hand, must deal in a consumer market where a contaminated or poor-quality product could generate major lawsuits, negative publicity, and ill will. Future events could influence subsequent surveys; for instance, we might expect a higher emphasis on quality metrics in the petrochemical industry in the wake of the 2010 BP oil spill in the Gulf of Mexico.

There are also discernible differences in preferred metrics between for-profit and not-for-profit organizations. Not-for-profit organizations are typically less concerned about financials and product sales; instead, they are focused on meeting goals and managing people and portfolios. The common thread linking corporations and not-for-profits is strategic alignment; both are concerned with making sure they’re working on the right projects to support the core business.




As the R&D model is changing from exclusively internal development to an open innovation perspective, metrics depicting these changes, and measuring their outcomes, are required. Metrics such as percent of new intellectual property from licensing in or percent of new business created by licensed technology have been proposed. Fine-tuning these and integrating them into the TVP are important next steps; one part of that process will be determining in which levels of the hierarchy they belong. Since R&D will continue to evolve, identifying and integrating new metrics is a continuous process. The TVP should continue to develop as the repository of these metrics, along with their definitions, recommended uses, and references.

Conclusion

Our respondents, along with the data they provided, offer some concrete advice for selecting R&D metrics. First, it is important to recognize that some metrics are more important to specific industries and innovation strategies than others. Choosing a metric, or benchmarking results, is best done within similar business types. In a similar vein, before choosing a metric, it is important to understand what the company (and its stakeholders) will do with the results. To be effective, metrics must align with the business and technical strategies as well as with the immediate goals of measurement. The number of metrics that are collected should be limited to the minimum necessary to acquire a complete picture; at the same time, a few robust and clearly measurable metrics should be selected from each portion of the TVP.

Beyond the surveys, the ROR subcommittee has successfully integrated the TVP process into the IRI website. An updated list of the metrics, with contemporary language, examples, definitions, and usage notes, has been captured to enhance future knowledge sharing. As new metrics are developed, they will need to be incorporated into the list. Ultimately, a goal would be for a user of the TVP to complete a survey and be able to compare the results with those of others playing the same innovation game.

Even as measuring R&D performance becomes more important, metrics remain difficult and sometimes elusive. IRI should continue to sponsor working groups that monitor these evolving practices, survey members, and make the results available to others. The TVP is an ideal system to support this effort.

References

Andrew, J. P., Haanæs, K., Michael, D. C., Sirkin, H. L., and Taylor, A. 2008. Measuring Innovation Survey 2008: Squandered opportunities. Boston Consulting Group. http://www.bcg.com/documents/file15302.pdf (accessed November 11, 2010).

Cosner, R. R. 2010. The Industrial Research Institute’s R&D trends forecast for 2010. Research-Technology Management 53(1): 14–22.

Donnelly, G., and Fink, R. 2000. A P&L for R&D. CFO Magazine February: 44–50.

European Industrial Research Management Association (EIRMA). 2004. Assessing R&D Effectiveness: WG62 Report. EIRMA Working Group Reports. Paris. www.eirma.org/pubs/reports/wg62.pdf (accessed April 11, 2011).

Germeraad, P. 2003. Measuring R&D in 2003. Research-Technology Management 46(6): 47–56.

Hauser, J. R. 1996. Metrics to value R&D: An annotated bibliography. Sloan Working Papers. Cambridge, MA: Sloan School of Management. Massachusetts Institute of Technology.

McGrath, M. E., and Romeri, M. N. 1994. The R&D Effectiveness Index: A metric for product development performance. Journal of Product Innovation Management 11(3): 213–220.

Miller, R., and Floricel, S. 2004. Value creation and games of innovation. Research-Technology Management 47(6): 25–37.

Miller, R., and Olleros, X. 2008. To manage innovation, learn the architecture. Research-Technology Management 51(3): 17–27.

Roussel, P. A., Saad, K. N., and Erickson, T. J. 1991. Third Generation R&D: Managing the Link to Corporate Strategy. Boston, MA: Harvard Business Press.

Tipping, J., Zeffren, E., and Fusfeld, A. 1995. Assessing the value of your technology. Research-Technology Management 38(5): 22–39.

Tirpak, T., Miller, R., Schwartz, L., and Kashdan, D. 2006. R&D structure in a changing world. Research-Technology Management 49(5): 19–26.

1The group also produced a comprehensive dictionary of the identified metrics, including for each entry its definition, a list of advantages and disadvantages, instructions for use, a catalogue of options and variations, and contacts and references for consultation. After the article was published in Research-Technology Management in 1995, the dictionary was stored on a floppy disk; over time, the dictionary fell into disuse and was largely forgotten. The new version of the TVP produced by this ROR group, and the accompanying metrics dictionary, is now available to IRI members on the IRI website.
