Improving Quality and Performance in Your Non-Profit Organization--Chapter 4

by Gary M. Grobman

Note: This chapter, by Gary M. Grobman, replaces the original chapter that appeared in this book written by Jason Saul. The rights to include the original chapter expired.  This new material was added in August 2016. 

BENCHMARKING

Beginning in the 1980s, the federal Health Care Financing Administration (HCFA) began collecting and sharing data about the mortality rates experienced by hospitals. Not surprisingly, it became public knowledge that there was considerable variation in mortality rates among hospitals. The initial reaction of some of the poor performers was predictable: blaming the high mortality rates on admitting patients with higher-acuity problems. But this clearly did not explain the disparities. Six New England hospitals formed the Northern New England Cardiovascular Disease Study Group and initiated a benchmarking program to explore whether the way in which medical care was delivered, rather than aspects of the patient mix, could explain the high mortality rates of the outliers. Within two years, mortality dropped 24% among the hospitals (Lawrence, 1999), a result attributed to a methodical analysis of the processes used in each hospital. A post-mortem of the experience provided convincing evidence that all of the hospitals, even the high-performing participants, benefitted from learning what their peer organizations did well and not so well.

What is benchmarking?

Benchmarking refers to collecting and analyzing data to determine how well a business process, policy, or program is performing, and whether modifying it based on the experience of similar organizations will improve outcomes. Xerox’s R. C. Camp, a pioneer of corporate benchmarking, defines it as “searching for the most effective methods for a given activity allowing to develop a competitive edge” (Sitko-Lutek & Cholewa-Wiktor, 2015, p. 78). Some refer to this process as finding “best practices,” which can be found both within an organization (internal benchmarking) and outside of it, from direct competitors or other organizations that perform similar functions (external benchmarking).

There are two primary forms of benchmarking—

Internal benchmarking refers to collecting and analyzing data from within the organization: tracking historical data over time, setting goals, analyzing trends, and then determining where expectations are not being met so that improvement efforts can be focused.

External benchmarking refers to collecting and analyzing data from like organizations and using their outcomes as a yardstick to ascertain where “the organization is thriving and where it lags behind” by comparison (Warady & Davis, LLP, 2012).

There are two basic forms of external benchmarking: comparing business processes using data from organizations in the same sector (“competitive benchmarking”) and using data generated by organizations in other sectors (“general benchmarking”) (Sitko-Lutek & Cholewa-Wiktor, 2015).

Benchmarking is much more than simply duplicating the successes of others—it recognizes that each organization is unique and that solutions to problems relating to business processes must be individualized (Sitko-Lutek & Cholewa-Wiktor, 2015). Common focuses of benchmarking are cost, quality, cycle-time, and productivity (Caturano & Co., n.d.). Among items included in the comparisons from a systems theory perspective are inputs, processes, outputs, and outcomes (Coombs, Geyer, & Pirkis, 2011).

As with many of the other strategies described in this book, benchmarking is data-driven. The intent is to analyze how an organization is performing a particular business process, and consider modifying it to improve the desired outcome, with ideas about how to do this generated by the experience of others who are achieving better results. In many cases, the solution to improving performance is not to have employees work harder, but to work “smarter,” a concept that seems to come up often in the change management literature.

Jason Saul, the author of the original version of this book chapter written in 1998, defines benchmarking as “a systematic, continuous process of measuring and comparing an organization’s business processes against leaders in any industry to gain insights that will help the organization take action to improve its performance” (Saul, 2004, p. 7).

In the 1990s, there was a flurry of benchmarking initiatives organized by third parties, many of them nonprofit associations, which recognized the value of establishing performance databases accessible to their members. An organization that determines from accessing one of these databases that its performance lags behind peer organizations (e.g., it is spending much more on fundraising expenses than its peers or is retaining a much lower percentage of its donors each year) can analyze whether there are reasonable explanations for this, and perhaps work cooperatively with staff of peer organizations that demonstrate high performance to determine what changes can be made for improvement.

For example, Hospitals and Health Networks (http://www.hhnmag.com/) sponsors an annual “Most Wired Survey,” a benchmarking initiative first unveiled in 2000. The sponsoring organization is affiliated with the American Hospital Association (AHA). According to the program’s website (http://www.hhnmostwired.com/aboutus/Survey-Benefits.dhtml), “Every organization that completes the survey receives comprehensive feedback on its own IT processes as well as an industry-wide benchmarking report.”

The 2016 survey focused on IT adoption, although previous surveys have covered broader topics. Any hospital or health system can participate. Participating hospitals fill out a survey providing data on various business processes, answering questions such as, “For what percentage of pharmaceutical supplies is an electronic order generated when they reach a predetermined par level?” Each May, the top 100 hospitals receive a “Most Wired” award, and each October, all survey participants receive a benchmarking report that ranks their performance against all others that participated. In this way, hospitals can determine which of their business processes have the most potential to improve, and they can take whatever steps they deem necessary to find out why particular hospitals they judge to be similar to their own are achieving better results with a given business process (Solovy, 2003; Health Care’s Most Wired, 2016).

Many of us are already familiar with databases such as those available from GuideStar (http://www.guidestar.org) and Charity Navigator (http://www.charitynavigator.org), which contain data from thousands of nonprofit organizations, gleaned from 990 tax returns. Savvy nonprofit leaders can research this substantial, publicly available financial data, choose which organizations they identify as peers, and make comparisons without leaving the comfort of their offices. Salary information, fundraising and administrative costs, and program funding are among the items required to be disclosed on 990s. Taking this one step further, many nonprofit executives are flattered to receive a call asking to discuss how their organizations are achieving exemplary results. And formal benchmarking practices take this well beyond a telephone conversation; a nonprofit may send an entire leadership team to visit a high-performing peer organization and see firsthand how it achieves such good results.
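As a simple illustration of this kind of desk research, the sketch below (in Python) compares a hypothetical organization’s cost to raise a dollar against a handful of self-selected peers, using the kinds of figures reported on Form 990. The organization names and dollar amounts are invented for the example; this is a minimal sketch, not a prescribed tool or methodology.

import statistics

# Hypothetical figures of the kind reported on Form 990: total contributions
# raised and total fundraising expenses. All names and amounts are invented.
peers = {
    "Our Organization": {"contributions": 1_200_000, "fundraising_expenses": 300_000},
    "Peer A": {"contributions": 2_500_000, "fundraising_expenses": 350_000},
    "Peer B": {"contributions": 900_000, "fundraising_expenses": 140_000},
    "Peer C": {"contributions": 1_800_000, "fundraising_expenses": 310_000},
}

def cost_to_raise_a_dollar(org: dict) -> float:
    """Fundraising expenses per dollar of contributions raised."""
    return org["fundraising_expenses"] / org["contributions"]

ratios = {name: cost_to_raise_a_dollar(data) for name, data in peers.items()}
peer_median = statistics.median(ratios.values())

# Rank from most to least efficient and show each organization's gap against
# the peer median -- the kind of disparity a benchmarking review would flag.
for name, ratio in sorted(ratios.items(), key=lambda item: item[1]):
    print(f"{name:17s} ${ratio:.2f} to raise $1 (vs. median: {ratio - peer_median:+.2f})")

Even a crude comparison like this can prompt the kind of follow-up conversation described above, although a real analysis would need to account for differences in how peer organizations allocate their costs.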

What nonprofit operations can be benchmarked?

Jason Saul (1999) refers to the three “P’s” of nonprofit benchmarking for guidance on appropriate targets for a benchmarking strategy:

  1. Processes—these are aspects of operations such as accounts payable, customer service, and recruitment of staff and volunteers
  2. Policies—examples are personnel policies, salary structures and retention policies, and those relating to staff who are voluntarily or involuntarily separated from the organization
  3. Programs—these include programs relating to the mission of the organization, in which data is collected from peer organizations to determine if there are better ways to achieve a program outcome.

What techniques are used in benchmarking?

There is no “one best way” to do benchmarking, and benchmarking is likely to be more successful when it is custom-designed to meet the needs of the organization rather than using a cookie-cutter approach—although perhaps it is possible to benchmark how to benchmark. One typical benchmarking model is provided in an article by one of the leading accounting and consulting firms, Caturano and Company (Caturano & Co., n.d.). This 6-step model involves the following:

  1. Planning, Scope and Goals. In this phase, the organization analyzes which benchmarking projects to pursue, conducts a cost-benefit analysis, determines what metrics will be collected, chooses which staff will participate, and generally pre-plans what is expected to occur.
  2. Data Collection. In this phase, quantitative data is collected for each business process targeted for study in step 1, including who is involved in the process, when it is performed, and how it is performed.
  3. Data Analysis. This phase involves comparing the data collected against peer benchmarks, deciding where performance lags peer organizations the most, and setting priorities so that changes are targeted where they will yield the maximum economic benefit. (A simple illustration of this step appears after this list.)
  4. Action Plan Development. In this phase, an action plan, effectively a road map to achieving desired ends, is created to set objectives and provide a path to achieve goals during a particular time period.
  5. Action Plan Implementation. Beyond simply carrying out what was developed in the plan, the firm also stresses the importance of clear and effective communication with all stakeholders to minimize unnecessary resistance to the proposed changes.
  6. Review and Calibration. This involves reviewing the results of the benchmarking project and identifying additional opportunities to improve business processes that are the result of that benchmarking.
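To make the data analysis step (step 3) more concrete, the sketch below compares a handful of internal metrics against peer benchmarks and ranks the relative gaps, which is one simple way to decide where improvement effort is likely to pay off most. The metric names, values, and peer figures are hypothetical, and the gap calculation shown is only one of several reasonable ways to set priorities.

# Illustrative data analysis for step 3: compare our metrics against peer
# benchmarks and rank the gaps to prioritize improvement targets.
# All metric names and values below are hypothetical.
our_metrics = {
    "days_to_fill_open_position": 62,
    "invoice_processing_days": 14,
    "donor_retention_rate": 0.41,
    "volunteer_turnover_rate": 0.35,
}

peer_benchmarks = {
    "days_to_fill_open_position": 45,
    "invoice_processing_days": 9,
    "donor_retention_rate": 0.48,
    "volunteer_turnover_rate": 0.30,
}

# For metrics where a higher value is better, flip the sign so that a positive
# gap always means "we lag behind our peers."
higher_is_better = {"donor_retention_rate"}

def relative_gap(metric: str) -> float:
    """Our shortfall relative to the peer benchmark, as a fraction of the benchmark."""
    gap = (our_metrics[metric] - peer_benchmarks[metric]) / peer_benchmarks[metric]
    return -gap if metric in higher_is_better else gap

# Largest relative shortfall first: these are the candidate benchmarking targets.
for metric in sorted(our_metrics, key=relative_gap, reverse=True):
    print(f"{metric:28s} lag vs. peers: {relative_gap(metric):+.0%}")

In practice, the peer figures would come from a benchmarking database or a trusted third party rather than being typed in by hand, and the resulting ranking would feed directly into the action plan developed in step 4.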

Who manages the benchmarking program?

Noah Kahn of Kaiser Permanente and Stephen Mulva of the University of Texas at Austin have pointed out some of the advantages and shortcomings of three distinct benchmarking management models that are available for external benchmarking programs (Kahn, 2009; Mulva, 2009). I have augmented these with some of my own observations:

  1. Outside for-profit consulting firm

Advantages: This choice is quicker to start up and to produce results.

Disadvantages: Project selection may be inconsistent, and definition of metrics may not provide a clear picture. This is likely to be the most expensive choice.

  2. Independent benchmarking managed by a third party, such as an association or a university

Advantages: Data collected are likely to be more secure, and the results are more likely to have academic rigor. Because of the assumed stability of these organizations, they are also better positioned to provide “consistently reliable, unbiased results on a long-term basis” (p. 11).

Disadvantages: Academic projects are often inherently slower than their for-profit counterparts, and many of their participants have other duties, such as teaching.

  3. Casual benchmarking, which is informal information sharing among organizations.

Advantages: There is no formal process, and participants can simply share whatever information they desire, or the information is obtained from public sources, such as newspaper reports and online public databases.

Disadvantages: The data shared may end up comparing “apples” to “oranges,” confidentiality can be hit-or-miss, and this model depends on strong personal relationships and trust more than the other two models do.

History of benchmarking

Land surveyors originally coined the term: a “benchmark” is a marked reference point used to measure distance from a particular spot (Saul, 1999). As Jason Saul wrote in the first edition of this book, benchmarking often refers not only to measuring how close an organization is to reaching a particular reference point, but also to helping set the goals in the first place.

Informal benchmarking has a long and undocumented history. It is human nature to be curious about how one’s competitors are achieving success. It is not unusual for achievers to share their successes in the media or in other venues, either publicly or privately.

The roots of formal benchmarking practices are often attributed to Xerox Corporation as a reaction to intense market competition in the late 1970s from Japanese producers of copying and computer equipment (Letts, Ryan, & Grossman, 1999). Rather than studying how the Japanese were succeeding, Xerox identified one particular shortcoming of its operations from customer complaints (slow order fulfillment) and sent a team to the Freeport, Maine headquarters of L. L. Bean, a U.S.-based company with a reputation for exemplary success in minimizing the time between when a customer placed a telephone order and when that customer received it. Based on the value received from this exercise, Xerox encouraged its managers to adopt benchmarking because, as one manager explained, it provides a competitive advantage whenever he can “discover where something is being done with less time, lower cost, few resources, and better technology” (Letts, Ryan, & Grossman, 1999, p. 1).

The American Productivity & Quality Center (APQC) (https://www.apqc.org/) established the International Benchmarking Clearinghouse in 1992. You can go to the “Benchmarking Portal” on the center’s website and find benchmarking resources that are perhaps unparalleled on the Web, many of which are free. (By the way, using the search term “nonprofit” in this site’s search form provides access to hundreds of case studies, reports, articles, and white papers featuring benchmarking efforts in the nonprofit sector.)

Why benchmarking is useful to nonprofits

Nonprofit organizations have a performance margin rather than a profit margin, but funders and other stakeholders are increasing their demands for accountability with respect to efficiency and effectiveness (Saul, 1999). Operating, as they traditionally have, with fewer resources than their for-profit counterparts, nonprofits are under increasing pressure to make their dollars go farther in fulfilling their important missions, and benchmarking is an effective strategy for improving quality and performance. Data collection and analysis are of benefit not only for saving money. They also have public relations value, in that many funders, both in government and in the private sector, are impressed by hard data demonstrating that the organization’s leadership is attuned to current business practices and is not simply running the organization by the seat of its pants.

How benchmarking is being used by nonprofits

Many business processes suitable for benchmarking projects are common to for-profits and nonprofits alike. Among them are accounts payable and receivable, expense reimbursement, financial controls, invoicing, payroll processing, procurement, order fulfillment, recruitment, employee retention, social media management, advertising, and customer service. Volunteer recruitment, training, and retention, which are unique to the nonprofit sector, can also be benchmarked, since data can be collected and compared among similar nonprofit organizations.

Fundraising cost control is a ripe area for benchmarking initiatives. One such effort that received national attention was a program launched by Creighton University in the early 2000s, just after its board announced a $350 million capital campaign, a 40% increase over its previous goal. The university vice president overseeing the effort accessed a benchmarking database managed by a Minnesota-based consulting firm (The Core Group) that contained fundraising data from 65 institutions of higher learning. She overcame potential resistance from the board to hiring more fundraisers by demonstrating that the benchmarking data predicted a large return on investment from hiring major gifts officers. According to the account in The Chronicle of Philanthropy, $3 million in additional gifts were attributable to the four major gifts officers she hired, contributing to a 50% overall increase in university fundraising that year (Schwinn, 2007). According to that account, institutions participating in the Core Group’s benchmarking services pay $10,000 to $25,000 to access the database, and they receive a detailed analysis of their fundraising operations based on the benchmarking data and consultations with the provider. Evidently, enough institutions are willing to make this investment because the benchmarking data and analysis provide significant returns and permit organizations to focus on the aspects of their fundraising that can be improved, based on how their peers are performing.

Benchmarking studies in higher education have been carried out by the Institute for Education Best Practices (an APQC subsidiary) in institutional budgeting, electronic student services, and faculty development (Banta, 1998).

CARE USA, in the early 1990s, engaged in internal benchmarking to change its highly decentralized culture of managing international relief projects, recognizing that there were inefficiencies associated with its ad hoc local management. Increased competition for donor funds and expected federal grant cutbacks encouraged the agency to take a more scientific approach to getting the most bang for its funders’ bucks. In one benchmarking approach, technical staff in the United States collected data relating to “best practices” for various types of development projects and then graded current projects against this comparative data, encouraging the project managers in the 37 countries served by CARE to improve. A second approach was to have local project managers and headquarters staff collaborate on identifying best practices, creating an environment in which managers could learn from one another and incorporate lessons learned into their local projects (Letts, Ryan, & Grossman, 1999).

Limitations of benchmarking in the nonprofit sector

There are some generic difficulties with pursuing a formal benchmarking program, some of which are common to other strategies mentioned in this book. Among them are—

  1. There needs to be a basic level of trust that data shared by participating organizations will be kept strictly confidential, particularly when participants are in direct competition with each other. In cases in which competition is an issue, the data must be collected and maintained by a trusted third party, and there must be enough data points so that participants can’t derive any individual results.

One example of this is an initiative of the Construction Industry Institute (CII), an affiliate of the University of Texas at Austin, to work with several healthcare organizations to benchmark healthcare facility projects with respect to cost, schedule, productivity, safety, function, and operations (Kahn, 2009; Mulva, 2009). CII trained 18 graduate students in benchmarking techniques and then embedded them as summer interns in member organizations. The mission of the CII Healthcare Facilities Benchmarking Program is “to develop a standardized and secure system to measure and evaluate capital project performance by organizations engaged in the delivery of healthcare” (Mulva, 2009, p. 35), and it appears to be a response to a burgeoning increase in hospital construction costs, a growing component of overall healthcare costs that has certainly been a salient political issue in recent years.

  2. Sitko-Lutek & Cholewa-Wiktor (2015) write that “virtually all major obstacles in the implementation of benchmarking in the management system of a healthcare facility are connected with the issues of cooperation and the exchange of information” (p. 86).
  3. Staff may resist any program designed to improve efficiency, fearing that their jobs are at risk.
  4. There may be legal problems (such as antitrust violations) with sharing information among competitors.
  5. Benchmarking is not without cost; it can be expensive to fund travel to meet with peer organizations, collect and analyze data, and develop an action plan.
  6. It takes a major effort to create performance indicators that allow apples-to-apples comparisons when each nonprofit’s situation may be unique. For example, in the case of benchmarking fundraising costs, a peer organization may appear to have substantially lower fundraising costs not because it fundraises more efficiently, but because some of its costs are allocated to other administrative or program line items.

“Consistent data are hard to find,” writes Noah Kahn, National Manager of Finance Metrics at Kaiser Permanente. “Data are often either inaccurate or incomplete. This is because data typically are part of complex systems that change over time, and the systems and data involve interaction among many different groups and people for whom training and definitions are not always consistent” (Kahn, 2009, p. 12).

  7. There may be unique reasons why one program has higher costs than another. These may relate to something other than inefficiency, such as local tax or regulatory policy, the labor-management environment, simple politics, or data that were entered incorrectly into a database.
  8. There is often resistance to anything new.
  9. Benchmarking often requires staff to add something to their activities. The nonprofit sector is thought to have many staff members who are already stretched thin and stressed from having too few resources to meet ever-growing demands.

Conclusion

Benchmarking is an attractive strategy for improving quality and performance in nonprofit organizations, although it is not without its shortcomings. Letts, Ryan, & Grossman (1999) point out that the prevailing culture in the nonprofit sector is for organizations “to get the job done, not to rethink the nature of the job, or the possibilities of improving performance…faced with the choice of doing or analyzing, most nonprofits opt for doing, and avoid the challenge of establishing performance metrics” (p. 11). They note that those who work in the nonprofit sector “place a premium on shared values, mutual respect, and professional esteem, and would be reluctant to make comparisons even if they could” (p. 11). This may well have been true when those words were written. Since then, there has been a virtual revolution in the way nonprofits are managed, particularly in the health care and education sectors, where competitive pressure for net revenue and market share has encouraged the diffusion of the management and quality improvement strategies discussed not only in this chapter, but throughout this book. And funders are demanding efficiencies documented by data collection and analysis. As with any of these strategies, however, benchmarking programs may be seen as a threat by staff who still see nonprofit management as more art than science, with minimal likelihood that collecting and analyzing data pays any real dividends. Of course, the proof is in the pudding, and the many success stories of benchmarking and its sister formal management and quality improvement techniques speak for themselves.

References

Banta, T. (July-August 1998). Benchmarking in assessment. Assessment Update, 10(4).

Caturano & Co. (n.d.). Why benchmark your organization’s operations? Retrieved from: http://www.nacdne.org/docs/Why%20Benchmark%20your%20Organization.pdf

Coombs, T., Geyer, T., & Pirkis, J. (June 2011). Benchmarking adult mental health organizations. Australasian Psychiatry, 19(3).

Health Care’s Most Wired. (2016). Most wired survey benefits. Retrieved online from:  http://www.hhnmostwired.com/aboutus/Survey-Benefits.dhtml  

Hubbell, G. (2015). Benchmarking fundraising performance. WVDO Advanced Skills Workshop.

Kahn, N. (Fall 2009). National healthcare capital project benchmarking—An owner’s perspective. Health Environments Research & Design Journal, 3(1).

Lawrence, J. (Fall 1999). Exceed your goals. New Enterprise.

Letts, C., Ryan, W., & Grossman, A. (1999). Benchmarking: How nonprofits are adapting a business planning tool for enhanced performance. Retrieved online from: http://www.tgci.com/sites/default/files/pdf/Benchmarking_0.pdf

Mulva, S. (Fall 2009). Healthcare facility benchmarking. Health Environments Research & Design Journal, 3(1).

Saul, J. (1999). Benchmarking. In Improving quality and performance in your non-profit organization (pp. 47-60). Harrisburg, PA: White Hat Communications.

Saul, J. (2004). Benchmarking for nonprofits: How to measure, manage, and improve performance. Minneapolis, MN: Fieldstone Alliance.

Schwinn, E. (2007). How much fund raising really costs. Chronicle of Philanthropy, 19(16).

Sitko-Lutek, A. & Cholewa-Wiktor, M. (2015). Benchmarking for public hospital management—Research findings. International Journal of Contemporary Management, 14(2), 77-88.

Solovy, A. (November 2003). The value of benchmarking. Hospitals and Health Networks, 77(11).

Warady & Davis, LLP. (2012). Profitable solutions for nonprofits. Issue 2, 2012. Retrieved online from: http://www.waradydavis.com/news/2012/nfp/benchmarking612.php

Resources

The American Productivity & Quality Center (APQC) (https://www.apqc.org/)

NTEN’s Nonprofit Benchmarking Tool

http://www.benchmarks.nten.org

Not For Profit Benchmarking Association

http://nfpbenchmarking.com

What Works Clearinghouse

http://ies.ed.gov/ncee/wwc/ 
