Chapter 5

Introduction to Outcome-Based Management

To improve quality in a larger organization, simply adopting a progressive management philosophy such as TQM is not enough in today’s competitive climate. As an organization grows, the pressures for accountability increase, not only internally from a board of directors, but also from elected officials, government funders, foundation funders, individual donors and volunteers, and the public. Leaders of large organizations generally cannot see every aspect of their organization’s operations and assess what is going on just by looking out their office windows or by engaging in informal conversations with staff and clients. The proverbial “one-minute manager” is an ideal construct that is not particularly well suited to crystallizing the information a CEO needs to decide how to allocate precious resources.

To assess what is really going on within a large organization, most have a Management Information System (MIS) that aggregates data in a form a manager can analyze, enabling him or her to see trouble spots, make adjustments in operations, and generate the reports required by the government, funders, auditors, and the board of directors.

For many larger non-profits, particularly those that depend on government and foundation grants rather than private donations, the objective of “meeting clients’ needs” has become a more formalized process. Times have changed within just the last decade or so. Traditionally, measures of organizational performance for human service organizations were based on a model more appropriate for industrial processes, in which raw materials were turned into finished products. In the language of industrial systems analysis, inputs (the raw material) were processed into outputs (the finished product). Adopting this industrial frame of reference, the conventional thinking was that human service agencies took in unserved clients (input), provided services (process), and turned them into served clients (output). In this way of thinking, organizations improved their output by increasing the number of clients served.

An exciting new way of looking at the output of an agency is called outcome-based management (OBM) or “results-oriented accountability” (ROA). Most recently, results-oriented management and accountability (ROMA) has become the buzz-word describing this general tool. All of these terms have similar roots and go back to original work done in the early 1980s by Harry Hatry of the Urban Institute and Reginald Carter who, at the time, worked for the Michigan Department of Social Services. In September 1981, the Urban Institute together with the American Public Welfare Association (now called the American Public Human Services Association), published Developing Client Outcome Monitoring Systems, which included both Hatry and Carter among its principal authors. In 1983, Sage Publications, Inc. published Carter’s The Accountable Agency. These two publications played a prominent role in the development of outcome-based management during the last two decades.

OBM focuses on program outcomes rather than simply quantifying services delivered. Program outcomes can be defined as “benefits or changes for participants during or after their involvement with a program” (from Measuring Program Outcomes: A Practical Approach, United Way of America).

For example, an organization dealing with reducing drug abuse may have a stellar record of attracting clients through a flashy outreach program. It may be exemplary in convincing doctors in the community to donate thousands of hours of free services to the program, thereby reducing unit costs per client. It may have few complaints from the clients, who feel the staff are competent and treat them with dignity. An analysis of conventional data might indicate that there is little room for improvement. But, perhaps, no data are collected on whether those treated for drug abuse by the organization are successfully able to become independent, avoid future interactions with the criminal justice system, and rid themselves of the scourge of drug dependence for an extended period of time—all measurable outcomes for a successful substance abuse program. If most of these clients are back on the street and drug dependent, is that organization providing successful treatment even if drug abuse services are being provided? Are funders and taxpayers getting a fair return on their investment?

In the outcome-based management model, the number of clients served is an input. The output is the measured change in the clients’ condition after they receive services. For example, if thousands of clients are served but their conditions have not improved, then the outcome is zero, even if the services were provided 100% on time, every client received a satisfactory number of hours of service, and there were no client complaints.
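
To make the output/outcome distinction concrete, here is a minimal, hypothetical sketch of how an agency’s records might separate service counts from outcome measures. The record fields, the 180-day threshold, and the improvement test are invented for illustration and are not drawn from any of the publications cited in this chapter.

```python
from dataclasses import dataclass

@dataclass
class ClientRecord:
    """Hypothetical client record separating services delivered from results."""
    client_id: str
    service_hours: float   # process/output data: how much service was delivered
    drug_free_days: int    # outcome data: the client's condition after services
    employed: bool         # outcome data: self-sufficiency after services

def outcome_rate(records, min_drug_free_days=180):
    """Share of clients whose condition actually improved (an outcome),
    regardless of how many hours of service they received (an output)."""
    if not records:
        return 0.0
    improved = [r for r in records if r.drug_free_days >= min_drug_free_days]
    return len(improved) / len(records)

# Thousands of hours of service can still yield an outcome rate of zero
# if no client's condition improved.
clients = [
    ClientRecord("A1", service_hours=120.0, drug_free_days=10, employed=False),
    ClientRecord("A2", service_hours=95.0, drug_free_days=0, employed=False),
]
print(f"Total service hours (output): {sum(r.service_hours for r in clients)}")
print(f"Outcome rate: {outcome_rate(clients):.0%}")
```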

Collecting data only on how many clients sought services, how many of these were accepted into the client stream rather than referred or turned away, how many hours of service were provided, and how much each service cost and was reimbursed is no longer an adequate indicator of an organization’s effectiveness and value. Outcome data, together with these process data, are needed to measure an organization’s effectiveness and value.

Major funders of human service agencies are becoming more sophisticated in requiring answers to questions that go beyond the usual data analysis that focuses on costs and the quantity of services being provided. Among them are the U.S. Department of Health and Human Services’ Office of Community Services, the U.S. Department of Housing and Urban Development (HUD), and the federal Head Start program. Mainstream charities such as the United Way, the Boys and Girls Clubs of America, and the Girl Scouts are fostering the adoption of outcome-based management in their member agencies.

Increasingly, foundations and government funders seek not only to know if costs are reasonable and clients are receiving services, but whether the provision of these services is actually achieving the broad objectives of the program being funded.

Government Performance and Results Act

In addition to a significant change in attitude about the accountability of the private non-profit sector, the passage in 1993 of the Government Performance and Results Act, PL 103-62, changed the way federal agencies plan, budget, evaluate, and account for federal spending. The intent of the Act is to improve public confidence in federal agency performance by holding agencies accountable for program results and to improve congressional decision-making. It seeks to accomplish this by clarifying and stating program performance goals, measures, and costs “up front.” These changes were implemented beginning in September 1997.

Beginning in March 2000, federal agencies are required to report to the President and Congress on their performance compared to the goals established for that year, analyze progress toward those goals, and explain any deviations from the goals and impediments encountered during implementation. On the surface, this would appear to affect primarily federal agencies. However, agencies that directly fund, or block grant dollars to, the states, which in turn allocate, grant, or contract out those dollars to local government and private non-profit agencies, will similarly have to set up an accountability framework to comply with the federal legislation.

For example, the Administration for Children and Families, specifically its Office of Community Services, funds the Community Services Block Grant (CSBG) to states. The states, in turn, fund Community Action agencies. The Office of Community Services has created an outcome-based management framework entitled ROMA (Results-Oriented Management and Accountability) that is recommended for developing and reporting family, agency, and community outcomes.

Many local government and non-profit agencies that receive CSBG funds have built outcome requirements into their annual reporting systems. Other federal agencies such as the Department of Housing and Urban Development (HUD) are in the process of developing an outcome framework for their community-funded programs.

Motivations for Outcome Evaluation

All of us are familiar with the contentious public policy debate that heated up in the ’80s and ’90s, a time of shrinking government funding and increasing questioning, if not outright cynicism, about the effectiveness of spending billions of dollars in federal anti-poverty funds.

The changes in welfare reform laws made by states and the federal government were in a sense a repudiation of the view that federal government money could wipe out hunger, homelessness, and unemployment. Of course, we have no crystal ball to predict what would have happened in the absence of spending these billions of dollars. But in the minds of many hardworking, taxpaying voters, there was a prevalent view that the cycle of poverty was not being effectively broken by government funding, and a new strategy was needed to encourage self-sufficiency. Unfortunately, advocates did not have the outcome data to demonstrate that their programs were effective. Funders are increasingly looking beyond whether services are being delivered in a cost-effective manner to whether the delivery of the service is successfully accomplishing what was intended. And many forward-thinking organizations are proactively adopting outcome-based management to collect the data they feel are necessary. Why? Because, in their own words, it helps them tell their story to funders, political leaders, advocates, and the public.

According to the United Way of America Web site on outcome-based management, the dividends paid by such programs include helping organizations recruit and retain talented staff, increase volunteers, attract new participants, engage collaborators, increase support for innovative efforts, win designation as a model or demonstration site, retain or increase funding, and gain favorable public recognition. The outcomes data can be used by managers to strengthen existing services, target effective services for expansion, identify staff and volunteer training needs, develop and justify budgets, prepare long-range plans, and focus the attention of board members on programmatic issues.

Implications for Startup

For some organizations, the shift to outcome-based management will have modest cost implications. It may mean more data being collected from clients during intake. It may mean follow-up surveys to see what happens to clients after they have availed themselves of the organization’s services. When this information is available, it is of extraordinary value to those who design, administer, and deliver those services.

Most agencies, however, need to know that they will incur short-term startup costs during the first 12-24 months. It is not necessarily that more data are being collected; it is how these data are being integrated into the existing services and physical structure that increases the costs of obtaining outcomes.

Frequently, the outcome does not occur within the agency’s own scope of service; the agency therefore has to track and follow up on data after the service is delivered in order to identify and measure the outcome. A good example is case management, in which case managers often manage an array of programs and services external to their own agency and must track and identify outcomes across a host of other agencies. This has implications for staff workloads and calls for computers that, in many non-profits, are underutilized or outdated.

How else does the shift to outcome-based management affect an organization?

First, it can create some perverse incentives. For example, there is more of an incentive for an agency to reject those clients who are judged by intake staff to have little likelihood of a successful outcome. Even after a client is accepted, the nature and duration of the services provided may change as a result of the pressure on staff to generate statistics compatible with good outcomes. For example, long-term interventions with high cost but a high probability of success may be avoided in favor of quick-fix, short- and intermediate-term interventions that generate a swift positive outcome and show up in the data more quickly.

Second, the mindset of agency staff needs to change about the nature of their work. At present, it is rare to link performance standards to outcomes in non-profit organizations. For example, we know that hitting more than 61 home runs in a major league season is a very rare event that never happened prior to 1998. Before two players broke the record that year, the last time the home-run record was challenged and broken was in 1961, with Roger Maris’ 61 home runs. Because we understand baseball, we know not to expect batters to hit 80, 100, or 300 home runs in a season, even though the average ball player has between 500 and 600 chances (plate appearances) to hit the ball. We also recognize that 66 or 70 home runs is a measure of excellence, the absolute best that can be accomplished. In human services, we can articulate the outcomes, but we do not always know what constitutes success, nor are we always prepared to accept modest results as excellent. For example, a large state-funded welfare-to-work program with a permanent job placement rate of 13% for recipients with limited work history and a lack of educational skills is excellent. It may not be readily apparent that 13% here is the equivalent of 70 home runs in one season.

The use of outcome scales reduces this misunderstanding and clarifies realistic expectations for human services outcomes. It also helps separate short-, intermediate-, and long-term outcomes so that all parties recognize the consequences of limiting success to the quick, short fixes. Where this all comes together is in the use of return-on-investment (ROI) techniques applied to outcomes and outcome scales. In this manner, the agency measures the financial implications of its programs and services. If the return on investment is equal to or greater than the cost of producing the outcome, the perverse incentives are minimized, since the fiscal mechanism provides a common measure of impact and effectiveness.
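
As a rough illustration of applying return-on-investment thinking to outcomes, the sketch below compares the dollar value of an outcome (such as avoided hospital stays) with what the program cost. All of the figures and the calculation itself are invented assumptions for illustration, not data from the chapter or its sources.

```python
def outcome_roi(program_cost, avoided_cost_per_client, clients_with_outcome):
    """Hypothetical ROI calculation: dollars of cost avoided (or value created)
    per dollar spent on the program."""
    total_value = avoided_cost_per_client * clients_with_outcome
    return total_value / program_cost

# Invented numbers for illustration: a $50,000 program that keeps 25 clients
# out of the hospital, with each avoided stay worth roughly $20,000.
roi = outcome_roi(program_cost=50_000,
                  avoided_cost_per_client=20_000,
                  clients_with_outcome=25)
print(f"Return on investment: ${roi:.2f} saved per program dollar")  # $10.00
```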

Key Questions To Answer and Factors To Consider

Reginald Carter, in his book The Accountable Agency, has suggested seven key questions to be answered in building a database model that is responsive to the outcome-based management model. Two additional questions appear here based on work done by the Positive Outcomes™ organization (Frederick Richmond and Eleanor Hunnemann). Two illustrations of how these nine questions would be answered, in the context of a welfare program and an early intervention program, are provided in Appendix C.

Mark Friedman, in a May 1997 paper titled A Guide to Developing and Using Performance Measures in Results-based Budgeting, suggests that data collection designers consider six factors in developing their performance measurement systems.

1. The most important factor is that the system must have credibility. Those viewing and analyzing the data must have confidence that the data are both accurate and relevant. There need to be rules and policies governing data collection methodology to make sure the data reflect reality, and it is helpful to have some external or otherwise independent review to assure that credibility is maintained.

2. The system must be fair. The system needs to take into account factors that are within the control of the agency and its managers. It should not be used as a “blunt instrument” to punish poor performance, but rather should be a tool to improve performance.

3. The system needs to be clear. If the data are provided in a form that is too obscure or too complicated, or uses statistical measures not in the parlance of those analyzing them, the system will not be useful or accomplish its purpose.

4. The system needs to be practical. The system should be integrated with current data collection methods, so there is not a major increase in the data collection itself and the staff time needed to process it.

5. The system should be adaptable. Programs change, policies change, public goals change, and data collection requirements will need to keep pace with these changes.

6. The system needs to be connected. The data collection for performance measurement needs to be integrated with other management, budgeting, and accountability systems in order to permit the feedback gained from the performance measurement system to really make a difference in the decision-making of the organization.

Outcome-based management is a better way of managing. It usually takes some time for management, staff, and the board of directors to make the transition and begin understanding the basic concepts. The adoption of OBM usually requires updates to the agency’s policies on confidentiality and data management. As staff are required to collect new data, and follow clients beyond the organization’s delivery of services, job descriptions may also have to be changed. One unintended consequence of OBM, according to the paper, What Every Board Member Needs to Know About Outcomes, is that staff trained in OBM are increasingly being sought out by both government and the non-profit sector, and may find their value increased—requiring pay raises commensurate with their new skills to facilitate retention.

Agencies must also build the data collection infrastructure to handle this change. It may require an increase in computer capability to store and process the data. At least for the moment, there is a lack of user-friendly software compatible with the needs of human service organizations, although some vendors are working on this.

Where do you start? There are several useful publications that provide more details on how to implement outcome-based management in human service organizations.

United Way’s 8-Step Process

The United Way of America’s publication Measuring Program Outcomes: A Practical Approach suggests an eight-step process for organizations that want to implement OBM (a brief illustrative sketch of steps 2 through 4 follows the list). In summary, the process is:

1. Get ready.

2. Choose the outcomes you want to measure.

3. Specify indicators for your outcomes.

4. Prepare to collect data on your indicators.

5. Try out your outcome measurement system.

6. Analyze and report your findings.

7. Improve your system.

8. Use your findings.
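
As a hypothetical illustration of steps 2 through 4, the sketch below pairs one invented program outcome with measurable indicators, data sources, and collection points. The program, outcome, and indicators are assumptions for illustration only; they are not prescribed by the United Way publication.

```python
# Hypothetical illustration of steps 2-4: choose an outcome, specify
# measurable indicators for it, and plan how the data will be collected.
outcome_plan = {
    "program": "Adult literacy tutoring",
    "outcome": "Participants improve their reading skills",
    "indicators": [
        {
            "indicator": "Number and percent of participants who gain at least "
                         "one grade level on a standardized reading test",
            "data_source": "Pre- and post-test scores",
            "collection_point": "Intake and end of program year",
        },
        {
            "indicator": "Number and percent of participants who report reading "
                         "to their children at least three times a week",
            "data_source": "Follow-up survey",
            "collection_point": "Six months after program exit",
        },
    ],
}

# Print the indicators an agency would track for this outcome.
for item in outcome_plan["indicators"]:
    print(f"- {item['indicator']} (source: {item['data_source']})")
```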

Conclusion

What makes outcome-based management an easy sell to the human services sector is that it is common sense. What is the point of investing thousands, if not millions, of dollars of an agency’s resources if the end result does not accomplish what the investment intends: improving the lives of the agency’s clients? Our human service organizations were established to make people’s lives better. When our organizations shift their focus from simply providing human services to doing whatever it takes to make people’s lives better, it is much more likely that this worthy goal will be accomplished. This is compatible with the values of most in the sector, who often make financial sacrifices to make a difference in the lives of those who need human services.

In cases where the data show that an agency is successfully providing services, but those services are not having the intended effect on clients, the agency leadership should be the first to recognize that continuing business as usual is wasteful. Outcome-based management is a powerful tool that allows organizations to allocate their precious resources to do the most good. If successfully implemented, it can also provide the ammunition to fight increasing public cynicism about what is often perceived to be a poor return on investment of tax dollars, and give adopting organizations a competitive edge.

Up Close: Frederick Richmond

Frederick Richmond is the President of the Harrisburg, Pennsylvania-based Center for Applied Management Practices. Founded in January 1998, the Center provides training, on-site technical assistance, and organizational development for community-based organizations, large non-profits, and local and state governments.

Before founding the Center, Richmond spent over a decade as the Director of the Bureau of Research, Evaluation and Analysis, an in-house think tank at the Pennsylvania Department of Public Welfare. Prior to that, he worked at the National Center for Health Services Research, the federal government’s think tank for health-based clinical and public health research. It was in his first month with the Department of Public Welfare that he had his first exposure to, and became enchanted with, the outcomes approach to managing human services.

“It was in 1980 when I was asked to fill in for a Pennsylvania state official who was unable to attend a meeting in Washington, DC, hosted by the Urban Institute and the American Public Welfare Association (now known as the American Public Human Services Association),” Richmond recalls. “It was at this meeting that I received my first introduction to the technique of ‘Client Outcome Monitoring Procedures for Social Service Agencies,’ the term for what we now call results-oriented management, outcome-based management, or Results-Oriented Management and Accountability (ROMA).”

It was at this meeting that he met Harry Hatry and Reginald Carter, two of the national leaders in developing outcome-based management.

“We tried to implement Client Outcome Monitoring Procedures in Pennsylvania, but Pennsylvania state government was not as ready as other states, particularly Michigan and Texas, which provided national leadership,” he remembers. “People were entrenched in their traditional systems; outcomes were not as important compared to the traditional budget practices of funding based on utilization of services and historical spending patterns.”

The department began a modest outcomes effort in 1987, but it was used for external purposes rather than for managing within the agency. Richmond left the Department in 1991 and served as a consultant to the PA Department of Community Affairs, which asked him to develop an outcomes curriculum, provide outcomes training to local agencies, and rewrite state program regulations to require outcomes reporting. How did this agency become enamored with OBM?

“During a state budget appropriations hearing, a state senator asked the Secretary of the Department of Community Affairs what happened to the people who received services funded by the agency. Her reply was to account for the numbers of people who received services and how much was spent,” Richmond explains. “The senator again asked what impact these programs made on people’s lives, a question the Secretary could not answer.”

What was unusual about the episode was not the Secretary’s response but that the senator asked the outcome question, Richmond says. “The Secretary’s staff eventually tracked me down for advice and from that we began to integrate outcome thinking into the agency, beginning with changes to the regulations governing the Community Services Block Grant and Neighborhood Assistance Programs.”

Richmond shares two examples, one positive and one negative, in which the outcomes approach directly affected the survival of a human service agency in Pennsylvania.

“The lack of an outcome-based reporting perspective contributed to a decision by the county commissioners of one Pennsylvania county to terminate services in two family centers because they couldn’t justify the expenditure of public dollars where they couldn’t see a measurable result,” he shares. “For a year, the county commissioners had requested outcome data, but the agency provided little information to justify its programming. The centers could not demonstrate their impact on families nor generate any data supporting the preventive nature of their program.”

As a result, he says, it could not be determined what impact the program may have had in the community. At the end of the fiscal year, the program was not re-funded.

In another Pennsylvania county, an adult payeeship program (a mental health program in which the agency places and maintains clients in the community rather than in an institutional setting) was threatened with being defunded in the middle of a budget cycle.

“The commissioners had planned to cut the program in the middle of the year, and the agency had no data to counter the decision,” Richmond recounts. “Fortunately, an agency staff member had attended a workshop on the outcomes approach, took the tools she acquired back to her agency, and used a Return-On-Investment (ROI) model.” According to Richmond, her data indicated that 25 clients with previous hospitalizations all stayed out of the hospital for a year, saving $123 in hospital costs for each dollar of program expenditures.

“Not only did the county commissioners decide not to shut the program down, but they re-funded the program for the next fiscal year as well,” Richmond says.

Is this the way all human service agencies will be judged in the future, or is this just one more three-letter acronym fad?

“With privatization, competition, and managed care coming to human services, the basis for funding and decision-making will be performance and, therefore, outcomes,” he predicts. “It will become the basis for subcontracting of human services for the foreseeable future.”

Policy changes that are driving the welfare reforms of the late 1990s are reducing service dollars and making existing funding more competitive.

“Funders will put those dollars where they will be the most effective,” Richmond contends. “Obviously, funders are looking for hard data to support and justify their decisions and, all things being equal, they will fund those that demonstrate that they can produce measurable results and define the product they produce, rather than accepting qualitative, anecdotal indications of a program’s value.”

What advice does he offer to agencies interested in this new approach to managing programs? Richmond shares seven suggestions.

“First, be willing to accept the disruption that will occur, as well as absorb some of the human costs of changing management systems,” he implores. “Second, designate someone in the agency, or create a team, to take a leadership role. Seek outside funding to hire consultants or other expertise, and make a long-term investment in your management structure,” he says. “Third, recognize that adopting this new way of operating will change the way business is conducted throughout the agency—outcome information will be actively used to manage programs and services, not just report client counts and dollars expended.

“Fourth, adopt a proactive approach rather than having it imposed on you later; the costs will be less and the transition will be less painful. If you can’t afford it, consider collaborating with similar agencies and pooling resources. Fifth, begin with a skeleton or rudimentary system that identifies at least a couple of outcomes per program, and build on this incrementally each year. Sixth, identify a process to collect, analyze, and report outcomes information and develop this into a formal operating system used by all staff in the agency. A manual system should be developed and tested and, when working, converted into an automated system.

“And last, recognize that these accommodations will change the entire operation of an agency, from intake and screening to service provision and outside referrals.”

Richmond also has some advice to funders who have joined the outcomes reporting bandwagon.

“Many agencies do not have the in-house capacity or the funds to develop or sustain an outcome-based system on a regular basis,” he points out. “It is not an overlay on an existing system but a wholesale change and overhaul of the agency. Funders need to recognize this and provide additional funds for startup and implementation in addition to the direct services allocation.”
