In the context of providing non-profit organization services, quality can have several meanings. The broad definition of “suitability of purpose” is of some benefit, but it paints a less than satisfactory picture of the typical activities performed by non-profits. Improving quality in a non-profit setting is often a matter of simple common sense. It means staff who are pleasant and professional at every point of contact between agency and client. It means agency communications that are accurate, easy to read, visually appealing, and inviting to the reader. It means meetings that start on time, follow an agenda the convenor keeps to, and are accompanied by materials that facilitate productive interaction and participation from those who attend. And it means office space that provides a pleasant, comfortable, and functional environment for both workers and clients.
Every non-profit agency, from the one-person shop to the most complex hospital, can benefit from a philosophy that encourages continual improvement. It is rare to find a non-profit organization in which even a cursory examination wouldn’t suggest needed improvements in physical plant, working conditions, employee morale, office policies, and business processes. Most organizations already solicit improvement suggestions, and it is not unusual to find a suggestion box accessible to customers and staff alike, where complaints and suggestions can be made anonymously. The university I attend offers a $50 reward for suggestions that not only save money but may “improve quality” as well.
Yet in many large non-profit settings, quality improvement can benefit from a more scientific, quantitative approach. Mathematically valid, statistical approaches have been developed that guide the non-profit organization manager on when action should be taken to improve quality. The philosophy and techniques of Total Quality Management (TQM) evolved from examining variation (the deviation from a standard) statistically, and from making rules for when adjustments should be made and when things should be left alone.
Too much variation causes problems. An organization’s clients utilize services with the expectation that the service will be approximately the same each time they use it. For example, even if the services are of high quality, the consumer may complain about a perceived deterioration in service if one day the service is extraordinary in quality and the next day the service is good, but not extraordinary. That is not to say that an organization can’t augment its services at special times, and engage its clients creatively. But the client must feel that if he or she is getting a service, that service will meet the client’s needs in a reasonable manner, and that mistakes on the part of the organization won’t result in the denial of service or service below the quality that the client has come to expect.
In a manufacturing setting, variation causes a machine to make a part that does not fall within specifications. Quality control rejects the part (and sometimes, the entire batch). The ultimate consumer of the product never knows that the machine randomly spit out a part not meeting specifications, or that the person operating the machine made a mistake and produced a poor-quality part. In a non-profit setting, particularly one providing human services, the consequences of error can be life-threatening. In the case of a Meals on Wheels program, undercooking the turkey by even a few minutes can have repercussions that threaten the lives of vulnerable clients, as well as the viability of the entire agency. Poor quality in non-profit organizations can have a steep societal cost.
The first step in improving quality is to look at each process and activity performed by the organization and concentrate on those that generate complaints, not only from agency clients but from staff and suppliers as well. What is it that the agency does, and is trying to achieve? How do the complaints relate to these? In the language of the scientist, one must operationalize a variable, that is, put it in terms that can be defined, measured, and tracked over time.
For example, a Meals on Wheels program wants to achieve several different objectives. It wants to make timely delivery (1) of hot (2) meals of high nutritional quality (3) and reasonable cost (4) that satisfy (5) the people receiving them. Each of these objectives can be operationalized. For example, one can make a chart for each day showing the number of meals delivered, how many meals were delivered more than an hour past the time they were supposed to be delivered (or not at all), the percentage of the meals that were delivered cold when they should have been hot, the percentage of meals that failed to meet the minimum nutrition requirements of the program, the number of meals that generated consumer complaints, and the number of meals that cost more than what was budgeted for preparing them.
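The daily tallying described above can be sketched in code. This is a minimal, hypothetical illustration: the record fields (such as `minutes_late` and `meets_nutrition_minimum`) and the one-hour lateness threshold are assumptions made for this example, not part of any actual Meals on Wheels system.

```python
def tally_day(meals):
    """Summarize one day's meal records into the operationalized measures
    described in the text. Each record is a dict with illustrative fields."""
    total = len(meals)
    undelivered = sum(1 for m in meals if m["undelivered"])
    # Late means more than an hour past the scheduled time, or never delivered.
    late_or_missed = sum(1 for m in meals
                         if m["undelivered"] or m["minutes_late"] > 60)
    cold = sum(1 for m in meals if m["should_be_hot"] and not m["arrived_hot"])
    below_nutrition = sum(1 for m in meals if not m["meets_nutrition_minimum"])
    complaints = sum(1 for m in meals if m["complaint"])
    over_budget = sum(1 for m in meals if m["cost"] > m["budgeted_cost"])
    return {
        "meals_delivered": total - undelivered,
        "late_or_missed": late_or_missed,
        "pct_cold": 100 * cold / total if total else 0.0,
        "pct_below_nutrition": 100 * below_nutrition / total if total else 0.0,
        "complaints": complaints,
        "over_budget": over_budget,
    }
```

Run daily, a tally like this produces the stream of numbers that the charts discussed below are built from.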
There is no magic and absolute way to operationalize these variables; much of this is as much art as science. Once the variables are operationalized, data can be collected for each. Statistically, it is very unlikely that in the course of a week, every meal will be delivered, every meal will be delivered hot, every meal will meet minimum nutrition requirements, every meal will be applauded by every client, and every meal will be prepared under budget. Despite the most careful planning, things hardly ever go exactly as planned. According to developers of TQM, such as Deming, only about 15% of things that go wrong (that is, 15% of failures to meet minimum standards of quality) are the fault of workers. The other 85% is attributable to problems in the system of management, and to other happenstance beyond the control of the person delivering the meal. For example, there might be a flu epidemic and drivers don’t report to work. An accident on the highway may delay delivery of meals. A problem with a microwave coil might result in undercooking some vegetable, generating consumer complaints.
Any number of things cause variation in the quality of a process, and it is this variation that results in poor quality. The job of the manager is twofold: to train workers to avoid the 15% of errors for which they are responsible, through continuing education, incentives to spot problems before they occur, and the infusion of a spirit of quality improvement; and to address, themselves, the systemic causes responsible for the other 85%.
The manufacturing industry developed Statistical Process Control (SPC) to give managers a mathematical basis for judging when a process needs to be fixed and when its variation is within acceptable limits. A process whose variation falls within acceptable limits is called “in control,” and one whose variation exceeds those limits is called “out of control.” To tell whether a process is out of control and in need of attention, the average of the data for each variable being measured is calculated, and upper and lower boundaries for acceptable variability are computed around that average. For daily counts of defects, the upper boundary is typically taken as the average plus three times the square root of the average, and the lower boundary as the average minus that same amount, but never less than zero, since a count of defects cannot be negative.
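The limit calculation for a daily defect count (what SPC practitioners call a c-chart) can be sketched in a few lines. This is an illustrative sketch of the common three-sigma convention, placing the limits at the average plus or minus three times the square root of the average, with the lower limit floored at zero.

```python
import math

def c_chart_limits(daily_counts):
    """Center line and three-sigma control limits for a c-chart of
    daily defect counts (e.g., late meals per day)."""
    c_bar = sum(daily_counts) / len(daily_counts)  # average defects per day
    spread = 3 * math.sqrt(c_bar)                  # three-sigma allowance
    ucl = c_bar + spread                           # upper control limit
    lcl = max(0.0, c_bar - spread)                 # a count cannot go below zero
    return c_bar, lcl, ucl
```

For example, a week of late-meal counts of 4, 2, 5, 3, 6, 4, and 4 averages 4 per day, giving an upper control limit of 10 and a lower control limit of 0.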
A process control chart is a graph that shows the data points over time. The acceptable boundaries (dubbed the “upper control limit,” or UCL, and “lower control limit,” or LCL) also appear on the graph. If any data point falls above the UCL or below the LCL, the process is considered unstable and requires some adjustment. If all of the data points fall within the boundaries, they are considered to reflect acceptable statistical variation that would occur randomly. A process in control should show apparently random data points within the boundaries. Patterns, even within the boundaries, such as eight or more consecutive increases, eight or more consecutive decreases, or a cyclical arrangement of data points, indicate that something is going on that the manager should analyze.
New staff, trucks breaking down, or changing food suppliers may result in patterns appearing in these charts. Also, keep in mind that some of the variables may not be completely independent of each other. For example, data indicating that meals were not delivered hot may track data indicating that meals were delivered late.
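One simple way to check whether two of these variables move together is to compute the correlation between their daily series. The sketch below uses the standard Pearson formula; the series names are hypothetical.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length daily series,
    e.g., daily counts of late meals and daily counts of cold meals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A correlation near 1 would confirm the suspicion that late meals and cold meals are largely the same problem, so fixing the delivery delays would address both charts at once.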
There are many other mathematical tools used by those who engage in SPC, as well as commercially available computer programs that process organizational data. The bibliography that begins on page 145 provides additional resources for those who want to go beyond the simple explanations and examples of SPC provided in this introductory appendix.