The Missing Information in Business Metrics
https://biopmllc.com/strategy/the-missing-information-in-business-metrics/
Mon, 01 Mar 2021

Modern businesses generate and consume increasingly large amounts of data. Information is needed to support operational and strategic decisions. Despite the advent of Big Data tools and technology, most organizations I have worked with aren't able to take advantage of the data or tools in their daily work. While greater awareness of human visual perception and cognition has improved dashboard designs, effective decision-making is often limited by the type of information monitored.

It is common to see summary statistics (such as sum, average, median, and standard deviation) used in reports and dashboards. In addition, various metrics serve as Key Performance Indicators (KPIs). For example, in manufacturing, management often uses Overall Equipment Effectiveness (OEE) to gauge efficiency. In quality, process capability indices (e.g. Cpk) are used to evaluate a process's ability to meet customer requirements. In marketing, the Net Promoter Score (NPS) helps assess customer satisfaction.

All of these are statistics, which are simply functions of data. But what does each of them tell us? What do we want to know from the data? What specific information is needed for the decision?

Unfortunately, most people who use performance metrics or statistics do not ask these basic questions. I discussed some specific mistakes in using process capability indices last July. A more general problem is that statistics can hide the information we need to know.

For example, last year I was coaching a Six Sigma Green Belt (GB) working in Quality.  A manufacturing process had a worsening Cpk.  The project was to increase the Cpk to meet the customer’s demanding requirement. Each time we met, the GB would show me how the Cpk had changed.  But Cpk is a function of both the process center (average) and the process variation (standard deviation), which comes from a number of sources (shifts, parts, measurements, etc.).  The root causes of the Cpk change were not uncovered until we looked deeper into the respective changes in the average and in the different contributors to the standard deviation.  
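The point can be made concrete with a little arithmetic: Cpk is one number computed from two quantities, so a change in Cpk alone cannot tell you whether the center moved or the variation grew. A minimal sketch (the specification limits and process values below are hypothetical):

```python
# Cpk depends on both the process center and the process spread, so the
# same Cpk value can arise from very different root causes.

def cpk(mean, sigma, lsl, usl):
    """Process capability index: distance from the mean to the nearest
    spec limit, in units of 3 standard deviations."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

LSL, USL = 90.0, 110.0  # hypothetical spec limits

baseline = cpk(mean=100.0, sigma=2.0, lsl=LSL, usl=USL)     # centered, tight
shifted  = cpk(mean=104.0, sigma=2.0, lsl=LSL, usl=USL)     # mean drifted up
spread   = cpk(mean=100.0, sigma=10 / 3, lsl=LSL, usl=USL)  # variation grew

print(round(baseline, 2))                     # 1.67
print(round(shifted, 2), round(spread, 2))    # 1.0 1.0 — same Cpk, different causes
```

The two degraded cases report an identical Cpk, yet the fix for a drifted mean (recenter the process) is entirely different from the fix for inflated variation (find and remove the sources of spread).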

The key takeaway is that when multiple contributors influence a metric, we cannot just monitor the change in the metric alone.  We must go deeper and seek other information needed for our decisions.

Many people may recall their statistics teachers' constant admonition to "plot the data!" It is important to visualize the original data instead of relying on statistics alone, because statistics don't tell the whole story. The famous illustration of this point is Anscombe's quartet: four data sets (x, y) with nearly identical descriptive statistics (mean, variance, and correlation) and even the same linear regression fit and R². When visualized in scatter plots, however, they look drastically different. If we only looked at one or a few statistics, we would miss the differences. Again, statistics can hide the information we need.
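The first two sets of Anscombe's quartet make the point with nothing but the standard library:

```python
from statistics import mean, stdev

# Anscombe's quartet, sets I and II (real data): nearly identical summary
# statistics, drastically different shapes when plotted.
x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

# Same mean and standard deviation to two decimal places...
print(round(mean(y1), 2), round(mean(y2), 2))    # 7.5 7.5
print(round(stdev(y1), 2), round(stdev(y2), 2))  # 2.03 2.03
# ...yet set I is a noisy linear trend and set II a smooth parabola —
# a difference visible only in a scatter plot.
```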

Nowadays, there is too much data to digest, and modern tools can conveniently summarize and display them. When we use data to inform our business decisions, it’s easy to fall into the practice of looking only at the attractive summary in a report or on a dashboard.  The challenge of using data for decision making is to know what we want and where to get it.

Guess who wrote the following about monitoring information for decisions?

With the coming of the computer this feedback element will become even more important, for the decision maker will in all likelihood be even further removed from the scene of action. Unless he or she accepts, as a matter of course, that he or she had better go out and look at the scene of action, he or she will be increasingly divorced from reality.

Peter Drucker in 1967.  He further wrote:

All a computer can handle is abstractions. And abstractions can be relied on only if they are constantly checked against concrete results.  Otherwise, they are certain to mislead.

Metrics and statistics are abstractions of reality – not the reality. We must know how to choose and interpret these abstractions and how to complement this information with other types.1

1. For more discussion on “go out and look” (aka Go Gemba), see my blog Creating Better Strategies.

Understanding Process Capability
https://biopmllc.com/operations/understanding-process-capability/
Sat, 01 Aug 2020

Process capability is a key concept in Quality and Continuous Improvement (CI). For people not familiar with the concept, process capability is a process's ability to consistently produce product that meets customer requirements.

Conceptually, process capability is simple.  If a process makes products that meet the customer requirements all the time (i.e. 100%), it has a high process capability.  If the process does it only 80% of the time, it is not very capable.

For quality attributes measured as continuous (variable) data, many organizations use the Process Capability Index (Cpk) or Process Performance Index (Ppk) as the evaluation metric. In my consulting work, I often observe confusion and mistakes in applying the concept and its associated tools, even among Quality and CI professionals. For example:

  • Mix-up of Cpk and Ppk
  • Unclear whether or when process stability is a prerequisite
  • Using the wrong data (sampling) or calculation
  • Misinterpretation of process capability results
  • Difficulty evaluating processes with non-normal data, discrete data, or binary outcomes

The root cause of the gap between this simple concept and its effective application in the real world is, in my opinion, a lack of fundamental understanding of statistics among practitioners.

Statistics

First, a process capability metric, such as Cpk, is a statistic (which is, by definition, simply a function of data).  The function is typically given as a mathematical formula.  For example, mean (or the arithmetic average) is a statistic and is the sum of all values divided by the number of values in the data set.   

The confusion between Cpk and Ppk often comes from their apparently identical formulas, with the only difference being the standard deviation used.  Cpk uses the within-subgroup variation, whereas Ppk uses the overall variation in the data.  Which index should one use in each situation?
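A small sketch makes the distinction tangible. The data are hypothetical, and the within-subgroup sigma is estimated here by pooling subgroup variances (one common approach; R-bar/d2 is another):

```python
from statistics import mean, stdev

# Subgrouped measurements with a shift between subgroups (a special cause).
subgroups = [
    [99.8, 100.1, 99.9, 100.2],    # subgroup 1
    [100.0, 99.7, 100.3, 100.1],   # subgroup 2
    [103.0, 102.8, 103.2, 102.9],  # subgroup 3 — process shifted
]
LSL, USL = 95.0, 107.0  # hypothetical spec limits

all_data = [x for g in subgroups for x in g]
grand_mean = mean(all_data)

# Ppk uses the overall standard deviation across all data.
sigma_overall = stdev(all_data)
ppk = min(USL - grand_mean, grand_mean - LSL) / (3 * sigma_overall)

# Cpk uses the within-subgroup variation, which ignores shifts between
# subgroups (variances pooled; subgroups are equal-sized here).
sigma_within = mean(stdev(g) ** 2 for g in subgroups) ** 0.5
cpk = min(USL - grand_mean, grand_mean - LSL) / (3 * sigma_within)

print(sigma_within < sigma_overall)  # True — the shift inflates overall sigma
print(cpk > ppk)                     # True
```

Because the shift between subgroups never enters the within-subgroup sigma, Cpk looks excellent here while Ppk reflects what the customer actually received.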

It is important to understand that any function of data can be a statistic – whether it has any useful meaning is a different matter. The formula of a statistic does not by itself produce meaning. Plugging whatever data happen to be available into a formula rarely gives the answer we want.

To derive useful meaning from a statistic, we must first define our question or purpose and state assumptions and constraints.  Then we can identify the best statistic, gather suitable data, calculate and interpret the result. 

Enumerative and Analytic Studies

Enumerative and analytic studies1 have two distinct purposes. 

  • An enumerative (or descriptive) study aims to estimate some quantity in the population of interest, for example: how many defective parts are in this particular lot of product? 
  • An analytic (or comparative) study tries to understand the underlying cause-system or process that generates the result, for example, why does the process generate so many defective parts?

If the goal is to decide if a particular lot of product should be accepted or rejected based on the number of defective parts, then it is appropriate to conduct an enumerative study, e.g. estimating the proportion of defectives based on inspection of a sample from the lot.  A relevant consideration is sample size vs. economic cost – more precise estimates require larger samples and therefore cost more.  In fact, a 100% inspection will give us a definite answer.  In this case, we are not concerned with why there are so many defectives, just how many.
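The precision-versus-cost trade-off can be sketched with the normal-approximation margin of error for a proportion (the 5% observed defect rate below is hypothetical):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """95% margin of error for an estimated proportion (normal approximation)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

p_hat = 0.05  # 5% defectives observed in the sample (hypothetical)
for n in (50, 200, 1000):
    print(n, round(margin_of_error(p_hat, n), 3))
# The margin shrinks roughly with the square root of n: quadrupling
# the sample (and the inspection cost) only halves the uncertainty.
```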

If the goal is to determine if a process is able to produce a new lot of product at a specified quality level, it is an analytic problem because we first have to understand why (i.e. under what conditions) it produces more or fewer defectives.  Methods used in enumerative studies are inadequate to answer this question even if we measured all parts produced so far.  In contrast, control charts are a powerful analytic method that uses carefully designed samples (rational subgroups) over time to isolate the sources of variation in the process, i.e. understanding the underlying causes of the process outcome.  This understanding allows us to determine if the process is capable or needs improvement.
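As an illustration of the control-chart mechanics, here is a minimal X-bar limit calculation from rational subgroups. The measurements are hypothetical; A2 = 0.729 is the standard control-chart constant for subgroups of size 4:

```python
from statistics import mean

# X-bar control limits from rational subgroups sampled over time.
subgroups = [
    [100.1, 99.8, 100.3, 99.9],
    [100.0, 100.4, 99.7, 100.2],
    [99.9, 100.1, 100.0, 99.8],
]
A2 = 0.729  # control-chart constant for subgroup size n = 4

xbars = [mean(g) for g in subgroups]             # subgroup means
rbar = mean(max(g) - min(g) for g in subgroups)  # average subgroup range
grand_mean = mean(xbars)

ucl = grand_mean + A2 * rbar  # upper control limit
lcl = grand_mean - A2 * rbar  # lower control limit

# A subgroup mean outside the limits signals a special cause acting
# *between* subgroups, on top of the common-cause variation *within* them.
print(all(lcl <= xb <= ucl for xb in xbars))
```

This is exactly how rational subgrouping isolates sources of variation: the limits are built from within-subgroup ranges, so between-subgroup shifts show up as out-of-control points rather than being averaged away.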

Cpk versus Ppk

If our goal is to understand the performance of the process in a specific period (i.e. an enumerative study), we are only concerned with the products already made, not the inherent, potential capability of the process to produce quality products in the future.  In this case, demonstration of process stability (by using control charts) is not required, and Ppk using a standard deviation that represents the overall variability from the period is appropriate.  

If our goal is to plan for production, which involves estimating product quality in future lots, the process capability analysis is an analytic study.  Because we cannot predict what a process will produce with confidence if it is not stable, demonstration of process stability is required before estimating process capability. 

If the process is stable, there is no difference between the within-subgroup variation (used for Cpk) and the overall variation (used for Ppk), apart from estimation error. Therefore, Cpk and Ppk are equivalent.

If the process is not stable, the overall standard deviation is greater than the within-subgroup variation — Ppk is less than Cpk, as expected. However, Ppk is not a reliable measure of future performance because an unstable process is unpredictable. If (a big IF) the subgroups are properly designed, the within-subgroup variation is stable, and Cpk can be interpreted as the potential process capability once all special causes are eliminated. In practice, subgroups are often not designed or specified thoughtfully, making Cpk difficult to interpret.

In summary, process capability analysis requires a good understanding of statistical concepts and clearly defined goals. Interested practitioners can consult the many books and articles on this topic. I hope the brief discussion here helps clarify some of the concepts. 

1. Deming included a chapter “Distinction between Enumerative and Analytic Studies” in his book Some Theory of Sampling (1950).
