Improving Change Detection

Change detection in time-related data is a common application of statistical methods.  For example, we may want to detect whether consumer preferences have changed over time, whether a piece of equipment has deteriorated and requires maintenance, or whether a manufacturing process has drifted, increasing the risk of producing defects.

In my teaching, consulting, and general discussion with students and practitioners, I have noticed that many people are eager to learn the mechanics of different tools, e.g. how to choose a specific type of process control chart or how to determine the right parameters for a cumulative sum (CUSUM) chart, so they can get the job done.  But few ask the question: “What makes the tool effective in the real world?”
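The mechanics are indeed easy enough to write down.  Below is a minimal sketch of a tabular CUSUM in Python; every name and number is illustrative, and k and h are the usual textbook defaults (roughly half the shift you want to detect, and 4 to 5 standard deviations, respectively).  The harder question is what makes such a tool effective once it meets a real process.

```python
import numpy as np

def tabular_cusum(x, target, sigma, k=0.5, h=4.0):
    """One-sided upper and lower tabular CUSUM.

    k is the reference value and h the decision interval, both in units
    of sigma.  Returns the indices where either statistic exceeds h.
    """
    k_abs, h_abs = k * sigma, h * sigma
    c_plus = c_minus = 0.0
    alarms = []
    for i, xi in enumerate(x):
        c_plus = max(0.0, c_plus + (xi - target) - k_abs)
        c_minus = max(0.0, c_minus + (target - xi) - k_abs)
        if c_plus > h_abs or c_minus > h_abs:
            alarms.append(i)
    return alarms

# Illustrative data: a process on target 100 with sigma 2 that shifts up
# by one sigma at point 50.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(100, 2, 50), rng.normal(102, 2, 50)])
print(tabular_cusum(data, target=100, sigma=2))
```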

In the case of a control chart, a crucial condition for its effectiveness is process standardization.

Continuous Improvement (CI) professionals know that standardized work is a fundamental principle of the Toyota Production System (TPS), or Lean.  Standardization minimizes process variation, which gives the control chart greater sensitivity for detecting special cause variation.

Many people don’t realize that a control chart showing no special cause variation does not necessarily mean the process is in statistical control.  In many cases, such as processes that lack standardization, too many uncontrolled variables are present, and their effects get absorbed into the chart as if they were part of the process.  But these variables are not inherent to the process, and they inflate the apparent common cause variation.

The accompanying figure shows a hypothetical example.  The top chart shows a stable process, except that points 51 to 55 have a positive deviation of 20.  The individuals chart (or I-chart) detects the change.  The bottom chart shows the same process with the same positive deviation at points 51 to 55, but with additional random deviations (noise) added.  The control limits are wider, and the special cause variation is no longer detected.

Additional noise reduces the ability of a control chart to detect change.
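The effect is easy to reproduce with a short simulation.  The sketch below uses Python with purely illustrative parameters (a baseline mean of 100 with standard deviation 1, a +20 shift at points 51 to 55, and added noise with standard deviation 15): it computes individuals-chart limits from the average moving range and reports which points fall beyond them.  With the extra noise, the same shift typically produces no signal.

```python
import numpy as np

def i_chart_signals(x):
    """Return the indices of points beyond the individuals-chart limits.

    Limits are mean(x) +/- 2.66 * average moving range, where
    2.66 = 3 / d2 with d2 = 1.128 for moving ranges of size 2.
    """
    x = np.asarray(x, dtype=float)
    mr_bar = np.mean(np.abs(np.diff(x)))
    center = x.mean()
    ucl, lcl = center + 2.66 * mr_bar, center - 2.66 * mr_bar
    return np.where((x > ucl) | (x < lcl))[0]

rng = np.random.default_rng(0)
clean = rng.normal(100, 1, 100)          # stable process
clean[50:55] += 20                       # special cause at points 51-55

noisy = clean + rng.normal(0, 15, 100)   # same shift, plus uncontrolled noise

print("signals without extra noise:", i_chart_signals(clean))
print("signals with extra noise   :", i_chart_signals(noisy))
```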

In my observation of real processes, many contain both special cause variation and the additional noise illustrated above.  So naturally, CI professionals tend to focus on eliminating the special causes to bring the process back into control.  However, with the noise persisting, the process never reaches its true state of control.

The additional noise can come from many sources.  A major source is lack of standardization. 

In a regular production environment, operating procedures have room for interpretation and thus can lead to process variation.  In my experience in R&D and manufacturing, many people honestly believe that they follow the same procedure each time.  But upon careful investigation, deviations are common.

Those familiar with gage repeatability & reproducibility (R&R) studies appreciate the potential for human error and deviation.  Even with a well-established measurement procedure, the same operator can get varying results when measuring the same items (repeatability error).  Different operators likely introduce additional variability (reproducibility error).  In a less standardized process, there are many more opportunities for deviation.
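To make the distinction concrete, here is a minimal sketch of a crossed gage R&R variance decomposition in Python.  The data are simulated and every name and number is illustrative; a real study would follow a designed protocol (for example 10 parts, 3 operators, 2 to 3 replicates) and would typically be analyzed with dedicated statistical software.

```python
import numpy as np

def gage_rr(data):
    """Crossed gage R&R variance components from ANOVA mean squares.

    data has shape (parts, operators, replicates) and must be balanced.
    Returns (repeatability, reproducibility, part-to-part) variances.
    """
    p, o, r = data.shape
    grand = data.mean()
    part_means = data.mean(axis=(1, 2))
    oper_means = data.mean(axis=(0, 2))
    cell_means = data.mean(axis=2)

    ss_part = o * r * np.sum((part_means - grand) ** 2)
    ss_oper = p * r * np.sum((oper_means - grand) ** 2)
    ss_cell = r * np.sum((cell_means - grand) ** 2)
    ss_inter = ss_cell - ss_part - ss_oper
    ss_error = np.sum((data - grand) ** 2) - ss_cell

    ms_part = ss_part / (p - 1)
    ms_oper = ss_oper / (o - 1)
    ms_inter = ss_inter / ((p - 1) * (o - 1))
    ms_error = ss_error / (p * o * (r - 1))

    repeatability = ms_error
    v_oper = max(0.0, (ms_oper - ms_inter) / (p * r))
    v_inter = max(0.0, (ms_inter - ms_error) / r)
    v_part = max(0.0, (ms_part - ms_inter) / (o * r))
    return repeatability, v_oper + v_inter, v_part

# Illustrative data: 10 parts, 3 operators, 2 replicates each
rng = np.random.default_rng(7)
parts = rng.normal(50, 5, size=(10, 1, 1))    # true part-to-part variation
bias = rng.normal(0, 1, size=(1, 3, 1))       # operator bias (reproducibility)
noise = rng.normal(0, 0.5, size=(10, 3, 2))   # repeatability error
measurements = parts + bias + noise

rep, repro, part = gage_rr(measurements)
print(f"repeatability {rep:.2f}, reproducibility {repro:.2f}, part-to-part {part:.2f}")
```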

The effectiveness of standardization in reducing noise is limited by our understanding of the design space and the critical process variables.  Because many processes have not been thoroughly studied and designed using Quality by Design (QbD) principles, some residual noise will likely remain even after standardization.

In summary, if you want to improve change detection, make sure you identify the sources of extra noise in the process and control them operationally.

Achieving Improvement

In my blog Setting SMART Goals, I made the point that having a measurable goal in an improvement project is not enough — we have to know how the goal is measured and interpreted to make it useful.

What makes a goal achievable?  In my work as a Continuous Improvement (CI) coach and consultant, I have seen some common practices for setting a numerical goal, for example:

  1. A target set by management, e.g. a productivity standard for the site
  2. Customer requirements, e.g. a minimum process capability
  3. Some benchmark value from a similar process
  4. A number with sufficient business benefit, e.g. 10% improvement

At first glance, these methods seem reasonable.  In practice, they are problematic for two reasons.

First, the goals are based on what is desirable, not on a sound, data-based understanding of the opportunity.  How do we know whether a desirable goal is achievable?  In many organizations, a numerical goal is “set in stone” when the project starts, and failing to achieve it can have career repercussions.  While management tends to aim for aggressive targets, project leaders are more concerned with the risk of failing to meet them.  They prefer a more “realistic” target that can be met or even exceeded and negotiate with the sponsors to make the desirable target a “stretch” goal.  In the end, no one knows what the real improvement opportunity is.

Second, these practices create a mindset and behavior that are not conducive to a CI culture.  I have seen too many organizations whose Lean, Six Sigma, or other CI initiatives focus only on training and project execution.  They fail to build CI into their daily decisions, operations, and culture.  Quality improvement cannot be accomplished by projects alone – numerous incremental improvement opportunities exist in routine activities outside any project.  Projects, by their nature, are of limited duration and are merely one mechanism of continuous improvement; most improvement does not require a project.  Depending on projects to improve a process is a misunderstanding of CI; it reinforces reactive (firefighting) behavior and sends the wrong message that improvement is achieved through projects and, even worse, by specialists.

Creating a project with only a desired target leads to high uncertainty in project scope, resources, and timelines – a lot of waste. 

To be effective, a CI project should have a specific opportunity identified based on systematic analysis of the process.  Furthermore, the opportunity is realized through a project only if it requires additional and/or specialized resources; otherwise, the improvement should be carried out within routine activities by the responsible people in collaboration. 

What kind of systematic analysis should we perform to identify the opportunities?

One powerful analysis concerns process stability.  It requires understanding the nature and sources of variation in a process or system.  In a stable process, there is only common cause variation, and its performance is predictable.  If a process is not stable, special cause variation exists, and its performance is not predictable.  Depending on process stability, the improvement opportunity and the appropriate approach are distinct.

The first question I ask about the goal of any improvement project is “Is the current performance unexpected?”  In other words, is the process performing as predicted?  No project should start without answering this question satisfactorily in terms of process stability.  Most often the answer is something like “We don’t really know, but we want something better.”  If you don’t know where you are, how do you get to where you want to be?  This is a typical symptom of a project driven by desirability rather than by a specific opportunity identified through analysis.  If process stability were examined, most likely the first step toward improvement would be to understand and reduce process variation, which does not need a project.
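As a rough illustration of what answering that question can look like, the sketch below (Python, with made-up data; the 3-sigma limits and the rule of eight consecutive points on one side of the center line are common, not mandatory, choices) screens a performance series for basic signs of special cause variation before any goal is set.

```python
import numpy as np

def stability_screen(x):
    """Screen a series for two basic special-cause signals:
    points beyond 3-sigma individuals limits, and a run of 8 or more
    consecutive points on the same side of the center line."""
    x = np.asarray(x, dtype=float)
    center = x.mean()
    sigma_hat = np.mean(np.abs(np.diff(x))) / 1.128  # sigma from the average moving range
    beyond = int(np.sum(np.abs(x - center) > 3 * sigma_hat))

    side = np.sign(x - center)
    longest = run = 1
    for prev, cur in zip(side[:-1], side[1:]):
        run = run + 1 if cur == prev and cur != 0 else 1
        longest = max(longest, run)
    return beyond, longest

# Illustrative monthly yield data (%) for the process whose goal is under discussion
rng = np.random.default_rng(3)
monthly_yield = rng.normal(92, 1.5, 36)
beyond, longest = stability_screen(monthly_yield)
print(f"points beyond limits: {beyond}, longest one-sided run: {longest}")
print("no obvious special cause:", beyond == 0 and longest < 8)
```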

For people familiar with Deming’s 14 Points for Management, I have said nothing new.  I have merely touched on Point 11, “Eliminate management by numbers, numerical goals.”  His original words1 are illustrative.

“If you have a stable system, then there is no use to specify a goal.  You will get whatever the system will deliver.  A goal beyond the capability of the system will not be reached.”

“If you have not a stable system, then there is again no point in setting a goal.  There is no way to know what the system will produce: it has no capability.”

A goal statement that sounds SMART does not make a project smart.  A project devoid of true improvement opportunity achieves nothing but waste.  But if we follow the path shown by Deming, opportunities abound and improvement continues. 


1. Deming, W. Edwards. Out of the Crisis: Quality, Productivity, and Competitive Position. Cambridge, MA: Massachusetts Institute of Technology, Center for Advanced Engineering Study, 1986.

Is Your Process in Control?

In 1980, the American Society for Quality (ASQ) republished Walter Shewhart’s seminal book Economic Control of Quality of Manufactured Product as a 50th anniversary edition.  In his Dedication to this commemorative issue, W. Edwards Deming wrote: “There was never before greater need for statistical methods in industry and in research.”  I’d say the same today, almost 40 years later.

In the past decade, the US Food and Drug Administration (FDA) and regulatory bodies in other countries have published a number of guidance documents to encourage the use of statistical methods and sound science.  For example, Process Validation: General Principles and Practices lays out a three-stage framework for process validation (PV) built on the principles of Statistical Process Control (SPC): Process Design, Process Qualification, and Continued Process Verification (CPV).

The life sciences industry is increasingly embracing the concepts and practice of SPC to improve the quality, safety, and cost of pharmaceutical and other medical products, yet progress remains slow.  As a practitioner and consultant in statistical methods, I have seen the challenges facing many organizations.  Here are a few examples of incorrect or ineffective use of control charts.

  1. Retrospective analysis of what went wrong.  Control charts are often used as a tool for root cause analysis after the fact.  While such analysis can provide useful insight and lessons learned, not much can be done about events that happened months or years ago.  The relevant evidence and knowledge are long gone, and the opportunity to understand the true cause and make a positive impact has been lost.
  2. Updating control limits with every new observation.  This happens when a control chart is generated by software: it is easy to do but neither correct nor necessary.  If the control limits represent the true inherent variation of the process, they should not change except for an assignable reason (see the sketch after this list).
  3. Poor measurement systems.  The observed variation comes from both the process and the measurement.  If the measurement system itself is inadequate, i.e., has too much variation relative to the process variation, a control chart of the process will not produce correct signals.
  4. Used only by specialists.  Process operators and other staff are not trained or involved in generating the charts.  The data and charts live in a computer, visible to only a few selected members, and the rest of the organization is not routinely engaged in understanding process variation.
  5. Over-reliance on software and rules.  Software can quickly and reliably process the data and detect special cause variation using preset rules, but it cannot connect observations on the shop floor with the analysis the way humans can.  That is a missed opportunity for learning, especially when compounded by a lack of broader involvement across the organization.
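On item 2 above, here is a minimal sketch (Python, with made-up numbers) of the intended workflow: estimate the limits once from a qualified baseline period, freeze them, and apply them to new observations, revising them only for an assignable, documented reason such as a deliberate process change.

```python
import numpy as np

def i_chart_limits(baseline):
    """Individuals-chart limits estimated once from a qualified baseline."""
    baseline = np.asarray(baseline, dtype=float)
    mr_bar = np.mean(np.abs(np.diff(baseline)))
    center = baseline.mean()
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

rng = np.random.default_rng(11)
baseline_lots = rng.normal(10.0, 0.2, 30)     # e.g. 30 lots from a qualified period
lcl, cl, ucl = i_chart_limits(baseline_lots)  # freeze these limits

new_lots = rng.normal(10.0, 0.2, 12)
new_lots[-1] += 1.0                           # a genuinely shifted lot
out_of_control = (new_lots < lcl) | (new_lots > ucl)
print("out-of-control new lots:", np.where(out_of_control)[0])

# The limits are NOT recomputed as each new lot arrives; they are revised only
# when the process itself changes for an assignable, documented reason.
```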

Effective use of SPC and other scientific methods requires both resources and expertise.  But it is achievable with careful planning.  If you want to implement or invigorate SPC, I recommend paying attention to the following.

  1. Get the organization’s commitment from the top.  Keep in mind that this capability development is also a culture change in most organizations.  It takes time, resources, and long-term commitment to change. Change management is essential. 
  2. Develop deep expertise in quality.  SPC cannot be implemented in isolation; it is an integral part of quality management.  Ineffective use of SPC is often due to a lack of understanding of fundamental quality theory.  Ideally the resource should be an internal or external consultant with expertise in statistics, science, and business, as well as the subject matter.
  3. Involve the whole organization.  A few experts are not sufficient if the rest of the organization does not have a quality mindset.  Make sure that everyone (including management) is trained in the basics of quality concepts and tools and understands how they contribute to quality in their daily work.  Wherever possible, develop mechanisms to allow them to use the tools or data and create routine dialogues on quality and process improvement.

I hope that with the concerted effort of the life science industry and regulators, by 2030 when the 100th anniversary edition of Shewhart’s book is published, we will see much progress in our industry.
