Understanding Variation

Lean and Six Sigma are two common methodologies in Continuous Improvement (CI).  However, neither has a precise definition.  Many disagree on the definitions or even the value of these methodologies, and I won't join that debate here.  What I care about is the underlying principles used by these methodologies – whatever substance is useful, independent of the label.

The questions about “what is Lean” and “what is Six Sigma” inevitably come up when you train and coach people in CI methodologies.  Without delving into the principles, my answer goes something like this:

  • Lean is about delivering value to the customer, fast and with minimum waste.
  • Six Sigma is about understanding and reducing variation.

Neither answer is satisfactory, nor is it meant to be.  But practically, these messages are effective in stressing the concepts people need to develop, namely value and variation, which are prerequisites for CI.  It's hard to understand the true meaning of life or happiness when we are 5 years old.  Likewise, it takes a lifetime of experience to understand the true meaning and principles of CI and apply them well.

While the concept of value versus waste is intuitive, most people don't interpret their daily observations in terms of variation.  Because of the (over-)emphasis on statistical tools in Six Sigma by many consultants, many organizations prefer Lean to Six Sigma (see my earlier blog "How is your Lean developing" for potential pitfalls in relying on simple Lean tools).  The lack of appreciation of the concept of variation will eventually constrain the organization's ability to improve.

There are many applications of the concept of variation in understanding and improving a process.  Most applications don’t require sophisticated knowledge in statistics or probability theory.  One example is management of supply and demand.

Let’s say that you plan your resources and capacity to meet a target demand level.  The demand can be from internal or external customers, and can be for products, services, materials, or projects. For simplicity, let’s assume that it’s a fixed capacity without any variation, e.g. no unplanned downtime or sick leaves.  

If you plan enough resources for the total or average demand but the demand varies greatly (upper left of the figure), you will meet the demand exactly only occasionally. Most of the time, you will either not have enough capacity (creating backlogs or bottlenecks) and miss some opportunities or have too much capacity and lose the unused resources forever.

If it is too costly to miss the opportunities, some organizations are forced to raise the capacity (upper right of the figure). Many optimize the resources to strike a balance between lost capacity and missed opportunities.  What I have observed is that organizations go back and forth between maximizing opportunities and reducing waste.  One improvement project is sponsored (by one function) to reduce the risk of missed opportunities, with a solution that shows a high return on investment for the added resources.  As a result, excess capacity becomes common, leading to another project (probably by another function) to reduce waste and maximize resource utilization.  The next demand surge will lead to another round of improvement projects.

Many people don’t realize that the real long-term improvement has to address the issue of demand variation.  For example, if we understand the sources of demand variation and therefore develop solutions to limit it, both missed opportunities and lost capacity will be reduced (bottom half of the figure).  A much lower capacity is needed to satisfy the same overall but less variable demand.

Capacity variation has similar effects. 

What is more interesting is that most processes are made of a series of interdependent supply-demand stages, each of which propagates or accumulates the effect of variation.  We can use this understanding of variation to explain many phenomena in our lives, e.g. process bottlenecks, traffic jams, project delays, supply overage, excess inventory, etc.  The Theory of Constraints popularized by Eliyahu Goldratt in his book The Goal is also based on the same ideas of process interdependence and variation.
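The compounding effect of interdependence and variation can be sketched with a simple simulation in the spirit of the dice game often used to teach The Goal (the stage count and round count below are arbitrary).  Every stage has the same average capacity of 3.5 units per round, yet the line delivers noticeably less, because a low roll at any stage starves everything downstream while a high roll cannot be used without material to work on.

```python
import numpy as np

rng = np.random.default_rng(1)

def line_throughput(n_stages=5, rounds=20):
    """One run of a serial line: each stage moves at most a die roll per round,
    and never more than the WIP waiting in front of it (stage 0 never starves)."""
    wip = np.zeros(n_stages)            # WIP queued in front of stages 1..n-1
    finished = 0
    for _ in range(rounds):
        rolls = rng.integers(1, 7, n_stages)
        moved = rolls[0]                # first stage draws from unlimited raw material
        for i in range(1, n_stages):
            wip[i] += moved             # output of the previous stage arrives
            moved = min(rolls[i], wip[i])
            wip[i] -= moved
        finished += moved
    return finished / rounds

runs = [line_throughput() for _ in range(2000)]
print("average capacity of every stage: 3.5 units per round")
print(f"average line throughput: {np.mean(runs):.2f} units per round")  # well below 3.5
```

Balancing every stage to the same average capacity is not enough; the variation at each stage and the dependence between stages together cap the throughput of the whole line.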

No matter what CI methodologies you use, I hope you agree that understanding and reducing variation is always a key to improvement. 

Can You Trust Your Data?

Data is a new buzzword.  Big Data, data science, data analytics, etc. are words that surround us every day.  With the abundance of data, the challenges of data quality and accessibility become more prevalent and relevant to organizations that want to use data to support decisions and create value.  One question about data quality is "can we trust the data we have?"  No matter what analysis we perform, it's "garbage in, garbage out."

This is one reason that Measurement System Analysis (MSA) is included in all Six Sigma training.  Because Six Sigma is a data-driven business improvement methodology, data is used in every step of the problem-solving process, commonly following the Define-Measure-Analyze-Improve-Control (or DMAIC) framework.  The goal of MSA is to ensure that the measurement system is adequate for the intended purpose.   For example, a typical MSA evaluates the accuracy and precision of the data. 

In science and engineering, much more comprehensive and rigorous studies of a measurement system are performed for specific purposes.  For example, the US Food and Drug Administration (FDA) publishes a guidance document: Analytical Procedures and Methods Validation for Drugs and Biologics, which states

“Data must be available to establish that the analytical procedures used in testing meet proper standards of accuracy, sensitivity, specificity, and reproducibility and are suitable for their intended purpose.”

While the basic principles and methods have been available for decades, most organizations lack the expertise to apply them properly.  In spite of good intentions to improve data quality, many make the mistake of sending newly trained Six Sigma Green Belts (GBs) or Black Belts (BBs) to conduct MSAs and fix measurement system problems.  The typical Six Sigma training material in MSA (even at the BB level) is severely insufficient if the trainees are not already proficient in science, statistical methods, and business management.  Most GBs and BBs are ill-prepared to address data quality issues.

Here are just a few examples of improper use of MSA associated with Six Sigma projects.

  • Starting Six Sigma projects to improve operational metrics (such as cycle time and productivity) without a general assessment of the associated measurement systems.  If the business metrics are used routinely in decision making by the management, it should not be a GB’s job to question the quality of these data in their projects.  It is management’s responsibility to ensure the data are collected and analyzed properly before trying to improve any metric.
  • A GB is expected to conduct an MSA on a data source before a business reason or goal is specified.  Is it the accuracy or precision that is of most concern, and why?  How accurate or precise do we want to be?  MSA is not a check-box exercise; it consumes an organization's time and money.  The key question is "is the data or measurement system good enough for the specific purpose or question?"
  • Asking a GB to conduct an MSA in the Measure phase and expecting him/her to fix any inadequacy as a part of a Six Sigma project.  In most cases, changing the measurement system is a project by itself.  It is out of scope of the Six Sigma project.  Unless the system is so poor that it invalidates the project, the GB should pass the result from the MSA to someone responsible for the system and move on with his/her project.
  • A BB tries to conduct a Gage Repeatability & Reproducibility (R&R) study on production data when a full analytical method validation is required.  A typical Gage R&R includes only a few operators to study measurement variation, whereas many processes have far more sources of variation in the system, which requires a much more comprehensive study (see the sketch after this list for what a basic Gage R&R does and does not cover).  This happens when the BB lacks domain expertise and advanced training in statistical methods.
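For contrast, here is a minimal sketch (on simulated, illustrative data) of what a basic crossed Gage R&R estimates using the standard ANOVA method: it partitions observed variation into repeatability, reproducibility, and part-to-part, and nothing more.  Accuracy, specificity, sensitivity, and the other elements of a full method validation are simply outside its scope.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated crossed study: 10 parts x 3 operators x 3 repeats (illustrative numbers)
rng = np.random.default_rng(2)
n_parts, n_opers, n_reps = 10, 3, 3
part_effect = rng.normal(0, 2.0, n_parts)    # part-to-part variation
oper_effect = rng.normal(0, 0.5, n_opers)    # operator (reproducibility) variation
rows = [
    {"part": p, "operator": o,
     "y": 50 + part_effect[p] + oper_effect[o] + rng.normal(0, 0.3)}  # 0.3 = repeatability
    for p in range(n_parts) for o in range(n_opers) for _ in range(n_reps)
]
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction, then variance components from expected mean squares
tbl = anova_lm(smf.ols("y ~ C(part) * C(operator)", data=df).fit(), typ=2)
tbl["mean_sq"] = tbl["sum_sq"] / tbl["df"]
ms_p, ms_o = tbl.loc["C(part)", "mean_sq"], tbl.loc["C(operator)", "mean_sq"]
ms_po, ms_e = tbl.loc["C(part):C(operator)", "mean_sq"], tbl.loc["Residual", "mean_sq"]

repeatability = ms_e
interaction = max((ms_po - ms_e) / n_reps, 0)
reproducibility = max((ms_o - ms_po) / (n_parts * n_reps), 0) + interaction
part_to_part = max((ms_p - ms_po) / (n_opers * n_reps), 0)
grr = repeatability + reproducibility
print(f"%GRR (share of total standard deviation): {100 * np.sqrt(grr / (grr + part_to_part)):.1f}%")
```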

To avoid such common mistakes, organizations should consider the following simple steps.

  1. Identify critical data and assign their respective owners
  2. Understand how the data are used, by whom, and for what purpose
  3. Decide the approach to validate the measurement systems and identify gaps
  4. Develop and execute plans to improve the systems
  5. Use data to drive continuous improvement, e.g. using Six Sigma projects

Data brings us opportunities.  Is your organization ready?

Capabilities of Future Leaders

What capabilities are required for future leaders in life sciences? How can organizations develop such leaders? A recent McKinsey article, Developing Tomorrow's Leaders in Life Sciences, addresses exactly these questions. Using data from their 2017 survey on leadership development in life sciences, the authors illustrated the gaps and opportunities and presented five critical skills:

  1. Adaptive mind-set
  2. 3-D savviness
  3. Partnership skills
  4. Agile ways of working
  5. A balanced field of vision

It is a well-written article with useful insights and actionable recommendations for effective leadership development. However, there is one flaw – the presentation of the survey data. Did you notice any issues in the figures?

I can see at least two problems that undermine the credibility and impact of the article.

Inconsistent numbers
The stacked bar charts have four individual groups (C-suite, Top team, Middle managers, and Front line) in addition to the Overall. In Exhibit 1, for example, the percentages of the respondents that strongly agree with the statement “My organization has a clear view of the 2-3 leadership qualities and capabilities that it wants to be excellent at” are 44, 44, 30, and 33%, respectively. Overall, can 44% of them strongly agree? No. But that is the number presented.

It doesn’t take a mathematical genius to know that the overall (or weighted average) has to be within the range of the individual group values, i.e. 30 < overall < 44. Similarly, it is not possible to have an 8% overall “Neither agree nor disagree” when the individual groups have 11, 9, 16, and 17%. The same inconsistency pattern appears in Exhibits 4 and 5.
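The check requires no statistics at all: whatever the unreported group sizes are, an overall that is a weighted average of the groups must fall within their range. A trivial sanity check, sketched here in Python, flags both figures.

```python
def overall_is_consistent(group_values, reported_overall):
    """An overall that is a weighted average of the groups must lie within
    their range, whatever the unknown group sizes are."""
    return min(group_values) <= reported_overall <= max(group_values)

# "Strongly agree" and "Neither agree nor disagree" figures from Exhibit 1
print(overall_is_consistent([44, 44, 30, 33], 44))   # False -> inconsistent
print(overall_is_consistent([11, 9, 16, 17], 8))     # False -> inconsistent
```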

Which numbers are correct?

No mention of sample size
Referring to Exhibit 1, the authors compared the executive responses in the “strongly agree” category (“less than 50 percent”) to those of middle managers and frontline staff (“around 30 percent”), stating there is a drop from the executives to the staff. But can a reader make an independent judgment whether the difference between the two groups really exists? No, because the numbers alone, without a measure of uncertainty, cannot support the conclusion.

We all know that a survey like this only measures a limited number of people, or a sample, from each target group. The resulting percent values are only estimates of the true but unknown values and are subject to sampling error due to random variation, i.e. a different set of respondents will result in a different percent value.

The errors can be large in such surveys, depending on the sample size. For example, if 22 out of 50 people in one group agree with the statement, the true percent value may be anywhere in the range of 30-58% (or 44±14%). If 15 out of 50 agree in another group, its true value may be in the range of 17-43% (or 30±13%). There is considerable overlap between the two ranges, so the true proportions of people who agree with the statement may not be different at all. In contrast, if the sample size is 100 and the data are 44/100 vs. 30/100 (the same proportions as in the first example), the ranges where the true values may lie are tighter: 34-54% (44±10%) vs. 21-39% (30±9%). Now it is more likely that the two groups genuinely differ in the proportion of people who agree.
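These ranges can be reproduced with a simple normal-approximation (Wald) 95% confidence interval for a proportion, as in this minimal sketch:

```python
import math

def wald_ci_pct(successes, n, z=1.96):
    """Approximate 95% confidence interval for a proportion, in percent."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return 100 * p, 100 * margin

for successes, n in [(22, 50), (15, 50), (44, 100), (30, 100)]:
    pct, margin = wald_ci_pct(successes, n)
    print(f"{successes}/{n}: {pct:.0f}% ± {margin:.0f}%")
# 22/50: 44% ± 14%   15/50: 30% ± 13%   44/100: 44% ± 10%   30/100: 30% ± 9%
```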

Not everyone needs to know how to calculate the above ranges or determine the statistical significance of the observed difference. But decision makers who consume data should have a basic awareness of the sample size and its impact on the reliability of the values presented. Drawing conclusions without necessary information could lead to wrong decisions, waste, and failures.

Beyond the obvious errors and omissions discussed above, numerous other errors and biases are common in the design, conduct, analysis, and presentation of surveys or other data. For example, selection bias can lead to samples not representative of the target population being analyzed. Awareness of such errors and biases can help leaders ask the right questions and demand the right data and analysis to support the decisions.

In the Preface of Out of the Crisis, W. Edwards Deming made it clear that “The aim of this book is transformation of the style of American management” and that “Anyone in management requires, for transformation, some rudimentary knowledge of science—in particular, something about the nature of variation and about operational definitions.”

Over the three and a half decades since Out of the Crisis was first published, the world has produced orders of magnitude more data. The pace is accelerating. However, the ability of management to understand and use data has hardly improved.

The authors of the McKinsey article are correct about 3-D savviness: “To harness the power of data, design, and digital (the three d’s) and to stay on top of the changes, leaders need to build their personal foundational knowledge about what these advanced technologies are and how they create business value.” One simple measure of that foundational knowledge is the ability to use and interpret a stacked bar chart correctly.

Now, more than ever, leaders need the rudimentary knowledge of science.
