Fang Zhou – biopm, llc
https://biopmllc.com – Improving Knowledge Worker Productivity

Approaches to Lean Six Sigma Deployment
https://biopmllc.com/strategy/approaches-to-lean-six-sigma-deployment/ – Tue, 01 Jun 2021

In my previous blogs, I discussed some challenges in deploying continuous improvement (CI) methodologies in organizations and made recommendations, such as

In the last recommendation, I didn’t include an alternative approach because it required more elaboration.

The traditional Lean Six Sigma (LSS) deployment uses classroom training to teach concepts and tools to employees, who become Green Belt (GB) or Black Belt (BB) candidates.  Inexperienced GBs and BBs leading improvement projects often struggle to recall what they learned in class and to relate it to real-world problems.

What I think works better is project-based learning, in which the employees learn by participating in a job-related project led by an experienced CI professional.   The on-the-job hands-on learning is supplemented by expert coaching and self-paced learning. 

Assuming the organization is new to CI, I propose that it start with a pilot project led by a CI veteran who can guide the organization on a learning journey.  The journey will not only teach the team CI methodologies but also help the organization’s leaders discover existing gaps, risks, issues, and opportunities, which leads to a better long-term strategy.  This CI leader has multiple roles: coach to the organization’s leaders, leader of the project, and trainer of CI methodologies for the employees.

The proposed approach achieves multiple goals.

  • Enable the organization to achieve optimal outcomes
  • Build internal capabilities, including processes and skills
  • Help develop a CI strategy and culture for the long term

The approach can include the following.

  1. The senior CI sponsor (a top executive) recruits or retains a truly experienced CI leader (either an employee or a consultant), with an explicit role of leading the pilot project, assessing the organization, and helping develop its deployment strategy
  2. The CI leader works with the sponsor to charter a suitable project, including clear expectations of their respective roles
  3. The CI leader works with the sponsor and other managers to select project team members
  4. The sponsor clearly communicates the role, responsibilities, and decision power of the CI leader to the entire organization
  5. The sponsor personally demonstrates his/her commitment and holds the organization accountable
  6. The CI leader leads the project and project team, giving just-in-time training as appropriate (Lean, Six Sigma, project management, change management, statistical methods, etc.)
  7. The CI leader engages the team in using the CI concepts and tools in the project and demonstrates their value and limitations
  8. Project members are given ample materials and opportunities to expand the learning on their own and have open access to coaching by the CI leader
  9. The CI leader assesses the organization (e.g. organizational readiness, maturity, culture) and team members (e.g. skills, behavior, performance) throughout the entire project lifecycle
  10. The CI leader provides analyses (e.g. SWOT) and recommendations to the sponsor, such as deployment strategy, high value projects, and high potential employees (i.e. future leaders)

This approach will avoid many common pitfalls in LSS training and deployment and take advantage of many opportunities provided by modern technology, such as online and on-demand learning.

The two limiting factors I see are a capable CI leader and a committed sponsor.

What other alternatives would you recommend?

Improving Change Detection
https://biopmllc.com/operations/improving-change-detection/ – Sat, 01 May 2021

Change detection in time-related data is a common application of statistical methods.  For example, we may want to detect whether consumer preferences have changed over time, whether a piece of equipment has deteriorated and requires maintenance, or whether a manufacturing process has drifted, increasing the risk of producing defects.

In my teaching, consulting, and general discussion with students and practitioners, I have noticed that many people are eager to learn the mechanics of different tools, e.g. how to choose a specific type of process control chart or how to determine the right parameters for cumulative sum (CUSUM), so they can get the job done.  But few ask the question: “what makes the tool effective in the real world?”

In the case of a control chart, a crucial condition that makes the control chart effective is process standardization. 

Continuous Improvement (CI) professionals know that standardized work is a fundamental principle of the Toyota Production System (TPS) or Lean.  Standardization minimizes process variation, which enables greater sensitivity in detecting special cause variation by the control chart.

Many people don’t realize that even if a control chart shows no special cause variation, it does not mean the process is in statistical control.  In many cases, such as processes that lack standardization, too many uncontrolled variables are present, and they become part of the process.  These variables are not inherent to the process, yet they inflate the common cause variation.

The accompanying figure shows a hypothetical example.  The top chart shows a stable process, except that points 51 to 55 have a positive deviation of 20.  The individuals chart (or I-chart) detects the change.  The bottom chart is the same process with the same positive deviation for points 51 to 55, but some random deviations (or noise) are added. The control limits are more spread out, and the special cause variation is not detected.
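The figure’s behavior can be reproduced in a few lines.  The sketch below is a hypothetical illustration (not the actual data behind the figure): an individuals chart built from moving ranges flags a +20 shift at points 51–55 when the baseline noise is small, while extra noise (standard deviation 10) widens the limits enough that the same shift typically no longer stands out.

```python
# Sketch, assuming normally distributed noise: a +20 shift at points 51-55
# is obvious on a quiet I-chart but masked when extra noise widens the limits.
import random
import statistics

def i_chart_limits(data):
    # Individuals-chart limits: center +/- 2.66 * average moving range
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    half_width = 2.66 * statistics.mean(moving_ranges)
    center = statistics.mean(data)
    return center - half_width, center + half_width

def out_of_control(data):
    lcl, ucl = i_chart_limits(data)
    return [i for i, x in enumerate(data) if not lcl <= x <= ucl]

random.seed(42)
shift = [20 if 50 <= i <= 54 else 0 for i in range(100)]  # points 51-55
quiet = [s + random.gauss(0, 1) for s in shift]           # baseline sd = 1
noisy = [s + random.gauss(0, 10) for s in shift]          # extra noise, sd = 10

print(out_of_control(quiet))  # the shifted points are among the signals
print(i_chart_limits(quiet))
print(i_chart_limits(noisy))  # far wider limits than the quiet chart
```

The same 2.66 constant applies in both cases; only the inflated moving ranges change, which is exactly why the noisy chart loses sensitivity.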

Additional noise reduces the ability of a control chart to detect change.

In my observation of real processes, many contain both special cause variation and the additional noise illustrated above.  So naturally, CI professionals tend to focus on reducing the special causes to bring the process back into control.  However, with the noise persisting, the process never reaches its true state of control.

The additional noise can come from many sources.  A major source is lack of standardization. 

In a regular production environment, operating procedures have room for interpretation and thus can lead to process variation.  In my experience in R&D and manufacturing, many people honestly believe that they follow the same procedure each time.  But upon careful investigation, deviations are common.

Those familiar with gage repeatability & reproducibility (R&R) studies appreciate the potential for human errors or deviations. Using a well-established measurement procedure, the same operator can still have varying results measuring the same items (i.e. repeatability error).  Different operators likely introduce additional variability (i.e. reproducibility error).  In a less standardized process, there are many more opportunities for deviation.
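As a toy illustration of these two error types (all numbers are made up, and this is not a full gage R&R study): three operators each measure the same reference part ten times, with small assumed biases for operators B and C.  The within-operator scatter approximates repeatability; the scatter of the operator averages reflects reproducibility.

```python
# Hypothetical illustration of repeatability vs. reproducibility.
import random
import statistics

random.seed(7)
true_value = 100.0
operator_bias = {"A": 0.0, "B": 0.4, "C": -0.5}  # assumed operator biases

measurements = {
    op: [true_value + bias + random.gauss(0, 0.2) for _ in range(10)]
    for op, bias in operator_bias.items()
}

# Repeatability: average within-operator standard deviation
repeatability = statistics.mean(
    statistics.stdev(vals) for vals in measurements.values()
)
# Reproducibility: spread of the operator averages
operator_means = [statistics.mean(v) for v in measurements.values()]
reproducibility = statistics.stdev(operator_means)

print(f"repeatability ~ {repeatability:.2f}")
print(f"reproducibility ~ {reproducibility:.2f}")
```

In a real study, variance components would be estimated with ANOVA over multiple parts; the point here is only that both error types exist even with one part and a well-established procedure.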

The effectiveness of standardization to reduce noise is limited by our understanding of the design space and critical process variables.  Because many processes are not well studied and designed using Quality by Design (QbD) principles, some residual noise will likely remain after standardization.

In summary, if you want to improve change detection, make sure that you identify the sources of the extra noise in the process and operationally control them.

Lean Six Sigma Training for Continuous Improvement
https://biopmllc.com/organization/lean-six-sigma-training-for-continuous-improvement/ – Thu, 01 Apr 2021

Have you provided Lean Six Sigma (LSS) training to your employees?  What was your goal?  How effective was it?

Over 15 years ago, I received my LSS Black Belt (BB) training sponsored by my employer.  It was three weeks of classroom training delivered over three months by external consultants.  It kick-started my Continuous Improvement (CI) journey.  Since then, I have delivered LSS training as an internal trainer or external consultant to many large global organizations.  I also helped organizations in their LSS deployment, led many CI projects, and coached Green Belt (GB) and BB leaders in their projects.

Despite my own positive experience with LSS training, what I have learned over the years is that in most situations the traditional weeks long LSS training is ineffective in driving CI. 

If measured by the number of people trained or certified or the number of methods and tools covered, such training programs are very effective and easily justified for the investment.  

But if we start to measure the improvement of business outcomes, the desired problem-solving skills and behavior of the trained employees, and the positive impact on the CI culture and mindset of the organization, the training is very often ineffective.  Some troubling signs are

  • It took 12 months or more to complete the first GB project.
  • The GB could not recall some basic topics only a few weeks after the training.
  • BB candidates have to create flash cards to prepare for their certification exams.
  • GBs or BBs are no longer engaged in CI after obtaining their certifications.
  • Certified BBs fail to exhibit or apply knowledge of some fundamental concepts, such as process stability, in their daily work.
  • The trained employees do not perform or behave differently from those untrained in the CI methodology.

I can see two main factors contributing to this poor outcome.

First, the training program only teaches the general methods and tools and does not improve skills.

Previously, I discussed training and coaching considerations in LSS deployment in The First Six Sigma Project and recommended customized training in Making Employee Training Effective.

Most LSS training programs developed by universities, professional organizations, and commercial vendors are designed for efficiency and profitability. The generic programs do not connect the content to the client organization’s problems and operational reality.  Few external trainers have the subject matter or industry knowledge to tailor the training to each client’s need.  Even if they are able to customize, few clients are willing to pay the substantial premium.

Corporate internal programs are not much better at providing relevant materials that relate to each employee’s job.  Employees do not start learning real problem-solving skills until they encounter problems in their projects, by which time they have already forgotten most of what was taught in the training.

Second, the organization overly relies on training to improve business performance.

Two common fallacies can lead to this “improvement training trap.”

  1. Employees have to be trained in the methods and tools or they won’t be able to learn themselves.
  2. Once the employees are formally trained, they will solve all the problems on their own.

Can classroom training help accelerate learning? Absolutely.  Is it necessary or sufficient to develop the skills, mindset, and behavior for CI?  No.

These programs train methods and tools, whereas what the organizations really need is leadership development and behavior modification.  

Management has to understand that employees’ knowledge in CI methodologies is only a small but essential driver in business improvement.  When employees are not engaged in effective CI activities, it is not necessarily due to lack of knowledge – something else is likely limiting.  The root cause is rarely lack of training, and the solution is not more or even better training.  

It is management’s job to critically analyze all aspects of the organization, e.g. processes, structure, policies, resources, people, and culture, to identify the barriers to CI.  When they do, they will likely find out that LSS training is not the solution to their problem.

The Missing Information in Business Metrics
https://biopmllc.com/strategy/the-missing-information-in-business-metrics/ – Mon, 01 Mar 2021

Modern businesses generate and consume increasingly large amounts of data.  Information is needed to support operational and strategic decisions.  Despite the advent of Big Data tools and technology, most organizations I have worked with aren’t able to take advantage of the data or tools in their daily work.  While greater awareness of human visual perception and cognition has improved dashboard designs, effective decision-making is often limited by the type of information monitored.

It is common to see summary statistics (such as sum, average, median, and standard deviation) used in reports and dashboards.  In addition, various metrics are used as Key Performance Indicators (KPIs).  For example, in manufacturing, management often uses Overall Equipment Effectiveness (OEE) to gauge efficiency.  In quality, process capability indices (e.g. Cpk) are used to evaluate a process’s ability to meet customer requirements.  In marketing, the Net Promoter Score (NPS) helps assess customer satisfaction.
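To make that concrete, here is a minimal sketch of two of these metrics written out as plain functions of data; the input numbers are made up.

```python
# Two common KPIs as simple functions of data (hypothetical inputs).
def oee(availability, performance, quality):
    # Overall Equipment Effectiveness: the product of three ratios
    return availability * performance * quality

def nps(promoters, passives, detractors):
    # Net Promoter Score: % promoters minus % detractors
    total = promoters + passives + detractors
    return 100 * (promoters - detractors) / total

print(oee(0.90, 0.95, 0.99))  # ~0.846
print(nps(70, 20, 10))        # 60.0
```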

All of these are statistics, which are simply functions of data. But what does each of them tell us? What do we want to know from the data? What specific information is needed for the decision?

Unfortunately, these basic questions are not understood by most people who use performance metrics or statistics.  I discussed some specific mistakes in using process capability indices last July.  A more general problem is that statistics can hide the information we need to know.

For example, last year I was coaching a Six Sigma Green Belt (GB) working in Quality.  A manufacturing process had a worsening Cpk.  The project was to increase the Cpk to meet the customer’s demanding requirement. Each time we met, the GB would show me how the Cpk had changed.  But Cpk is a function of both the process center (average) and the process variation (standard deviation), which comes from a number of sources (shifts, parts, measurements, etc.).  The root causes of the Cpk change were not uncovered until we looked deeper into the respective changes in the average and in the different contributors to the standard deviation.  
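The point is easy to demonstrate numerically.  In the sketch below (hypothetical specification limits of 90 and 110, not the client’s actual data), the same Cpk of 1.0 results from two entirely different root causes: a shifted mean with unchanged spread, or an on-target mean with inflated spread.  The index alone cannot tell you which happened.

```python
# The same Cpk can arise from a mean shift or from inflated variation.
def cpk(mean, sd, lsl=90.0, usl=110.0):
    # Cpk = min(USL - mean, mean - LSL) / (3 * sd)
    return min(usl - mean, mean - lsl) / (3 * sd)

baseline   = cpk(mean=100, sd=2)       # centered and tight: ~1.67
mean_shift = cpk(mean=104, sd=2)       # same spread, off center: 1.0
more_noise = cpk(mean=100, sd=10 / 3)  # centered, inflated spread: ~1.0

print(baseline, mean_shift, more_noise)
```

Only by looking separately at the average and at the contributors to the standard deviation can you tell the two cases apart.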

The key takeaway is that when multiple contributors influence a metric, we cannot just monitor the change in the metric alone.  We must go deeper and seek other information needed for our decisions.

Many people may recall from statistics training that the teachers always tell them to “plot the data!”  It is important to visualize the original data instead of relying on statistics alone, because statistics don’t tell you the whole story.  The famous example illustrating this point is Anscombe’s quartet, which includes four data sets (x, y) with nearly identical descriptive statistics (mean, variance, and correlation) and even the same linear regression fit and R².  However, when visualized in scatter plots, they look drastically different.  If we looked only at one or a few statistics, we would miss the differences.  Again, statistics can hide useful information we need.
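Two of the four sets are enough to make the point.  The values below are Anscombe’s quartet as commonly reproduced; the first y-series is a noisy straight line and the second a smooth curve, yet their summary statistics agree almost exactly.

```python
# Two of Anscombe's data sets: different shapes, near-identical statistics.
import statistics

x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

def pearson(a, b):
    # Population Pearson correlation coefficient
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    return cov / (statistics.pstdev(a) * statistics.pstdev(b) * len(a))

print(statistics.mean(y1), statistics.mean(y2))  # both ~7.50
print(pearson(x, y1), pearson(x, y2))            # both ~0.816
```

A scatter plot of each pair immediately reveals the difference that these numbers conceal.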

Nowadays, there is too much data to digest, and modern tools can conveniently summarize and display them. When we use data to inform our business decisions, it’s easy to fall into the practice of looking only at the attractive summary in a report or on a dashboard.  The challenge of using data for decision making is to know what we want and where to get it.

Guess who wrote below about information monitoring for decisions?

With the coming of the computer this feedback element will become even more important, for the decision maker will in all likelihood be even further removed from the scene of action. Unless he or she accepts, as a matter of course, that he or she had better go out and look at the scene of action, he or she will be increasingly divorced from reality.

Peter Drucker in 1967.  He further wrote:

All a computer can handle is abstractions. And abstractions can be relied on only if they are constantly checked against concrete results.  Otherwise, they are certain to mislead.

Metrics and statistics are abstractions of reality – not the reality.  We must know how to choose and interpret these abstractions and how to complement this information with other types.1

1. For more discussion on “go out and look” (aka Go Gemba), see my blog Creating Better Strategies.

Understanding Variation
https://biopmllc.com/strategy/understanding-variation/ – Mon, 01 Feb 2021

Lean and Six Sigma are two common methodologies in Continuous Improvement (CI).  However, neither has a precise definition.  Many disagree on the definitions or even the value of these methodologies, and I won’t join the debate here.  What I care about is the underlying principles these methodologies use – whatever substance is useful, independent of the label.

The questions about “what is Lean” and “what is Six Sigma” inevitably come up when you train and coach people in CI methodologies.  Without delving into the principles, my answer goes something like this:

  • Lean is about delivering value to the customer, fast and with minimum waste.
  • Six Sigma is about understanding and reducing variation.

Neither answer is satisfactory.  But practically, these messages are effective in stressing the concepts people need to develop, i.e. value and variation – a prerequisite for CI.  These answers are certainly insufficient and not meant to be.  It’s hard to understand the true meaning of life or happiness when we are 5 years old.  Likewise, it takes a lifetime of experience to understand the true meaning and principles of CI and apply them well.

While the concept of value versus waste is intuitive, most people don’t interpret their daily observations in terms of variation.  Because of the (over-)emphasis on statistical tools in Six Sigma by many consultants, many organizations prefer Lean to Six Sigma (see my earlier blog “How is your Lean developing” for potential pitfalls in relying on simple Lean tools).  The lack of appreciation of the concept of variation will eventually constrain the organization’s ability to improve.

There are many applications of the concept of variation in understanding and improving a process.  Most applications don’t require sophisticated knowledge in statistics or probability theory.  One example is management of supply and demand.

Let’s say that you plan your resources and capacity to meet a target demand level.  The demand can be from internal or external customers, and can be for products, services, materials, or projects. For simplicity, let’s assume that it’s a fixed capacity without any variation, e.g. no unplanned downtime or sick leaves.  

If you plan enough resources for the total or average demand but the demand varies greatly (upper left of the figure), you will meet the demand exactly only occasionally. Most of the time, you will either not have enough capacity (creating backlogs or bottlenecks) and miss some opportunities or have too much capacity and lose the unused resources forever.

If it is too costly to miss the opportunities, some organizations are forced to raise the capacity (upper right of the figure).  Many optimize the resources to strike a balance between lost capacity and missed opportunities.  What I have observed is that organizations go back and forth between maximizing opportunities and reducing waste.  One improvement project is sponsored (by one function) to reduce the risk of missed opportunities, with a solution that shows a high return on investment in the added resources.  As a result, excess capacity becomes common, leading to another project (probably by another function) to reduce waste and maximize resource utilization.  The next demand surge will lead to another round of improvement projects.

Many people don’t realize that the real long-term improvement has to address the issue of demand variation.  For example, if we understand the sources of demand variation and therefore develop solutions to limit it, both missed opportunities and lost capacity will be reduced (bottom half of the figure).  A much lower capacity is needed to satisfy the same overall but less variable demand.
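A rough simulation makes the trade-off visible (all numbers are made up): with a fixed capacity of 110 units per period against an average demand of 100, reducing the demand standard deviation from 30 to 5 shrinks both the missed demand and the idle capacity.

```python
# Hypothetical supply-demand simulation: lower demand variation reduces
# both missed opportunities and lost (idle) capacity at the same time.
import random

def missed_and_idle(demand_sd, capacity=110, periods=10_000, mean_demand=100):
    random.seed(0)  # same demand stream shape for fair comparison
    missed = idle = 0.0
    for _ in range(periods):
        demand = max(0.0, random.gauss(mean_demand, demand_sd))
        served = min(demand, capacity)
        missed += demand - served   # demand we could not serve
        idle += capacity - served   # capacity we could not use
    return missed / periods, idle / periods

print(missed_and_idle(demand_sd=30))  # high demand variation
print(missed_and_idle(demand_sd=5))   # reduced demand variation
```

With the less variable demand, a lower capacity would achieve the same service level, which is the point of the bottom half of the figure.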

Capacity variation has similar effects. 

What is more interesting is that most processes are made of a series of interdependent supply-demand stages, each of which propagates or accumulates the effect of variation.  We can use this understanding of variation to explain many phenomena in our lives, e.g. process bottlenecks, traffic jams, project delays, supply overage, excess inventory, etc.  The Theory of Constraints popularized by Eliyahu Goldratt in his book The Goal is also based on the same ideas of process interdependence and variation.

No matter what CI methodologies you use, I hope you agree that understanding and reducing variation is always a key to improvement. 

Project Management Skills and Capabilities
https://biopmllc.com/strategy/project-management-skills-and-capabilities/ – Fri, 01 Jan 2021

We are in a project economy.

“The Project Economy is one in which people have the skills and capabilities they need to turn ideas into reality.” — The Project Management Institute (PMI)

When people fail to turn ideas into reality or when businesses fail to turn strategies into results, a common root cause is that people and organizations lack the right skills and capabilities.

The people include project managers (regardless of their formal titles), team members (or contributors), and project sponsors (management).  Almost everyone is involved in a project in organizations.  The project management (PM) maturity of an organization depends on the skills and capabilities of all people, not just the project managers.

What are the skills and capabilities required for project success?

The PMI Talent Triangle® outlines three skill sets:

  • Technical Project Management
  • Leadership
  • Strategic and Business Management

It is the combination of these skills possessed by people throughout the organization that is required to realize the idea or strategy. In other words, we need adequate skill levels in all nine cells in the role vs. skills matrix (pictured above).

As a project manager, line manager, and consultant in the industry over the past two decades, my observation is that most organizations have low PM maturity, and PM skill development is focused on technical project management for the project managers, i.e. only one of the nine cells.

While the traditional PM skills (such as scope and time management) are critical, they are insufficient because of the increasing complexity, ambiguity, and uncertainty in the problems we try to solve using projects.  Empowering project teams is impossible if project managers and members do not understand the business priority and strategy to make the right decisions.  Unless they demonstrate their business acumen and ability to think strategically, project managers will not be fully empowered.  In my previous blog “Project Managers are Managers,” I offered some suggestions to new project managers to help them think strategically and manage stakeholders effectively.

I was very pleased that the PMI developed the Talent Triangle to help close the skill gap in project managers.  Furthermore, I’d say that we have to assess and develop PM skills for everyone in the organization, for two reasons. 

First, we prepare future project managers for their roles.  People don’t become competent project managers overnight – it takes years of learning and practice before and after they are given the project manager role.  Even when not in a PM role, each person leads projects of different sizes and complexity and can benefit from the skills.

Second, to ensure project success, project sponsors and members should have the PM skills to perform their roles effectively.  Otherwise, the project managers have to spend much time doing technical project management, unable to focus on the big picture – the business and strategic value.  When project sponsors do not know how to manage projects at the strategic level, their management practices can lead to project problems or failures that even the best project managers cannot prevent.  I touched on some of these practices in my blog “Projects on Schedule.”

How are you assessing and developing PM skills in your organization?

The Practical Value of a Statistical Method
https://biopmllc.com/strategy/the-practical-value-of-a-statistical-method/ – Tue, 01 Dec 2020

Shortly after I wrote my last blog “On Statistics as a Method of Problem Solving,” I received the latest issue of Quality Progress, the official publication of the American Society for Quality.  A Statistics article, “Making the Cut – Critical values for Pareto comparisons remove statistical subjectivity,” caught my attention because Pareto analysis is one of my favorite tools in continuous improvement.

It was written by two professors “with more than 70 years of combined experience in the quality arena and the use of Pareto charts in various disciplines” and covers a brief history of Pareto analysis and its use in quality to differentiate the vital few causes from the trivial many.

The authors introduced a statistical method to address the issue of “practitioners who collect data, construct a Pareto chart and subjectively identify the vital few categories on which to focus.”  The main point is that two adjacent categories sorted by occurrence in a descending order may not be statistically different in terms of their underlying frequency (e.g. rate of failure) due to sampling error.  

Based on hypothesis testing, the method includes two simple tools:

  1. Critical values below which the lower occurrence category is deemed significantly different from the higher one
  2. A p-value for each pair of occurrence observations of the adjacent categories to measure the significance in the difference

With a real data set (published by different authors) as an example, they showed that only some adjacent categories are significantly different and therefore, are candidates for making the cut.
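The article’s critical-value tables are not reproduced here, but the flavor of such a comparison can be sketched with a plain exact test: if two adjacent categories had the same underlying rate, their combined count should split roughly 50/50, i.e. follow Binomial(n1 + n2, 0.5).  The defect categories and counts below are hypothetical.

```python
# Stand-in for the article's method: exact two-sided test of whether two
# adjacent Pareto categories plausibly share the same underlying rate.
from math import comb

def adjacent_pvalue(n1, n2):
    # P-value for an n1-vs-n2 split of n1+n2 events under a fair 50/50 split
    n = n1 + n2
    k = max(n1, n2)
    upper_tail = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * upper_tail)

counts = {"solder defect": 120, "misalignment": 60, "scratch": 50, "other": 12}
pairs = list(counts.items())
for (cat_a, a), (cat_b, b) in zip(pairs, pairs[1:]):
    print(f"{cat_a} vs {cat_b}: p = {adjacent_pvalue(a, b):.4f}")
# The 120-vs-60 gap is statistically clear; 60-vs-50 is not,
# despite the two categories holding adjacent ranks.
```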

I see the value in raising the awareness of statistical thinking in decision making (which is desperately needed in science and industry).  However, in practice, the method is far less useful than it appears and can lead to improper applications of statistical methods.

Here are but a few reasons.

  • The purpose of Pareto charts is for exploratory analysis, not for binary decision-making, i.e. making the cut which categories belong to the vital few.  As a data visualization tool, a Pareto chart shows, overall, whether there is a Pareto effect – an obvious 80/20 distribution in the data not only indicates an opportunity to apply the Pareto principle but also gives the insight in the nature of the underlying cause system.  
  • Using the hypothesis test to answer an unnecessary question is waste.  Overall, if the Pareto effect is strong, the decision is obvious, and the hypothesis test to distinguish between categories is not needed.  If the overall effect is not strong enough to make the obvious decision, the categorization method used is not effective in prioritization, and therefore, other approaches should be considered.  
  • Prioritization decisions depend on resources and other considerations, not category occurrence ranking alone.  This is true even if the Pareto effect is strong.  People making prioritization decisions based solely on Pareto analysis are making a management mistake that cannot be overcome by statistical methods. 
  • The result of the hypothesis test offers no incremental value – it does not change the decisions made without such tests.  For example, if the fourth ranking category is found not statistically different from the third and there are only enough resources to work on three categories, what should the decision be? How would the hypothesis test improve our decision? Equally unhelpful, a test result of significant difference merely confirms our decision. 
  • The claim of “removing subjectivity” by using the hypothesis test is misleading.  The decision in any hypothesis test depends on the risk tolerance of the decision maker, i.e. the alpha (or significance level) used to make the decision whether a given p-value is significant is chosen subjectively.  The choice of a categorization method also depends on subject matter expertise – another subjective factor.  For example, two categories could have been defined as one.  In addition, many decisions in a statistical analysis involve some degrees of expert judgment and therefore introduce subjectivity.  Such decisions may include whether the data is a probability sample, whether the data can be modeled as binomial, whether the process that generated the data was stable, etc.  

Without sufficient understanding of statistical theory and practical knowledge in its applications, one can easily be overwhelmed by statistical methods presented by the “experts.”  Before considering a statistical method, ask the question “how much can it practically improve my decision?”  In addition, “One must never forget the importance of subject matter.” (Deming)

On Statistics as a Method of Problem Solving
https://biopmllc.com/strategy/on-statistics-as-a-method-of-problem-solving/ – Sun, 01 Nov 2020

If you have taken a class in statistics, whether in college or as part of professional training, how much has it helped you solve problems?

Based on my observation, the answer is mostly not much. 

The primary reason is that most people are never taught statistics properly.   Terms like null hypothesis and p-value just don’t make intuitive sense, and statistical concepts are rarely presented in the context of scientific problem solving. 

In the era of Big Data, machine learning, and artificial intelligence, one would expect improved statistical thinking and skills in science and industry.  However, the teaching and practice of statistical theory and methods remain poor – probably no better than when W. E. Deming wrote his 1975 article “On Probability As a Basis For Action.” 

I have witnessed many incorrect practices in teaching and application of statistical concepts and tools.  There are mistakes unknowingly made by users inadequately trained in statistical methods, for example, failing to meet the assumptions of a method or not considering the impact of the sample size (or statistical power).  The lack of technical knowledge can be improved by continued learning of the theory.

The bigger problem I see is that statistical tools are used for the wrong purpose or the wrong question by people who are supposed to know what they are doing — the professionals.  To the less sophisticated viewers, the statistical procedures used by those professionals look proper or even impressive.  To most viewers, if the method, logic, or conclusion doesn’t make sense, it must be due to their lack of understanding.  

An example of using statistics for the wrong purpose is p-hacking, the common practice of manipulating the experiment or analysis until the p-value reaches the desired value and thereby supports the desired conclusion.

Not all bad practices are as easily detectable as p-hacking.  They often use statistical concepts and tools for the wrong question.  One category of such examples is failing to differentiate enumerative and analytic problems, a concept that Deming wrote about extensively in his work, including the article mentioned above.  I also touched on this in my blog Understanding Process Capability.

In my opinion, the underlying issue in using statistics to answer the wrong questions is the gap between subject matter experts, who try to solve problems but lack an adequate understanding of probability theory, and statisticians, who understand the theory but have little experience solving real-world scientific or business problems.

Here is an example.  A well-known statistical software company offers a "decision making with data" training course.  Its example of a hypothesis test evaluates whether a process is on target after some improvement, with the null hypothesis that the process mean equals the desired target.

The instructors explain that "the null hypothesis is the default decision" and "the null is true unless our data tell us otherwise."  Why would anyone collect data and perform statistical analysis if they already believe that the process is on target?  If you are statistically savvy, you will also recognize that you can reject virtually any point null hypothesis by collecting a large enough sample, because no real process sits exactly on target.  In this case, you will eventually conclude that the process is not on target.
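The sample-size point is easy to demonstrate.  Below is a stdlib-only Python sketch that uses a normal approximation instead of an exact t-test; the target, the 0.1 offset, and the sample sizes are all invented for illustration:

```python
import math
import random

def one_sample_p(sample, target):
    """Two-sided p-value for H0: mean == target (normal approximation)."""
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    z = (mean - target) / (sd / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))

random.seed(1)
target = 100.0
# True process mean is 100.1: off target by a practically trivial amount
for n in (25, 400, 10_000):
    sample = [random.gauss(100.1, 2.0) for _ in range(n)]
    print(f"n={n:>6}  p={one_sample_p(sample, target):.4f}")
```

As n grows, the p-value shrinks toward zero even though the 0.1 offset never changes; statistical significance says nothing about practical significance.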

The instructors further explain: "It might seem counterintuitive, but you conduct this analysis to test that the process is not on target. That is, you are testing that the changes are not sufficient to bring the process to target."  It is counterintuitive because the decision maker's natural question after the improvement is "does the process hit the target?" not "does the process not hit the target?"

The reason, I suppose, for choosing such a counterintuitive null hypothesis is convenience: setting the process mean to a known value makes it easy to calculate the probability of observing the collected data (i.e., the sample) from this hypothetical process.

What’s really needed in this problem is not statistical methods, but scientific methods of knowledge acquisition. We have to help decision makers understand the right questions. 

The right question in this example is not “does the process hit the target?” which is another example of process improvement goal setting based on desirability, not a specific opportunity. [See my blog Achieving Improvement for more discussion.]  

The right question should be “do the observations fall where we expect them to be, based on our knowledge of the change made?”  This “where” is the range of values estimated based on our understanding of the change BEFORE we collect the data, which is part of the Plan of the Plan-Do-Study-Act or Plan-Do-Check-Act (PDSA or PDCA) cycle of scientific knowledge acquisition and continuous improvement.   

If we cannot estimate this range and its associated probability density, then we don't know enough about our change and its impact on the process.  In other words, we are just messing around without using a scientific method.  No application of statistical tools can help; they are just window dressing.

With the right question asked, a hypothesis test is unnecessary, and there is no false hope that the process will hit the desired target.  We will improve our knowledge based on how well the observations match our expected or predicted range (i.e. Study/Check).   We will continue to improve based on specific opportunities generated with our new knowledge.
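As a rough illustration of this Study/Check step, one can compare new observations against a range predicted before the data were collected.  Everything below (the predicted mean, the spread, and the observations) is hypothetical:

```python
def predicted_range(mean, sd, z=1.96):
    """Interval where ~95% of observations should fall, per our pre-data prediction."""
    return mean - z * sd, mean + z * sd

# Hypothetical: our understanding of the change predicts a post-change
# mean of about 10.4 with an observation-to-observation spread of 0.3
lo, hi = predicted_range(10.4, 0.3)

observations = [10.1, 10.5, 10.7, 10.3]  # made-up post-change measurements
print(f"predicted range: ({lo:.2f}, {hi:.2f})")
print("all within prediction:", all(lo <= x <= hi for x in observations))
```

If the observations fall outside the predicted range, we learn that our understanding of the change was wrong, which is itself valuable knowledge to act on in the next cycle.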

What is your experience in scientific problem solving?

Continuous Improvement is More Than Projects
https://biopmllc.com/strategy/continuous-improvement-is-more-than-projects/
Thu, 01 Oct 2020 02:59:10 +0000

In my June blog Achieving Improvement, I discussed what makes a project goal achievable and emphasized that it should not be set based solely on the desirability to improve performance.  We must identify a specific opportunity that can be reliably and effectively converted into results using a proven, systematic approach.  Unfortunately, most continuous improvement (CI) projects I have observed do not meet this criterion.

Understandably, many CI projects are chartered because there is a need to improve business performance.  But if the opportunity or path to improvement is not clear, the project has a high risk of failure1.  Even if the goal was somehow achieved, it likely took far more time and effort than necessary, as evidenced by many 6-12 month long Green Belt (GB) projects. 

While CI professionals are trained not to assume a solution or even a root cause in Lean Six Sigma projects, the approach should still be well defined for the specific problem at hand.  DMAIC and similar one-size-fits-all frameworks are too generic to be helpful.  Because most project leaders do not have enough experience to identify the right opportunity for a CI project and follow a proven path to improvement, it is essential that the organization implement an effective system to sort opportunities into categories suited for distinct approaches.  For example, the categories can include

  • Routine improvement by the operators
  • Kaizen events
  • Lean Six Sigma or DMAIC projects  
  • Technical projects that require Subject Matter Experts (SMEs)
  • Management

The system will vary by the organization.  In a manufacturing or transactional environment where CI methodologies are applied, I recommend process management as a basic component of the system.  Specifically, process management should include, but is not limited to

  • Process mapping to understand how things are being done
  • Standardization to implement the best knowledge currently available
  • Keeping Standard Operating Procedures (SOPs) up to date
  • Training and qualifying employees to perform their jobs
  • Following all preventive maintenance
  • Measurement system analysis to ensure reliable data
  • Statistical Process Control (SPC) to monitor and stabilize processes

These foundational activities are prerequisites for any process to perform at the optimal level achievable by its design.  If they are not consistently followed, even an initially high-performing process will deteriorate.

Any organization striving to improve its processes should start by incorporating these activities into the responsibilities of various roles.  Following them will regularly uncover improvement opportunities, most of which can be addressed by those closest to the process.  If needed, Quality and CI professionals can train and coach others in the proper methods and tools.

Ideally, process management should be implemented before initiating CI projects in the area.  Routine improvement as a result of process management eliminates countless potential root causes for poor process performance, reducing the need for project-based improvement effort.  Any CI projects, if needed, will have a clearer focus, less encumbered by confounding factors.

When organizations fail to build process management into their operations, CI projects are often initiated as a reaction to emergent problems, which are frequently the result of years of neglect.  The organizations hope that, with the aid of some magic methodology and heroic effort, the projects alone will solve the problems.  What they encounter instead are numerous compounding causes, a "perfect storm" for inexperienced project leaders in a low CI maturity organization.

Thus projects are a tool of continuous improvement.  They are not a substitute for it.2


1. See my blog Six Sigma Project Management for suggestions to reduce project risks.

2. I borrowed a statement by Peter Drucker on mergers & acquisitions: "Thus financial transactions are a tool of business policy.  They are not a substitute for it."

Projects on Schedule
https://biopmllc.com/strategy/projects-on-schedule/
Mon, 31 Aug 2020 03:17:44 +0000

Are your projects on schedule, delivering on time?

Project delays are common.  I have heard many executives voice concern or frustration about projects missing critical dates.  Many organizations train project leaders in project management (PM) or even hire professional project managers to ensure that projects meet their milestones.  Project management certifications, such as the Project Management Professional® (PMP), have become a hiring preference or even a job requirement.

Yet having trained project managers is seldom enough to eliminate project delays.  What is missing?  

Project managers are supposed to manage the risks that can cause delays.  However, they also have to work within the confines of the organization, where senior management may operate in ways incompatible with the best PM practices.

An organization’s management practices often lead to unintended consequences, including project delays and missed deadlines.  These management practices include, for example

  • Adding new projects without prioritization or additional resources
  • Changing the deliverables or expanding the scope of an existing project
  • Optimizing utilization by sharing the same critical resources among multiple projects
  • Relying on fixed target dates in decision making without understanding the associated assumptions

The last one is worth elaborating. 

In most organizations, project managers prepare a project plan, which includes a schedule with dates for key milestones.  Some plans require more detailed activities and corresponding dates, shown on a Gantt chart or a network diagram.  The activities and their durations can be uncertain, depending on the project.  For example, in an R&D or process improvement project, where the activity outcome is unpredictable and/or the method is unproven, the estimated time to complete a task can carry a high degree of uncertainty.

However, this uncertainty is not always communicated effectively to the decision makers.  A typical schedule given to the senior management is highly simplified and shows only one fixed duration or target date for each activity or milestone.  If not properly explained, this simplified schedule creates a perception of certainty that does not exist.  Unfortunately, many sponsors are not experienced in PM or are too busy to question the uncertainty in the schedule.  They subsequently use those dates for operational decisions and hold the project managers accountable for delivering on schedule.  The result is predictable.

There is no universal solution to project delays because there are many causes.  In addition to having competent project managers, senior management must recognize the impact of their own actions on project success.

The one practice that I recommend to all sponsors is to ask the project managers to show the uncertainty of milestone dates: how likely is it that the milestone will be completed by this date, and why?

There are many ways to communicate the uncertainty.  One simple way is to show three scenarios1.

  • Most likely (a 50/50 chance of finishing sooner or later than this date)
  • Most optimistic (a 10% chance of finishing sooner than this date)
  • Most pessimistic (a 10% chance of finishing later than this date)

The exact definitions of the three cases are not as important as the practice of using multiple dates to express the uncertainty.  This dialogue allows us to assess the risk more appropriately and make decisions accordingly.
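One classic way to turn three such scenarios into numbers is the PERT formula (see footnote 1), which weights the most likely estimate most heavily.  PERT's own definitions of "optimistic" and "pessimistic" differ slightly from the percentile framing above, but the idea is the same.  The durations in this sketch are purely hypothetical:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT three-point estimate: expected duration and standard deviation."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Hypothetical milestone: 10 days best case, 15 days most likely, 32 days worst case
e, s = pert_estimate(10, 15, 32)
print(f"expected = {e:.1f} days, std dev = {s:.1f} days")  # expected = 17.0 days
```

Note that the expected duration (17 days) is later than the most likely date (15 days) because the pessimistic tail is long, which is exactly the kind of insight a single fixed date hides from the sponsor.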

In project management, the emphasis is on planning, not the plan.


1. Interested readers may want to look up the Program Evaluation and Review Technique (PERT).
