Problem-Solving – biopm, llc
https://biopmllc.com
Improving Knowledge Worker Productivity

On Statistics as a Method of Problem Solving
https://biopmllc.com/strategy/on-statistics-as-a-method-of-problem-solving/
Sun, 01 Nov 2020 03:55:59 +0000

If you have taken a class in statistics, whether in college or as part of professional training, how much has it helped you solve problems?

Based on my observation, the answer is mostly not much. 

The primary reason is that most people are never taught statistics properly.   Terms like null hypothesis and p-value just don’t make intuitive sense, and statistical concepts are rarely presented in the context of scientific problem solving. 

In the era of Big Data, machine learning, and artificial intelligence, one would expect improved statistical thinking and skills in science and industry.  However, the teaching and practice of statistical theory and methods remain poor – probably no better than when W. E. Deming wrote his 1975 article “On Probability As a Basis For Action.” 

I have witnessed many incorrect practices in the teaching and application of statistical concepts and tools.  Some mistakes are made unknowingly by users inadequately trained in statistical methods, for example, failing to meet the assumptions of a method or ignoring the impact of the sample size (and the resulting statistical power).  These gaps in technical knowledge can be remedied by continued study of the theory.

The bigger problem I see is that statistical tools are used for the wrong purpose or the wrong question by people who are supposed to know what they are doing: the professionals.  To less sophisticated viewers, the statistical procedures those professionals use look proper, even impressive.  And when the method, logic, or conclusion doesn’t make sense, most viewers assume the fault lies in their own lack of understanding.

An example of using statistics for the wrong purpose is p-hacking: the common practice of manipulating the experiment or the analysis until the p-value reaches the desired level and thus supports the desired conclusion.
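One form of p-hacking is running many tests and reporting only the best one. A small simulation sketch (all numbers are illustrative, and a simple z-test stands in for whatever analysis is actually used) shows why this misleads: even when every null hypothesis is true, the smallest of many p-values looks impressive.

```python
import random
from math import erf, sqrt

# Simulated p-hacking: test 20 independent noise-only "outcomes" against a
# null hypothesis that is true for every one of them, then report only the
# smallest p-value.
random.seed(1)

def z_pvalue(sample, null_mean=0.0):
    """Two-sided z-test p-value for H0: population mean == null_mean."""
    n = len(sample)
    m = sum(sample) / n
    s = sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
    z = (m - null_mean) / (s / sqrt(n))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

p_values = [z_pvalue([random.gauss(0, 1) for _ in range(30)]) for _ in range(20)]
# A cherry-picked minimum: with 20 true nulls, P(min p < 0.05) is about 64%.
print(min(p_values))
```

Reporting that minimum as if it were a single pre-planned test is exactly the manipulation described above.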

Not all bad practices are as easily detectable as p-hacking.  They often use statistical concepts and tools for the wrong question.  One category of such examples is failing to differentiate enumerative and analytic problems, a distinction that Deming wrote about extensively, including in the article mentioned above.  I also touched on this in my blog Understanding Process Capability.

In my opinion, the underlying issue in using statistics to answer the wrong questions is the gap between subject matter experts, who try to solve problems but lack an adequate understanding of probability theory, and statisticians, who understand the theory but lack experience solving real-world scientific or business problems.

Here is an example. A well-known statistical software company provides a “decision making with data” training course.  Its example of a hypothesis test evaluates whether a process is on target after some improvement.  The null hypothesis is set as “the process mean equals the desired target.”

The instructors explain that “the null hypothesis is the default decision” and “the null is true unless our data tell us otherwise.” Why would anyone collect data and perform statistical analysis if they already believe that the process is on target?  If you are statistically savvy, you will recognize that you can reject any hypothesis by collecting a large enough sample. In this case, you will eventually conclude that the process is not on target.
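The sample-size point can be made concrete with a small simulation (a sketch with made-up numbers, using a plain z-test rather than the vendor's exact procedure): a process that is off target by a practically meaningless amount will still be declared "not on target" once the sample is large enough.

```python
import math
import random

def one_sample_z_pvalue(data, null_mean):
    """Two-sided z-test p-value for H0: population mean == null_mean."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    z = (mean - null_mean) / (sd / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
target = 100.0

# The process sits a practically negligible 0.05 units off target.
def sample_process(n):
    return [random.gauss(100.05, 1.0) for _ in range(n)]

# As n grows, the p-value for the point null shrinks toward zero.
for n in (50, 5_000, 200_000):
    p = one_sample_z_pvalue(sample_process(n), target)
    print(f"n={n}: p={p:.4g}")
```

The deviation of 0.05 units may be economically irrelevant, yet the test will eventually "prove" the process is off target, which is why the question matters more than the procedure.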

The instructors further explain: “It might seem counterintuitive, but you conduct this analysis to test that the process is not on target. That is, you are testing that the changes are not sufficient to bring the process to target.” It is counterintuitive because the decision maker’s natural question after the improvement is “does the process hit the target?” not “does the process fail to hit the target?”

The reason, I suppose, for choosing such a counterintuitive null hypothesis is convenience: setting the process mean to a known value makes it easy to calculate the probability of observing the collected data (i.e., the sample) from this hypothetical process.

What’s really needed in this problem is not statistical methods, but scientific methods of knowledge acquisition. We have to help decision makers understand the right questions. 

The right question in this example is not “does the process hit the target?”  That question is another example of setting process improvement goals based on desirability rather than on a specific opportunity. [See my blog Achieving Improvement for more discussion.]

The right question should be “do the observations fall where we expect them to be, based on our knowledge of the change made?”  This “where” is the range of values estimated based on our understanding of the change BEFORE we collect the data, which is part of the Plan of the Plan-Do-Study-Act or Plan-Do-Check-Act (PDSA or PDCA) cycle of scientific knowledge acquisition and continuous improvement.   

If we cannot estimate this range and its associated probability density, then we don’t know enough about our change and its impact on the process.  In other words, we are just messing around without using a scientific method.  No application of statistical tools can help; they are just window dressing.

With the right question asked, a hypothesis test is unnecessary, and there is no false hope that the process will hit the desired target.  We will improve our knowledge based on how well the observations match our expected or predicted range (i.e. Study/Check).   We will continue to improve based on specific opportunities generated with our new knowledge.
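One way to make the Plan-step prediction concrete is to write down the expected range before any data are collected, then compare. A minimal sketch with hypothetical numbers, assuming roughly normal output and a predicted standard deviation:

```python
import math

# Plan: before collecting data, state the expected range for the process
# mean after the change, based on subject-matter knowledge of the change.
expected_mean = 102.0   # predicted effect of the change (hypothetical)
expected_sd = 1.5       # predicted process standard deviation (hypothetical)
n = 30                  # planned sample size

# 95% range for the sample mean under the prediction.
half_width = 1.96 * expected_sd / math.sqrt(n)
low, high = expected_mean - half_width, expected_mean + half_width

# Do, then Study/Check: compare the observation to the prediction.
observed_mean = 101.6
on_prediction = low <= observed_mean <= high
print(f"predicted [{low:.2f}, {high:.2f}], observed {observed_mean} -> {on_prediction}")
```

A match strengthens confidence in our understanding of the change; a miss is itself new knowledge that feeds the next Plan.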

What is your experience in scientific problem solving?

An Indispensable Competency in an Agile Organization
https://biopmllc.com/organization/an-indispensable-competency-in-an-agile-organization/
Tue, 30 Jul 2019 20:16:02 +0000

One of the most common reasons for project trouble or failure is an unsuitably large project scope.  The problem is not necessarily unrealistic goals or inadequate resources.  When a large scope extends the project timeline far into the future, the project risks becoming irrelevant before it achieves meaningful impact.  External circumstances simply change and invalidate the original plans or assumptions.

We operate in a highly unpredictable environment.  The increasing interest and practice of Lean and Agile methodologies is a clear acknowledgement of the unknowns and unknowables.  Unable to predict the future with high confidence, we have to learn and adapt as we go. 

How can individuals and organizations be more effective in dealing with this new reality?

In project management, multi-generational project planning is used to transform a large project into a series of smaller ones, each achieving a significant milestone relatively quickly.  In addition, the lessons learned and knowledge acquired in each stage refine the goal and reduce the uncertainty in the subsequent planning and execution. 

This is an example of analytical thinking — a critical competency of the project manager — which includes

  1. Breaking down a large, complex problem into smaller, manageable components
  2. Prioritizing the components based on a set of criteria, such as risk, effort, impact, and interdependency
  3. Sequencing the efforts in such a way that minimizes risk and cost while maximizing the desired outcome
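The prioritization step above can be sketched as a simple weighted scoring of components; the component names, scores, and weights here are all hypothetical and would come from the team's own judgment.

```python
# Toy prioritization of project components by weighted criteria.
components = {
    "data migration":   {"risk": 8, "effort": 5, "impact": 9},
    "user training":    {"risk": 3, "effort": 2, "impact": 6},
    "vendor selection": {"risk": 6, "effort": 3, "impact": 8},
}

# Favor high impact, penalize high risk and effort; weights are judgment calls.
weights = {"impact": 0.5, "risk": -0.3, "effort": -0.2}

def score(attrs):
    return sum(w * attrs[k] for k, w in weights.items())

ranked = sorted(components, key=lambda name: score(components[name]), reverse=True)
print(ranked)
```

The point is not the arithmetic but the discipline: making the criteria explicit forces the trade-offs into the open where they can be debated.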

In today’s work, analytical thinking is a required competency not just of project managers but of any knowledge worker who has to solve technical or business problems.  I use it as an essential criterion for evaluating job candidates and developing employees.

Fortunately, everyone can improve their analytical thinking at work through continued learning and practice of well-established conceptual frameworks and scientific methods, for example:

1. Product development using Design of Experiments (DOE) 

Frequently, there are numerous variables that we need to understand in order to design a product or process or improve its performance.  When a one-factor-at-a-time or full factorial design is not the most effective choice, a screening design can be used first to identify the critical few among many potential factors.  Fewer resources are then required to study those few more thoroughly, e.g., characterizing their interactions and the response surface, to achieve optimal outcomes.  When designed strategically, each study augments the previous ones, avoiding unnecessary repeats.
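As a sketch of how a screening design economizes runs, here is a standard 2^(4-1) fractional factorial built with the generator D = A·B·C (factor names are placeholders): four factors are screened in eight runs instead of the sixteen a full factorial would need, at the cost of confounding D with the ABC interaction.

```python
from itertools import product

# 2^(4-1) fractional factorial screening design (resolution IV).
# Levels are coded -1 (low) and +1 (high).
runs = []
for a, b, c in product([-1, 1], repeat=3):
    d = a * b * c  # generated factor: D is confounded with the ABC interaction
    runs.append({"A": a, "B": b, "C": c, "D": d})

print(len(runs))  # 8 runs instead of the 16 of a full 2^4 factorial
for r in runs:
    print(r)
```

Each column is balanced (equal numbers of high and low settings), which is what lets main effects be estimated from so few runs.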

2. Quality improvement by understanding variation

Reducing defects and improving customer satisfaction are central goals of quality.  Unfortunately, in many organizations, quality issues persist despite repeated improvement efforts.  While subject matter expertise is important, sustained improvement requires understanding of process stability (i.e., the absence of special cause variation) and capability (i.e., the probability that the process produces a result meeting customer requirements).  That is why process improvement methodologies, such as Six Sigma, rely on the principles of Statistical Process Control (SPC) to identify and separate special cause variation from common cause variation.  Only after eliminating special cause variation can we truly characterize and improve the process.  Then, if necessary, we can improve process capability by reducing common cause variation and/or re-centering the process on the desired target.
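A minimal sketch of the SPC idea, in the style of an individuals (I-MR) chart with made-up data: estimate sigma from the average moving range (divided by the d2 constant, 1.128 for subgroups of two) and flag points beyond the 3-sigma limits as potential special causes.

```python
import statistics

# Hypothetical process measurements; one value is clearly aberrant.
data = [10.1, 9.8, 10.2, 9.9, 10.0, 10.3, 9.7, 10.1, 14.5, 10.0]

center = statistics.mean(data)
# Estimate sigma from successive differences, not the overall stdev,
# so a single outlier does not inflate the limits that should catch it.
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
sigma_hat = statistics.mean(moving_ranges) / 1.128

ucl = center + 3 * sigma_hat  # upper control limit
lcl = center - 3 * sigma_hat  # lower control limit
flagged = [i for i, x in enumerate(data) if x > ucl or x < lcl]
print(flagged)  # indices of potential special-cause points
```

The flagged point calls for investigating an assignable cause; the rest of the variation is common cause and is only improved by changing the process itself.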

3. Lead time reduction using Lean concepts

The ability to consistently design and deliver a product or service faster than the competition is critical to business success.  Yet the lead time (e.g., request-to-delivery time) of many business processes remains long and variable.  These processes often involve numerous, convoluted steps across multiple functions, and it is impractical and unnecessary to analyze and improve every step.  By viewing the process from the customer’s perspective, Lean thinking brings clarity: it separates value-added activities from non-value-added activities (i.e., waste) that impede the continuous flow of value to the customer.  By reducing or eliminating waste, such as waiting, overproduction, and inventory, we can simplify and speed up processes without investing in new capacity.
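The value-added/non-value-added split can be summarized in a toy value-stream calculation; the step names and times below are hypothetical.

```python
# Classify each step's time as value-added (VA) or non-value-added (NVA)
# and compute process cycle efficiency = VA time / total lead time.
steps = [
    ("receive request",  0.5, "VA"),   # hours
    ("wait in queue",   24.0, "NVA"),
    ("review request",   1.0, "VA"),
    ("await approval",  48.0, "NVA"),
    ("deliver",          2.0, "VA"),
]

lead_time = sum(t for _, t, _ in steps)
value_added = sum(t for _, t, kind in steps if kind == "VA")
efficiency = value_added / lead_time
print(f"lead time {lead_time}h, process cycle efficiency {efficiency:.1%}")
```

Numbers like these typically show that most of the lead time is waiting, which is why attacking the queues, not the work steps, shortens delivery fastest.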

These are but a few examples in which proven scientific and management methods help break down complex problems into manageable components that lead to effective solutions. 

Buzzwords like “work smart,” “be agile,” and “fail fast” may create initial awareness or inspiration, but they rarely lead to operational effectiveness or material change.  Building an organization that is Lean, Agile, responsive, or adaptive is a transformation that requires systematic identification and development of the necessary competencies, such as analytical thinking.

Adapting to a changing world is a challenge that demands analytical thinking as well.   I am optimistic that individuals and organizations will continue to develop new competencies by embracing sound problem-solving methodologies.
