What Does the Data Tell Us?

It's March 31, 2020.  In the past 3 months, the novel coronavirus (COVID-19) has changed the world we live in.  As the virus spreads around the globe, everyone is anxiously watching the latest statistics on confirmed cases and deaths attributed to the disease in various regions.  With the latest technology, timely data is more accessible to the public than ever before.

With the availability of data comes the challenge of comprehending and communicating it properly.  I am not talking about advanced data analytics or visualization, but about the communication and interpretation of simple numbers: counts, ratios, and percentages.

The COVID-19 pandemic has provided ample examples of such data.  If we are not careful, even simple data can be misinterpreted and lead to incorrect conclusions or actions.

Cumulative counts (or totals) never go down; they are monotonically increasing.  The total number of confirmed cases rises over time even when the daily new cases are dropping.  Totals are therefore not the most effective way to communicate a trend unless they are compared against an established model.  The change in daily cases gives better insight into the progress of the epidemic.

Even the daily change should be interpreted with caution.  A jump or drop in new cases on any single day may not mean much because of chance variation inherent in data collection.  It is more reliable to fit the data to a model over a number of days to understand the trend.
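As a simple illustration, daily new cases can be derived from the cumulative totals and then smoothed over a few days.  The sketch below uses made-up counts; only the arithmetic matters.

```python
# Hypothetical cumulative case counts over ten days (illustrative only).
cumulative = [100, 180, 290, 420, 560, 690, 800, 890, 960, 1010]

# Daily new cases: successive differences of the cumulative totals.
daily = [cumulative[0]] + [
    cumulative[i] - cumulative[i - 1] for i in range(1, len(cumulative))
]

# A 3-day moving average damps day-to-day chance variation.
window = 3
smoothed = [
    sum(daily[i - window + 1 : i + 1]) / window
    for i in range(window - 1, len(daily))
]

print("daily new cases:", daily)
print("3-day average:  ", [round(x, 1) for x in smoothed])
```

The totals never decrease, yet the daily counts clearly peak and then decline – exactly the trend the cumulative curve obscures.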

The range of a dataset gets bigger as more data is collected.  Even extreme values that occur infrequently will show up if the sample size is large.  Younger people are less likely to have severe symptoms if infected by the virus.  The initial data on hospitalization or mortality show predominantly older patients, the most vulnerable population.  As more cases are collected, the patient age range will naturally expand to include very young patients who need hospitalization or even die.  But this increase in the number of younger patients does not necessarily mean that the virus has become deadlier for the younger population.
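A quick simulation makes the point.  The age distribution below is purely an assumption chosen for illustration; note how the observed age range widens as the sample grows, even though the underlying process never changes.

```python
import random

random.seed(1)

# Assumed (illustrative) age distribution of hospitalized patients,
# weighted heavily toward older ages.
def random_patient_age():
    return min(100, max(0, round(random.gauss(68, 15))))

for n in (50, 500, 5000, 50000):
    ages = [random_patient_age() for _ in range(n)]
    print(f"n={n:6d}  youngest={min(ages):3d}  oldest={max(ages):3d}")
```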

The percentage of hospitalized patients who are under 65 years of age is not, by itself, the right measure of the disease risk to the younger population.  In the general population there are significantly more people younger than 65 than older.  Each person's risk should be adjusted for the size of their age group.  In addition, hospitalized patients differ in severity, and their pre-existing health conditions also play a critical role in their recovery or survival.
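A hypothetical example, with every number below assumed for illustration, shows the difference between a group's share of hospitalizations and its per-person risk:

```python
# Assumed population sizes and hospitalization counts (illustrative only).
population   = {"under 65": 270_000_000, "65 and over": 54_000_000}
hospitalized = {"under 65": 3_000,       "65 and over": 7_000}

for group in population:
    share = hospitalized[group] / sum(hospitalized.values())
    rate = hospitalized[group] / population[group] * 100_000
    print(f"{group}: {share:.0%} of hospitalizations, "
          f"{rate:.1f} per 100,000 people")
```

Here the under-65 group accounts for 30% of hospitalizations, yet its per-person rate is more than ten times lower than that of the older group.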

Mortality is the ratio of the number of deaths to the number of confirmed cases.  The numerator is likely more accurate than the denominator: most patients who died of COVID-19-related complications are probably counted, whereas the confirmed cases mainly represent infected people with severe symptoms, who are known to be a minority of all infections.  Therefore, the calculated mortality is likely an overestimate in the initial stage of the pandemic, when the prevalence of the disease is uncertain.
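A numerical sketch, with all figures invented, shows how an undercounted denominator inflates the apparent mortality:

```python
deaths = 1_000
confirmed_cases = 20_000     # mostly severe, tested cases
true_infections = 200_000    # assumed: most mild infections go untested

print(f"apparent mortality: {deaths / confirmed_cases:.1%}")   # 5.0%
print(f"adjusted mortality: {deaths / true_infections:.1%}")   # 0.5%
```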

In the above examples, it only takes some awareness to avoid data misinterpretation.  For critical decisions, we must understand the context of the data, e.g. where the data came from, what data was collected, how it was collected, what data is missing, etc.

We should never forget that the data we see is usually collected from a sample of the population we are trying to understand.  Any statistic calculated from the sample, such as a count or an average, is not itself what interests us most.  What we truly want to know is some attribute of the population, estimated from the sample data.  We cannot measure the entire population, e.g. test everyone to see who is infected, and have to rely on the sample data available to us.  Different samples can give drastically different data.  We must understand what the sample is and how it was selected in order to draw inferences from the data.
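A short simulation, assuming a 2% true infection rate, shows how much sample-to-sample variation to expect even from unbiased random samples:

```python
import random

random.seed(0)

p_true = 0.02       # assumed true infection rate in the population
sample_size = 500

for trial in range(5):
    infected = sum(random.random() < p_true for _ in range(sample_size))
    print(f"sample {trial + 1}: estimated rate = {infected / sample_size:.1%}")
```

Each sample is drawn from the same population, yet the estimated rates differ noticeably from one sample to the next.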

For example, the sample may not be representative of the population.  The people who have been tested for the new coronavirus are a sample, but if only seriously ill people are tested, that sample does not represent the general population – which is what we need if we want to understand how deadly the virus is.

Equally important is the method of measurement.  All tests have errors.  An infected person could get a negative test result (a false negative), and an uninfected person could get a positive result (a false positive).  The probabilities of such errors depend on the test.  Different tests on the same people can give different results.
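The consequences can be counterintuitive.  The sketch below applies Bayes' rule with assumed (not real) test characteristics to show that, when prevalence is low, a large fraction of positive results can be false positives:

```python
sensitivity = 0.95   # assumed P(test positive | infected)
specificity = 0.98   # assumed P(test negative | not infected)
prevalence  = 0.01   # assumed: 1% of those tested are infected

p_true_pos  = prevalence * sensitivity
p_false_pos = (1 - prevalence) * (1 - specificity)

# Of all positive results, the fraction that is actually infected:
ppv = p_true_pos / (p_true_pos + p_false_pos)
print(f"positive predictive value: {ppv:.0%}")   # about 32%
```

Under these assumptions, roughly two out of three positive results are false positives, even with a seemingly accurate test.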

To analyze data properly, trained professionals rely on probability theory and sophisticated methods.  For most people, though, it helps to remember that what's not in the data could be more important than the data itself.

Six Sigma Project Management

Six Sigma projects are different from traditional projects in one important aspect: the solution, or the path to success, is unknown at the start.  In contrast, building a new house, for example, is typically a project with a known path.  Its time, budget, and resources can be planned with reasonable accuracy.  While there is still uncertainty, many risk factors are known and can be managed.

A true Six Sigma project attempts to address a new or long-standing problem for which no one knows the real cause or has a clear solution.  If the cause or solution is known, it is not a Six Sigma project – just do it.  This uncertainty obviously makes some people less willing to initiate a Six Sigma project and can lead to unsuccessful projects.  In many ways, a Six Sigma project is similar to a high-risk R&D project.

How can we manage Six Sigma projects more effectively?

Assuming that the project is the right one for the organization and receives adequate resources and support, consider the following to reduce project delays and pitfalls.  (If those assumptions do not hold, see my earlier post "The First Six Sigma Project" for a discussion of some common Six Sigma deployment issues first.)

Train Project Management (PM) Skills

Many newly trained Black Belts (BBs) and Green Belts (GBs) lack sufficient project management skills.  Few have received formal PM training, and their previous jobs did not require them to lead cross-functional teams.  A minimum of two days of PM fundamentals should be provided as part of Six Sigma training or as a separate program.  If the total training budget or schedule is limited, some of the more advanced or less frequently used Six Sigma content (such as specialized statistical tools) should be removed to make room for the PM material.

Basic PM knowledge is necessary for project success.  It is particularly important that the BB/GB be clear about their role as a project manager relative to others in the organization.  The PM skills and experience will benefit the organization beyond the Six Sigma projects.  (See my earlier post "Project Managers are Managers" for suggestions for new project managers.)

Apply Multi-generational Project Planning

Many project issues are the result of an overly large scope.  A Six Sigma project is already high risk without trying to solve too many problems at the same time.  Both the sponsors and the BB/GBs tend to be overambitious and include multiple related metrics in the goal, which leads to diluted effort and project delays.  If the project lasts more than 5-6 months, it is likely that, due to changing external circumstances, the original business case, assumptions, or metrics will no longer hold by the time the team completes the solution.  Often projects get cancelled before the benefits are achieved.

Instead, it is better to follow multi-generational project planning and break the goal into a series of smaller ones.  For example, two sequential six-month projects are better than one 12-month project using the same resources.  Ideally, we follow the Pareto principle: achieve 80% of the goal in the first project and the remainder in the second.  In many cases, the second project becomes unnecessary because the business environment has changed by the time we finish the first.  This approach is similar to the Lean and Agile principles used in product development to manage uncertainty.

Use DMAIC Tollgates Properly

Most Six Sigma projects follow the DMAIC methodology, which has a tollgate at the end of each of the Define, Measure, Analyze, Improve, and Control phases.  Many organizations have a list of required and recommended deliverables for each phase and check them at the tollgate review.  Unfortunately, many sponsors and even coaches do not understand why and when a deliverable is required for a particular phase; their insistence on completing every deliverable before the tollgate can cause confusion and project delays.

Too often organizations make the mistake of using a tollgate to evaluate whether the BB/GB has done a good job following the DMAIC methodology.  The primary purpose of a tollgate should be to help the sponsor make the right decisions at the right time, such as stopping the project or providing resources.  Nor should the tollgates be the only times when such decisions are made; many inexperienced project managers make the mistake of delaying decisions until the tollgates.  Organizations can avoid these mistakes by setting the right expectations upfront for the tollgates and the decision process for all projects.

In summary, to manage the inherent risks in Six Sigma projects, the sponsor and the BB/GB have to be proactive and methodical in planning and execution.   DMAIC should be rigorous, not rigid.   

Setting SMART Goals

Recently I had conversations with several people, on different occasions, about effective goal setting.  It is common practice to use Specific, Measurable, Achievable, Relevant, and Time-bound (SMART) as criteria for creating goals.  However, using SMART goals for effective management or decision making is not as simple as it appears.

For example, “improve product ABC yield to 96% or more by September 30” can be a SMART goal.  In a non-manufacturing environment, a similar goal can be “reduce invoices with errors to 4% or less by September 30.”

Suppose it is now September 30, and only 4% of the products or invoices are classified as bad.  Did we achieve our goal?

Most people would say “Of course, we did.”  But the real answer is “We don’t know without additional information or assumptions.” 

Why?

The reason is that the 4% is calculated from a sample: limited observations from the system or process we are evaluating.  The true capability of the process – its underlying defect rate – may be higher or lower than 4%.

We can use a statistical approach to illustrate the phenomenon.  Since the outcome of each item is binary (good/bad or with/without errors), we can model the process as a binomial distribution.   Figure 1 shows the probability of observing 0 to 15 bad items if we examine a sample of 100 items, assuming that any item from the process has a 4% probability of being bad.

Figure 1: Binomial Distribution (n=100, p=0.04)

When the true probability is 4%, we expect to see 4 bad items per 100, on average.  However, each sample of 100 items is different due to randomness, and we can get any number of bad items, 0, 1, 2, etc.  If we add the probability values of the five leftmost bars (corresponding to 0, 1, 2, 3, and 4 bad items), the sum is close to 0.63.  This means that there is only a 63% chance of seeing 4 or fewer bad items in a sample of 100, when we know the process should produce only 4% bad items.  

More than 37% of the time, we will see 5 or more bad items in a sample of 100.  In fact, there is a greater than 10% chance of seeing seven or more bad items, and nearly a 5% chance of seeing eight or more – twice as many as expected!

In contrast, a worse-performing process with a true probability of 5% (Figure 2) has a 44% chance of producing 4 or fewer bad items.  This means that we will see it achieving the goal almost half the time.  

Figure 2: Binomial Distribution (n=100, p=0.05)
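The probabilities quoted above can be reproduced directly from the binomial formula.  Here is a minimal sketch in Python (math.comb requires Python 3.8 or later):

```python
from math import comb

def binom_cdf(k_max, n, p):
    """P(X <= k_max) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_max + 1))

n = 100
print(f"p=0.04: P(X <= 4) = {binom_cdf(4, n, 0.04):.3f}")   # ~0.629
print(f"p=0.05: P(X <= 4) = {binom_cdf(4, n, 0.05):.3f}")   # ~0.436
```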

Suppose the first process represents your capability and the second that of a colleague.  How do you feel about using the SMART goal above as one criterion for raises or promotions?

The point I am making is not to abandon SMART goals but to use them judiciously.  In many cases, this calls for statistical thinking – an understanding of variation in data.  Just because we can measure or quantify something doesn't mean we are interpreting the data properly to make the right decision.

It takes “some rudimentary knowledge of science”1 to be smart.


1. Deming, W. Edwards. Out of the Crisis: Quality, Productivity, and Competitive Position. Cambridge, MA: Massachusetts Institute of Technology, Center for Advanced Engineering Study, 1986.
