Revisiting the DMAIC Stage-Gate Process

The DMAIC framework, with its Define, Measure, Analyze, Improve, and Control phases, is the most common method used in Six Sigma projects.  Most Green Belts (GBs) and Black Belts (BBs) are trained to execute Six Sigma projects using this framework.

Following the DMAIC steps, the project team can think rigorously and approach the problem systematically.  Books and training materials include applicable tools for each phase and checklists for tollgate reviews.  Organizations often have DMAIC templates that define mandatory and optional deliverables for each phase.  All of these are supposed to help GBs and BBs determine the right questions to ask and the right tools to apply throughout the DMAIC process.

In reality, the templates are not as helpful as intended.  I observe many project leaders either confused about what to do in each DMAIC phase or doing the wrong things.  For example:

  • Project teams include a tool or analysis simply because it’s a “required” phase deliverable, even if it improves neither the process nor the team’s knowledge. 
  • Project leaders are more concerned with presenting visually impressive slides to management than with understanding the process.  They re-create a SIPOC or fishbone diagram on a slide from the flipchart or whiteboard even when a snapshot is perfectly legible.
  • Project teams go to great lengths to document the current state electronically (e.g., in Visio) as a single process (which is futile), rather than spending the time to go to the Gemba and understand the variation.
  • The project continues even after the evidence and analysis show that the project baseline or business case is no longer valid.  Instead of using the tollgate to stop or re-scope the project, the team shows various tools and analyses to justify the value of going forward.  They are afraid that terminating the project will reflect negatively on them.
  • The project team is sent back to complete a deliverable at the tollgate because management finds it unsatisfactory, even when the deliverable is not critical to the next step in the project.  As a result, teams routinely overprepare for tollgates for fear of imperfect deliverables.
  • Instead of seeing an inadequate measurement system as an opportunity to re-scope the project to address it, the team is asked to demonstrate an adequate measurement system before closing the Measure phase.  They get stuck in Measure performing what are really Improve activities.

Why does this happen?

I discussed some related challenges in my earlier blogs “Starting Lean Six Sigma” and “The First Six Sigma Project.”  By understanding how Lean Six Sigma fits the organization’s objectives, strategy, and capabilities, leaders can choose the right deployment approach for the organization.  By selecting the right candidates and projects and by providing the right training and coaching to both sponsors and GBs/BBs, leaders can avoid many common mistakes while the organization is still in a low continuous improvement (CI) maturity state.

While the experience of the project leaders is a factor, I attribute the main cause of many Lean Six Sigma deployment issues to the organization, not the individual GBs or BBs.

Beyond the initial stage of the deployment, the organization’s chances of achieving and sustaining a CI culture and a high return on investment depend on its leaders.  Many Lean Six Sigma challenges simply reflect existing organizational and leadership issues.  When leaders treat the DMAIC methodology as a “plug & play” solution, they only exacerbate the underlying problems.

DMAIC templates and tollgate reviews can help guide newly trained GBs and BBs as they practice scientific problem solving.  But when they become prescriptive requirements and project performance criteria dictated by management, they discourage dialogue and organizational learning, which are basic elements of a CI culture.  Judging project progress against a fixed set of DMAIC phase deliverables, without understanding their applicability and true contribution in each case, only causes confusion and fear.  It reinforces the “fear of failure” mindset found in many organizations. 

The DMAIC stages are not linear but iterative within the project; for example, if a solution in Improve is insufficient to solve the problem, the team can go back to Analyze.  A DMAIC project should not be run like a “waterfall” project, but like an Agile project with rapid learning cycles.  With reasonable justification, the team should be allowed to decide to pass a tollgate and continue to the next phase.  Empowering teams is risky and comes at a cost, but they should be given the opportunity to learn from their mistakes (if it’s not too costly).  Competent coaching will minimize the risk.

Compounded by fear, poor training, and lack of experience, project efforts are often driven by management expectations at tollgate reviews.  A polished presentation with a complete set of phase deliverables, beautifully illustrated with tables and graphs, shows the team’s accomplishments and satisfies untrained reviewers.  But it often fails to facilitate the critical analysis and deep understanding required to address root causes, and it sends the wrong message to the organization: that the new CI methodology is all about presentation, not substance.

If any of these examples sounds familiar, or if you are concerned with building a CI culture and capability, one area for improvement might be your DMAIC stage-gate process. 

Capabilities of Future Leaders

What capabilities are required for future leaders in life sciences? How can organizations develop such leaders? A recent McKinsey article, Developing Tomorrow’s Leaders in Life Sciences, addresses this exact question. Using data from their 2017 survey on leadership development in life sciences, the authors illustrated the gaps and opportunities and presented five critical skills.

  1. Adaptive mind-set
  2. 3-D savviness
  3. Partnership skills
  4. Agile ways of working
  5. A balanced field of vision

It is a well-written article with useful insights and actionable recommendations for effective leadership development. However, there is one flaw: the presentation of the survey data. Did you notice any issues in the figures?

I can see at least two problems that undermine the credibility and impact of the article.

Inconsistent numbers
The stacked bar charts have four individual groups (C-suite, Top team, Middle managers, and Front line) in addition to the Overall. In Exhibit 1, for example, the percentages of respondents who strongly agree with the statement “My organization has a clear view of the 2-3 leadership qualities and capabilities that it wants to be excellent at” are 44, 44, 30, and 33%, respectively. Can 44% of them strongly agree overall? No. But that is the number presented.

It doesn’t take a mathematical genius to know that the overall (or weighted average) has to be within the range of the individual group values, i.e. 30 < overall < 44. Similarly, it is not possible to have an 8% overall “Neither agree nor disagree” when the individual groups have 11, 9, 16, and 17%. The same inconsistency appears in Exhibits 4 and 5.
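
To make the constraint concrete, here is a minimal Python sketch. The group sizes are hypothetical, since the article does not report how many respondents fall in each group, but the conclusion holds for any sizes: the overall can never fall outside the range of the group values.

```python
# A weighted average of group percentages must lie between the smallest and
# largest group value, regardless of the (unknown) group sizes.
group_pct = [44, 44, 30, 33]     # C-suite, Top team, Middle managers, Front line
group_n   = [20, 40, 120, 220]   # hypothetical respondent counts per group

overall = sum(p * n for p, n in zip(group_pct, group_n)) / sum(group_n)
print(f"Overall: {overall:.1f}%")                      # e.g. 33.8% with these counts
assert min(group_pct) <= overall <= max(group_pct)     # always true
```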

Which numbers are correct?

No mention of sample size
Referring to Exhibit 1, the authors compared the executive responses in the “strongly agree” category (“less than 50 percent”) to those of middle managers and frontline staff (“around 30 percent”), stating there is a drop from the executives to the staff. But can a reader make an independent judgment whether the difference between the two groups really exists? No, because the numbers alone, without a measure of uncertainty, cannot support the conclusion.

We all know that a survey like this only measures a limited number of people, or a sample, from each target group. The resulting percent values are only estimates of the true but unknown values and are subject to sampling error due to random variation, i.e. a different set of respondents will result in a different percent value.

The errors can be large in such surveys depending on the sample size. For example, if 22 out of 50 people in one group agree with the statement, the true percent value may be somewhere in the range of 30-58% (or 44±14%). If 15 out of 50 agree in another group, its true value may be in the range of 17-43% (or 30±13%). There is a considerable overlap between the two ranges, so the true proportions of people who agree with the statement may not be different at all. In contrast, if the sample size is 100 and the data are 44/100 vs. 30/100 (the same proportions as in the first example), the ranges where the true values may lie are tighter: 34-54% (44±10%) vs. 21-39% (30±9%). Now it is more likely that the two groups really do have different proportions of people who agree with the statement.
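
For those who want to reproduce the ranges above, here is a minimal Python sketch using the normal-approximation (Wald) 95% confidence interval for a proportion. The choice of interval method is an assumption on my part; other common methods give similar results at these sample sizes.

```python
import math

def approx_ci(agree: int, n: int, z: float = 1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion, in percent."""
    p = agree / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return 100 * (p - margin), 100 * (p + margin)

# 22/50 -> ~30-58%, 15/50 -> ~17-43%, 44/100 -> ~34-54%, 30/100 -> ~21-39%
for agree, n in [(22, 50), (15, 50), (44, 100), (30, 100)]:
    low, high = approx_ci(agree, n)
    print(f"{agree}/{n} agree: roughly {low:.0f}%-{high:.0f}%")
```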

Not everyone needs to know how to calculate the above ranges or determine the statistical significance of the observed difference. But decision makers who consume data should have a basic awareness of the sample size and its impact on the reliability of the values presented. Drawing conclusions without the necessary information could lead to wrong decisions, waste, and failures.

Beyond the obvious errors and omissions discussed above, numerous other errors and biases are common in the design, conduct, analysis, and presentation of surveys or other data. For example, selection bias can lead to samples not representative of the target population being analyzed. Awareness of such errors and biases can help leaders ask the right questions and demand the right data and analysis to support the decisions.

In the Preface of Out of the Crisis, W. Edwards Deming made it clear that “The aim of this book is transformation of the style of American management” and “Anyone in management requires, for transformation, some rudimentary knowledge of science—in particular, something about the nature of variation and about operational definitions.”

Over the three and a half decades since Out of the Crisis was first published, the world has produced orders of magnitude more data, and the pace is accelerating. However, the ability of management to understand and use data has hardly improved.

The authors of the McKinsey article are correct about 3-D savviness: “To harness the power of data, design, and digital (the three d’s) and to stay on top of the changes, leaders need to build their personal foundational knowledge about what these advanced technologies are and how they create business value.” One measure of that foundational knowledge is the ability to correctly use and interpret stacked bar charts.

Now, more than ever, leaders need the rudimentary knowledge of science.
