Capabilities of Future Leaders

October 22, 2018

What capabilities are required for future leaders in life sciences? How can organizations develop such leaders? A recent McKinsey article, Developing Tomorrow's Leaders in Life Sciences, addresses this exact question. Using data from their 2017 survey on leadership development in life sciences, the authors illustrated the gaps and opportunities and presented five critical skills.

  1. Adaptive mind-set
  2. 3-D savviness
  3. Partnership skills
  4. Agile ways of working
  5. A balanced field of vision

It is a well-written article with useful insights and actionable recommendations for effective leadership development. However, there is one flaw: the presentation of the survey data. Did you notice any issues in the figures?

I can see at least two problems that undermine the credibility and impact of the article.

Inconsistent numbers
The stacked bar charts have four individual groups (C-suite, Top team, Middle managers, and Front line) in addition to the Overall. In Exhibit 1, for example, the percentages of respondents who strongly agree with the statement "My organization has a clear view of the 2-3 leadership qualities and capabilities that it wants to be excellent at" are 44, 44, 30, and 33%, respectively. Can the overall then be 44%? No. But that is the number presented.

It doesn't take a mathematical genius to know that the overall (a weighted average of the groups) has to fall within the range of the individual group values, i.e., 30% < overall < 44%. Similarly, it is not possible to have an 8% overall "Neither agree or disagree" when the individual groups have 11, 9, 16, and 17%. The same inconsistency appears in Exhibits 4 and 5.
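The constraint is easy to check mechanically. A minimal sketch, using hypothetical respondent counts per group (the article does not report them), shows that any weighted overall must land between the smallest and largest group values:

```python
# Hypothetical respondent counts per group; the survey's actual counts are unknown.
groups = {
    "C-suite": (0.44, 40),
    "Top team": (0.44, 60),
    "Middle managers": (0.30, 200),
    "Front line": (0.33, 150),
}

total_n = sum(n for _, n in groups.values())
# The overall is the respondent-weighted average of the group proportions.
overall = sum(p * n for p, n in groups.values()) / total_n

# Whatever the weights, the weighted average is bounded by the group extremes.
lo = min(p for p, _ in groups.values())
hi = max(p for p, _ in groups.values())
assert lo <= overall <= hi
print(f"Overall: {overall:.0%}")  # falls between 30% and 44%
```

No choice of group sizes can push the overall outside the 30-44% band, which is why a reported overall of 44% alongside group values of 44, 44, 30, and 33% cannot all be correct.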

Which numbers are correct?

No mention of sample size
Referring to Exhibit 1, the authors compared the executives' responses in the "strongly agree" category ("less than 50 percent") with those of middle managers and frontline staff ("around 30 percent"), stating that there is a drop from the executives to the staff. But can a reader independently judge whether the difference between the two groups really exists? No, because the numbers alone, without a measure of uncertainty, cannot support the conclusion.

We all know that a survey like this measures only a limited number of people, or a sample, from each target group. The resulting percentages are only estimates of the true but unknown values and are subject to sampling error due to random variation, i.e., a different set of respondents will yield a different percentage.

The errors can be large in such surveys, depending on the sample size. For example, if 22 out of 50 people in one group agree with the statement, the true percentage may be anywhere in the range of 30-58% (or 44±14%). If 15 out of 50 agree in another group, its true value may be in the range of 17-43% (or 30±13%). There is a considerable overlap between the two ranges, so the true proportions of people who agree with the statement may not differ at all. In contrast, if the sample size is 100 and the data are 44/100 vs. 30/100, the same observed proportions as the first example, the ranges where the true values may lie are tighter: 34-54% (44±10%) vs. 21-39% (30±9%). Now it is more likely that the two groups truly have different proportions of people who agree with the statement.
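The intervals above come from a standard normal-approximation (Wald) confidence interval for a proportion. A short Python sketch, assuming a 95% confidence level (z = 1.96), reproduces the four ranges:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a proportion.

    Returns (lower, upper) bounds; z=1.96 gives roughly a 95% interval.
    """
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# The four cases discussed in the text: 44% and 30% observed, at n=50 and n=100.
for k, n in [(22, 50), (15, 50), (44, 100), (30, 100)]:
    lo, hi = wald_ci(k, n)
    print(f"{k}/{n}: {k/n:.0%} ({lo:.0%} to {hi:.0%})")
```

With n = 50 the two intervals (30-58% and 17-43%) overlap substantially; with n = 100 they barely touch (34-54% and 21-39%), which is what makes the larger sample more persuasive. (The Wald interval is the simplest choice; Wilson or exact intervals behave better at small n or extreme proportions, but give similar answers here.)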

Not everyone needs to know how to calculate the above ranges or determine the statistical significance of the observed difference. But decision makers who consume data should have a basic awareness of the sample size and its impact on the reliability of the values presented. Drawing conclusions without necessary information could lead to wrong decisions, waste, and failures.

Beyond the obvious errors and omissions discussed above, numerous other errors and biases are common in the design, conduct, analysis, and presentation of surveys or other data. For example, selection bias can lead to samples not representative of the target population being analyzed. Awareness of such errors and biases can help leaders ask the right questions and demand the right data and analysis to support the decisions.

In the Preface of Out of the Crisis, W. Edwards Deming made it clear that "The aim of this book is transformation of the style of American management" and "Anyone in management requires, for transformation, some rudimentary knowledge of science—in particular, something about the nature of variation and about operational definitions."

Over the three and a half decades since Out of the Crisis was first published, the world has produced orders of magnitude more data. The pace is accelerating. However, management's ability to understand and use data has hardly improved.

The authors of the McKinsey article are correct about 3-D savviness: "To harness the power of data, design, and digital (the three d's) and to stay on top of the changes, leaders need to build their personal foundational knowledge about what these advanced technologies are and how they create business value." One measure of that foundational knowledge is the ability to correctly use and interpret a stacked bar chart.

Now, more than ever, leaders need the rudimentary knowledge of science.
