What capabilities are required for future leaders in life sciences? How can organizations develop such leaders? A recent McKinsey article, Developing Tomorrow’s Leaders in Life Sciences, addresses these exact questions. Using data from their 2017 survey on leadership development in life sciences, the authors illustrate the gaps and opportunities and present five critical skills:
- Adaptive mind-set
- 3-D savviness
- Partnership skills
- Agile ways of working
- A balanced field of vision
It is a well-written article with useful insights and actionable recommendations for effective leadership development. However, there is one flaw: the presentation of the survey data. Did you notice any issues in the figures?
I can see at least two problems that undermine the credibility and impact of the article.
Inconsistent numbers
The stacked bar charts show four individual groups (C-suite, Top team, Middle managers, and Front line) in addition to the Overall. In Exhibit 1, for example, the percentages of respondents who strongly agree with the statement “My organization has a clear view of the 2-3 leadership qualities and capabilities that it wants to be excellent at” are 44, 44, 30, and 33%, respectively. Can 44% of all respondents combined strongly agree? No. But that is the Overall number presented.
It doesn’t take a mathematical genius to know that the overall (a weighted average of the groups) has to fall within the range of the individual group values, i.e., 30 < overall < 44. Similarly, it is not possible to have an 8% overall “Neither agree or disagree” when the individual groups have 11, 9, 16, and 17%. The same inconsistency appears in Exhibits 4 and 5.
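To make the point concrete, here is a minimal sketch in Python. The respondent counts are hypothetical, since the article does not report them, but the conclusion holds for any positive group sizes: the weighted average can never exceed the largest group value.

```python
# Minimal sketch: the Overall figure is a weighted average of the group
# percentages, so it must lie between the smallest and largest group values.
def overall_percent(group_values, group_sizes):
    """Size-weighted average of per-group percentages."""
    total = sum(group_sizes)
    return sum(v * n for v, n in zip(group_values, group_sizes)) / total

# "Strongly agree" percentages from Exhibit 1:
# C-suite, Top team, Middle managers, Front line
values = [44, 44, 30, 33]

# Hypothetical respondent counts (not reported in the article); front-line
# staff and middle managers typically outnumber executives.
sizes = [20, 40, 150, 300]

print(round(overall_percent(values, sizes), 1))  # 33.4 -- far below the reported 44
```

Only if nearly all respondents were executives could the Overall approach 44%, which is implausible for an organizational survey.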
Which numbers are correct?
No mention of sample size
Referring to Exhibit 1, the authors compared the executive responses in the “strongly agree” category (“less than 50 percent”) to those of middle managers and frontline staff (“around 30 percent”), stating that there is a drop from the executives to the staff. But can a reader make an independent judgment about whether the difference between the two groups really exists? No: the numbers alone, without a measure of uncertainty, cannot support the conclusion.
We all know that a survey like this measures only a limited number of people, or a sample, from each target group. The resulting percent values are only estimates of the true but unknown values and are subject to sampling error due to random variation, i.e., a different set of respondents would produce a different percent value.
The errors can be large in such surveys, depending on the sample size. For example, if 22 out of 50 people in one group agree with the statement, the true percent value may be somewhere in the range of 30-58% (or 44±14%). If 15 out of 50 agree in another group, its true value may be in the range of 17-43% (or 30±13%). The two ranges overlap considerably, so the true proportions of people who agree with the statement may not differ at all. In contrast, if the sample size is 100 per group and the data are 44/100 vs. 30/100, the same proportions as in the first example, the ranges where the true values may lie are tighter: 34-54% (44±10%) vs. 21-39% (30±9%). Now it is much more likely that the two groups genuinely differ in the proportion of people who agree with the statement.
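The ranges above match approximate 95% confidence intervals for a proportion, p ± 1.96 × sqrt(p(1−p)/n). As a sketch, assuming that is the intended calculation, they can be reproduced in a few lines of Python:

```python
import math

def approx_95ci(successes, n):
    """Approximate (Wald) 95% confidence interval for a proportion, in percent."""
    p = successes / n
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return 100 * (p - margin), 100 * (p + margin)

for successes, n in [(22, 50), (15, 50), (44, 100), (30, 100)]:
    low, high = approx_95ci(successes, n)
    print(f"{successes}/{n} agree: {100 * successes / n:.0f}% "
          f"(likely range {low:.0f}-{high:.0f}%)")

# Output:
# 22/50 agree: 44% (likely range 30-58%)
# 15/50 agree: 30% (likely range 17-43%)
# 44/100 agree: 44% (likely range 34-54%)
# 30/100 agree: 30% (likely range 21-39%)
```

Note how doubling the sample size narrows each range by a factor of about √2: the uncertainty shrinks only with the square root of n.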
Not everyone needs to know how to calculate the above ranges or determine the statistical significance of an observed difference. But decision makers who consume data should have a basic awareness of sample size and its impact on the reliability of the values presented. Drawing conclusions without the necessary information can lead to wrong decisions, waste, and failures.
Beyond the obvious errors and omissions discussed above, numerous other errors and biases are common in the design, conduct, analysis, and presentation of surveys and other data. For example, selection bias can produce samples that are not representative of the target population. Awareness of such errors and biases can help leaders ask the right questions and demand the right data and analyses to support their decisions.
In the preface of Out of the Crisis, W. Edwards Deming made it clear that “The aim of this book is transformation of the style of American management” and that “Anyone in management requires, for transformation, some rudimentary knowledge of science—in particular, something about the nature of variation and about operational definitions.”
Over the three and a half decades since Out of the Crisis was first published, the world has produced orders of magnitude more data, and the pace is accelerating. The ability of management to understand and use data, however, has hardly improved.
The authors of the McKinsey article are correct about 3-D savviness: “To harness the power of data, design, and digital (the three d’s) and to stay on top of the changes, leaders need to build their personal foundational knowledge about what these advanced technologies are and how they create business value.” One measure of that foundational knowledge is the ability to use and interpret stacked bar charts correctly.
Now, more than ever, leaders need the rudimentary knowledge of science.
I agree with Fang; however, PhDs, engineers, six sigma folks, etc., are really the only ones who get what you are saying. These types of articles ought to have the academic research connected or annotated so folks can take a look at the math. Still, the folks with advanced degrees do not need to apply much effort to see something wrong, whereas the lay person would require considerable mental effort to understand the flaws. It’s a bummer.
Thanks, Jim. The gap you describe is the reality today, and I don’t expect it to change much in the future. But when we talk about the capabilities of future leaders of organizations, I expect more. It’s more important to have the basic concepts than the ability to calculate the numbers. It’s the ability to ask the right questions, not the ability to provide the right answers, that I expect of leaders when they are presented with data.