TalentedApps

We put the Talent in Applications

The Mismeasure of Talent

Posted by Mark Bennett on March 12, 2008


A recent column from WSJ highlights the challenges we face when dealing with the intangibles that often dominate talent work. It shows that measuring this “invisible work” is a challenge that often leaves talent without a sense of achievement. Moreover, when measurements are insufficient or incomplete, or when management uses the wrong measurements to determine compensation, measurement can do more harm than good. When something is hard to measure, it often doesn’t get measured at all, and whatever is easier to measure gets measured instead. Since intangibles such as “quality,” “productivity,” and “satisfaction” are seen as too difficult, or even impossible, to measure precisely, they often go unmeasured. But we’ve seen in books like Patrick Lencioni’s “The Three Signs of a Miserable Job: A Fable for Managers (And Their Employees)” that immeasurability is a key destroyer of engagement. So what does that mean for your talent, when a lot of what they contribute isn’t easily measured, often doesn’t get measured, and thus leaves them unable to assess their own contributions or success?

There is a joke about the drunk who dropped his keys in the dark alley but spends all his time looking under the streetlamp “because the light is better.” That joke often comes up in discussions about measuring the intangibles related to talent. The column has several examples of people being measured on things that are readily available, like timeliness and budget, but not on harder-to-measure things like “doing things right.” Measures like timeliness and budget can be very important, but they often describe only part of the picture and are insufficient for making good business decisions. For example, how can you make a tradeoff between timeliness and “doing things right” that is acceptable from a risk/reward perspective if you aren’t measuring “doing things right”? What ends up happening is that people focus only on the timeliness measure, and both customer satisfaction and employee engagement falter because “doing things right” just isn’t happening like it used to, but nobody is really sure by how much or why (if it’s even noticed at all).

Of course, the question arises of what “doing things right” means, but that doesn’t justify ignoring it. In fact, ignoring it misses an opportunity to actually figure out what it means so that it can be measured. Something like “doing things right” or “calming an angry customer” might be an activity that produces the very outcomes the company needs to achieve strategic success. Those outcomes can be measured, and if you can find a relationship between the activity and an increase in desired outcomes, then you are on your way to making the intangible more visible and measurable. In addition, this helps employees feel relevant by showing how their jobs really make a difference. Measurability and relevance go together and support each other. They are employee engagement concepts that fit directly into a framework for making better decisions regarding talent.

It’s really management’s responsibility to connect the impact talent has on strategic success with the measurements used to determine talent’s effectiveness in achieving that success. Failing to provide a way to measure that contribution objectively, in the context of the company’s goals, exacerbates employee disengagement. Meeting that responsibility means listening more to both employees and customers, and tackling the challenge of transforming that input into useful measurements. Imagine the benefits if management listened more to the employees who knew about “doing things right,” or to customers who were once angry but are now satisfied, as described in the column.

Measurement does not have to be an “all or nothing” affair either. At times, it is sufficient to just know with reasonable confidence that something got better or worse (e.g. went up or down, perhaps) when an input changed. Other times, it’s enough to know with reasonable confidence that something went above or below a certain threshold, or cutoff point, without having to know by how much. Both of those can be determined with lower cost, for instance, than trying to determine exactly how much an outcome changes when an input factor is altered by a certain amount.
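The two cheaper questions above, “did it get better or worse?” and “did it cross a threshold?”, can be sketched with toy numbers. The satisfaction scores below are invented for illustration; the point is that a yes/no answer can come from a handful of noisy observations, with no need to pin down the exact magnitude of the change.

```python
import statistics

def went_up(before, after):
    """Directional answer: did the typical value improve?
    We ask only whether it rose, not by precisely how much."""
    return statistics.median(after) > statistics.median(before)

def above_threshold(samples, cutoff):
    """Threshold answer: is the typical value past the cutoff?
    Again, we don't care by how much."""
    return statistics.median(samples) > cutoff

# Hypothetical 1-10 satisfaction scores gathered before and after a
# process change (made-up numbers for illustration).
before = [5, 6, 7, 5, 6, 4, 6, 5]
after  = [7, 6, 8, 7, 6, 7, 8, 5]

print(went_up(before, after))       # True: things got better
print(above_threshold(after, 6.5))  # True: typical score cleared 6.5
```

Either answer is enough to act on in many cases, and both cost far less to obtain than a precise estimate of how much the outcome moves per unit of input.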

In addition to Lencioni’s book, which shows how relevance and measurability impact employee engagement, check out “Beyond HR: The New Science of Human Capital” by John Boudreau and Peter Ramstad. It shows how those two concepts fit very well into their HC BRidge framework, which improves strategic success through better decision making regarding talent. Also check out “How to Measure Anything: Finding the Value of Intangibles in Business” by Douglas W. Hubbard, which shows how to avoid the trap of trying in vain to be overly precise when measuring intangibles, when in actuality the most relevant, useful, and actionable information might be obtained at a fraction of the effort. Much of that is enabled by giving the measurements a purpose, defined by the framework presented in Beyond HR.

3 Responses to “The Mismeasure of Talent”

  1. […] present on the great day of The Answer, many of us look to our software applications to answer the really hard questions around performance, potential, risk-of-loss, and succession. The promise of predictive […]

  2. Great play on Stephen Jay Gould’s book title. In fact, that makes me think there could be a great blog post tied up in comparing his critique with the measures that we have historically used to measure talent.

  3. Mark Bennett said

    Thanks, Mike, and I am so glad you see that as well. I don’t know if we’d call it “Performance Determinism” or what, but so much of what I see controversial thinkers like Pfeffer and Sutton (“The Knowing-Doing Gap,” “Hard Facts, Dangerous Half-Truths, and Total Nonsense,” “What Were They Thinking?”) say parallels what Gould (and Taleb as well, for that matter) have said. That is, stop falling into the traps of:

    1. Believing that the attributes we “measure” somehow really exist in the thing we are looking at and
    2. Believing that the simplifications of ranking we apply to complex adaptive systems somehow reflect reality

    There is definitely a series of posts we could do based on these fallacies.
