Indeed, it is a strange-disposed time;
But men may construe things after their fashion,
Clean from the purpose of the things themselves.
Comes Caesar to the Capitol tomorrow?
- from Julius Caesar, Act I, Scene III by William Shakespeare
During this presidential campaign, we’ve been treated to many interpretations of the data provided by a multitude of polls. It’s been Big Data meets Big Auguries.
What can we learn from the ongoing debate about whether polls and their analyses are biased, and therefore whether they are accurate predictors of events? How can this debate benefit not just how we govern ourselves, but also how we run our organizations?
The point isn’t that polls and their analyses aren’t biased – they are. So are customer and employee surveys. So is the interpretation of sales data. What matters is what you do about it. How are you correcting for the inevitable bias so as to mitigate its distortive effects? How do you pursue objectivity, rather than dig in and insist your position is the truth, or give up and say objective truth doesn’t exist?
It wasn’t that I didn’t know enough; I just knew too much
Like any other observations, what we construe from polls is influenced by what we already know (or don’t know) and what we believe. To hammer the point home, we see only what we choose to see, consciously or not. Sometimes, there’s simply nothing there; it’s just noise and what you see is an illusion.
Does having more knowledge of the subject help? Knowing more doesn’t necessarily solve the problem – it matters more what it is you do know, and that’s hard to know, you know? If that extra knowledge simply aligns with and confirms your prior knowledge and/or beliefs, it just reinforces a possibly misplaced confidence and takes you further from the objective findings.
So, as in any other science, we have to understand this problem going in and have a process to gradually reveal what is really going on – i.e., the purpose of the things themselves.
Think twice; that’s my only advice
A good way to do that is to keep poking holes in your thinking and in the data you gathered. How might your thinking rest on false assumptions? How might the data be unrepresentative? You have to test your assumptions and your data-gathering methods continuously.
Thinking twice helps you get out of the cognitive trap our brains are wired for – to go with the first story that fits the facts. Jumping to conclusions is a species survival trait that served us well when we didn’t have time to mull over whether our clan should go out and hunt a mastodon.
Today’s world is infinitely more complex than arranging a hunt for food. So rethink, and rethink again, as much as you can before making the call. And if you have to make the call before you have sufficiently tested your data and thinking, then look for ways to structure your actions so they both move you toward your goal and gather more data to test your thinking. “Continuous Beta” is an example of this.
Bless your soul; to think that you’re in control
Finally, why are you relying on just your own view, or the “inside view” of your group, to question your assumptions and data? As much as you’d like to think you can control your own biases, your and your team’s vested interests, sunk costs, and the like will always influence your objectivity in subtle and not-so-subtle ways.
As uncomfortable as it can make you or your team, get the “outside view” from other people and teams who will give you unfiltered feedback on your assumptions and evidence. The point is to improve your pursuit of objectivity.
Get their thinking as well. They may not know as much as you do about the data, or have your level of experience and expertise in ways to interpret it. But they may know things that you don’t that could help you see things in a new, less biased way.
Whatever your political leanings, it’s worth it to check out “The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t” by Nate Silver.
Photo by Crouchy69