Why polls and surveys can get it wrong


In the wake of the 2020 US election, one of many questions being asked is: “How could the pre-election polls get it so wrong?” Unlike many of the surveys we experience, election prediction polls are tested against an actual result, so their successes and failures are a matter of public record. Surveys like the ubiquitous employee survey are never tested in the same way, so erroneous conclusions can remain untested forever. What can we learn from prediction poll failures?

The Academic View

Many academic researchers make their living running surveys. Invariably, poor results are attributed to non-representative samples (including sample sizes that are too small), flawed question construction, survey modes (face to face, telephone, Internet, etc.) and flawed weightings of factors. There is no lack of science on what constitutes good polling or surveying methodology. One would suspect that the major US election pollsters, Brexit pollsters and the like do not lack methodological skill or the resources to run large samples. Yet things can still go extraordinarily wrong.


While some US pollsters might claim victory this time by getting it right (pending lawsuits!), the more open and honest ones would acknowledge that the 2020 predictions were even further off the mark than those of 2016.

The Journalist’s View

Guardian journalist Mona Chalabi offers a refreshing view in her article “The pollsters were wrong – again. Here's what we know so far”, which sums it up as:

“It’s possible, however, that they were actually more wrong this time around – either because they found it even harder to track down and speak to 1,000 adults who accurately represented 240 million voters, or because Trump voters were even more reluctant this time to tell a stranger their preferred candidate. Or both.”

This is “plain speak” for the common survey methodology issues the academics would point to. Interestingly though, Chalabi also notes that:

“Exit polls are slightly more accurate – they interview far more people (this Edison survey spoke to 14,318 adults, whereas most polls speak to around 1,000) and they speak to people who have already voted rather than asking people days, sometimes weeks, ahead of time who they maybe plan to maybe vote for.”

While exit polls can still suffer from representative sampling issues, they benefit from asking people about an action they have just taken. Some respondents may not always be truthful, but the increased accuracy is most likely attributable to respondents reporting on something they have just done, rather than something they might intend to do well into the future.

Why measuring actions can mitigate poll and survey risks


No doubt, how people actually voted in the 2020 US elections will be pored over to help better understand the mindsets of the population at large. Inside enterprises we don’t have all-staff elections for the leaders (at least not yet). What we do have is a plethora of digital data identifying how staff are actually working and interacting with each other. As more work went digital during the COVID-19 pandemic, this data set got even richer. While we might regularly run staff surveys to assess employee health and well-being, we need to be aware of the shortcomings of polls and surveys, which can get it very wrong.

We have previously published on how digital interaction data might complement traditional survey methods for assessing staff health and well-being. Our proposition in that article was that because we are looking at a far larger sample of staff, and analysing what they are doing rather than what they say they are thinking, we can potentially identify “staff at risk” who could easily be overlooked by traditional survey methods.
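To make the idea concrete, here is a minimal, hypothetical Python sketch, assuming nothing more than weekly interaction counts per person. It is not the model from that article; the threshold and data are illustrative assumptions. It simply flags anyone whose recent activity has fallen well below their own historical baseline, regardless of whether they ever answered a survey.

```python
# Hypothetical sketch: flag "staff at risk" from interaction volumes alone.
# The 0.5 drop threshold and the sample data are assumptions for illustration.
from statistics import mean

def flag_at_risk(weekly_counts: dict[str, list[int]],
                 recent_weeks: int = 2,
                 drop_threshold: float = 0.5) -> list[str]:
    """Return people whose recent activity fell below
    drop_threshold * their earlier average."""
    flagged = []
    for person, counts in weekly_counts.items():
        if len(counts) <= recent_weeks:
            continue  # not enough history to form a baseline
        baseline = mean(counts[:-recent_weeks])
        recent = mean(counts[-recent_weeks:])
        if baseline > 0 and recent < drop_threshold * baseline:
            flagged.append(person)
    return flagged

# Example: "bob" has gone quiet relative to his own baseline.
history = {
    "alice": [12, 10, 11, 9, 10, 11],
    "bob":   [15, 14, 16, 13, 4, 3],
}
print(flag_at_risk(history))  # -> ['bob']
```

The point of the sketch is only that behavioural signals cover every staff member every week, whereas a survey only covers those who respond, and only at the moment they respond.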

While researching our sixth annual Yammer benchmarking report, we uncovered another theme for new enterprise insights. Surveys and polls are often used to assess the performance of communities or teams. For our benchmarking, we instead track interaction activities to determine which online communities or teams are likely to be high performers.
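As a purely illustrative sketch, and not SWOOP’s actual scoring methodology, the snippet below shows how raw interaction events might be weighted and normalised per active member to rank communities. The event types, weights and community names are all assumptions.

```python
# Hypothetical sketch: rank online communities from interaction events
# rather than survey responses. Weights and data are illustrative only.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Interaction:
    community: str   # community (group) the event occurred in
    actor: str       # person who posted, replied, liked or mentioned
    kind: str        # e.g. "post", "reply", "like", "mention"

# Assumed weights: two-way interactions count for more than one-way signals.
WEIGHTS = {"post": 1.0, "reply": 2.0, "mention": 2.0, "like": 0.5}

def rank_communities(events: list[Interaction]) -> list[tuple[str, float]]:
    """Score each community by weighted interactions per active member."""
    scores: dict[str, float] = defaultdict(float)
    members: dict[str, set[str]] = defaultdict(set)
    for e in events:
        scores[e.community] += WEIGHTS.get(e.kind, 0.0)
        members[e.community].add(e.actor)
    per_member = {c: scores[c] / len(members[c]) for c in scores}
    return sorted(per_member.items(), key=lambda kv: kv[1], reverse=True)

# Example: a small, "quiet achieving" community can outrank a large,
# broadcast-style one once activity is normalised per active member.
events = [
    Interaction("Big Announcements", "anna", "post"),
    Interaction("Big Announcements", "ben", "like"),
    Interaction("Frontline Tips", "carol", "post"),
    Interaction("Frontline Tips", "dev", "reply"),
    Interaction("Frontline Tips", "carol", "reply"),
]
print(rank_communities(events))
```

Normalising per active member, rather than counting raw volume, is one simple way a small but highly interactive community can surface ahead of a large broadcast channel.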

Do you really know who your best performing online groups are?

When we traditionally think about how groups are performing, the organisational hierarchy comes to mind, where managers are responsible for monitoring group performance. But what happens with enterprise tools like Yammer (and, to a lesser degree, Teams) that are largely disconnected from the formal hierarchy? For Yammer, even the largest organisations will have only a handful of staff assigned to such monitoring. Yet enterprises have, on average, 120 active Communities on Yammer, with one organisation having almost 700 active Communities, out of the almost 9,000 Communities across the 116 organisations we analysed.


Our benchmarking practice is to contact the enterprises we identify as high performers to validate our findings, learn about their success stories and share their best practices. In the current Yammer benchmarking analysis we applied our most comprehensive performance assessment process yet to arrive at our top 1% of Yammer Communities. We contacted the Yammer Community leads at some 12 organisations that had multiple Communities in the top 1%. A common response was: “I’m not aware of that Community at all!” We were able to validate our measurement methods against high performing Communities they were familiar with. But the surprise for us, and for them, was the number of “quiet achieving” Communities that were completely invisible to them.

As our digital workspaces grow beyond the purview of our formal organisational hierarchies, it is not surprising that these spaces, especially the smaller and less public ones, are rapidly becoming blind spots for both good and bad performance.

No amount of polling or surveying will uncover these blind spots. Hence, the need for another way.

