Inflated Popularity

Is it fair to discount 'undecided' voters from poll results?

This post has changed somewhat from earlier today. My catchy opening sentence, about how Premier Williams's popularity had dropped from the "high eighties," was incorrect. I was looking at voter intentions, not overall satisfaction with government, which is indeed high at 86 per cent. My error. (Sorry, Danny.)

And that sound you hear? That's me, kicking myself in the arse.

However, the main point of my post remains unchanged. I still have a problem with how Corporate Research Associates (CRA) reports the results of their Atlantic Quarterly survey. Here's how they crunched the numbers for voter intentions on December 9, 2008:

Williams Government 72 %

Liberal Party 19 %

NDP 9 %

TOTAL: 100 %

Undecided: 22 %

If you do the math, it adds up to 122 per cent, which is impossible, of course. They've removed those who are undecided, do not plan to vote, or refuse to state a preference from the mix before calculating the final numbers.

Now that, to me, does not seem right. It isn't right. The undecided can be an important factor in any poll, as any politician will tell you when staring into the teeth of an election.

Here's how the real numbers shake out, after the undecided are factored back in (a quick sketch of the arithmetic follows the table):

Williams Government 56 %

Liberal Party 15 %

NDP 7 %

Undecided 22 %

TOTAL: 100 %
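
If you want to check the arithmetic yourself, here's a rough sketch in Python. The figures are the December 9 numbers quoted above; the party labels and the rounding are mine:

    # Fold the undecided back into CRA's "decided only" percentages.
    # Figures are from the December 9, 2008 release quoted above.
    decided = {"Williams Government": 72, "Liberal Party": 19, "NDP": 9}  # % of decided voters
    undecided = 22  # % of all respondents

    decided_share = (100 - undecided) / 100.0  # fraction of respondents who stated a preference

    overall = {party: round(pct * decided_share) for party, pct in decided.items()}
    overall["Undecided"] = undecided

    for party, pct in overall.items():
        print(f"{party}: {pct} %")
    print("TOTAL:", sum(overall.values()), "%")

That's the whole exercise: multiply each "decided" number by the decided fraction, then put the undecided back into the pie.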

I sent a note to Don Mills, President & CEO of CRA, to probe this question. Off the top, I noted that the first paragraph of the Nova Scotia release, also of December 9, included the undecided in its overall numbers, whereas the other three Atlantic provinces didn't. I asked if clients request that findings be reported in this way.

"First, we do not take any instruction from any client in the release of our information publicly," Mills said, in his reply. "Secondly, we are and always have been a non-partisan company and do not do any work for any political party."

I pointed out that the undecided are a critical factor, and if their percentage is large, that diminishes any party's popularity and raises another set of questions. For example, in New Brunswick, the real story may well be the number of undecideds - at 38 percent - rather than how the rest of the pie is divided.

I asked: Why not report the results in a uniform way with the raw data, and let the provinces draw their own conclusions - and draft their own releases - with that? My concern is that media will often report these numbers without saying "decided" in their coverage.

"While there may be a high percentage of undecideds, especially in between elections, most of these individuals vote along the lines of those with a stated voting preference," Mills said. "This conclusion has been demonstrated time and again over more than twenty years of tracking our results with the outcomes of both provincial and federal elections in this region.

"We have accurately predicted not only the outcomes of each election in the past twenty years, but also the popular vote in each case within the stated margin of error," Mills continued. "That was true again in the last election in Newfoundland and Labrador. As a consequence, we stand behind our reported results and our unblemished record in this regard. We are perhaps the only industry that risks our reputation each and every time we send out a release. I can tell you that we take that responsibility very seriously and do everything in our means to ensure that our research is conducted to the highest standard possible."

My main concern about extrapolating stated intentions, and ignoring the undecided? It's an assumption. And I don't like assumptions. I like facts. A wide slice of undecided voters is a dynamic thing; their opinions are malleable and intentions subject to change. (Of course, when party popularity is extremely high, as it is now, it's much easier to make these assumptions. It's when fortunes wane that I become more concerned.)

Say, for example, the government in power is extremely popular, but then they start doing boneheaded things, and a lot of their support, let's say 50 per cent, becomes undecided. I would be uncomfortable extrapolating anything from the thin slice of stated voter intentions. At some point, the undecided becomes the story.

There is a wide body of belief that many people make voting decisions based on what they read in the polls. So we may have a 'chicken and egg' thing here. By discounting the undecided factor, is CRA "accurately predicting" the outcome or in some ways influencing it?

If you discount and remove the undecided vote, it inflates the front-runner's perceived "popularity" and could become a self-fulfilling prophecy. That is, would as many fence-sitters vote for the Williams Government at 56 per cent as at 72 per cent? I really don't think so.

And what happens when the undecided portion is unusually high? What if it goes as high as 60 per cent? Are we really to believe that the remaining 40 per cent can be reliably used to interpret voter intentions? You might say yes. CRA might say yes. Call me old-fashioned, but I would say no.
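
To make that concrete, here is the same arithmetic run on a purely hypothetical 60-per-cent-undecided poll. The parties and figures are invented for illustration and don't come from any CRA release:

    # Hypothetical example: 60% of respondents are undecided.
    # Among all respondents, the front-runner has 28% support, the others 8% and 4%.
    undecided = 60
    overall = {"Party A": 28, "Party B": 8, "Party C": 4}  # % of all respondents

    decided_total = 100 - undecided  # only 40% of respondents stated a preference

    # Reporting "decided only" rescales each number by 100 / decided_total.
    decided_only = {party: round(pct * 100 / decided_total) for party, pct in overall.items()}

    print(decided_only)  # {'Party A': 70, 'Party B': 20, 'Party C': 10}
    # A party backed by 28% of everyone polled gets reported at 70% "popularity".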

Facts are fragile things, and should not be diluted or manipulated. Give me real numbers, please. Nothing more, nothing less.

I have no objection to CRA offering its interpretation of the numbers, but couldn't they do so further down in the release, after reporting the straight poll results? I could certainly live with that.

This story was first brought to public attention by blogger Ed Hollett back in September 2008.

The question now is this: will media continue to repeat verbatim CRA's version of poll results, in which the undecided do not exist, or will they break out their calculators, do the math (it takes just two minutes), and tell us the actual numbers?

I will be watching to see what happens. And if they don't do it, I will.

Comments

  • Ed
    July 27, 2010 - 14:53

    Geoff:

You quote Don Mills as saying, "We [CRA] have accurately predicted not only the outcomes of each election in the past twenty years, but also the popular vote in each case within the stated margin of error."

    The last CRA poll before the 2007 general election was released on 05 September based on data collected in August.

    The margin of error was given as +/- 3.5%, 19 times out of 20.

    In the party choice question, CRA reported that 76% of decided respondents indicated they would vote Progressive Conservative.

    The actual result - expressed as popular vote - was slightly less than 70%.

    Unless I am missing something, that is more than the margin of error. In fact it is pretty close to double the margin of error.

Oddly enough, the reported results for the two opposition parties were not off by as much; for those parties, the actual election results were quite close to the August CRA numbers.

    A blind person could have predicted who would win the election. It's another thing to predict the percentage of popular vote.