When the NY Times or Gallup reports that Obama or Romney has a lead in the polls, how do they know this? Typically, they call a random sample of people and extrapolate from that sample, using established statistical methods, to make generalizations about the population. But some groups won't respond, especially young adults, who often have cellphones and screen their calls. Many people I know are like this.
Polling guru Nate Silver has written about this issue extensively, and Pew has researched it as well. Cellphone users tend to be younger and more liberal. Pollsters are used to correcting for such selective non-response (e.g., men non-respond more than women) by weighting their answers. However, this critically relies on having a variable that you can use to do the weighting. If cellphone users differ only on demographic dimensions, weighting should work; but if they differ on other dimensions, such as Big 5 personality traits or values, then pollsters will be unable to weight their data appropriately.
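To make the weighting idea concrete, here is a minimal sketch of how a pollster might reweight by gender when men non-respond more than women. All the numbers are made up for illustration; real pollsters weight on many variables at once.

```python
# Post-stratification weighting sketch: all numbers are hypothetical.

# Known population shares for a demographic variable (e.g. gender).
population_share = {"men": 0.49, "women": 0.51}

# Shares observed among the poll's actual respondents:
# men non-responded more, so they are underrepresented.
sample_share = {"men": 0.40, "women": 0.60}

# Each group's weight is its population share divided by its sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical candidate support by group, as measured in the poll.
support = {"men": 0.45, "women": 0.55}

# Weighted estimate: upweight men, downweight women.
weighted_support = sum(
    support[g] * sample_share[g] * weights[g] for g in population_share
)
print(weights)                     # men upweighted (~1.23), women downweighted (0.85)
print(round(weighted_support, 3))  # 0.501
```

The catch the paragraph above points to: this only works if you can observe the variable you weight on. If cellphone users differ on something a pollster never measures, like valuing stimulation, there is no column to compute these weights from.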
Do cellphone users differ from landline users on psychological dimensions? The answer is fairly common sense, as this is an issue we all have lots of anecdotal data on. Of course they do. The chart below compares cellphone users to landline users based on visitors to yourmorals.org who answered a question about their phone usage, with traits related to landline use at the top and traits predicting cellphone use at the bottom.
Cellphone users value stimulation, achievement, and hedonism more. They value tradition, conformity, and security less. They are less conscientious, more liberal (especially on social issues), and are younger. Some of these variables are things that pollsters can address by weighting their results (e.g. youth and liberalism), but other variables are things that pollsters do not measure and therefore cannot directly weight for.
Since some of these things vary by ideology, gender, and age as well, we can statistically control for these factors and see if we get fewer significant predictors of cellphone usage. Valuing stimulation and achievement remain significant predictors, with valuing tradition and being socially conservative as marginally significant predictors. Other psychological variables, such as conscientiousness and valuing hedonism, are accounted for by controlling for factors that pollsters likely can weight for. As such, perhaps these psychological variables are less problematic. It is worth noting that valuing stimulation remains by far the best predictor of cellphone usage (after age) in regression analyses controlling for demographic variables (beta = .13, p<.001).
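The kind of analysis described above, regressing cellphone use on a psychological variable while controlling for demographics, can be sketched as follows. The data here is simulated, not the yourmorals.org dataset, and the variable names and effect sizes are purely illustrative.

```python
# Sketch of a regression predicting cellphone use from valuing stimulation
# while controlling for age. Simulated data, not the yourmorals.org sample.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

age = rng.normal(40.0, 12.0, n)         # control variable
stimulation = rng.normal(0.0, 1.0, n)   # psychological predictor of interest

# Simulate an outcome driven by youth and by valuing stimulation,
# loosely echoing the beta = .13 reported in the post.
cell_use = -0.03 * age + 0.13 * stimulation + rng.normal(0.0, 1.0, n)

# Design matrix: intercept, age (control), stimulation (predictor).
X = np.column_stack([np.ones(n), age, stimulation])
beta, *_ = np.linalg.lstsq(X, cell_use, rcond=None)

# beta[2] estimates the effect of valuing stimulation after controlling
# for age; with this setup it should land near the simulated 0.13.
print(beta)
```

The logic is the same whether you use raw least squares, as here, or a stats package that also reports p-values: the coefficient on the psychological variable tells you what remains after the demographic controls have soaked up their share of the variance.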
The yourmorals.org sample is not a representative sample, but I think that might actually be better in this case. Measuring characteristics of cellphone users, who I would assume tend to screen calls and be generally less responsive to surveys, is probably better done by non-phone means, so that your measurement interacts less with what you are measuring. The educated, internet-savvy users who tend to answer yourmorals surveys are exactly the kind of people you might want to examine, yet be unlikely to reach by phone. Further, we aren't interested in whether the overall population shows differences between cellphone users and landline users; that could simply be a function of youth (the biggest predictor here). Rather, we are interested in whether people who have the exact same demographic characteristics and vary only in their cellphone usage differ in meaningful ways, as it is this variance that would confound pollsters. Using a particular non-representative sample can actually be better for answering questions about the relationship between variables, as certain differences are naturally controlled for when the whole sample is generally internet savvy, educated, and white. But certainly these findings (like all social science) need to be replicated by others in other datasets before we can have more confidence in them.
The take-home message? First, as noted by Pew and Nate Silver, polls will have to include cellphone samples in order to avoid bias that likely skews against liberal candidates. Second, if my intuition is correct that heavy cellphone users are unlikely to respond regardless, then even pollsters who do poll cellphones may have to start thinking about weighting for non-traditional variables that serve as proxies for the psychological variables that predict non-response. Silver suggests "urban/rural status, technology usage, or perhaps even media consumption habits". Third, the psychological profile of cellphone users (seeking novelty, being socially liberal, and not valuing tradition) suggests that polls might exhibit more bias on social issues such as gay marriage, and on other issues that could reasonably be said to correlate with novelty seeking. These effects aren't big, but in a world where a few percentage points is big news, they are worth considering when digesting poll results.
- Ravi Iyer