A Sense of False Certainty
We’re in a new era of quantitative research that’s replacing our political imaginations
Starting with the Obama campaign’s “technological revolution,” better quantitative research methods have promised new certainty and objectivity in political analysis. But the rise of this rationalist approach to politics has lulled social scientists and political practitioners into a false sense of certainty and objectivity: subjectivity and ideology shape these quantitative methods too.
The desire for concrete, causal answers has reduced the complexity of social systems to simulacra of the real world. In many ways this is good – measurement is important. It has honed our focus on policy outcomes rather than intentions, and the rise of polling has given Democrats a better sense of their constituencies.
Unfortunately, things in politics are less certain and more confusing than rationalists think. It is for this reason, a lesson we keep learning over and over, that we should expand the methods we use to interpret the world, embrace uncertainty, and unleash our political imaginations.
False Certainty
As the election rolls nearer, we’ve seen several House, Senate, and gubernatorial election prediction models present new forecasts for the cycle. Models like these make assumptions about the most likely outcomes and the relationships between variables, and they depend on (a) good measurement of the variables being studied and (b) the researcher’s ability to construct a good model[1]. Setting aside issues with measurement (these are well known; just look at polling accuracy in 2020), it’s important that we recognize the decisions analysts make and their impact on findings.
When building models, the data scientists doing the work choose which variables to use, how they relate, and what to predict. Some, like Nate Silver, say they’re looking for “signals in the noise,” but the delineation of what counts as noise is ultimately arbitrary and subjective – it relies on the researcher’s assumptions.
One way to see the arbitrariness at play is to observe that humans assess a model’s performance, particularly when it bucks expectations. Nobody releases a model they don’t find personally believable. But when a model deviates from what the researcher expects, they will usually determine that it’s the model that has missed something. This is really important – the assessment of a model’s accuracy rests on the human’s preconceptions about politics[2]. The researcher uses their judgment to decide which questions to investigate, how to conduct the analysis, and whether or not a causal diagnosis is reasonable.
To illustrate this point, one need look no further than the questionable modeling assumptions made by the FiveThirtyEight modeling team in 2020. Or in 2016, when pollsters weren’t accounting for education levels in their polling because they didn’t think it mattered (it’s easy to forget that weighting decisions come down to someone deciding which targets to use). Outside of election forecasting, Nate Cohn recently published a piece breaking down the failed predictions of “The Emerging Democratic Majority.”
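To make that concrete, here is a minimal sketch in Python with invented numbers (not any pollster’s actual data or method) showing how the same raw sample produces a different topline depending on which weighting targets the analyst chooses:

```python
# Toy illustration of how the choice of weighting targets changes a poll's topline.
# All figures are invented for demonstration purposes only.

# Raw sample: each group's share of respondents and its support for candidate A.
sample = {
    "college":     (0.60, 0.58),  # (share_of_sample, support_for_A)
    "non_college": (0.40, 0.44),
}

def weighted_topline(sample, targets):
    """Reweight each group to its target share and return overall support for A."""
    total = 0.0
    for group, (share, support) in sample.items():
        weight = targets[group] / share  # scale the group up or down to hit its target
        total += share * weight * support
    return total

# Analyst A doesn't weight on education (targets simply mirror the raw sample).
targets_no_education = {"college": 0.60, "non_college": 0.40}
# Analyst B weights to an electorate assumed to be 40% college-educated.
targets_with_education = {"college": 0.40, "non_college": 0.60}

print(weighted_topline(sample, targets_no_education))    # ~0.52: candidate A ahead
print(weighted_topline(sample, targets_with_education))  # ~0.50: a dead heat
```

Neither topline is the data “speaking for itself”; someone had to decide what the electorate should be assumed to look like.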
Not only is empiricism inherently laced with the ideology of the person conducting it, but politics and social order are prone, as we know from history, to dramatic and, at the time, unpredictable schisms and reorderings. That’s not to say that low-probability outlier events render empirical analysis moot. But it does mean we should have some humility when trying to rationalize systems made up of irrational people.
History will always feel inevitable in hindsight, so training models on past events makes them seem like foregone conclusions to the analyst. But voters in 2016 or 1980 would probably beg to differ.
Polls are Constitutive (and Arbitrary)
The arbitrariness and the influence of ideology aren’t limited to modeling. There’s a misconception that polling presents a neutral “snapshot” or temperature check of a population, which makes polling out to be much less subjective than it is. Polling doesn’t just quantify democracy; it can form it!
As Jamelle Bouie points out in a discussion with journalist and author Elliott Morris, we’ve turned a blind eye to the power that polling has to constitute communities. He makes the point that the act of counting a community helps “create and bring that community into being” and that modern political polling takes this a step further by positing that (a) there is a “general will,” and that (b) it can be measured.
Surveys are part of a process the philosopher Louis Althusser calls “interpellation,” a system by which the governed are labeled and, as a consequence, begin to label themselves. The census illustrates this very neatly – our understanding of the world (e.g. how many white people there are) is downstream of how the census chooses to define whiteness and how it measures it. These decisions occur in polling too – ideology guides pollsters as they wield the power to choose which questions to ask, how to ask them, and how to present their findings to the public.
A quintessential question, one that remains unanswered, is how pollsters and social scientists choose to represent and measure “swing,” “cross-pressured,” or “moderate” voters. How you define the measure (e.g. using policy questions or presidential vote choice) can have dramatic effects on how many moderate voters you think there are.
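As a purely hypothetical sketch of that sensitivity (invented respondents and invented coding rules, not any real survey), the same handful of answers can produce three “moderates” or just one depending on which definition the analyst applies:

```python
# Hypothetical respondents: policy answers on a 1-5 left-right scale,
# plus the party they voted for in the last two presidential elections.
respondents = [
    ([2, 4, 1, 5], ("D", "R")),  # mixed policy views AND a party switcher
    ([3, 3, 3, 3], ("D", "D")),  # consistently centrist answers, loyal D voter
    ([1, 1, 2, 1], ("D", "D")),  # consistent liberal, loyal D voter
    ([5, 4, 5, 5], ("R", "R")),  # consistent conservative, loyal R voter
    ([1, 5, 2, 4], ("R", "R")),  # cross-pressured on policy, loyal R voter
]

def moderate_by_policy(positions, votes):
    """'Moderate' = average policy score lands near the middle of the scale."""
    avg = sum(positions) / len(positions)
    return 2.5 <= avg <= 3.5

def moderate_by_vote(positions, votes):
    """'Moderate' = switched parties between the last two elections."""
    return votes[0] != votes[1]

print(sum(moderate_by_policy(p, v) for p, v in respondents))  # 3 "moderates"
print(sum(moderate_by_vote(p, v) for p, v in respondents))    # 1 "moderate"
```

Scaled up to a real electorate, that definitional gap is the difference between strategizing around a large persuadable middle and a small one.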
Polls have taken on an outsized presence in politics that used to be occupied primarily by constituency representatives: advocates who lobbied on behalf of labor, black Americans, women, businesses, and generally any group that had a political interest. In doing so, we’ve transferred that constitutive power away from the people who made up civic organizations and outsourced it to “experts” who professionally manage advocacy groups and design questions and survey methods to reach the desired population.
With exceptions, this process is reductive and fraught. A poll of 1,000 Hispanic Americans would almost certainly produce a different “general will” than a demonstration of 1,000 Hispanic Americans outside of the Capitol or a town hall meeting of 1,000 Hispanic Americans. Which one is more valid? We’ve been fooled into thinking we have a more complete view of voters’ demands, while the ways of measuring the complexity and nuance of their preferences have been lost.
Embrace the Uncertainty
We’ve been tricked into thinking confirmation bias is always a bad thing. Here’s the thing: confirmation bias in politics works because an expert’s internal expectations and models of politics are generally very good. These conceptions of politics are informed by years of experience, human interaction, news consumption, and having been proven right or wrong over time.
Even as we’ve increased our reliance on high-quality quantitative methods, they haven’t really decreased uncertainty. In fact, the effect may have been more to reveal contradictions in opinion than to surface true preferences. We still rely on the intuitions and guidance of political experts, and prudent practitioners continue to measure the body politic with multiple methods.
We now face pressure to replace softer ways of quantifying the world with code, but instead we should embrace both the “hard” modes of data collection and analysis and the “soft” ones like qualitative research and community advocacy[3]. Mixing those, and adding in a party that advocates for a proactive vision, is a recipe for long-term success.
Not only has our understanding of politics become, in some ways, more limited, but so has our political imagination. Democrats need to expand their democratic toolbelt and embrace increased uncertainty. It can be politically risky to take action without a clear sense of consequences, and it is hard to advocate for positions that may seem politically unpopular, but progressives should exchange their desire for causality in explaining outcomes for credit for causing them.
More Democracy, More from Democrats
The left is still figuring this out, but it’s clear that we need more democracy. We need to rebuild some social fabric and associations that can organize a shared sense of reality and offer some nonpartisan cues about what’s important. Politicians need to hear more from constituencies!
Just as importantly, we also need politicians who use their (a) intuitions to interpret these complex signals and (b) charisma to convince people to change their minds. Whether this comes from movement leaders, politicians, or talk-show hosts, Democrats need to break free from the chains of our current political imagination.
We’ve inherited our political position from a series of Faustian bargains made by our political forefathers, and our current strategy seems to be more of the same: short-term optimization, pandering to swing voters with no long-term vision or persuasive effort. Stop clinging to certainty, and be imaginative!
1. There’s a bit of hubris to all of this, reducing complex social systems to simple models. Particularly irksome is when the same forecasters complain that consumers of the analysis read too much certainty into their predictions.
2. We don’t even always agree on what happens after elections occur – prominent organizations have put out evidence of electoral trends that clash with existing data.
3. This dialectic taps into a core challenge of democracy itself – we have lost many of the ways that people once organized and transmitted their desires. In parallel with the rise of polling and quantitative analyses of politics, civic associations have broken down, tipping the dominoes on advocacy organizations and voter faith in civic institutions.