
    What The Failure of Election Predictions Could Mean for Media Trust and Data Journalism

    by Ozan Kuru
    November 29, 2016

The election marathon finally ended, but the scandals and disputes we have seen will probably linger for a while in the public consciousness. One specific problem will be the public credibility of data journalism and trust in the media: the pre-election conspiracies regarding the polls could be reinforced by what many considered to be an across-the-board prediction failure.

While many polling post-mortems have debated the issues that led to this prediction failure, the implications for public perceptions still deserve attention, especially given the persistence of Donald Trump’s “rigged polls” claim even a month after the election. Moreover, some features of current data journalism could contribute to this public cynicism and mistrust of the media.

Before the election, we witnessed unprecedented levels of partisan framing over factual evidence on the performance of the candidates. In particular, claims of a rigged election, misinterpretations of standard methodological decisions such as oversampling, talk of a hidden Trump vote in “rigged” polls, and disputes over low-quality online polls about who won the presidential debates were all examples of partisan conflict over factual evidence and, more broadly, of post-truth and post-fact politics. Nearly all the evidence seemed to point to a Clinton win, yet some partisans discredited this data-driven evidence wholesale.


These partisan biases were not so surprising. Echoing other research on perceptions of numerical evidence, our recent studies provide some systematic insight into these perceptual processes. In large national survey-experiments on public perceptions of polls, we found that motivated partisans could use the methodological details of polls as fodder for discrediting them when their results were unfavorable, even when objective experts directly told them which polls had robust methodology and which were poorly conducted.

And yet, against all expectations and data showing otherwise, Donald Trump won the election. This was, in the eyes of many experts and pundits, an across-the-board prediction failure. And yes, the polls (and much of the other data) turned out to be somewhat “rigged,” though not through intentional manipulation but through flawed methodological assumptions.

    The Dangers of Diverse Data

Photo by Cory M. Grenier on Flickr and reused here with Creative Commons license.


Regardless of the reasons behind this prediction failure, the damage to public perceptions is already done, and it is considerable. In a context where polls took a direct hit from the winning candidate as being “rigged,” this polling failure will reinforce pre-election conspiracies about rigged polls and further erode trust in the media at large.

Moreover, some features of today’s data journalism, which relies on constant methodological critique, transparency, and analytical demonstrations, could reinforce this public cynicism and these motivated biases. For one, the data in today’s news reporting is much more diverse: it includes traditional polls, polling aggregations and averages, forecasting models, Google search-term analytics, automated social media analytics, and election prediction markets. These reports are also much more dynamic, real-time, user-interactive, visual, analytical, and immersive (for example, smartphone apps). Such diverse data is increasingly integrated into mainstream reporting, and readers who do not understand the distinctions among these sources may grow more cynical.

Second, these reports are far from conclusive or fully objective, especially in how they are perceived. There are often multiple competing statistics: poll results contradict one another, and any policy-relevant number from one source is fact-checked and debunked by others. Of course, not all numerical evidence is sound or based on rigorous methodology, and some is better than the rest. In brief, much of what circulates emerges as counter-evidence, fact-checking, or outright questioning of someone else’s statistical claims.

Relatedly, this competitive news environment fuels heavily opinionated reporting, with columns and op-eds in which these diverse data and results are critically evaluated. There are also frequent analytical demonstrations in outlets like Vox, Politico, and the NYT Upshot showing why different forecasting models produce different results, how different researchers and pollsters can reach completely different conclusions from the same raw data, how weighting can change poll results on the strength of even a single respondent, how the margin of error is in fact greater than reported, how changes in results can be a mirage, and how election maps lie to us. There is even discussion of open-source, reproducible data projects that allow members of the public to access and analyze the data themselves.
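To see why a single respondent can matter so much, consider a minimal sketch in Python. The sample size, the vote split, and the weight of 30 are illustrative assumptions, loosely inspired by the widely reported USC/LA Times panel case, not figures from any actual poll:

```python
# Hypothetical illustration: how one heavily weighted respondent can move
# a poll's topline. All numbers below are made up for the example.

def weighted_share(sample, candidate):
    """Return the weighted share of respondents backing `candidate`."""
    total = sum(weight for _, weight in sample)
    support = sum(weight for choice, weight in sample if choice == candidate)
    return support / total

# A notional poll of 1,000 respondents: 520 Clinton, 480 Trump, all weighted 1.
respondents = [("Clinton", 1.0)] * 520 + [("Trump", 1.0)] * 480
print(f"Unweighted Trump share: {weighted_share(respondents, 'Trump'):.1%}")  # 48.0%

# Suppose one Trump supporter belongs to a demographic group that is badly
# underrepresented in the sample, so the weighting scheme assigns him a
# weight of 30 (roughly the magnitude reported in the USC/LA Times case).
reweighted = respondents[:-1] + [("Trump", 30.0)]
print(f"Weighted Trump share:   {weighted_share(reweighted, 'Trump'):.1%}")  # ~49.5%

# One respondent shifts the topline by about 1.5 points and cuts the
# Clinton-Trump gap from 4 points to roughly 1 -- a swing comparable to the
# lead reported in many national polls.
```

The same intuition underlies the Upshot-style demonstrations mentioned above: small, reasonable-looking analytic choices can shift the headline number in ways most readers never see.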

    A Different Kind of Horse Race

Photo by Dan Howard and used under Creative Commons license

Previous research has shown that traditional horse-race coverage of elections, which focuses on election predictions, vote shares, and the candidates’ campaign strategies while crowding out policy coverage, can lead to a cynical public. Although later research has suggested that these effects might be limited, today’s data-driven coverage could foster an information environment that facilitates biases for some of the public, if not all of it.

What we are seeing today is that statistical information itself is being critiqued at unprecedented rates, not just by discrediting the sources reporting the data or offering alternative interpretations of the same results, but also by slowly walking news readers through alternative analyses that lead to conflicting or more nuanced conclusions.

These reports will succeed in informing and engaging news readers with sound data. But they could also confuse some readers. They might heighten a sense that all data are volatile, moldable, and disputable; that data can be analyzed, fact-checked, and presented in different ways; and that they are therefore open to different or more nuanced conclusions. The implications of this data-conscious public for belief in specific reports and for trust in the media at large will be substantial, and they should be studied carefully in the post-election period.

Ozan Kuru is a PhD candidate in Communication Studies at the University of Michigan. The views expressed here are his own. Ozan studies the communication of public opinion and its psychological underpinnings by looking at public perceptions of, and reactions to, coverage of public opinion. He has published in academic journals including Public Opinion Quarterly (forthcoming), Mobile Media and Communication, and Computers in Human Behavior, and contributed a chapter to Social Media and Politics: A New Way to Participate in the Political Process. He has also blogged for the Center for Political Studies at the University of Michigan and The Washington Post’s Monkey Cage. His ongoing research has been funded by the NSF Time-sharing Experiments for the Social Sciences project and received the Doris Graber Best Student Paper Award at the 2016 Midwest Association for Public Opinion Research Conference. The research cited here was conducted in collaboration with Professors Josh Pasek and Michael Traugott.

Tagged: data journalism, data-driven reporting, Donald Trump, election 2016, Hillary Clinton, polling

