    A Quantitative Basis for Measuring Media Impact at the Media Consortium

    by Jo Ellen Green Kaiser
    September 26, 2016
    Examples of stories published during a 10-month experiment by The Media Consortium and researchers at Harvard University.

    How journalism is presented and distributed may be changing radically, but the goals of journalism have not changed: journalists aim to inform and educate the public, in the hope that these stories will move people to create a better, more just society. When we look to define the impact of journalism, we should focus on that end goal: how has our reporting influenced the public? Can we draw a bright line from our reporting to its influence on the public to the public’s efforts to create a better society?

    In the pre-digital era there was almost no way to draw that line from reporting to influence to action. In fact, it was almost impossible to determine if a story was seen, heard or read, let alone whether it influenced the public. The only clear impact came when a story caused an immediate action  —  a story on corruption led to the corrupt official being sacked; a story on a polluted river led to a quick cleanup effort.

    Those kinds of “throw the bums out” stories represent a great tradition of muckraking journalism, and of course they are still being produced today. However, most news stories — and sometimes the most important news stories — chip away at large, complex issues that don’t have a clear, obvious or quick fix. Within the Media Consortium’s network of independent news outlets, journalists take on topics like immigration reform, the charter school debate, climate change and racial justice. Stories on these kinds of issues usually won’t lead to change that can be easily tallied.

    In short, for the stories that do the hardest work of educating and informing the public over months and even years, it has, up to now, been impossible to measure their incremental impact. The Media Consortium has been working with a team of researchers and 36 participating news outlets since 2013 to change that, and the early results are encouraging. The data from this research will be published in an academic paper later this year, but MediaShift is able to report now on how the experiment was devised.

    A New Way to Measure Impact: Sentiment Analysis

    Today it’s possible to envision a new way of measuring impact. Web analytics give us insight not only into who is consuming our content, but also where and how they consume it. For outlets willing to make the ask, we can now trace the line from reporting to action as long as the action is simple and immediate, like signing a petition, making a phone call, or donating to a cause.

    These online tools, however, still don’t get at the core of the journalism experience — the way a great piece of journalism can change how an individual thinks and feels about a complex topic. None of our existing analytics tools measure impact — they don’t tell us how our reporting has influenced the public to create a better society.

    Such a tool can be built, however. Over the past three years, with the support and guidance of Voqal, the Media Consortium has worked with Professor Gary King, the Albert J. Weatherhead III University Professor and director of the Institute for Quantitative Social Science at Harvard University, to develop a tool to measure journalism impact. King, also the founder of the company Crimson Hexagon, is widely recognized as a leader in the field of sentiment analysis. Sentiment analysis is a form of text mining that, in this case, uses social media posts to determine how individuals feel about a given topic.

    Sentiment analysis allows us to ask a different type of question than any other tool currently available. For example, if an outlet publishes a story demonstrating the effect of climate change on a coastal town, sentiment analysis lets us ask not only whether the story has changed readers’ attitudes about climate change, but also whether it has shifted the broader public conversation about the issue.

    King’s method relies on Twitter as the source of sentiment data. That’s because Twitter allows academic researchers like King to access the entire Twitter feed (known colloquially as the firehose), over 400 million real-time tweets a day. Rather than looking simply at hashtags or keywords, King’s team uses a combination of machine and human learning to analyze the content of tweets. Using this method, King and his team can analyze every tweet produced before and after a story comes out, looking to see whether sentiment on climate (to continue the example) changes in a statistically meaningful way.
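
    King’s classifiers and tests are far more sophisticated than anything that fits in a few lines of code, but the underlying before-and-after comparison is easy to sketch. The Python snippet below is illustrative only: it substitutes a crude keyword lookup for the study’s machine-and-human classification, uses invented tweet samples, and applies a simple two-proportion z-test to ask whether the share of “supportive” tweets looks different before and after a story’s publication.

```python
import math

# Illustrative only: a keyword lookup stands in for the study's far more
# sophisticated machine-and-human classification of tweet content.
SUPPORTIVE = {"hopeful", "solutions", "act", "urgent"}
DISMISSIVE = {"hoax", "alarmist", "overblown"}

def is_supportive(tweet: str) -> bool:
    """Crude proxy: does the tweet lean toward concern/support on climate?"""
    words = set(tweet.lower().split())
    return len(words & SUPPORTIVE) > len(words & DISMISSIVE)

def two_proportion_z(hits_before, n_before, hits_after, n_after):
    """Two-proportion z statistic: is the post-publication share of
    supportive tweets different from the pre-publication share?"""
    p_before, p_after = hits_before / n_before, hits_after / n_after
    pooled = (hits_before + hits_after) / (n_before + n_after)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
    return (p_after - p_before) / se

# Hypothetical tweet samples from windows before and after co-publication.
before = ["climate change is a hoax",
          "urgent action needed now",
          "more overblown alarmist coverage"]
after = ["hopeful about community solutions",
         "time to act on climate",
         "still think this is overblown"]

hits_b = sum(is_supportive(t) for t in before)
hits_a = sum(is_supportive(t) for t in after)
z = two_proportion_z(hits_b, len(before), hits_a, len(after))
print(f"supportive before: {hits_b}/{len(before)}, "
      f"after: {hits_a}/{len(after)}, z = {z:.2f}")
```

    In the real experiment, the “before” and “after” windows would be drawn from the full firehose around each publication date rather than from a handful of hand-picked tweets.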

    Putting Theory into Practice at Media Consortium Outlets

    Twitter is a very “noisy” medium: any number of factors can produce dramatic swings in the volume and subject matter of conversation. The question we faced at the beginning of our experiment was how to measure changes in sentiment created by Media Consortium outlets. Outlets like In These Times, Yes! Magazine and Truthout have significant reach, with audiences in the hundreds of thousands, but that total is still small compared with the 400 million tweets a day in the firehose. How would researchers be able to detect any effect created by these journalism outlets?

    Our first decision was to magnify the effect by measuring the impact of collaborations rather than the impact of single stories. Using the theory of collective impact developed by John Kania and Mark Kramer, we hypothesized that if small outlets co-published and co-promoted a story, their impact would be greater than the sum of their user numbers would suggest. So instead of measuring sentiment change caused by a story published only by one outlet, we would measure sentiment change caused by the co-publication of a story by at least three outlets.

    The second decision we made was driven by research requirements. Detecting changes in sentiment with the statistical confidence required by quantitative social science takes a great deal of repetition. That meant we needed to create 35-40 discrete experiments (what the researchers termed “interventions”) to be confident that any sentiment changes we observed would be statistically significant.
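
    As a rough illustration of why dozens of repetitions are needed, the simulation below estimates the chance of detecting a small average sentiment shift as the number of interventions grows. The effect size and noise level are assumptions invented for the sketch, not figures from the study.

```python
import random
import statistics

# Illustrative only: the effect size and noise level below are assumptions
# for the sketch, not figures from the actual study.
random.seed(0)

def simulate_power(n_interventions, effect=0.5, noise=1.0,
                   trials=2000, z_crit=1.96):
    """Estimate the chance that a simple test detects a small average
    sentiment shift, given this many repeated interventions."""
    detected = 0
    for _ in range(trials):
        shifts = [random.gauss(effect, noise) for _ in range(n_interventions)]
        mean = statistics.mean(shifts)
        se = statistics.stdev(shifts) / n_interventions ** 0.5
        if se > 0 and mean / se > z_crit:
            detected += 1
    return detected / trials

for n in (5, 15, 35):
    print(f"{n} interventions -> estimated power {simulate_power(n):.2f}")
```

    With these invented numbers, five interventions detect the shift only about one time in five, while roughly 35 detect it more than 80 percent of the time, which is the intuition behind the 35-40 figure.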

    Third, to ensure that we were measuring the effect of our collaborations and not some other effect, the researchers insisted that we randomize our experiment. But how do you randomize journalism? You can’t assign stories to randomly selected outlets, nor can you randomly dictate what outlets cover. After months of talks with King and his team, Ariel White and Benjamin Schneer, we came up with a solution: we randomized the date of publication.

    Not all news stories are breaking news. Many are the result of several weeks of work. News stories produced through collaboration, in particular, have to be planned weeks in advance. We took advantage of that to randomize the week of publication. Once outlets agreed to collaborate on a story, they chose a two-week period in which they would be willing to publish — for example, the week of November 7 or November 14. The researchers then flipped a coin and told the outlets which week to run the story.
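
    The randomization step itself is simple enough to show in a few lines. The sketch below is hypothetical (the collaboration names and weeks are invented), but it captures the protocol: each collaboration offers two acceptable weeks, and a recorded random draw, the coin flip, picks one.

```python
import random

# Hypothetical collaborations and candidate weeks; the names and dates
# are invented for the sketch.
random.seed(2015)  # a fixed seed keeps the draw reproducible

collaborations = {
    "climate-coastal-towns": ("week of Nov 7", "week of Nov 14"),
    "immigration-detention": ("week of Nov 14", "week of Nov 21"),
}

# The coin flip: each collaboration is randomly assigned one of its
# two acceptable publication weeks.
assignments = {name: random.choice(weeks)
               for name, weeks in collaborations.items()}

for name, week in assignments.items():
    print(f"{name}: publish during the {week}")
```

    Fixing the seed in the sketch keeps the assignment reproducible, so the draw can be audited after the fact.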

    Finally, we had to limit the scope of the stories. To measure sentiment change, researchers had to set up a baseline of expected responses. Setting such a baseline takes months of work. So in 2013 we had to choose five categories of stories that would be relevant when the experiment actually launched in 2015. We chose five recurring topics at Media Consortium news outlets: reproductive justice, immigration, fracking, education and economic justice. In 2015, we made two adjustments, expanding “fracking” to “climate change” and adding racial justice.

    The Value of a Network for Impact Research

    Frankly, when we began talks with the researchers in 2012, what they required seemed impossible. Asking outlets to agree to co-publish stories on a narrow range of topics seemed hard enough. Asking them to agree to randomize the dates seemed almost impossible. And then we were supposed to do that 35 times?

    Yet we did it. Over 10 months, we organized 36 outlets to produce 35 co-publishing instances following a precise research protocol.

    What allowed us to achieve this goal was the strength of the Media Consortium network. Beginning in 2012, the researchers attended our annual conferences to explain the aim of the research and to invite member participation. We surveyed members in 2013 to find the least intrusive means of randomizing the experiment. We ran some tests in 2014 with a small group of members to determine where the pressure points for the project would lie. Most of all, we made sure that the outlets involved would see an immediate benefit from the experiment.

    For example, outlets told us that they would require a financial incentive to participate; they worried that the work of coordinating on topics and timing would take time away from their other activities. When we launched the actual experiment in 2015, we gave outlets small grants of $250 to $500 each time they participated.

    Our first experiments in 2014 also revealed that we needed a project coordinator. While outlets sometimes suggested topics and partners for the experiment, most of the time our coordinator had to suggest topics and partners to the outlets. Outlets also needed an outside coordinator to remind them of publishing dates and protocols, and to ensure that all the outlets involved in a particular instance co-published and co-promoted on the same day. We were fortunate to hire Manolia Charlotin in mid-2015 to coordinate the project.

    A financial incentive and a paid coordinator were essential to the success of the project. The intangible that made the project really work, however, was the willingness on the part of Media Consortium members to work with each other, with the organization and with the researchers to make this project happen. This project would not have been possible without a trust-based network in place.

    Outcomes of the Impact Research

    The primary goal of this experiment was to discover whether sentiment analysis would allow us to see a change in attitudes toward a particular topic as a result of a news story. Early data are promising, but researchers are still crunching the numbers. One of the lessons we learned about rigorous academic research is that the actual experiment is the shortest part of the process — the planning (which took us three years) and the data-crunching (which will take about a year) are more time-consuming than the experiment itself.

    However, we can share several important outcomes for impact research and journalism as a whole.

    1. An Experimental Infrastructure

    We now have built infrastructure at the Media Consortium for future research. To quote Prof. King: “We have learned about the incentives and capabilities and willingness of the different outlets. We know what we can ask of them, know how to get good evaluations without getting in the way of their normal business operations, and when necessary, how to motivate them. These are among the hardest parts of any experiment, and we now have all this infrastructure in place.” The Media Consortium network today is a perfect petri dish for further experiments on impact.

    2. A More Highly Developed Culture of Collaboration

    At the beginning of the project, outlets asked about funding first, then followed through on the protocol to fulfill contractual obligations based on that funding. By the end of the experiment, outlets were proposing potential collaborations and partners, with funding considerations secondary (though still important).

    A number of mid-size outlets, in particular, found that the collaborations brought them content and audiences on a wider range of topics than they would otherwise have had access to. Specialty Studios, for example, found that partnering with Making Contact radio helped them reach NPR stations, adding radio to their TV market.

    The collaborations also helped outlets learn from each other. For example, Yes! Magazine’s James Trimarco told us that the project “put us in closer touch with [digital-only outlet] Truthout at a moment when we were still focused primarily on print. The collaboration expanded our ability to think strategically in web terms.” Perhaps surprisingly, outlets in the same space also found they engaged their own users better when they worked together. Some of the most successful partnerships were between Bitch magazine, feministing.com and the Ms. blog, all of which focus on gender and sexuality and reach a primarily female audience.

    Rigorous Measurement Is Possible with Funding

    Creating a rigorous study on impact tools is neither easy nor inexpensive. We could not have even thought of engaging in this work without the support of Voqal, which provided research funding to the Harvard researchers and assisted the Media Consortium in locating funding for our program coordinator and grants to our outlets.

    The slow, steady pace of research is also difficult to maintain in the journalism sector, where individuals and organizations are used to quick decisions and fast outcomes, and the digital news landscape is constantly changing. Even when we get the final data from the researchers, our five-year project was only the first step. If we prove sentiment analysis works, and if we find that medium-size outlets have a measurable impact, we will still be a long way from developing an impact measurement tool. That will take more research — and more money.

    What we have proven, conclusively, is that a strong network of outlets can undertake this kind of rigorous research when there are concrete benefits during the project and a goal that will benefit outlets at the project’s end.

    We also have shown — to the outlets themselves as well as the larger sector — that creating partnerships can be its own reward. As a result of this project, many more Media Consortium outlets are in long-term collaborations, finding value in each other’s editorial skill sets and opportunities in expanding their reach across each other’s native audiences. For the Media Consortium, the next frontier is to dig deeper into these kinds of long-term collaborations, even as we continue to work with researchers on the next stage of our impact project.

    We look forward to sharing the final results of the research as soon as they are available.

    Jo Ellen Green Kaiser is the executive director of The Media Consortium.
