
How Much is Investigative Journalism Worth?

Photo from Unsplash

What is the impact of investigative reporting? This question has been asked, and sometimes answered, with increasing frequency over the past three years by commercial media, non-profit media, media funders and audiences alike. Now in a new book, “Democracy’s Detectives: The Economics of Investigative Journalism,” Stanford Professor James T. Hamilton applies economic theory to show that if we think about impact as return on investment, then investigative reporting — as expensive as it is — will generate social returns that far outweigh its initial costs.

Recently I had the opportunity to talk with Prof. Hamilton, and here is what he had to say.

Q&A

In “Democracy’s Detectives,” investigative reporting is important insofar as it holds institutions and those in power accountable. What do you think are the implications of focusing on institutions and power-holders (elected officials, employees of government agencies, corporate employees or heads, etc.) with respect to the agency of everyday denizens of the U.S.? Or, put another way, because political institutions — including voting — are exclusionary in nature, do you see a risk in focusing only on institutional impacts of investigative reporting?

Investigative reporters focus on the operation of institutions for several reasons. The gap between the professed policies that a government agency or company is aiming for and the actual implementation of those policies sets up a problem for journalists to explore. In addition, institutions operate as a network of delegated decision-making relationships. Voters delegate policy choices to Congress members, who provide discretion to agencies to regulate, who in turn may give companies options to meet particular standards or achieve given outcomes. Along the way, those with delegated power may make choices different from those envisioned by the people who gave them that power. That is because people with delegated power can take actions that are hard to see. This is where investigative reporters come in — they can explore how delegated decision-making power is exercised in an organization. Reporters in for-profit outlets like to focus on whether people in institutions are pursuing their stated goals and whether power is being exercised as envisioned, because this means the journalists are not seen as pursuing their own worldview as the starting point. They take the policies and the delegation of responsibilities as given, and then ask whether that is truly how the world is working. Reporters also like to focus on institutions for another reason: if their stories change an institution’s policies and procedures, that impact can be magnified across many people. This is the difference between a story that spots a bad doctor and a deeper (and likely more costly) story that spots a poorly performing hospital system.

In describing how investigative reporting is defined in “The Investigative Reporter’s Handbook,” the group Investigative Reporters and Editors (IRE) notes that “journalists have expanded the definition to include analyzing and revealing the breakdown of social or justice systems and documenting the consequences.” This includes coverage of topics that are hiding in plain sight, such as what working conditions are like for women who clean buildings at night (the focus of the series “Rape on the Night Shift,” produced by a collaboration among Frontline, the Investigative Reporting Program at UC Berkeley, KQED, the Center for Investigative Reporting, and Univision). I find in “Democracy’s Detectives” that radio, often the province of non-profits, offers more coverage of topics such as discrimination and injustice than other types of outlets do. Stories about injustice are also more likely than other topics to generate hearings. Seeing the impacts of stories that do not directly lead to collective action is harder, but in the long run, as you’ve pointed out in your research, investigative work can lead to conversations that change beliefs, norms and expectations.

You mention that you find there to be a lack of young investigative reporters, especially among those applying for and winning national reporting awards. Do you see a relationship between this and not only the downsizing of the journalism profession across the country, but also the characteristics of the awards themselves and the types of media outlets, reporting and/or platforms that they privilege?

One way I tried to measure changes over time in accountability journalism was to examine award citations for investigative work in four prize competitions: Goldsmith, Pulitzer, Selden Ring, and Worth Bingham. I found several hopeful patterns. The work cited in investigative reporting competitions now involves more journalists per project, a recognition that these series are more likely to be team efforts that draw on multiple talents (e.g., interviewing, data analysis, video production, writing). More recent awards are also likely to involve partnerships across media outlets, a cooperation born in part of the need to share costs and the recognition that cooperation can allow outlets to reach wider audiences. Some patterns were unsettling. The top five media outlets accounted for 30 percent of major investigative reporting awards and citations in the 1990s, a figure that increased to 47 percent in the 2000s. I think the concentration of awards in part reflects that it is harder for many local newspapers to undertake major investigative projects as their revenues and staffing decline. In an era when prospective journalists face uncertain outcomes and little time for training in the workplace, the average age of reporters winning Pulitzers for investigative work has increased by nearly a decade since the 1980s. That raises the question of where the next generation of reporters doing accountability work will get training and experience.

Journalism awards are important in part because they help the careers of reporters. They translate into recognition, raises and job mobility. Compared to finalists, those who win a Pulitzer end up publishing more books. In a world where it is hard for news outlets to be fully rewarded for the changes that arise from their public affairs reporting, the individual incentives for reporters created by the awards system help reduce the market failures that, at the company level, discourage the production of expensive investigative work. Beat reporters will often work nights and weekends on their own time in order to discover and tell important investigative stories. As the nature of journalism has changed, additional awards have been added to reflect new ways of delivering work (especially on the internet). The Online News Association awards honor a greater variety of work than is reflected in the Pulitzers, and the breakdown of ONA awards by size and topic allows innovative contributions (often by younger journalists) to be recognized. The Global Editors Network Data Journalism Awards, IRE’s Philip Meyer Award for computer-assisted reporting (CAR) or journalism that uses the methods of social science, and the RFK Journalism Awards focused on issues of “poverty, political inclusion, and justice” are all examples of how awards have evolved to recognize types of reporting that may not be frequently highlighted in traditional competitions.

In this book, you find that the frequency with which reporters are called as experts in congressional hearings has been on the decline, a potential detriment to democracy. But the U.S. non-profit sector has been growing consistently throughout the 20th and 21st centuries. For example, the Urban Institute Center on Nonprofits and Philanthropy found 8.2 percent growth in the number of non-profits from 2002 through 2012. So, just to play devil’s advocate, could it be that journalism is not less important to policy makers, but rather that the “chain of causation in policy making” that you identify has shifted over time?

Journalists were more likely to turn up as witnesses at U.S. congressional hearings in the 1970s than in the 2000s. In part this may reflect a decline in the development and sharing of expertise by reporters, another sign that it is difficult today for a journalist to gain the human capital and experience that would lead them to be called by a congressional committee as an expert. You are right that non-profits are partially filling the void left by reporters. NGOs once did studies that led reporters on a particular beat, like the environment or foreign policy, to write about their work. With hard-news reporting ranks reduced, these NGOs took the natural step of producing their own reporting. Human Rights Watch has observers on the ground who write about life in many countries where human rights are threatened. The Sierra Club produces a magazine that offers reporting on topics like climate change. Early reporting on the water quality crisis in Flint, Mich., was spurred by the efforts of Curt Guyette, an investigative reporter employed by the ACLU of Michigan.

Another trend in D.C. is that as the public affairs journalism supported by local newspapers has ebbed, reporting aimed at those with a business or lobbying interest in public policy has expanded. In “Democracy’s Detectives,” I found at a set of 14 federal agencies and departments that while FOIA requests from local newspapers dropped by about 50 percent between 2005 and 2010, FOIA requests increased by 42 percent from other media such as AP, Bloomberg and niche outlets aimed at those who follow the details of policymaking for a living in their roles as lobbyists, company officials or government employees. The Pew Research Center similarly found that the U.S. Senate Press Gallery now has more reporters from niche outlets than from daily newspapers. These trends matter because reporters focused on the business of government may generate stories about contracts and regulatory tweaks rather than stories about inspections and policy implementation at the local level.

Screenshot of the CIR site

When talking about the impact of media, I often get questions about causality: how can we know that an investigative project results in an individual, societal or political change? How do you avoid spurious causal claims in the three impact case studies you dive into?

The set of problems that end up as the target of public policies is not random. Markets break down in situations such as public goods, positive and negative spillovers from private actions, and information asymmetries. The very difficulties of measuring, quantifying and contracting around particular goods such as environmental protection, social justice, public safety and education mean that these areas end up involving government institutions. Investigative reporting about how these government institutions break down will thus involve scrutinizing areas that were hard to measure and quantify to begin with. Making claims about how reporting on institutions later changed lives and laws is made more difficult by competing models of how individuals, markets and governments operate.

Despite these difficulties in measuring and making causal claims, I do not believe that the perfect should be the enemy of the good in the field of media impacts. I think it makes sense to try to determine how the world changed because of the new information and ideas generated by investigative work. The key is to be transparent about the assumptions you are making in the analysis, open about alternative explanations, and cautious in what you claim.

In “Democracy’s Detectives,” I develop three case studies of the impact of investigative work. For the KCBS series “Behind the Kitchen Door,” a key question is what impact the changes in the health code generated by the reporting had on actual human health outcomes in L.A. County. I chose to examine this case in part because the economists Ginger Zhe Jin and Phillip Leslie had written a highly cited article in a top journal (the Quarterly Journal of Economics) entitled “The Effect of Information on Product Quality: Evidence from Restaurant Hygiene Grade Cards.” They carefully try to control for confounding events in their analysis. I take a lower bound from their estimates and use that to estimate that the KCBS reporting caused restaurant hygiene to improve enough that there were 54 fewer hospitalizations for food-related illnesses in L.A. County.
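To make the lower-bound logic concrete, here is a minimal sketch in Python. Every input below is a hypothetical placeholder rather than a figure from Jin and Leslie’s study or from the book; only the practice of attributing the smallest defensible effect to the reporting is being illustrated.

```python
# Illustrative sketch of a lower-bound impact estimate.
# All inputs are hypothetical placeholders, NOT figures from
# Jin and Leslie's study or from "Democracy's Detectives".

baseline_hospitalizations = 1_000      # annual food-illness hospitalizations (hypothetical)
effect_estimates = [0.05, 0.12, 0.20]  # range of estimated reductions (hypothetical)

# Taking the lower bound of the published range keeps the causal
# claim conservative: attribute only the smallest defensible
# effect to the reporting.
lower_bound = min(effect_estimates)
averted = baseline_hospitalizations * lower_bound
print(f"Conservative estimate: {averted:.0f} hospitalizations averted")
```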

For the analysis of how the world changed after the Washington Post “Deadly Force” series on police shootings, I relied in part on the acknowledgment by the D.C. police department that its policies on training and use of force changed because of the Post series. I noted how training changed in the wake of the Post stories on police shootings in D.C., and ascribed the drop in fatal shootings to the policy changes (a link also made by the D.C. chief of police). For the case study of how murders by probationers dropped after the policy reforms brought about by the News & Observer series “Losing Track: North Carolina’s Crippled Probation System,” I similarly show my analysis and assumptions, the key one being that the drop in the share of murders committed by people out on probation, from 15 percent before the policy changes to 12 percent after, was a result of the paper’s investigative series.
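A back-of-the-envelope version of the probation arithmetic might look like the sketch below. The 15 and 12 percent shares come from the case study as described above; the total murder count is a hypothetical placeholder, and attributing the entire drop to the series is, as stated, an assumption.

```python
# Hypothetical sketch of the probation impact arithmetic.
# The 15% and 12% shares are from the case study; total_murders
# is a placeholder, and attributing the full drop to the series
# is the stated assumption, not an established causal fact.

total_murders_per_year = 500   # hypothetical statewide count
share_before = 0.15            # share committed by probationers before reforms
share_after = 0.12             # share after reforms

murders_averted = total_murders_per_year * (share_before - share_after)
print(f"Implied murders averted per year: {murders_averted:.0f}")
```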

Media outlets interested in charting their impact could run the following experiment: Keep track of your potential story ideas, particularly the ones that you would like to pursue but lack the budget to develop. Wait several years. Then go back and compare the results from the set of investigations you did undertake and the set you almost undertook but just barely said no to because of funding. If these two sets of stories are initially similar, and the budget constraint is the reason you did not pursue the roads not taken, then comparing the outcomes is a rough measure of your impact. I once wrote to a non-profit media outlet that was just forming that this type of test would be possible if the organization kept track of the stories it almost but did not quite pursue. But the editor wrote back that the exigencies of hiring and starting up would make it unlikely that this list would be created.
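In code, the comparison reduces to a difference in mean outcomes between the pursued and the shelved stories. The sketch below is hypothetical in every particular; the single numeric “impact” score stands in for whatever outcome measures (policy changes, audits triggered, resignations) a newsroom chose to track.

```python
# Sketch of the "roads not taken" comparison. All records and
# impact scores are hypothetical; a real newsroom would define
# its own outcome measure (policy changes, audits, resignations).

pursued = [{"story": "A", "impact": 3}, {"story": "B", "impact": 0}]
shelved = [{"story": "C", "impact": 1}, {"story": "D", "impact": 0}]  # tracked, never run

def mean_impact(stories):
    return sum(s["impact"] for s in stories) / len(stories)

# If the two sets looked similar ex ante and budget alone decided
# which stories ran, this difference is a rough estimate of the
# newsroom's marginal impact.
print(mean_impact(pursued) - mean_impact(shelved))
```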

Screenshot from the ICIJ site.

You emphasize the point that investigative reporting is expensive, and that the current funding streams are insufficient, uncertain, and problematic. In addition to reducing the cost of investigative journalism through computer-assisted reporting, algorithms and the like, what do you see as opportunities for ongoing funding and support for this type of work?

I do believe that lowering the costs of discovering stories through better use of data and algorithms is one way to support investigative reporting. So many people at universities are now trying to turn unstructured information into structured data for analysis that I believe the tools they develop can be repurposed and reconfigured for story discovery in journalism. A prime example: the International Consortium of Investigative Journalists used the software Linkurious to visualize the connections among financial records in the Panama Papers, making it possible to spot connections among offshore entities. That software was based on open-source code developed by digital humanities researchers to visualize the connections among letter writers in the Age of Enlightenment. Looking for investigative stories is akin to drilling for oil, and I believe advances in algorithms and data availability will allow reporters to spot patterns leading to potential stories more readily.
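The underlying technique, modeling entities and their relationships as a graph and then querying its structure, can be sketched with the open-source networkx library. The entities and links below are invented for illustration; they are not drawn from the Panama Papers data, and networkx here is a stand-in for Linkurious, not the software ICIJ used.

```python
# Minimal sketch of graph-based story discovery using networkx.
# All entities and links are invented for illustration.
import networkx as nx

G = nx.Graph()
# Nodes are entities extracted from documents; an edge records that
# two entities appear in the same record (shared address, officer, etc.).
G.add_edge("Offshore Co. A", "Intermediary X", source="incorporation record")
G.add_edge("Offshore Co. B", "Intermediary X", source="incorporation record")
G.add_edge("Intermediary X", "Public Official Y", source="shared address")

# A reporter can now query the structure: which intermediaries connect
# the most entities, and what paths link two names of interest?
hubs = sorted(G.degree, key=lambda pair: pair[1], reverse=True)
print(hubs[0])                                        # ('Intermediary X', 3)
print(nx.shortest_path(G, "Offshore Co. A", "Public Official Y"))
```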

Three other opportunities for increasing funding for investigative work are personalization, philanthropy experiments and policy priorities. First, imagine a site that knew a great deal about your information life — what you’ve read, what you’re interested in, what you know, how you like to experience storytelling (e.g., documents? videos? human interest tales?). That site could offer you news that is more engaging and directly of interest, a form of product differentiation that might make you willing to pay for its work. Some preliminary work along these lines is the MetroMaps project developed by Dafna Shahaf and Jure Leskovec, which takes a collection of stories and represents them in a “London subway map” format that allows you to jump into a story’s chronology depending on what you already know. Second, advances in behavioral economics have generated field experiments into what types of frames and options motivate the individual decision to donate to charity. I do not think, though, that sufficient work has been done to experiment with what motivates individuals to donate to non-profit media, particularly local non-profit media focused on public affairs. Finally, I think that adding journalism as a field to support in federal R&D competitions focused on algorithms, data and tech would make it easier for the overlap of computation and journalism to advance. Right now people in agencies are reluctant to support work that would help journalists, since any aid to journalism from the government is controversial in some quarters. But support for R&D for journalism tools could be content neutral (i.e., not meant to favor one topic over another) and platform agnostic (i.e., not meant to favor one medium at the expense of another).

The idea of converting social impact to a monetary value is interesting, and the benefit for assessing ROI on projects is obvious. I wonder, though, if there are risks in reducing complex social processes to dollar amounts. For example, in the case of “Rape in the Fields,” would it be ethical to apply this type of analysis to generate a dollar value for the rapes and sexual harassment averted because of the investigation and the resulting social mobilization?

Economists are often said to know the “price of everything and the value of nothing,” in part because of their drive to quantify so many aspects of life. I do believe in the value of trying to estimate how the world changes when stories change policies and lives, and that may include an attempt to put dollar values on things such as the value of a statistical life saved because of an intervention generated by reporting. I think benefit-cost analysis is helpful when people make their assumptions and methods clear, because that allows you to debate whether the calculations make sense. The analyses also help you understand the tradeoffs public officials are making in policies, which can allow you to hold officials accountable. Putting a dollar value on a statistical life saved is controversial, but I think the values used are defensible because they derive from decisions people make in their daily lives. As workers, people accept higher wages in exchange for taking on a slightly higher probability of death at a risky job. Many people are also willing to pay extra for safer cars and safer homes. Economists use the data from these market decisions to derive how society values the prevention of an additional death through regulation.
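The wage-risk logic lends itself to a standard textbook-style calculation. The numbers below are illustrative placeholders, not figures from the book: if workers accept a $500 annual wage premium for an added 1-in-10,000 annual chance of death, the implied value of a statistical life is $5 million.

```python
# Worked example of the value-of-a-statistical-life (VSL) logic.
# The wage premium and risk increment are illustrative placeholders.

wage_premium = 500            # extra dollars per year workers accept
risk_increment = 1 / 10_000   # for this added annual chance of death

# If 10,000 workers each take the riskier job, one additional death
# is statistically expected, and together they were paid
# 10,000 * $500 = $5 million to accept that risk. The ratio is the
# implied VSL.
vsl = wage_premium / risk_increment
print(f"Implied value of a statistical life: ${vsl:,.0f}")
```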

Putting a dollar value on an outcome is less successful if we don’t have decisions measured in dollars that we can examine and that relate to the matter under study. In analyzing the impact of the “Rape in the Fields” project, for many reasons I do not think you should try to put a price on what society gains from averting a rape. In his California Law Review article “The Limits of Quantification” (December 2014), Cass Sunstein discusses how a federal agency analyzing a proposed regulation to reduce rape in prisons did try to do a benefit-cost study, which involved asking people in surveys (called contingent valuation surveys) how much they believed society would be willing to pay to avert rape. This framing of the question is something people do not encounter in their daily lives, and he criticizes (I believe correctly) this approach for many reasons. He suggests, though, that an analyst could estimate the range of reductions in rape that might come about because of a policy change and the range of costs of the potential regulatory change, and from those derive an estimate of the expenditure involved in averting a rape. This would allow an analyst to compare different policy options in rape prevention and identify, for a given regulatory budget, which option could avert the greatest number of sexual assaults. Or the analyst could simply say that all of these categories are too hard to estimate, and point out the categories of benefits to society that may arise from a change in public policy.
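Sunstein’s alternative amounts to a cost-effectiveness ranking rather than a monetization of harm: estimate each policy option’s cost per assault averted and compare. The option names, costs and effect estimates in the sketch below are entirely hypothetical.

```python
# Sketch of the cost-effectiveness comparison described above.
# All option names, costs and effect estimates are hypothetical.

options = [
    {"name": "Option 1", "cost": 2_000_000, "assaults_averted": 80},
    {"name": "Option 2", "cost": 3_500_000, "assaults_averted": 100},
]

for opt in options:
    opt["cost_per_averted"] = opt["cost"] / opt["assaults_averted"]

# For a fixed regulatory budget, the option with the lowest cost
# per averted assault prevents the most harm per dollar spent.
best = min(options, key=lambda o: o["cost_per_averted"])
print(best["name"], f"(${best['cost_per_averted']:,.0f} per assault averted)")
```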

Measuring the impacts of reporting is more than an intellectual puzzle. Done well and transparently, analyses of impacts can help reporters, policymakers and even funders make sense of how they are spending their resources, with an eye toward making people better off than they would be if journalists had not generated the information. Benefit-cost analysis does carry an implicit moral framework, namely utilitarianism. It also has flaws, such as the fact that the market prices used in the analyses derive from people’s willingness to pay under today’s income distribution. Ultimately, though, I believe that benefit-cost analysis can sometimes raise the benefits to society that arise from a public policy, a goal that is both laudable and difficult to achieve.

 

Lindsay Green-Barber is CIR’s director of strategic research. She can be reached at lgreenbarber@cironline.org.

Lindsay Green-Barber is the director of strategic research at The Center for Investigative Reporting. She works to identify, assess and rigorously test areas of programmatic work where CIR can have catalytic impact through its content distribution and engagement. She leads research and analysis and serves as an expert both internally and for external partnerships. Previously, Green-Barber was an American Council of Learned Societies public fellow and served as media impact analyst at CIR. She earned a Ph.D. in political science from the City University of New York Graduate Center. Her doctoral research, conducted from 2011 through 2013 in Ecuador, focused on indigenous organizations’ use of new information and communications technologies for social mobilization. She also taught political science courses at Hunter College.
