
    Is There a Master Metric for Evaluating Public Media?

    by Jessica Clark
    February 16, 2010
    The dynamics of networked media are changing the ways media makers assess their impact.

    This article was co-authored by Katie Donnelly

Over the past few months, we’ve been presenting MediaShift readers with a picture of a more dynamic, engaged public media future. But how are Public Media 2.0 projects measuring their success in informing and engaging publics?

“Few station executives can quote quantitative measures of either goals or achievements related to their digital offerings.” - Embracing Digital report

    Embracing Digital: A Review of Public Media Efforts Across the United States, a report by Gupta Consulting, gives us an idea of the scope of the challenge:


    Very few stations define success with concrete metrics. Most examples are anecdotal. (“I just have a sense.”) What they consider to be “successful” is very subjective. Those that do have an idea of what success means to them include metrics such as page views, unique users, and calls into station when online offerings fail to work.

    This lack of clarity is compounded by the fact that Public Media 2.0 projects cross a range of communities of media production practice, from journalism to documentary film to user-generated content to videogames. The various projects that now engage publics around issues are marked by distinct goals and assumptions, sometimes based on the capacities of the platform, and sometimes on the intentions of the makers.

This used to be less convoluted. In the broadcast era, impact was measured most often in terms of audience size, with companies such as Nielsen, Arbitron and the Audit Bureau of Circulations serving as impartial audience counters for different platforms. Now, in the era of converged media, there are many more moments of media consumption to count, and fewer clear comparisons.

While auditors exist for websites and blogs — for example, Alexa, Quantcast, and Technorati — the tools for tracking audiences online are still a bit blunt. Mark Glaser wrote at length about the problems with web measurement on MediaShift in 2007, noting that “the complexity of web measurement and new technologies makes it nearly impossible for a one-size-fits-all measurement to become the currency of online ratings.” Platforms like Twitter and increased mobile and app-based media consumption have only further muddied the waters.


    What’s more, given social media’s capacities for fostering participation, both public and commercial producers have to track a variety of new indicators, including engagement, online influence, and the inclusion of a range of users in media creation. The ecosystem of communication has shifted significantly. Media makes its way to users not just via selected screenings or broadcasts at fixed times, but through overlapping networks of individuals and institutions, as depicted in the visuals that Jessica developed with her co-author, Tracy Van Slyke, for their recent book, Beyond the Echo Chamber.

    For public media makers, commercial measurements designed to track purchases or ad impressions aren’t always enough to assess whether a media project contributes meaningfully to public life. Instead, assessing such projects’ success depends first upon identifying the core stakeholders and intended outcomes, and using both qualitative and quantitative approaches to measure against related goals.

    Is it even possible to create a master metric? What are the differences and similarities in evaluating different kinds of projects? Let’s take a look at four communities of practice within public media to see how they are tackling this thorny issue.

    Communities of Practice

    Traditional public broadcasting
    Public broadcasters are still casting about for standards. The Embracing Digital report found that “few station executives can quote quantitative measures of either goals or achievements related to their digital offerings,” not to mention having a sense of how to measure whether they’re meeting a public mission. In fact, some stations were not even able to provide a rationale for creating particular digital offerings. Clearly, accurately measuring impact is difficult — if not impossible — if producers are not able to identify their own motivations.

    Within public broadcasting, there is wide variation in the ability of stations to implement evaluation and analyze metrics. The stations in the Embracing Digital report used varying methods, including both quantitative (subscription numbers) and qualitative (user feedback). Stations were excited about the implementation of PBS’s new Comprehensive Online Video Ecosystem (COVE), which will allow them to better track digital downloads. The report concluded that “there is a real need for stations to understand metrics, what data to collect and how to use information to their advantage.”

    While some stations may still be catching up, there is help available at the national level for public broadcasting. The CPB-funded National Center for Media Engagement (NCME) has developed a framework for strengthening impact through effective community engagement. This framework includes a spectrum of community engagement that lays out internal and external engagement indicators.

Internal indicators include station culture, station strategy, station commitment and board endorsements; external indicators include community partnerships, interaction, results, and public perception. There is also a diagnostic tool to measure current community engagement levels and a “PlanIt! Tool” that helps users identify inputs, activities, outputs, changes in informal systems and networks, and benefits for people. Using these two tools together underscores the importance of incorporating evaluation at the beginning of the production process.

    Social issue documentaries
    Documentary filmmakers are often more explicit about the social and political goals of their work, which opens the door for more creative, outcomes-driven impact assessment. The Fledgling Fund, a foundation that supports social-issue media projects, developed a framework for assessing impact for creative projects with a mission. Noting that “it can be surprisingly difficult to make a firm connection between the power of a film or other media and social change,” the Fund created Assessing Creative Media’s Social Impact.

This toolkit lays out five dimensions of impact:

    • Quality film/media project
    • Increased public awareness
    • Increased public engagement
    • Stronger social movement
    • Social change

    The guide includes sample measurements for each of these categories. For example, “increased public awareness” can be measured by audience size, press coverage, and audience diversity. “Increased public engagement” can be measured by website hits, participation in online discussions, and commentaries and response letters. “Stronger social movement” can be measured by the number of advocacy organizations making use of the documentary, screenings with influentials and decision-makers, and the film’s longevity. For more sample measures, see Figure 3, “Sample Measures for Dimensions of Impact,” in the report.
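To make the mapping concrete, here is a minimal sketch in Python of how a filmmaker might tally such measures by dimension. The dimension names come from the toolkit; the specific measures and numbers are hypothetical examples, not data from the report:

    # A rough scorecard keyed by the Fledgling Fund's dimensions of impact.
    # All counts below are invented placeholders for illustration.
    sample_measures = {
        "Increased public awareness": {
            "audience_size": 12000,          # viewers across screenings/broadcasts
            "press_mentions": 14,            # articles, reviews, interviews
        },
        "Increased public engagement": {
            "website_hits": 45000,
            "online_discussion_posts": 320,
            "response_letters": 18,
        },
        "Stronger social movement": {
            "advocacy_orgs_using_film": 6,   # groups screening or citing the film
            "influential_screenings": 3,     # showings for decision-makers
        },
    }

    for dimension, measures in sample_measures.items():
        print(dimension)
        for name, value in measures.items():
            print(f"  {name}: {value}")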

    Mission-driven gaming
    Games for Change published an evaluation toolkit that includes video commentary from experts in the field about how to assess the social impact of videogames. The gaming community has some advantages when it comes to measuring results. For example, assessment vehicles can be directly embedded into games and data can be gathered with relative ease. Additionally, because gamers often return to games again and again, assessment tools like play diaries can help determine individual impacts, particularly in terms of knowledge acquisition or attitude shifts.

    Game producers are currently asking the same qualitative questions as social issue documentarians: Did users engage in dialogue? Did users’ attitudes change? Did they take action? Did a movement develop? Did a policy change?

    However, this toolkit also urges game producers to consider expectations, external pressures, and economic realities, noting that producers can face difficulty in getting funders to continue their support without providing them with traditional quantitative metrics, such as how many times the game was downloaded. Interestingly, the accompanying guide for this toolkit encourages game producers to occasionally “Just say no,” and let games stand as art (“art resists metrics”) by avoiding evaluation altogether.

    Youth media
Many youth and community media makers are focused more explicitly on empowering users with production tools than on creating and disseminating content. This changes their impact assessment routines significantly. In the Youth Media Evaluation Toolkit [PDF file], a guide prepared for the Open Society Institute and the Surdna Foundation by Social Policy Research Associates in 2005, it’s clear that youth media evaluation differs from other communities of practice in that young producers need to be evaluated as much as or even more than their productions. The report states:

    Research has found that when youth authentically conceive, develop, and produce news articles, radio commentaries, etc., youth media can foster important individual level outcomes, such as youth voice, critical thinking, research, literacy, writing, media skills and broader youth and career development outcomes.

    In fact, research on youth producers has typically been better supported and funded than research on the impact of their productions. However, both can be measured simultaneously by directly involving youth in the evaluation process. This helps assert their position as stakeholders as well as equalize the youth/adult power dynamic.

    This toolkit recommends measuring youth media productions by breaking potential outcomes into a series of categories, including:

    • Affective/Feel: How has the youth media production affected how people feel about the issues presented?
    • Cognitive/Think: How has the youth media production increased awareness and knowledge about the issues presented?
    • Behavioral/Act: Has the production led individuals or groups to take action?

[Figure: Youth Media Outcomes Map]

Similar to the Games for Change toolkit, this guide reminds practitioners that they must always consider the organization’s mission, the current political and social climates, and the available resources (or lack thereof). And like game producers, youth media organizations often do not have the capacity or funding to conduct the types of formal evaluations that funders typically require. In fact, like some traditional public broadcasting stations, many youth media organizations struggle just to document the number of viewers.

    Shared Questions

    While there are some differences among them, these communities of practice have many shared questions when it comes to impact: What are the audience numbers? Have users’ thoughts or actions changed? Has a movement developed? Has policy shifted?

    The most pressing question that these communities all share, however, is: What is the best way to measure social impact?

These questions are not unique to public media projects; foundation-funded non-profits and socially responsible investors have been asking them for several years. For example, Social Impact Assessment: A Discussion Among Grantmakers, a 2003 report from The Rockefeller Foundation and the Goldman Sachs Foundation, notes:

    The field has yet to establish a common understanding of “social impact” — what it is or how to measure it. Currently, measures of impact vary from funder to funder, and organization to organization. The more sophisticated measurement tools integrate organizational and process metrics with quantifiable outcome data, but in the absence of a common measure (like shareholder value) investors and grantmakers are making it up as they go along.

    So for now, it seems there is no master metric for Public Media 2.0. Both qualitative and quantitative tools are needed, and mission needs to play a guiding role in determining what constitutes success for each project. However, public media funders, makers and outlets can still work together to develop more precise and standardized tools and approaches for measuring the elements of impact that they all care about, and that relate back to their social goals.

    We’ve identified five elements to explore, which we address in greater depth over at the Center for Social Media’s Public Media 2.0 Showcase. Each of these elements represents a measurable category of activity that helps media projects convene publics around issues:

    • Reach: How many people encounter the project across various screens and streams: TV, radio, streaming audio, blogs and websites, Twitter, iTunes, mobile applications, and more?
    • Relevance: Is the media project topical within the larger news cycle? Is it designed to stay relevant over several news cycles? Is it particularly relevant to targeted publics concerned with a specific issue, location, or event?
• Inclusion: Does the project address a diverse range of targeted audiences, not just in terms of race, but in terms of gender, age, class, geographical location and beliefs? How open is the architecture for participation, collaboration and discussion?
• Engagement: Does the project move users to action: to subscribe to a site, contribute material, write a letter in response, pass on a link, donate time and money, sign a petition, or contact a leader?
• Influence: Does the project challenge or set the frame on important issues? Does it target “influentials”? Is it “spreadable” or buzzworthy?

Reach is the most familiar category, but, as we noted above, some public media projects are struggling even to measure this. Many public media projects, across practice communities, are starting to measure engagement through outcomes such as time spent on sites, site subscriptions, and user feedback. The Fledgling Fund recommends that social issue documentarians measure relevance through longevity, while social media metrics tools like Addict-o-matic can give some measure of how relevant online media projects are in the real-time web. Game producers, social issue documentarians, and youth media practitioners all aim to track influence — including assessing whether media projects target civic leaders and help to spawn or support viable social movements.
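For readers who think in code, here is a minimal sketch of what a scorecard spanning the five elements might look like, again in Python. The field names, project, and figures are invented; the point is only that each element pairs a quantitative reading with the qualitative story behind it, since neither alone is sufficient:

    from dataclasses import dataclass, field

    @dataclass
    class ElementReading:
        """One element of impact: a number plus the qualitative story behind it."""
        quantitative: float
        note: str = ""

    @dataclass
    class ImpactScorecard:
        project: str
        readings: dict = field(default_factory=dict)

        def record(self, element, quantitative, note=""):
            self.readings[element] = ElementReading(quantitative, note)

        def report(self):
            for element, r in self.readings.items():
                print(f"{element}: {r.quantitative} ({r.note})")

    # Hypothetical usage with the article's five elements:
    card = ImpactScorecard("Example Documentary Site")
    card.record("reach", 52000, "visits across site, podcast, and mobile feeds")
    card.record("relevance", 9, "news cycles in which the topic stayed live")
    card.record("inclusion", 4, "distinct communities contributing material")
    card.record("engagement", 1200, "petition signatures plus letters sent")
    card.record("influence", 3, "advocacy groups screening the project")
    card.report()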

    Over the next few months, we will be holding a series of “impact summits” around the country to explore best practices in assessment for Public Media 2.0 projects. Stay tuned for insights from the funders, makers and media leaders we’ll be talking to.

    Jessica Clark is the director of the Center for Social Media’s Future of Public Media project, and the co-author of Beyond the Echo Chamber: Reshaping Politics Through Networked Progressive Media.

    Katie Donnelly is the associate research director at American University’s Center for Social Media, where she blogs about the future of public media. When she’s not researching media, Katie spends her time working in the environmental field and blogging about food.

Tagged: engagement, measurement, public broadcasting, public media 2.0, videogames, youth media

6 responses to “Is There a Master Metric for Evaluating Public Media?”

    1. This is fascinating and I appreciate all the legwork you did to pull this information together. I am struck, however, by the intimidating level of intricacy required. As public media organizations struggle to stay afloat, it seems like they need a full-time impact analyst for every producer on staff! In some ways my favorite metric is still the simple but revealing, “Would you recommend this to a friend?” People recommend things they’re passionate about, and that they believe in. If people believe in/are passionate about public media content – doesn’t that mean we’re doing our jobs? (This is probably too simplistic but it’s so tempting…)

    2. Thanks so much for posting this great article.

As a member of the Youth/Community Media community of practice, I found the article very helpful to our organization, Cambridge Community Television in Cambridge, MA, in evaluating how we measure success.

      We’ve found surveys to be a simple and successful approach that helps us measure the effectiveness of our community-based media training programs and their impact within the community.

      Surveys drive most of my work developing our hands-on media production, social media, and digital literacy training programs for youth, parents and seniors. Pre-surveys provide us with data to help us create more targeted workshops based on individual and organizational media needs. Post-surveys provide us with information that helps us improve our services to the community and develop future workshops.

      How does this relate to Public Media 2.0 content?

      This approach ultimately helps us to help our community create more culturally-relevant and more-informed public media that is more focused on our residents’ information and communication needs. We provide the tools, they create the media. Surveys provide us with a way to measure how we, and our community, are doing in creating more informed and engaged local communication spaces.

      I look forward to exploring the article’s suggestions more to help us elaborate on our existing methods and develop a more holistic approach to measuring our impact here in Cambridge.

      Thanks, again!

    3. Thanks for your comments! We’d love to hear stories from other readers about how you’re measuring impact.

    4. This is a really good summary article and one that frames the issue expertly. I wonder how it leads to a playbook approach for both traditional public media stations and new public media players that Amanda Hirsch seems to be implying…

It would seem that the Knight Foundation in its Informing Communities report has also started to think about digital media impact assessment, but I have some trouble with their scorecard approach. While the suggested approach in their report is more subtle, it is very easy to fall into a “you’re in” or “you’re out” mentality.

I think that some of the framing that you are bringing to this post is around trying to answer the “what changed?” question in a very big, very open media ecosystem. This is where this stuff gets murky, and until all of our actions are codified in structured data, we are going to have to embrace the qualitative, artful side of evaluation and impact measurement.

When I was at One Economy, we had a very simple formula that guided our actions, and I hold on to that clarity in my present job.

      1. Public Service Media is defined as using media to solve problems that are worth solving.

      2. Impact: Engagement + Information = Taking Action (personal, community, civic)

      Simplistic, but as a guide powerful.

    5. Thanks Rob–I like that formulation. It’s very similar to our thumbnail definition of “public media” at the Center for Social Media: i.e., “media for public knowledge and action.”

In working through these “elements of impact” categories, we’re aiming to develop a clear framework that producers, outlets and funders can run through to both strategize and evaluate the impact of their projects. The idea is to match what people already tend to measure with benchmarks related to a more mission-driven focus.

      But before we can confidently provide a “playbook” as you suggest, we want to make sure that we scan the field to capture effective approaches. That’s why we’re holding these impact summits, and based on the first one that we just held in Chicago, they’re going to provide some great insights.

    6. Interesting to read this discussion and then read the latest post on the blog for MQ2 (a CPB-funded production effort):

      http://mq2.org/node/524

      …which begins thusly:

      “Although few NPR listeners may recognize his name, Bill Siemering is one of the most important makers of public radio. He created NPR’s All Things Considered, launched Fresh Air with Terry Gross, and Soundprint, the documentary series. He’s a MacArthur “genius” who founded Developing Radio Partners, which supports community radio as a lifeline in countries around the world. Years ago, I heard him speak about his work in Mongolia. One of his inspiring quotes has become somewhat of a mantra in public radio circles:

‘The original meaning of broadcast is to scatter seeds. Some take root, some don’t. But it’s a wonderful nurturing kind of image for a community, to be scattering seeds, to be casting out ideas, information, the arts…and enabling people to be nurtured by them.’”

I think this is an apt metaphor. But how do we balance this idea of scattering seeds through media with the critical but seemingly impossible challenge of tracking every flower?

