This post is an adaptation of a longer research paper by Jacob L. Nelson and James G. Webster.
Audience ratings data have been of critical importance to marketers and media managers for almost a century. These measures of audience size and composition function as “currencies” that support the operation of commercial media. Today, different ratings can be derived from different sources of “big data.” For example, some have argued that social listening data, such as the brand mentions gathered from platforms like Facebook or Twitter, can be turned into engagement metrics that might supplant traditional measures of exposure, like pageviews.
This article reviews the role that data on exposure have played in audience ratings and argues these data are the most likely to support currencies going forward, although these need not be confined to size and composition. We analyzed online exposure data to explore the relationship between the most plausible contenders for audience currencies: size and engagement as measured by time spent. We found these metrics to be uncorrelated, which, as we will explain, suggests that each captures something different and, therefore, might have a role to play as a currency.
Where do audience currencies come from?
Marketers and media managers can use many metrics to assess media audiences. Traditionally, most measures have been based on representative samples of the population. Respondents have been measured with questionnaires, diaries, and meters. These techniques might assess people’s characteristics, their likes and dislikes, their ability to recall messages, and especially their behaviors, all of which are then projected to the larger population.
More recently, servers have harvested a wealth of data including pageviews, expressions of approval (e.g., “likes”), sharing behaviors, and assorted comments. While server-centric data (like the data collected by services like Google Analytics, Parse.ly and Chartbeat) usually fall short of a true census, they are certainly big. Metrics from any one of these sources can provide managers with new insights into audiences. But very few of these metrics rise to the level of currencies.
Currencies are a class of metrics that quantify the audience attributes of value to advertisers. As such, they constitute a common coin-of-exchange that all buyers and sellers of media can use to conduct business.
Currencies have a number of distinguishing characteristics:
- They capture attributes that are easily understood and relevant to buyers.
- They are created in a relatively transparent way that provides reliable estimates of those attributes. This is often achieved through the use of established scientific methods and auditing.
- They are cost effective to produce.
- They are agreed to by the affected parties, i.e., buyers, sellers, and currency providers.

Whether a metric becomes a currency depends on its quality as well as a mix of economic and institutional factors.
Exposure and Engagement
From the earliest days of broadcasting, exposure was the audience attribute used to create currencies. Archibald Crossley, the founder of the first audience ratings company in the United States, decided to measure “exposure” in his radio ratings analysis: who listens, for how long, and with what regularity. This is not to say that audience researchers ignored whether people liked the programs they heard. But for the purpose of buying and selling radio airtime or programs, a metric that captured the fact of tuning in to a program and the amount of time spent listening had the simplicity essential for bargaining in highly competitive environments. All competitors, though, had to agree on the measure being used.
Ever since, currencies have been based on measures of exposure. Over the years this has ranged from tuning behavior, to program choice, to page views, to “viewable impressions.” None is a perfect measure of human attention, but they offer relatively straightforward, easily captured stand-ins.
Though exposure-based currencies have traditionally privileged measures of audience size and composition, many media companies, and some marketers, would like to conduct business on the basis of how engaged audiences are. Historically, two problems have plagued audience engagement as a viable alternative currency. First, no one can agree on how to define it. The Advertising Research Foundation gathered industry experts to discuss the meaning of “engagement,” and they identified no fewer than 25 different definitions. Second, given tepid interest in many quarters of the industry, there appeared to be no cost-effective way to generate an engagement currency.
The latter problem seems to have been solved with the arrival of digital media metrics from companies like Google, with its analytics platform. These companies produce massive amounts of data, and such “big data” can be mined in two ways to produce engagement metrics.
First, they can support social listening analytics, which use “web scraping” to quantify online conversations about media. Evidence suggests that audiences who post comments about the television shows they are watching have better ad recall. Consequently, advertisers might find this attribute valuable. And niche television programmers with small audiences might see it as an appealing alternative to traditional measures of size. Unfortunately, different types of listening data produce different results and the firms using these data have yet to agree upon their value as engagement measures.
The second option is to use some measure of time spent with media. Many industry stakeholders believe this reflects engagement or loyalty. For example, Rentrak, a firm that uses set-top-box data to produce TV ratings, offers a “stickiness index” that measures duration of viewing.
But nowhere are measures of time spent more appealing than among online publishers who have lobbied for an alternative currency. Despite industry-wide uncertainty surrounding the definition of engagement, there is widespread agreement that it refers at least in part to content retention. Because many believe attention to be a prerequisite for retention, industry stakeholders have begun advocating for measures of attention as a way of quantifying engagement.
The term that has gained most traction among online publishers is “attention minutes.” These exploit existing measures of exposure to quantify behaviors that might better reflect engagement. The technologies that track program choices and page views can also measure the time that users spend looking at particular programs, pages, and stories. And since no new, specialized form of measurement is required, it is a relatively simple matter to report metrics on time spent.
But even if time spent becomes an agreed upon stand-in for engagement, there are political challenges to moving from one set of metrics to another. These metrics constitute what N. Anand and Richard A. Peterson call “market information regimes,” whereby media industry stakeholders “make sense of their actions and those of consumers, rivals, and suppliers that make up the field.”
Any change in the methods used to manufacture these regimes inevitably produces winners and losers, and those who are disadvantaged will resist the change. Such resistance has been observed in the television, music, and book industries. So the adoption of new metrics is very much about the political economy of measurement. And no players in an ad-supported system wield more influence than advertisers.
Advertisers might value a metric that better reflects the amount of time audiences spend on a site over general impressions. But how firms should use measures of engagement that are based on exposure remains unclear. For example, Telmar, a leading media planning software provider, recently announced a way to calculate audience reach using measures of time spent, while also noting the tool was not intended to serve as a currency.
Perhaps the most basic question is whether currencies based on audience size and time spent will produce different winners and losers. If they do, it will require a good deal of industry “politicking” to find consensus on which is the more valuable currency for conducting business. If they are highly correlated, however, the debate is much ado about nothing.
Popularity vs. Engagement: The Double Jeopardy Effect
It might seem odd to expect a positive correlation between the popularity of an online outlet (e.g., unique visitors to the site) and the engagement of its users (e.g., time spent on the site). But it is well established in cultural markets that popular offerings enjoy greater loyalty than unpopular offerings. This is referred to as the Law of Double Jeopardy. First documented by William McPhee in 1963, it has been observed by many others in a variety of media and brand contexts.
People are less aware of less popular brands, and the few who are aware of them are also aware of the more popular ones. This asymmetrical awareness makes it more likely that a popular brand will have a more loyal following. People who know about both popular and less popular restaurants will go to both, while those who know only the popular restaurant will go only to that one.
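The restaurant logic above can be reduced to a bit of arithmetic. The sketch below uses hypothetical awareness figures (the 90% and 30% are illustrative assumptions, not figures from the research) and assumes that everyone aware of the niche brand is also aware of the popular one, so the niche brand loses on both reach and loyalty:

```python
# Toy double-jeopardy arithmetic with hypothetical awareness figures.
# Assumption: everyone who knows the niche brand also knows the popular one.
aware_popular = 0.90  # 90% of consumers know the popular brand
aware_niche = 0.30    # only 30% also know the niche brand

# Consumers who know both brands split their visits evenly;
# consumers who know only one brand give it all of their visits.
only_popular = aware_popular - aware_niche  # know the popular brand alone

# Loyalty = average share of visits a brand gets from the people aware of it.
loyalty_popular = (only_popular * 1.0 + aware_niche * 0.5) / aware_popular
loyalty_niche = (aware_niche * 0.5) / aware_niche

print(f"popular: reach={aware_popular:.0%}, loyalty={loyalty_popular:.0%}")
print(f"niche:   reach={aware_niche:.0%}, loyalty={loyalty_niche:.0%}")
```

Under these assumptions the popular brand ends up with both the larger audience and the more loyal one, which is the double jeopardy pattern: the less popular brand is punished twice.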
Double jeopardy effects have long been observed in television viewing behavior. Studies of television program viewing have found that highly rated series retain audiences from episode to episode better than shows with lower ratings. This held true from the 1980s into the early 2000s, despite a tremendous increase in programming.
Double jeopardy effects also appear in other types of media. They have been observed in traditional news platforms like radio and print. In a 2008 study of DVD rentals in Australia, Elberse found the most popular titles were also the highest rated. Double jeopardy has also been observed in search engines, online retailers, and on magazine websites.
There is, however, reason to question whether double jeopardy plays out as often in an ever-expanding media market. A more recent study of U.S. television viewing habits found only a modest correlation between the size of a channel’s audience and the time its viewers spent watching. And what little correlation there was could be attributed to the “big three” television networks.
So any consideration of the viability of time spent metrics as a currency in online environments should begin with answering a fairly straightforward research question:
To what extent are measures of audience size and time spent correlated?
Since publishers of online news sites have been the most vocal in lobbying for attention minutes as a currency, we address that question by looking at the audiences for online news.
Method and Results
We performed a regression analysis of U.S. online news consumption habits to see if a relationship exists between the amount of time visitors spend on news sites and the number of unique visitors a news site receives. Our data came from comScore, a web analytic company that reports monthly estimates of web audiences. Such data have been used in scholarly analyses of online audience behavior.
comScore collects its data from a panel of about 1 million people ages two and older, who load comScore tracking software on their computers. That software tracks the URLs the user visits and the time spent looking at each address. comScore fuses these data with server-based counts of traffic that come from tagging websites. In September 2015, comScore recorded a total internet audience of about 230 million unique visitors. Of those, about 165 million visited online news sites. By any reasonable definition, this is big data. And, more generally, it is indicative of the kinds of data on exposure that servers can capture.
comScore uses its panel software and tags to determine the Internet use frequency of panelists, which we included as a variable in our analysis. We ultimately looked at the 861 news websites visited by panelists in September 2015.
In our analysis, we looked at these variables:
- The average minutes per visitor spent on a news site
- The number of individuals who loaded a news site at least once during the month (unique visitors)
- The news site’s language (English or non-English)
- Average minutes per page
- The visitor’s overall internet usage (heavy, medium or light)
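The core of such an analysis, checking whether audience size and time spent move together, can be sketched with a simple Pearson correlation. The data below are synthetic stand-ins, not the comScore panel: a lognormal (long-tail) distribution of unique visitors across 861 hypothetical sites, with time spent drawn independently of audience size. The distribution parameters and the `pearson` helper are illustrative assumptions, and the full study used regression with additional controls:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(42)
N_SITES = 861  # matches the number of news sites in the comScore sample

# Synthetic stand-in data: long-tail audience sizes, and time spent
# drawn independently of audience size (illustrative parameters).
unique_visitors = [random.lognormvariate(10, 2) for _ in range(N_SITES)]
minutes_per_visitor = [random.lognormvariate(2.5, 0.8) for _ in range(N_SITES)]

# Audience sizes are heavily skewed, so correlate on a log scale,
# as is common when analyzing traffic data.
r = pearson([math.log(v) for v in unique_visitors], minutes_per_visitor)
print(f"Pearson r = {r:.3f}")  # near zero when the two metrics are unrelated
```

Because the two variables are generated independently here, the correlation comes out near zero, which is the pattern the article goes on to report for real news-site audiences.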
Table 3 identifies the most and least visited news sites that are measured and monitored by comScore. The measures of time spent are fairly similar across both the top 10 most and least visited sites. However, visitors spent an average of about 200 minutes on Hespress.com, an unpopular Arabic-language news site, which is much more time than was spent on any of the most popular sites.
Figures 1 and 2 provide a visualization of variation in unique visitors and time spent, respectively. Figure 1 reveals a typical long tail distribution of unique visitors to the top 100 most popular news sites in the sample, with high concentration at the head and small audiences distributed throughout the rest. Figure 2, which arranges sites in the same rank order as Figure 1, shows that the amount of time spent on these sites varies widely and suggests that it is uncorrelated with measures of popularity.
These figures illustrate what our analysis revealed, which is that contrary to an expectation of double jeopardy, there is no significant correlation between a news site’s unique visitors and its average minutes per visitor. There is also no relationship between the number of light, medium, and heavy Internet users a site has and the amount of time visitors spend on that site.
The wealth of data produced by organizations delivering digital content and offering social media platforms in which people comment on media provide a range of new, low cost ways to measure engagement. This raises new questions about the viability of currencies based on something other than audience size.
By answering a simple research question — are measures of audience size and time spent correlated? — we come closer to being able to answer some of those questions. The lack of correlation between audience size and time spent suggests that unique visitors and a metric like “attention minutes” are measuring very different things. This means that the latter might provide a useful alternative in valuing transactions. In other words, adopting time spent as a new coin of the realm would have a transformative effect on the online news publishing industry, similar to the effects observed after the introduction of People Meters to television, SoundScan to music, and BookScan to books.
Perhaps there would be a plunge in slideshows of celebrities or cat photos and an increase in long-form, investigative journalism, as many hope. Or perhaps online news publishers would simply find new gimmicks to keep users on their content longer, for instance by making it harder to navigate. “‘Time spent’ is a measure of consumption, not necessarily satisfaction,” warns Inside Breaking News, a publisher of short news alerts that would be among the most likely to suffer under a time-spent currency.
If the conversation surrounding engagement does lead to the adoption of a new audience measure as currency, we think it is likely to be based in audience behavior metrics for two reasons. First, these metrics are fairly simple—advertisers and publishers are already used to thinking about exposure. Second, these data are readily generated. Server-centric data allow us to extract everything of value we can from the information we’re already gathering. We can see whether return visits are related to time spent, or whether it might vary by the type of site. For this project, we’ve only looked at news sites, but an analysis of a fuller range of genres might reveal further implications of the adoption of a new currency that prioritizes engagement in the form of time spent.
However, the issue still remains that when it comes to online news, time spent may not be a satisfactory measure of audience engagement, for either publishers or advertisers, because different news sites make different demands on their visitors. As Caitlin Petre observed, the missions of different outlets shape their relationships with audience metrics. The New York Times often produces long, award-winning, investigative pieces that require time to get through, while outlets like Gawker flood the Internet with short, light fare meant to be passed around on social media. The New York Times’ motto is “All the News That’s Fit to Print,” while the motto for Gawker is “Today’s gossip is tomorrow’s news.” Both are news sites seeking large audiences, but because the content each provides is so different, how their audiences engage with them is likely to differ as well.
So, what will happen? For a variety of reasons — the lack of agreement on engagement measures and the political obstacles that arise when methods of tracking success change — it seems unlikely that big data will produce revolutionary changes in audience currencies. In all likelihood, currencies will continue to be based on measures of exposure. Measures of audience size and composition (i.e., ratings) will probably remain the predominant currency, because they capture information of central importance to advertisers. Metrics on engagement, derived from creative uses of time spent and visitation data, may well serve as a supplementary currency, but not much more.
Jacob L. Nelson is a PhD student in Media, Technology and Society at Northwestern University where he researches journalism production and consumption.
James G. Webster is a Professor of Communication Studies at Northwestern University. His primary research interest is media use.