
Why Publishers Are About to Go Data Crazy

The following is a guest post from Sachin Kamdar, the CEO and co-founder of Parse.ly. Currently in stealth, Parse.ly provides a new set of performance metrics specifically tailored to publishers’ needs. Here, Kamdar explores the new age of data and how publishers will be a part of it.

We spend too much time talking about how publishers are adapting to the rise of the web, and very few moments trying to understand the unique challenges their businesses face.

Many pundits have criticized the industry’s inability to adapt its business models to a new web-first world. But it’s not the publishers that aren’t adapting — it’s their toolbelts that haven’t evolved to meet their most acute needs.

The printing press is a great example of a technology that was quickly and widely adopted, and believe it or not, evolved rather quickly over the course of the last century. I’d argue that publishers are better at adapting to change than we give them credit for.

For example, we rarely ever acknowledge that Bloomberg and Thomson Reuters were into “big data” long before it became a buzzword.

And while the advances in media consumption technology for readers have been rapid, the publisher side of web technology hasn’t kept up with the pace. Publishers have been running a marathon in a pair of shoes that are four sizes too small.  

2012 will be the year that publishers get access to sophisticated, innovative technologies purpose-built for their needs. Rather than publishers having to make do with the innovations of consumer technology, the ecosystem of technology vendors will realize the huge opportunity in addressing publishers’ needs. The result will be great news for a publishing industry that has been stunted by poor tools for too long.

Here’s what it’s going to look like.

Social Isn’t Just For Distribution

For as long as most of us can remember, publishers have been using the likes of Twitter and Facebook to grow readership, improve content reach, and build community. As they’ve gotten more sophisticated, it has also become apparent that they need more insight into the cause and effect of social sharing. They need to move beyond just looking the part and making nice conversation.

The social web is great for distribution, but it’s also good for measuring the performance of content.  

Unfortunately, traditional measurement and analytics tools are designed for radically different business models — typically B2B (business to business) and B2C (business to consumer) companies that sell physical goods or services. The resulting metrics track leads, sheer volume, or purchase cause and effect. But content is an entirely different game.

After years of “one size fits all” social media measurement platforms, 2012 will be the year that publishers are served a variety of completely new offerings purpose-built for content-centric businesses (instead of having to bend an all-purpose tool to their will).

Publishers need to know what exactly caused an article to go viral — was it timely content that created a new trend? The guest author and her accompanying network? A particularly influential commenter? A confluence of factors?

Publishers generally already know what happened in the past. But what about the future?

Publishers need to know what content will perform well tomorrow, not just what did well last month. Cause and effect analysis on content that spreads through the social web is going to make the difference between tracking performance and optimizing for the future. It’s the difference between reactive and proactive.
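To make that cause-and-effect idea concrete, here is a minimal sketch (in Python) of one building block: attributing an article’s pageviews to the channels that drove them. The event format and the referrer buckets are illustrative assumptions, not any particular vendor’s schema.

```python
from collections import Counter

def classify_referrer(referrer: str) -> str:
    """Bucket a raw referrer URL into a coarse traffic channel."""
    if not referrer:
        return "direct"
    if "twitter.com" in referrer or "t.co/" in referrer:
        return "twitter"
    if "facebook.com" in referrer:
        return "facebook"
    if "google." in referrer:
        return "search"
    return "other"

def attribute_traffic(events: list) -> Counter:
    """Count pageviews per channel so editors can see what tipped a story.

    `events` is a list of dicts with a hypothetical "referrer" field.
    """
    return Counter(classify_referrer(e.get("referrer", "")) for e in events)

# Toy example: three pageview events for one article.
events = [
    {"url": "/story", "referrer": "https://t.co/abc"},
    {"url": "/story", "referrer": "https://www.facebook.com/"},
    {"url": "/story", "referrer": ""},
]
print(attribute_traffic(events))
# Counter({'twitter': 1, 'facebook': 1, 'direct': 1})
```

Real attribution is messier (social apps often strip referrers), but even a coarse breakdown like this separates a Twitter-driven spike from a search-driven slow burn.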

You can expect to see a significant effort in the social media space to address the needs of publishers and content-driven organizations in 2012. For example, rising stars like Livefyre help publishers center the conversation around their content by aggregating all comments from around the social web back to one place. Similarly, emerging technologies, such as the Infochimps Social API, are serving publishers by providing data about influence to inform which readers are likely to tip a potentially viral story.

As social media tools that actively address the specific needs of publishers find their way into capable hands, expect them to give birth to a completely new breed of journalist.

The Rise of The Cyborg Journalist

If you saw “Page One,” the documentary about The New York Times, you might remember several scenes where editors sat around a big table discussing what stories should make the front page for the next day’s paper.  

It’s almost comical, looking forward to 2012, to think of a newsroom going purely off of gut and intuition when making those decisions.

Next year, these editorial decisions will still require the knowledge and experience of editors who know their readership intimately, but those editors will have a wealth of data at their fingertips to inform their opinions and, ultimately, their editorial decisions.

Predictive analytics will give them a sense of how a story will perform, and real-time analytics will give them an up-to-the-second understanding of the collective interests of their readership.
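As a toy illustration of the predictive half (a sketch under stated assumptions, not any real product’s model), the snippet below projects an article’s first-day pageviews from its first hour of traffic, assuming hourly views decay exponentially with a guessed half-life. A production system would fit that parameter from historical data.

```python
import math

def projected_day_one_views(first_hour_views: int, half_life_hours: float = 6.0) -> float:
    """Project 24-hour pageviews from the first hour of traffic.

    Assumes hourly views decay exponentially with the given half-life,
    so total views form a geometric series over 24 hourly buckets.
    The 6-hour default half-life is a made-up illustrative value.
    """
    r = math.exp(-math.log(2) / half_life_hours)  # hour-over-hour retention ratio
    return first_hour_views * (1 - r ** 24) / (1 - r)

# An article that draws 1,200 views in its first hour projects to
# roughly 10,300 views by the end of day one under these assumptions.
print(round(projected_day_one_views(1200)))
```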

But hunches and instinct will take a back seat to new kinds of technology-driven metrics.

Many newsrooms already use data to inform editorial decisions, but in 2012, it will become common practice to “interview the data” when designing an editorial calendar, or selecting featured articles and posts for the near future. In fact, many newsrooms will require it.

Some in the industry are concerned that the data-driven approach undermines the merits of existing methodologies. For instance, are we just creating an echo chamber if we do what the data says? Shouldn’t we publish an article that may not be in demand, but is important for our readers to see?

I’d argue that data-driven journalism isn’t so much about the data as it is about absorbing it into existing editorial decision-making processes. This creates a 1+1=3 effect whereby editors gain a new set of highly functional capabilities that improve their ability to do what they do best. Think augmentation, heightening and exploring — not replacement or marginalization.

Such is the rise of the cyborg journalist — an editorial role that is able to simultaneously blend instinct and intuition with data and technology. It’s a powerful combination, and the publishers that best incorporate real-time and predictive analytics into their editorial processes will have a tremendous advantage.

Trending Data Overtakes Snapshot or Historical Data

Imagine the data that Apple has on its iPad sales.

The company certainly has research on customer satisfaction, device glitches, sales by region, and so on — but that’s standard stuff. It also has data that analyzes sales and satisfaction over time. In the case of a business that sells a physical product like an iPad, the trending data over time is more valuable — with it, Apple can adapt its sales and marketing strategy.

Note the weakness of the analogy, however — an iPad has a shelf life of several years. Content is an entirely different game.

Stories, in some cases, can have evergreen value, but the best-performing content on the web typically has a 24-hour window before traffic to the article or post falls significantly. It is for this reason that publishers have struggled to access trending data to inform their decisions. It’s incredibly difficult to collect, crunch and deliver data and insights in that period of time. You have to be really fast. Our systems and tools are only now starting to catch up.
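One plausible way to act on that 24-hour window (an illustrative approach, not a description of any specific tool) is a time-decayed trending score: each pageview counts for less the older it is, so yesterday’s spike fades while a story gaining traction right now rises.

```python
import math
import time

HALF_LIFE_SECONDS = 4 * 3600  # assumed: a pageview loses half its weight every 4 hours

def trending_score(view_timestamps: list, now: float) -> float:
    """Sum exponentially decayed weights over a story's pageview timestamps."""
    decay = math.log(2) / HALF_LIFE_SECONDS
    return sum(math.exp(-decay * (now - ts)) for ts in view_timestamps)

# A story with three views an hour ago outranks one with three views yesterday.
now = time.time()
fresh = trending_score([now - 3600] * 3, now)
stale = trending_score([now - 86400] * 3, now)
print(fresh > stale)  # True
```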


There is a light at the end of the tunnel, and in 2012 publishers will finally get their hands on trending data that matters. Global trending hashtags are only the very tip of the iceberg.

This year the market is going to be flooded with real-time data analysis tools for content. Not real-time analysis for web pages — and that’s an important difference. We’ve had real-time web analytics, but those analytics are designed for the concept of a webpage, not the nuances and heartbeat of content.  

SiteSimon and Trap.it are two companies that have built technology to analyze and understand content. These companies apply this understanding to their users’ behavior to accurately identify interests and provide personalized recommendations. Percolate, another startup, parses social streams to aid brands and businesses with topic-oriented content curation — and there’s no reason why publishers can’t leverage similar technologies to optimize their content, too.

Content is a living thing, constantly zigging and zagging — if you don’t believe me, perhaps you should take a look at Google’s Currents, or Yahoo’s CORE engine, or any of the new, interactive aggregators (Flipboard, Zite, Pulse). Or simply ask any editor about the sense of constant flux they feel throughout each workday. This business turns on a dime. Are we just as nimble?

A snapshot of a traffic spike tells publishers very little. On the other hand, knowing how content participates in a trending conversation in real time is a powerful insight. Great trending data is how editors will be able to infer and plot out what readers want to read next.

Publishers Turning Inward — It’s Still A Business, Baby

If there’s one thing that the data-driven journalism trend points to, it’s a back-to-basics view of the content business. What’s old is new again, as they say.

Ultimately, data-driven journalism isn’t about the data, it’s about the insight the data gives you. The same is true when analyzing content. It’s about supply and demand and learning what content serves readers and the business best. It’s about old-fashioned interviewing. Nowadays, the subject is just … different.

Content performance data gives publishers the ability to better understand their markets (audience), optimize their supply chain, and meet the demands of their customers.

These insights help publishers make more money off of their existing inventory (think old articles that might have new relevance after a new world event). Insights from older data help publishers create new content (product) that is going to be in demand.  

And that’s really what it’s all about. Data-driven publishers don’t have a competitive advantage because they’ve adopted new technology first. They have an advantage because they understand the unique nuances of their market better than the competition.

And as always, the very best publishers will excel at the things that never go out of style — great writing, good reporting, and a unique point of view. The only thing that’s changed is the more intimate relationship of art and science, in service of better and more timely content.

Sachin Kamdar is an NYC entrepreneur and the CEO and co-founder of Parse.ly. He graduated with a bachelor’s in Economics from NYU and a master’s in Education from Pace University. After graduating from NYU, Sachin was an NYC Teaching Fellow, using cutting edge technology to educate students in math and economics at an alternative high school in Brownsville, Brooklyn. He then started an EdTech consulting company that built, implemented and managed systems across schools in NYC.

