How Do We Categorize All Journalistic Errors?

    by Scott Rosenberg
    November 11, 2009

    How many different kinds of errors is it possible for journalists to make? And how would you classify them or organize them into useful categories?

    These questions are not my attempt to concoct a tactful paraphrase for “How many different ways is it possible to screw journalism up?” Rather, they represent one of the interesting issues we face as we move work on MediaBugs from the project-organizing phase to the “let’s build something” stage.

    There’s a wealth of established practice in the software field for the kinds of data you can associate with a bug that a user finds in a program: how important the bug is, where the bug is located, how work on it fits in to the rest of the project, and so on. In software development, the purpose of the bug tracking system is, mostly, to define and organize the work of fixing bugs.
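To make that concrete: the kind of metadata a software tracker attaches to a bug can be sketched as a simple record. This is a hypothetical illustration in Python; the field names are my own, not any particular tracker's schema.

```python
from dataclasses import dataclass

# A minimal bug record, carrying the fields a typical tracker uses:
# how important the bug is, where it is located, and how the fix
# fits into the rest of the project.
@dataclass
class BugReport:
    title: str
    severity: str         # how important the bug is ("critical", "minor", ...)
    component: str        # where the bug is located
    milestone: str = ""   # how the fix fits into the project plan
    status: str = "open"  # workflow state: "open" -> "assigned" -> "fixed"

bug = BugReport(
    title="Crash when saving an empty file",
    severity="critical",
    component="file-io",
)
```

The point is that every field exists to organize the work of fixing the bug, which is exactly the design question MediaBugs faces for journalistic errors.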


    As we attempt to apply this model to the world of journalism, we find little in the way of similar established practices in our field. Individual news organizations sometimes track their own errors internally, but, as far as we’ve been able to determine, there is no common, industry-wide nomenclature for categorizing those errors — no Library of Congress classification or Dublin Core metadata standard.

    We’re pretty much on our own. So we’re doing our best to devise an initial set of categories, knowing that we’ll probably need to revise them once we get real data from real users. (We’ve already drawn much from the invaluable work of my colleague Craig Silverman, in his book Regret the Error.)

    Here’s the list of categories we’re playing with right now:

    • misquotation
    • mistaken identity
    • other simple factual error
    • ethical issue
    • faulty statistics or math
    • error of omission
    • typo, spelling, grammar
    • other
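As a sketch of how a report form might use the draft list above, here is a minimal normalization helper that maps a reporter's choice onto the list and falls back to the catch-all "other" bucket. This is purely illustrative Python, not MediaBugs' actual implementation.

```python
# The draft categories from the list above.
CATEGORIES = {
    "misquotation",
    "mistaken identity",
    "other simple factual error",
    "ethical issue",
    "faulty statistics or math",
    "error of omission",
    "typo, spelling, grammar",
    "other",
}

def normalize_category(choice: str) -> str:
    """Map a user's category choice onto the draft list,
    sending anything off-list to the 'other' bucket."""
    cleaned = choice.strip().lower()
    return cleaned if cleaned in CATEGORIES else "other"
```

For example, `normalize_category("Misquotation")` yields `"misquotation"`, while an off-list entry like `"headline problem"` lands in `"other"` — which is itself a hint about whether the taxonomy needs more buckets.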

    I’d love to hear what you think of this. Have we left out something obvious? Is this valuable or interesting?

    Any set of categories will need to meet two goals:

    1. It should make sense to users who are trying to make quick decisions about categorizing the errors they’re reporting.
    2. The breakdown of the total universe of errors that the list provides should ultimately be useful as we try to understand why errors happen, and how we can minimize them.

    We know that there’s no bright, shining line one can draw between errors of objective fact and subjective problems with media coverage. Errors don’t fall into two distinct buckets labeled “fact” and “opinion”; there’s a spectrum between the two.

    We want MediaBugs to favor the “fact” side of that spectrum, so our choice of categories is weighted in that direction. I believe this is where we’ll find the most common ground between journalists and the public, and make the fastest progress in our effort to bring the two together. We’ll know a lot more soon!

    Tagged: categories corrections errors mediabugs taxonomy
    • Gordon Haff

      Those strike me as overly focused on fairly simple mechanical errors of one sort or another (with the exception of the item about ethics). But there are lots of other problems that a story can have:
      – Insufficient sources
      – Lack of balance
      And other things of this sort. The problem is that as you get to these more fundamental issues they get grayer and harder to identify with certainty.

    • BJ Muntain

      How about headlines that are shortened in such a way that they become meaningless, erroneous, or worse, humorous?

      Quotations may not necessarily be incorrect, but sometimes too correct. When a person is quoted exactly, they don’t always make sense, so neither does the quote. Far better to insert clarifications in []’s or even leave off the quote than to include a quotation that only confuses the reader.

      Anyway, these are a couple of my observations as a reader.

    • Hortensia Gooding

      I find this list valuable, interesting and necessary. Quality control is truly lacking in our media. I agree with Gordon that lack of balance is an error. Particularly in written material, both (or several) viewpoints should be quoted whenever possible. If the opposing parties choose not to communicate, then that should be stated in the article. I believe that the categories should be as clear as possible. While I understand that it will be difficult to create a category for all types of errors, I suggest that “other” be kept as a truly last resort that accounts for less than 5% of the errors. Good job.

    • songweasel

      the error of omission. it usually involves not putting one fact in a story that brings the whole perspective of the story together. for example: “the ceo of company x makes 400 times the wage of the average company employee,” then never saying what that average employee’s wage is.

      and a personal peeve is the use of the word “could” in news stories. “this ‘could’ lead to employee unrest.” “this ‘could’ signal an end to negotiations.” oh puleeze….”this ‘could’ make me not buy your newspaper or watch your newscast!” :)

    • Thanks for these suggestions — keep ’em coming!

      @BJ Muntain, thanks — I think “headline problems” might well be another useful category. “Misquotation” will, I hope, cover many different kinds of problems with a quote, including the case where the quote is confusing or unclear. We’ll need to explain all that as best we can.

      @songweasel: I think we’re hoping that “error of omission” will cover “that one piece of information you left out that changes everything.”

      Obviously our challenge is to keep the options broad enough to cover as much as possible of what one might consider a correctable error, without opening them up so wide that we become a general-purpose outlet for any act of media criticism. I realize that the categories might seem too narrow or constricting, but that’s part of trying to concentrate on issues and problems that are correctable. So, to Gordon Haff’s point, I think we’ll be trying to cover the “lack of balance” or “insufficient sources” scenario in the case where that omission leads to a story that is so misleading or flawed that it warrants a correction. And of course a member of the public and a media outlet might well disagree over where to draw that line.

    • Jill Easterday

      I suggest irrelevant information as a category. It might be just filler or fluff, but it could also be facts that simply don’t belong in the discussion. More subversively, it could be propaganda.

    • Cheryl Bowman

      Copyright violation would be one I would recommend including.

      I’d second Insufficient credentialed sources and Lack of balance.

    • Frank Day

      I am not sure how this would be classified, but simply taking something someone says as fact. For instance, someone might say “so and so study shows this so cigarettes really are good for you.” The quote and the attribution would be correct but, if one were to check, one might find that the study really didn’t show what the person says, or there are so many biases in the study (who paid for it, etc.) that the study is pretty much worthless. Yet such statements are frequently given equal time to the other side in the interest of “fairness”.

    • I am currently an out-of-work journalist, and I agree that in the past I have made a few mistakes. I do my best to make sure that I do not. I double- and triple-check stats and figures if I use them in a story. I want to make sure I perform my due diligence. I currently live in a small town with a small-town newspaper. There are quite often mistakes in their reporting because they don’t have a fact checker or copy editor. The staff writes, edits, and lays out the paper, and thus mistakes are often made. I have applied to work there, and I hope to improve it with my writing style and the attention to detail that they sometimes lack.

    • dcwriter

      This one from Muntain is very helpful to me as a writer, so thanks for adding that.
      For me, it’s when you turn your stuff over to the editor, who then changes something, either intentionally or unintentionally, and makes you look like a dumb a**, even something minor like putting the quote marks in the wrong places or changing your title.

    • Anna Haynes

      We’ve been talking about this at Planet 3.0, with respect to debugging the press’s spread of climate disinformation.

      Scott, IMO you might want to collect a bestiary of press misfires before settling down to categorize them; so let me contribute one that we can think about.

      This one had cascading flaws in the trajectory of its development and spread.
      (And what we see as “bugs” are seen as “features” by interests that *don’t* want the press to shoot straight – so they’re ripe for exploitation.)

      Here’s how it works:

      Step 1: Someone with a credential but without relevant expertise writes a schlocky study with nefarious-entity-friendly conclusions.

      Step 2 works right: the study gets panned by experts. (“worst I’ve ever seen”, etc)

      Step 3: a local-paper story about the study and its reception errs with false balance, and doesn’t get to the experts’ criticisms until Page 2.
      (But it does mention them in the headline, and provides a gentle reference in the lede.)
      Outcome: 50% of readers are misled.

      Step 4: the national bureau picks up the story, and – playing Telephone – replaces the “Some criticize study” headline with a credulous “Credentialed Person raises alarm” one, and only displays (the pre-criticisms) Page 1, linking to the local paper for the rest.
      Outcome: rarely will readers bother to click through; 95% are misled.

      And finally, the coup de grâce –
      Step 5: the local paper stashes all but Page 1 (the “happy” page) behind a paywall.
      Outcome: rarely will readers bother to pay up; 99% are misled.

      So, 99% of readers who stumble on the story get a take-home message that’s 180 degrees from reality, and it’s nobody’s fault.

    • Scott, I know you’re familiar with some of Jay Rosen’s criticisms of media. I think one of the main issues that he brings up, if I might paraphrase, is unrecognized or unacknowledged bias. Simply by subtle choice of words, the selection of interviewees, the “framing” of a piece, the image conveyed to the mind of the reader may be strongly influenced away from what others might see as objective reality, toward the “reality” the journalist sees.

      Some of this is very blatant – Nedra Pickler’s reporting on Democratic presidential candidates, for example. A lot of it is far more subtle. A good picture of it, which again I have to attribute to Jay Rosen, is the concept of the “sphere of legitimate debate”, vs. spheres of consensus and an outer “sphere of deviance”.

      That is, the most important source of journalistic error is bad “sphere placement”: writing articles that imply that totally discredited ideas deserve “legitimate debate”, or conversely treating perfectly legitimate debatable subjects as completely off-limits, in the sphere of “deviance”.

      How do you correct those bugs? I don’t have a clue where to begin.

    • Anna Haynes

      How do you define what constitutes a journalistic error? How do you avoid the trap of focusing on easily measured yet insignificant errors, compared to more major ones that seriously disinform the readers?

      One type of error would be a large-scale error of coverage omission – e.g. the journalist deliberately avoiding covering a whole class of stories on his beat that would have given readers the information they need to be free and self-governing – instead allowing them to remain misinformed, and/or become misinformed by others.
      Or, worse yet, actively misinforming by serving as an “agent of influence”, playing in the disinformation symphony by reporting little, true things that nonetheless push the public toward misunderstanding the big picture.

      Another major error: printing disinformation in the non-news sections of the newspaper (letters to the editor, op-eds, advertising), without making it clear to readers that what they read there is not to be believed (after all, if you did successfully convey the don’t-believe-it message to readers, there’d be no incentive to buy the ads or send in the letters and op-eds).

    • Anna Haynes

      Errors of obscurantism(?)

      e.g. an “ask the newsroom” Q&A where the Qs to be answered are selected by the askee, and unanswered Qs are (quietly) never shown.

      or displaying user comments in an unfiltered, or ineffectually filtered, form; ensuring that the gems stay buried amongst the dross.

      Moderating comments by enabling the Pluck feature that quietly makes one’s comments visible to oneself but not to anyone else.

    • Thanks again for all these thoughtful comments — plenty for us all to chew on here!

      Our mandate and mission at MediaBugs is explicitly to try to concentrate on *correctable* problems or “bugs.” What that means in practice is something we’re going to hash out as we go along. But it does, I think, impel us away from issues of bias and subtler kinds of informational malpractice — not because those aren’t worth exploring and exposing, but because there are already institutions and fora devoted to that dialogue.

      For example, with Anna’s example of the discredited study: there’s a role for MediaBugs to play in this scenario, and it is simply to air the discussion about that study. Was it accurately represented in the story or not? And if not, should the publication correct its coverage?

      We recognize that there are going to be plenty of people who will be disappointed by this focus. Our hope is that (a) there already exists on the Web, thankfully, a vigorous and broad-based culture of general media criticism; and (b) by creating a forum for working out specific, correctable issues, MediaBugs can start reversing the climate of distrust between the media and the public.

      We’ll learn a lot more, of course, once we have a service ready for your use!

    • ari

      Regarding the category of faulty statistics and math, if you haven’t seen them, you should look up Mark Liberman’s postings on Language Log about how to present statistics. You can start with this post, http://itre.cis.upenn.edu/~myl/languagelog/archives/004992.html, and then wander around.

  • Who We Are

    MediaShift is the premier destination for insight and analysis at the intersection of media and technology. The MediaShift network includes MediaShift, EducationShift, MetricShift and Idea Lab, as well as workshops and weekend hackathons, email newsletters, a weekly podcast and a series of DigitalEd online trainings.
