How many different kinds of errors is it possible for journalists to make? And how would you classify them or organize them into useful categories?
These questions are not my attempt to concoct a tactful paraphrase for “How many different ways is it possible to screw journalism up?” Rather, they represent one of the interesting issues we face as we move work on MediaBugs from the project-organizing phase to the “let’s build something” stage.
There’s a wealth of established practice in the software field for the kinds of data you can associate with a bug that a user finds in a program: how important the bug is, where the bug is located, how work on it fits into the rest of the project, and so on. In software development, the purpose of the bug tracking system is, mostly, to define and organize the work of fixing bugs.
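To make that concrete, here is a minimal sketch of the kind of record a software bug tracker keeps. The field names are purely illustrative — they aren't taken from any particular tracker — but they map onto the data mentioned above: importance, location, and how the fix fits into the larger project.

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    """A hypothetical software bug record (illustrative fields only)."""
    title: str
    severity: str    # how important the bug is
    component: str   # where the bug is located
    milestone: str   # how fixing it fits into the rest of the project

# Example: one filed bug
report = BugReport(
    title="Crash on empty input",
    severity="critical",
    component="parser",
    milestone="1.2",
)
```

The point isn't the code itself but the habit it embodies: every report gets slotted into a shared, agreed-upon structure the moment it's filed.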
As we attempt to apply this model to the world of journalism, we find little in the way of similar established practices in our field. Individual news organizations sometimes track their own errors internally, but, as far as we’ve been able to determine, there is no common, industry-wide nomenclature for categorizing those errors — no Library of Congress classification or Dublin Core metadata standard.
We’re pretty much on our own. So we’re doing our best to devise an initial set of categories, knowing that we’ll probably need to revise them once we get real data from real users. (We’ve already drawn much from the invaluable work of my colleague Craig Silverman, in his book Regret the Error.)
Here’s the list of categories we’re playing with right now:
- mistaken identity
- other simple factual error
- ethical issue
- faulty statistics or math
- error of omission
- typo, spelling, grammar
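One way to picture how this draft list might behave in practice: if the categories were encoded as a fixed set of choices — sketched below as a hypothetical Python enumeration, not anything MediaBugs has actually built — every report would have to land in exactly one bucket, which is precisely the constraint a reporting form would impose.

```python
from enum import Enum

class ErrorCategory(Enum):
    """Hypothetical encoding of the draft MediaBugs error categories."""
    MISTAKEN_IDENTITY = "mistaken identity"
    SIMPLE_FACTUAL = "other simple factual error"
    ETHICAL = "ethical issue"
    STATISTICS_MATH = "faulty statistics or math"
    OMISSION = "error of omission"
    TYPO = "typo, spelling, grammar"

# A fixed choice list keeps the data consistent, but it also means
# any error that fits no category forces a revision of the list itself.
selected = ErrorCategory("error of omission")
```

That trade-off is worth keeping in mind as you read the two goals below: a closed list is easy to analyze but only as good as its coverage.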
I’d love to hear what you think of this. Have we left out something obvious? Is this valuable or interesting?
Any set of categories will need to meet two goals:
- It should make sense to users who are trying to make quick decisions about categorizing the errors they’re reporting.
- The way the list divides up the total universe of errors should ultimately help us understand why errors happen, and how we can minimize them.
We know that there’s no bright, shining line one can draw between errors of objective fact and subjective problems with media coverage. Errors don’t fall into two distinct buckets labeled “fact” and “opinion”; there’s a spectrum between the two.
We want MediaBugs to favor the “fact” side of that spectrum, so our choice of categories is weighted in that direction. I believe this is where we’ll find the most common ground between journalists and the public, and make the fastest progress in our effort to bring the two together. We’ll know a lot more soon!