
    Targeted, Democratic Content Moderation

    by Dan Schultz
    December 1, 2007

    In an earlier post I suggested a process intended to maintain journalistic standards in a globally accessible, user-maintained aggregated news site. Its key feature was a purgatory section where new articles would be rated by readers for quality, apparent credibility, and a few other traits before being published. If a report didn’t get high enough numbers it would be deleted from the system or, in the case of a close call, it might instead be reviewed by designated members of the relevant community.
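
    As a minimal sketch of that purgatory gate, assuming a 1-5 rating scale, made-up trait names, and made-up publish/delete thresholds (the post doesn’t pin down any of these), the decision might look like this in Python:

        from statistics import mean

        # Hypothetical cutoffs -- the post doesn't specify real numbers.
        PUBLISH_THRESHOLD = 4.0   # average score needed to publish
        DELETE_THRESHOLD = 2.5    # at or below this, the report is removed

        def purgatory_decision(ratings):
            """Decide an article's fate from its purgatory-phase ratings.

            `ratings` is a list of dicts, one per reader, scoring traits
            such as quality and apparent credibility on a 1-5 scale.
            """
            if not ratings:
                return "hold"  # no votes yet; stay in purgatory

            # Average each trait across readers, then average the traits.
            trait_means = {t: mean(r[t] for r in ratings) for t in ratings[0]}
            overall = mean(trait_means.values())

            if overall >= PUBLISH_THRESHOLD:
                return "publish"
            if overall <= DELETE_THRESHOLD:
                return "delete"
            # The close calls go to designated members of the community.
            return "community_review"

        # Two readers rate quality and apparent credibility:
        print(purgatory_decision([{"quality": 4, "credibility": 5},
                                  {"quality": 3, "credibility": 4}]))  # publish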

    That description probably sounds very similar to Digg’s Upcoming section, but this post should help differentiate the two. I’ll describe a quick twist that turns an open and fairly loose peer review scheme into a targeted one that (I think) stands a decent chance of providing accurate regional and topic-specific news without losing article integrity. Please keep in mind that this is all a continuation of that “perfect news system” discussion from way back.

    What does someone living in Wyoming know about the local elections of a Pennsylvania suburb? How can someone who doesn’t follow the latest in physics rate the potential validity of an article about the 100 billion light year void in space? I’m betting similar questions can be asked about most stories that don’t have to do with iPods or national politics. This is a problem because the whole point of this giant news system is that it would be comprehensive in its coverage and would contain exactly the type of niche news that is only familiar to niche audiences. This means I’m going to have to take some extra precautions during the review process or people will be voting on articles without any understanding of the related community standards or the surrounding issues.


    To address this predicament I’ll steal a page from academia and try to target domain experts during the peer review process. Of course it wouldn’t be much of a democratic system if I restricted the voting process to proven experts, and figuring out who is an expert simply isn’t feasible in such a dynamic setting, so I’ll take the next best thing: residence and interest. Since the system is centralized and users specify regions of residence and topics of interest anyway, it should be easy to have members of a given physical or topical community be the ones to review their community’s news.

    Now, using the targeting method, when an article is uploaded, categorized, and tagged to relevant locations, it can be pre-screened by users who are somewhat connected with the affected community. Clearly a connection doesn’t make someone an expert, but it does suggest they have a clue. My underlying assumption here, which I would love for Ben Melançon (and others) to comment on at some point, is that informed individuals (i.e. those with a clue) can do a fine job of democratically moderating the content that falls within their collective domain.
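
    To make the targeting concrete, here is a small sketch of how pre-screeners with a clue might be selected; the field names (regions, topics, locations) are invented for illustration, since the post doesn’t describe the actual data model:

        def eligible_reviewers(article, users):
            """Pick users 'with a clue': anyone whose stated region of
            residence or topic of interest overlaps the article's tags.
            All field names here are invented for illustration."""
            return [u for u in users
                    if u["regions"] & article["locations"]
                    or u["topics"] & article["topics"]]

        article = {"locations": {"pennsylvania suburb"},
                   "topics": {"local elections"}}
        users = [{"name": "wyoming rancher",
                  "regions": {"wyoming"}, "topics": {"ipods"}},
                 {"name": "philly resident",
                  "regions": {"pennsylvania suburb"}, "topics": {"physics"}}]
        print([u["name"] for u in eligible_reviewers(article, users)])
        # -> ['philly resident']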

    The next thing to do is give the journalists a unique say of their own on top of it all. That will be a bit trickier, but I have hope that it can be done in an elegant way that makes as many people happy as possible. I’ll open up that can of worms the next time I continue this thread of discussion.

    Tagged: moderation, targeted moderation, user content

    3 responses to “Targeted, Democratic Content Moderation”

    1. Very good thoughts, Dan, and well-presented.

      Knowledgeable input is essential. You’re absolutely right, and it’s a question I’ve avoided. Not sure what the answer is, but it seems you’re on a good track.

      Very quickly: In PWGD’s conception of democratically moderated information, all submitted articles would be available somewhere, and the extent of distribution (push to e-mail, RSS, SMS, prominence on a web site, etc.) would be determined per community of interest or geographical area. Unlike Digg, which has whoever happens along (or lives on the internet watching Digg) do the ratings, a random sample of people in relevant communities would be asked “should this go out to people” (in Pennsylvania, or who care about elections of some type). As it happens, this system seems a bit like the one you’re proposing.

      Ultimately, I think the expertise versus mass input question should be decided within the context of democracy. That is, a democratic “what’s important, what’s true” system that uses the knowledge of those who have it in its decision-making.

      The hope for PWGD is that this sets up a communication system where learning is possible: those who are wrong don’t get the same weight in public opinion the next time around.
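
      A rough sketch of that random-sample check, with the sample size and the simple-majority rule invented for illustration:

        import random

        def should_distribute(article, community, ask, sample_size=20):
            """Ask a random sample of a relevant community whether an
            article should go out to them. `ask(member, article)` returns
            True or False; the sample size and simple-majority rule here
            are invented for illustration."""
            jury = random.sample(community, min(sample_size, len(community)))
            yes = sum(1 for member in jury if ask(member, article))
            return yes > len(jury) / 2

        # e.g. push to the Pennsylvania list only if its sample approves:
        pa_readers = [f"reader{i}" for i in range(200)]
        print(should_distribute("suburb election story", pa_readers,
                                ask=lambda member, article: True))  # -> True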

    2. tfe says:

      You know, a cool way to handle this would be through some kind of bootstrapping… You talk about finding a way to identify proven experts in such a dynamic environment, but I believe the problem has already been solved. If you seed the network with a few known, hand-selected experts, you can easily bootstrap your way up from there. Those editors review the work of new editors, and gradually the new editors become trusted enough to help identify the next iteration, and so on. A lot of work has already been done in this area (look up trust networks or webs of trust).

      A perfect example of what I’m proposing conveniently exists at the bottom of this very comment box. Take a look at that reCAPTCHA… do you know how it works? The system presents two non-computer-readable words to the user. If the user types them in correctly, he is allowed to post the comment. But how does the system know the user entered the correct text? The key is in the bootstrapping… two words are presented, but one of them has already been solved and confirmed by previous reCAPTCHA users. If the user solves the known word, the system assumes he solved the other one correctly too, and both can go back into the pool of words with higher confidence in their correct solutions. In this way the system builds ever-higher confidence in the validity of its pool of words.

      Perhaps similar methods could be applied to the problem of democratic content moderation.
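
      One rough sketch of how that pairing trick might carry over to moderation: each reviewer judges one article whose verdict is already settled alongside one that isn’t, and agreement on the settled article earns trust, which in turn weights their vote on the unsettled one. The starting scores and the +0.05/-0.10 trust adjustments below are invented for illustration.

        trust = {"seed_expert": 1.0}   # hand-selected seeds start fully trusted
        confidence = {}                # article -> accumulated approval weight

        def review_pair(reviewer, known_verdict, unknown_article,
                        vote_known, vote_unknown):
            """reCAPTCHA-style pairing: agreement on the known item earns
            trust; trust weights the reviewer's vote on the unknown item.
            The +0.05 / -0.10 adjustments are invented for illustration."""
            t = trust.setdefault(reviewer, 0.10)  # strangers carry little weight
            if vote_known == known_verdict:
                trust[reviewer] = min(1.0, t + 0.05)
                weight = trust[reviewer] if vote_unknown else -trust[reviewer]
                confidence[unknown_article] = (
                    confidence.get(unknown_article, 0.0) + weight)
            else:
                trust[reviewer] = max(0.0, t - 0.10)  # missed the known answer

        review_pair("seed_expert", True, "new_story", True, True)
        review_pair("newcomer", True, "new_story", True, True)
        print(round(confidence["new_story"], 2))  # -> 1.15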

    3. Hey tfe, that’s a really cool idea.

      Does it mean people can’t know which is the test and which is the real one? Because it’ll be a little harder for people to provide news, analysis, or editorial services not knowing if they will be used at all!

      And that aspect of not knowing is key to the reCAPTCHA approach, I think.

      Got another trusted web solution?
