
    Can Ushahidi Rely on Crowdsourced Verifications?

    by Heather Ford
    November 28, 2011

    During the aftermath of the Chilean earthquake last year, the Ushahidi-Chile team received two reports — one through the platform, the other via Twitter — that indicated an English-speaking foreigner was trapped under a building in Santiago.

    “Please send help,” the report read. “i am buried under rubble in my home at Lautaro 1712 Estación Central, Santiago, Chile. My phone doesnt work.”

    A few hours later, a second, similar report was sent to the platform via Twitter: “RT @biodome10: plz send help to 1712 estacion central, santiago chile. im stuck under a building with my child. #hitsunami #chile we have no supplies.”


    An investigation a few days later revealed that both reports were false and that the Twitter user was impersonating a journalist working for the Dallas Morning News. But the revelation came too late to stop two police deployments in Santiago, which rushed to the rescue before realizing that the area had not been affected by the quake and that the couple living at the address was alive and well.

    Is false information like this just a necessary by-product of “crowdsourced” environments like Ushahidi? Or do we need to do more to help deployment teams, emergency personnel and users better assess the accuracy of reports hosted on our platform?


    Ushahidi is a non-profit tech company that develops free and open-source software for information collection, visualization and interactive mapping. We’ve just published an initial study of how Ushahidi deployment teams manage and understand verification on the platform. The research has surfaced a couple of key challenges with the way verification currently works, as well as a few easy wins that could add flexibility to the system. It has also raised some questions as we look to improve the platform’s ability to verify large quantities of data in the future.

    What We’ve Learned

    We’ve learned that we need to add more flexibility to the system, enabling deployment teams to choose whether or not they want to use the “verified” and “unverified” tagging functionality. We’ve also learned that the binary terms we currently use don’t capture other attributes of reports that are necessary for establishing both trust and “actionability” (i.e., the ability to act on the information). For example, the “unverified” tag does not capture whether a report is considered to be an act of “misinformation” or simply incomplete, lacking the contextual clues needed to determine whether it is accurate.
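
    To make that distinction concrete, here is a minimal sketch (in Python, purely illustrative and not part of Ushahidi's actual codebase) of what a richer report model might look like; the status names and context fields are assumptions of mine, not platform features:

```python
from dataclasses import dataclass, field
from enum import Enum

class ReportStatus(Enum):
    """Hypothetical statuses that go beyond the verified/unverified binary."""
    VERIFIED = "verified"                        # confirmed by the deployment team
    UNVERIFIED = "unverified"                    # not yet assessed
    INCOMPLETE = "incomplete"                    # plausible, but missing contextual clues
    SUSPECTED_MISINFORMATION = "misinformation"  # believed to be deliberately false

@dataclass
class Report:
    """A submitted report plus the context a reviewer or user would want to see."""
    text: str
    status: ReportStatus = ReportStatus.UNVERIFIED
    # The who/what/where/when/how/why that lets users judge the report for themselves.
    context: dict = field(default_factory=dict)
    notes: list = field(default_factory=list)  # free-form notes from reviewers
```

    Even a small set of extra statuses like these would let a deployment team distinguish a report that is merely incomplete from one it suspects is deliberate misinformation.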

    We need to build in more flexibility to accommodate these different attributes, but we also need to think beyond final determinations and recognize that users might want contextual information (rather than a final ruling on a report’s verification status) so they can decide for themselves whether a report is trustworthy. After all, verification tags mean nothing unless those who must make decisions based on that information trust the team doing the verification.

    The fact that many deployments are set up by teams of concerned citizens who may never have worked together before, and who are therefore unknown to the organizations using the data, makes this an important requirement. Here, we see the administering deployment team’s job as providing information about the context of a report (answering the who, what, where, when, how and why of traditional journalism, perhaps) and inviting others to help flesh out that information, rather than acting as a “black box” in which the process for determining whether something is verified is opaque to users.

    As an organization that is all about “crowdsourcing,” we’re taking a step back and thinking about how the crowd (i.e., people who are not known to the system) might assist in either providing more context for reports or verifying unverified reports. When I talk about the “crowd” here I’m referring to a system that’s permeable to interactions by those we don’t yet know. It’s important to note here that, although Ushahidi is talked about as an example of crowdsourcing, this doesn’t mean that the entire process of submission, publishing, tagging and commenting is open for all. Although anyone can start a map and send a report to the map, only administrators can approve and publish reports or tag a report as “verified.”
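
    The current split between open submission and admin-only curation could be summarized roughly as follows. This is a hypothetical sketch of the permission model described above, not Ushahidi's actual API; the role and action names are my own:

```python
from enum import Enum

class Role(Enum):
    ANONYMOUS = "anonymous"   # anyone submitting via the web, SMS or Twitter
    ADMIN = "admin"           # a member of the administering deployment team

# Submission is open to everyone; approving, publishing and verifying are not.
PERMISSIONS = {
    "submit_report":  {Role.ANONYMOUS, Role.ADMIN},
    "approve_report": {Role.ADMIN},
    "publish_report": {Role.ADMIN},
    "mark_verified":  {Role.ADMIN},
}

def can(role: Role, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return role in PERMISSIONS.get(action, set())

assert can(Role.ANONYMOUS, "submit_report")
assert not can(Role.ANONYMOUS, "mark_verified")
```

    Opening verification to the crowd would mean widening the last three entries, which is exactly where the questions below begin.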

    How Will Crowdsourcing Verification Work?

    If we were to open up this process to “the crowd,” we’d have to think very carefully about the options for facilitating verification by the crowd, many of which won’t work in every deployment. Variables like scale, location and persistence differ in each deployment and can affect where and when crowdsourcing verification will work and where it will do more harm than good.

    Crowdsourcing verification can mean many different things. It could mean flagging reports that need more context and asking for more information from the crowd. But who makes the final decision that enough information has been provided to change the status of that information?

    We could think of using the crowd to determine when a statistically significant portion of a community agrees with changing the status of a report to “verified.” But is this option limited to cases where a large number of people are interested in (and informed about) an issue, and could a volume-based indicator like this be gamed, especially in political contexts?
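
    A volume-based rule of that kind might look something like the sketch below; both thresholds are invented for illustration rather than taken from any deployment:

```python
def crowd_verified(confirmations: int, disputes: int,
                   min_votes: int = 20, min_agreement: float = 0.9) -> bool:
    """Flip a report to 'verified' only when enough people have weighed in
    AND a large majority of them agree. Thresholds are illustrative only."""
    total = confirmations + disputes
    if total < min_votes:        # too little interest (or information) so far
        return False
    return confirmations / total >= min_agreement
```

    The weaknesses raised above show up immediately: a report from a quiet area may never attract enough votes, while a coordinated group can manufacture both the volume and the agreement.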

    Crowdsourcing verification could also mean giving users the opportunity to apply free-form tags that highlight the context of the data, and then surfacing the tags that are popular. But again, might this only be accurate when large numbers of users are involved and the number of reports is low? Do we employ an algorithm to rank the quality of reports based on the history of their authors? It’s tempting to imagine that an algorithm alone will solve the data-volume challenge, but algorithms fail in many cases (especially when reports are sent by people who have no history of using these tools), and if they aren’t trusted, they may push users to hack the system to enable their own processes.
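
    Both ideas, surfacing popular free-form tags and weighting reports by their authors' track records, can be sketched in a few lines. The scoring formula below is a simple assumption of mine (a smoothed ratio of an author's previously verified reports), not an algorithm Ushahidi uses:

```python
from collections import Counter

def author_weight(history: dict) -> float:
    """Fraction of an author's past reports that were later verified,
    smoothed so a first-time reporter is not scored at zero
    (the weakness of history-based ranking flagged above)."""
    verified = history.get("verified", 0)
    total = history.get("total", 0)
    return (verified + 1) / (total + 2)   # Laplace smoothing

def rank_reports(reports: list) -> list:
    """Order reports by their authors' reputation, highest first."""
    return sorted(reports,
                  key=lambda r: author_weight(r.get("author_history", {})),
                  reverse=True)

def popular_tags(reports: list, top_n: int = 5) -> list:
    """Surface the free-form tags that users apply most often."""
    counts = Counter(tag for r in reports for tag in r.get("tags", []))
    return counts.most_common(top_n)
```

    Note that popular_tags only tells you something when many users are tagging, and author_weight says nothing useful about someone reporting for the first time, which is precisely the concern raised in the paragraph above.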

    An Enduring Question

    Verification by the crowd is indeed a large and enduring question for all crowdsourced platforms, not just Ushahidi. The question is how we can facilitate better quality information in a way that reduces harms. One thing is certain: The verification challenge is both technical and social, and no algorithm, however clever, will entirely solve the problem of inaccurate or falsified information.

    Thinking about the ecosystem of deployment teams, emergency personnel, users and concerned citizens and how they interact — rather than merely about a monolithic crowd — is the first place to look in understanding what verification strategy makes the most sense. After all, verification is not the ultimate goal here. Getting the right information to the right people at the right time is.


    Image of the Basílica del Salvador in the aftermath of the Chilean earthquake, courtesy of Flickr user b1mbo.

    Tagged: accuracy, crowdsourcing, data quality, misinformation, Ushahidi, vandalism, verification

