
    The Fake News Challenge Puts AI to the Test

    by Bianca Fortis
    May 30, 2017
    There has been some debate as to whether fake news played a role in the outcome of the presidential election. Photo by Matt Johnson on Flickr, used under a Creative Commons license.

    Long before Nov. 8, 2016, research scientist Dean Pomerleau was concerned about fake news. His Facebook News Feed had been filled with political disinformation during the presidential campaign, and he saw that the stories appeared to be influencing readers’ attitudes toward the candidates.

    Though skeptical, he wondered whether it was possible to create a machine learning tool that could flag fake news stories, and discussed the idea with colleagues on Twitter. Then he issued a challenge: he bet that it could not be done, and asked his colleagues to prove him wrong.

    Delip Rao, the founder of Joostware, which builds artificial intelligence products, contacted Pomerleau and offered his help – and thus began the Fake News Challenge.

    Dean Pomerleau. Photo courtesy the Fake News Challenge.


    Now there are more than 100 teams from 25 countries vying to find the solution to fake news.

    The participants communicate over a Slack channel which has more than 700 members. Most are hackers and researchers who work in artificial intelligence, but there are also quite a few journalists and fact-checkers in the group, Rao said.

    The Challenge

    The organizers realized early on that creating an algorithm that could flag fake news was impossible because machines are incapable of picking up nuances and subtleties in language. There’s also the problem of trying to evaluate opinion or satire pieces, which don’t make a definitive statement about truth.


    Now Pomerleau and Rao are focused on creating tools that can help human fact-checkers. One of the biggest problems for journalists and fact-checkers, they said, is the sheer volume of stories they must evaluate.

    The top fact-checking organizations employ only a handful of fact-checkers who face a “fire hose” of fact-checks every day, explained Pomerleau.

    And the earlier a fake news story is caught, the easier it is to contain, Rao said. So the goal of the Fake News Challenge is to develop AI tools that help fact-checkers do their jobs more efficiently.

    The participating teams were given training data to build their solutions. On June 1, they’ll be given a new set of data on which their algorithms will be tested. The three teams that score the highest will be awarded prizes, which will be determined at a later date, according to the organizers. The top three projects will be released as open source so that news organizations may use them.

    Once the first challenge is complete, its organizers hope to continue with future challenges. One idea for the next challenge is to expand beyond text content to multimedia content.

    “That’s closer to the reality of the fact-checking that people do day to day,” Pomerleau said.

    Stance Detection

    The first challenge is focused on the concept of stance detection.

    Delip Rao. Photo courtesy the Fake News Challenge.

    For fact-checkers, it’s helpful to see what other news organizations have reported about a certain topic or claim. Stance detection automates that process by sorting news articles relevant to a specific claim into three categories: stories that agree with the claim, stories that disagree with the claim, and stories that discuss the claim but don’t take a firm stand.

    The user would be given a grouped set, or bins, of search results, rather than an unsorted list of relevant content. That way, a human fact-checker would be able to more easily determine the validity of a claim by looking at the news sites included in the bins.
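
    As a rough illustration of the idea (and not the challenge’s actual method or data), stance detection can be sketched as a text classifier over pairs of a claim and an article, whose predictions fill the bins described above. The toy examples and the TF-IDF-plus-logistic-regression pipeline below are illustrative assumptions only.

        # A minimal, hypothetical sketch of stance detection: given a claim and a set of
        # article texts, sort each article into an "agree", "disagree" or "discuss" bin.
        # The tiny labeled examples and the model choice are illustrative assumptions,
        # not the Fake News Challenge's actual data or method.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Toy training pairs: each example is "claim [SEP] article snippet" with a stance label.
        train_texts = [
            "Robot passed the bar exam [SEP] Officials confirmed the robot passed the exam.",
            "Robot passed the bar exam [SEP] The claim that a robot passed the exam is false.",
            "Robot passed the bar exam [SEP] Reports are circulating about a robot and the bar exam.",
        ]
        train_labels = ["agree", "disagree", "discuss"]

        # TF-IDF features over word n-grams feeding a simple linear classifier.
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression(max_iter=1000))
        model.fit(train_texts, train_labels)

        def bin_articles(claim, articles):
            """Group articles into agree/disagree/discuss bins for one claim."""
            bins = {"agree": [], "disagree": [], "discuss": []}
            for article in articles:
                stance = model.predict([claim + " [SEP] " + article])[0]
                bins[stance].append(article)
            return bins

        # Example: bin two articles against one claim and print the result.
        results = bin_articles(
            "Robot passed the bar exam",
            ["Officials confirmed the robot passed the exam.",
             "The claim that a robot passed the exam is false."],
        )
        for stance, grouped in results.items():
            print(stance, "->", grouped)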

    The purpose of this is to assess the credibility of an article or the argument within it. For example, the algorithm could help determine whether an article includes citations or just makes unsubstantiated claims.
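
    One hypothetical version of such a signal, sketched below, is a simple check for whether an article contains any surface markers of sourcing (attribution phrases, links, quoted passages). The marker list is invented for illustration and is not part of the challenge.

        # A minimal, hypothetical heuristic for whether an article shows surface signs
        # of citing sources; the marker list is an illustrative assumption, not a
        # vetted feature set from the Fake News Challenge.
        import re

        CITATION_MARKERS = [
            r"according to",
            r"\bsaid\b",
            r"\breported\b",
            r"https?://",       # embedded links
            r'"[^"]{20,}"',     # longer quoted passages
        ]

        def has_citations(text: str) -> bool:
            """Return True if the text matches any surface marker of sourcing."""
            return any(re.search(p, text, flags=re.IGNORECASE) for p in CITATION_MARKERS)

        print(has_citations("The results were replicated, according to the journal."))  # True
        print(has_citations("Everyone knows this is true."))                            # False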

    “The goal is to determine which has the best argument,” Pomerleau explained. “Not just which is the most popular or widely cited or read, the way a search engine does.”

    The Role of Technology in Fighting Fake News

    In the discussions that have taken place around fake news since the presidential election, there’s been a common thread that the answer to the problem is sociological (teaching media literacy, for example) rather than technological.

    Neither Pomerleau nor Rao disagrees with the idea that education is an important aspect of the solution.

    “This is a small part of what needs to be a larger effort to educate people to be savvy consumers and not be duped by manipulative content online,” Pomerleau said.

    Rao said fake news can be used as a form of information warfare.

    “People are vulnerable in general, and people who supply fake news are taking advantage of that vulnerability,” he said. “We need to be out there talking to everybody about what to believe and not to believe. If you see something sensational, your first reaction should not be to hit ‘retweet’ and share it with your friends, but to look at it and ask where it’s coming from. Be introspective before you become trigger happy.”

    However, Rao said, education alone will not suffice.

    He pointed to advance-fee scams, better known as the “Nigerian prince” con, which targeted people over email. Mail service providers worked to detect and block spam, but there was also an educational component in telling people to ignore those types of emails. While fake news is a far more complicated problem to solve, those email scams show how education and technology came together to address the problem, Rao said.

    Bianca Fortis is the associate editor at MediaShift, a founding member of the Transborder Media storytelling collective and a social media consultant. Follow her on Twitter @biancafortis.


