
    Mediatwits #151: How Google’s Truth Algorithm Could Change the Equation

    by Jefferson Yen
    March 13, 2015
    New Scientist's Hal Hodson has concerns about Google's "Knowledge Vault" fact database.

    Many of us take for granted the efficacy of search engines in producing accurate results. Just try to remember what it was like browsing the web in the early aughts and you can see how much things have changed. It’s pretty clear that Google has come out on top in the search engine wars. And even though the company has been pursuing a number of other fields, that doesn’t mean it has forgotten about search. Last month, Google researchers published what might be the next leap forward in search: a new algorithm that would assess websites based on how trustworthy they are. This change could help push misinformation and hoaxes into the background. But some are already crying foul, calling into question how the algorithm determines facts. Google’s dominant position as the go-to search engine means websites can see their traffic go into freefall after an algorithm change. We’ll discuss the future of search this week with Hal Hodson, tech reporter at New Scientist; Lily Hay Newman, staff writer at Slate; and Joanna Rothkopf, assistant editor at Salon. PBS MediaShift’s Mark Glaser will host and Jefferson Yen will produce.

    Don’t have a lot of time to spare but still want to listen to the Mediatwits? Then check out our new Digital Media Brief below!

    "It kind of comes down to whether you want a dictatorial society or not." - Hal Hodson

    Mediatwits: Full Episode


    Digital Media Brief


    Listen to the Mediatwits and follow us on SoundCloud!
    Thanks to SoundCloud for providing audio support.
    Subscribe to the Mediatwits audio version via iTunes.
    Follow @TheMediatwits on Twitter.
    Subscribe to our YouTube Channel.

    MEDIATWITS BIOS

    Mark Glaser is executive editor of MediaShift and Idea Lab. He is a longtime freelance writer and editor, who has contributed to magazines such as Entertainment Weekly, Wired and Conde Nast Traveler, and websites such as CNET and the Yale Global Forum. He lives in San Francisco with his wife Renee and son Julian. You can follow him on Twitter @mediatwit.

    SPECIAL GUESTS

    Hal Hodson is a staff journalist with New Scientist magazine. Born in the UK but based in Boston, Hal covers novel technologies that shape our future and the impacts current systems have on real people today. He has written for The Guardian, The Independent and Wired UK. His reports are routinely cited by the world’s largest media organisations. You can find him on Twitter.

    Lily Hay Newman is a staff writer and the lead blogger for Slate’s Future Tense. She has worked at Co.Labs, Gizmodo, IEEE Spectrum, Popular Mechanics, Discover and Metro NY. Warble with her on Twitter.

    Joanna Rothkopf covers science, health and society for Salon.com. Before that, she spent a year as a Robert Wood Johnson Foundation Fellow in Science and Health Journalism at Columbia University, where she focused on bioethics. She has contributed to Vanity Fair, The Atlantic, The Huffington Post, Foreign Policy and Epicurious.com, among other publications. She lives in Brooklyn. She tweets @JoannaRothkopf.

    BACKGROUND

    The proposed new system would move away from judging webpages by links and instead rely on an internal database to determine the probability that a statement is true. The heart of the new algorithm is the “Knowledge Vault,” what the researchers call a “structured knowledge repository” (think Wikipedia) that is “substantially bigger” than others previously published. Unlike Wikipedia, which relies on human editors to add information, the Knowledge Vault would use machines to automatically extract facts from the web. The goal of the project “is to become a large-scale repository of all of human knowledge,” a goal the researchers acknowledge would be difficult to accomplish even if their machine were perfect.
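    To make the idea concrete, here is a minimal sketch in Python of how a “knowledge-based trust” score might be computed. It assumes a reference store of (subject, predicate, object) fact triples; the example triples, the smoothing constants and the trust_score function are all illustrative inventions, not Google’s actual Knowledge Vault implementation.

        # Hypothetical reference knowledge base: triples assumed to be true.
        KNOWLEDGE_BASE = {
            ("barack obama", "born in", "hawaii"),
            ("earth", "orbits", "sun"),
        }

        def trust_score(extracted_triples, prior=0.5, prior_weight=2.0):
            """Estimate a source's trustworthiness as the smoothed fraction
            of its extracted triples that agree with the knowledge base."""
            if not extracted_triples:
                return prior  # no evidence either way; fall back to the prior
            correct = sum(1 for t in extracted_triples if t in KNOWLEDGE_BASE)
            # Laplace-style smoothing keeps a source with only a few
            # extracted facts from being judged too harshly either way.
            return (correct + prior * prior_weight) / (len(extracted_triples) + prior_weight)

        # Example: a page asserting one true fact and one false one.
        page_facts = [
            ("earth", "orbits", "sun"),
            ("barack obama", "born in", "kenya"),  # contradicts the knowledge base
        ]
        print(f"Estimated trust: {trust_score(page_facts):.2f}")  # 0.50

    A real system would also have to separate extraction errors from genuine falsehoods, since a page can be misread as asserting a fact it never stated; the researchers’ paper models extraction noise and source trustworthiness jointly.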

    Some of the loudest voices opposing the paper worry that the new algorithm could squash contrarian thinking. Though they agree that the current link-based PageRank system is problematic, they contend that trying to rank based on “facts” introduces an evaluative bias. They are also concerned about the system’s ability to assess lexically complex claims.

    Should we welcome the “Knowledge Vault”? Do you think it’s possible for any machine to contain the entirety of human knowledge? Should Google or any other private organization be in charge of that information? Is there a risk that we will be viewing the world through Google-tinted glasses?

    Jefferson Yen is the producer for the Mediatwits Podcast. His work has aired on KPCC Southern California Public Radio and KRTS Marfa Public Radio. You can follow him @jeffersontyen.

    Tagged: censorship, google, google search, internet hoaxes, knowledge vault, misinformation, science

    2 responses to “Mediatwits #151: How Google’s Truth Algorithm Could Change the Equation”

    1. Tom Olin says:

      It all begins with controlling the dissemination of information …
