Many of us take for granted the efficacy of search engines in producing accurate results. Just try to remember what it was like browsing the web in the early aughts and you can see how much things have changed. It’s pretty clear that Google has come out on top in the search engine wars. And even though the company has been branching into a number of other fields, that doesn’t mean it has forgotten about search. Last month, Google researchers published what might be the next leap forward in search: a new algorithm that would assess websites based on how trustworthy they are. This change could help push misinformation and hoaxes into the background. But some are already crying foul, calling into question how the algorithm determines facts. Google’s dominant position as the go-to search engine means websites can see their traffic go into freefall when it changes its algorithm. We’ll discuss the future of search this week with Hal Hodson, tech reporter at New Scientist; Lily Hay Newman, staff writer at Slate; and Joanna Rothkopf, assistant editor at Salon. PBS MediaShift’s Mark Glaser will host and Jefferson Yen will be producing.
Don’t have a lot of time to spare but still want to listen to the Mediatwits? Then check out our new Digital Media Brief below!
Mediatwits: Full Episode
Digital Media Brief
Listen to the Mediatwits and follow us on SoundCloud!
Thanks to SoundCloud for providing audio support.
Subscribe to the Mediatwits audio version via iTunes.
Follow @TheMediatwits on Twitter.
Subscribe to our YouTube Channel.
MEDIATWITS BIOS
SPECIAL GUESTS
Lily Hay Newman is a staff writer and the lead blogger for Slate’s Future Tense. She has worked at Co.Labs, Gizmodo, IEEE Spectrum, Popular Mechanics, Discover and MetroNY. Warble with her on Twitter.
BACKGROUND
The proposed system would move away from judging webpages by links and instead rely on an internal database to determine the probability that a statement is true. At the heart of the new algorithm is the “Knowledge Vault,” what the researchers call a “structured knowledge repository” (think Wikipedia) that is “substantially bigger” than others previously published. Unlike Wikipedia, which relies on human editors to add information, the Knowledge Vault uses machines to automatically extract facts from the web. The goal of the project “is to become a large-scale repository of all of human knowledge,” a goal the researchers acknowledge would be difficult to accomplish even if their extraction machinery were perfect.
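The paper’s actual model is probabilistic and built to run at web scale, but the core idea can be illustrated with a toy example: score a page by how many of the (subject, predicate, object) facts extracted from it agree with a reference knowledge base. The reference triples, the sample page facts, and the trust_score function below are invented for illustration only; the real system also has to separate extraction mistakes from genuine errors on the page, which this sketch ignores.

# Toy sketch of knowledge-based trust: rate a page by the share of its
# extracted facts that match a reference knowledge base. The triples here
# are made up; Google's model estimates these probabilities jointly.

# A tiny stand-in for the Knowledge Vault: (subject, predicate, object) triples.
KNOWLEDGE_BASE = {
    ("barack obama", "place_of_birth", "honolulu"),
    ("paris", "capital_of", "france"),
}

def trust_score(extracted_triples):
    """Return the fraction of a page's extracted facts found in the knowledge base."""
    if not extracted_triples:
        return None  # no facts extracted, so no evidence either way
    matches = sum(1 for triple in extracted_triples if triple in KNOWLEDGE_BASE)
    return matches / len(extracted_triples)

# A hypothetical page that gets one fact right and one wrong scores 0.5.
page_facts = [
    ("paris", "capital_of", "france"),
    ("barack obama", "place_of_birth", "kenya"),
]
print(trust_score(page_facts))  # 0.5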
Some of the loudest voices opposing the paper worry that the new algorithm could squash contrarian thinking. Though they consider the current link-based PageRank system problematic, they contend that trying to rank based on “facts” introduces an evaluative bias. They are also concerned about the system’s ability to assess lexically complex claims.
Should we welcome the “Knowledge Vault”? Do you think it’s possible for any machine to contain the entirety of human knowledge? Should Google or any other private organization be in charge of that information? Is there a risk that we will be viewing the world through Google-tinted glasses?
Jefferson Yen is the producer for the Mediatwits Podcast. His work has aired on KPCC Southern California Public Radio and KRTS Marfa Public Radio. You can follow him @jeffersontyen.