The term “fake news” entered mainstream consciousness in 2016 and has carried on through 2017, for better or for worse.
Whether we call it yellow journalism or fake news, deliberate misinformation has always existed in the media world. But thanks to a combination of social media, Macedonian teenagers and the dawn of a post-truth political era, the conversation around fake news intensified this year. Even the pope issued a warning about the dangers of bogus news.
Many educators expanded their efforts to teach media literacy, but that’s likely not enough. At the beginning of the year, we asked whether it was possible to solve the problem of fake news at all. That remains to be seen, but below is a look at some notable projects that are working to do just that.
Duke University Tech & Check Cooperative
The Tech & Check Cooperative is the brainchild of Bill Adair, who created PolitiFact at the Tampa Bay Times in 2007. Now the Knight Professor of the Practice of Journalism and Public Policy at Duke University, Adair aims to bring fact-checking to new audiences. The Cooperative involves three parts: a suite of fact-checking apps, AI tools that can help automate fact-checking, and collaboration with other fact-checking organizations around the world.
Adair also thinks smart speakers hold some promise for fact-checking efforts. The Cooperative developed an app called Share the Facts, which allows users to fact-check a claim simply by asking an Amazon Echo or Google Home. And now that the Google Home comes with a screen, it could be particularly useful for displaying headlines, Adair said.
The Fake News Challenge
The Fake News Challenge was a collaborative effort that tasked programmers with developing machine learning tools that could help flag fake news stories as a way to support fact-checkers. The goal of the Challenge, which attracted more than 100 teams from 25 different countries, was to build functioning tools and make them available to news organizations. The three top teams won prizes and their solutions are provided open source online. The organizers have plans to host more challenges in the future.
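For readers curious what such a tool looks like under the hood, here is a toy sketch (not any winning team’s actual approach) of one starting signal a story-flagging system might use: lexical overlap between a headline and an article body, which can hint at whether the body actually supports the headline.

```python
import re


def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))


def stance_hint(headline, body, threshold=0.15):
    """Label a headline/body pair 'related' or 'unrelated' by Jaccard overlap.

    Real systems layer trained classifiers over far richer features;
    this only illustrates the shape of the task.
    """
    h, b = tokens(headline), tokens(body)
    overlap = len(h & b) / len(h | b) if h | b else 0.0
    return "related" if overlap >= threshold else "unrelated"


print(stance_hint("Mayor announces new budget",
                  "The mayor announced a new city budget on Tuesday."))
```

A pair flagged “unrelated” by a signal like this would go to a human fact-checker rather than being judged automatically, which is the supporting role the Challenge envisioned.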
Storyzy
Storyzy is another project that uses machine learning to counter fake news, but it focuses specifically on fact-checking quotes. While a written story can contain half-truths, a published quote is either accurate or it isn’t – a more black-and-white scenario where machine learning can be more effective. Storyzy also tracks the growing number of fake news sites (about 22 per month in the U.S. this year), and CEO Stan Motte has written about how that could hurt brands that use programmatic advertising.
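A minimal sketch shows why quote-checking is comparatively tractable: after normalizing cosmetic differences like smart quotes and whitespace, a quotation either appears word-for-word in the source transcript or it does not. (This is an illustration of the general idea, not Storyzy’s actual pipeline.)

```python
import re


def normalize(text):
    """Collapse whitespace and fold smart quotes so cosmetic differences don't matter."""
    text = (text.replace("\u201c", '"').replace("\u201d", '"')
                .replace("\u2019", "'"))
    return re.sub(r"\s+", " ", text).strip().lower()


def quote_is_verbatim(quote, source):
    """Return True if the quote appears word-for-word in the source text."""
    return normalize(quote) in normalize(source)


transcript = "I never said we would raise taxes this year."
print(quote_is_verbatim("we would raise taxes", transcript))
print(quote_is_verbatim("we will raise taxes", transcript))
```

Note the first check passing is also a reminder of the limits of verbatim matching: the words match even though the clipped quote reverses the speaker’s meaning, so context still needs a human eye.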
Facebook, Google & Twitter
In partnership with the Trust Project, Facebook, Google and Twitter announced last month that they will display “trust indicators” on their platforms that will provide extra context about a news site for readers interested in knowing more about a publication before reading a story. The indicators, which are already being used by publications like the Washington Post, the Economist and the Globe and Mail, will also provide information about whether a piece of content is news or advertising.
All three platforms have been heavily criticized for allowing misinformation to run rampant and for allowing Kremlin-linked Russian operatives to purchase hundreds of thousands of dollars worth of advertising with the goal of sowing discord among Americans during the 2016 election. Facebook in particular has come under fire.
While Facebook has taken some steps to smooth over its relationships with publishers and journalists, such as launching the Facebook Journalism Project, it remains to be seen how effective its efforts to stamp out fake news will be. Third-party fact-checkers hired earlier in the year to weed out false stories on the platform said that effort was largely a failure. Facebook also tried an experiment in which it promoted comments under news stories that used the word “fake,” but that served only to infuriate users. Most recently, Facebook announced that it has launched the News and Journalism Accelerator Program to foster start-ups working on media projects; pitches related to fighting fake news are welcome, but only start-ups located in Canada can apply.
CUNY’s News Integrity Initiative
The News Integrity Initiative at the CUNY Graduate School of Journalism is a $14 million fund that aims to provide support for organizations, projects, applied research and events that connect journalists, technologists, academic institutions, non-profits and others. The NII is interested in funding projects that counter misinformation. Here’s information about how to apply.
Photo and Video Verification Tools
There’s evidence that news consumers are generally more trusting of TV news than of other sources of information. But video, as well as photos, can be manipulated too. InVID, the product of a three-year collaborative effort in Europe, offers a suite of tools that simplifies the video verification process.
One of the keys to verifying photos and video is to establish place and time for the images you are trying to verify, according to Aric Toler, an analyst at Bellingcat.
“The vast majority of materials can be verified, or confirmed as fakes, by just establishing where and when they were shot,” he wrote for MediaShift. Geolocation is one of the techniques he uses to do this. For any readers interested in learning more about photo and video verification, Toler has an upcoming online training on the subject.
Bianca Fortis is the associate editor at MediaShift, an independent journalist and social media consultant. She is a founding member of the Transborder Media storytelling collective. Follow her on Twitter @biancafortis.