“Boyfriend Maker,” an app that allowed users to create and chat with a virtual “boyfriend,” topped the App Store in Asian markets and was rated #1 in Japan, the Daily Dot reported. The app used artificial intelligence to learn language from its users. Within months, it was pulled from the App Store after its chat responses became racist, sexist and sometimes even violent. (A Tumblr page compiling screenshots from the app lives on.)
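The failure mode here is easy to see in miniature. A hypothetical sketch (the class and data below are illustrative, not Boyfriend Maker's actual design): a bot that stores user messages verbatim and replays them as replies will echo whatever its users type, offensive or not, unless someone adds a moderation step.

```python
import random

class EchoLearnerBot:
    """Toy chatbot that 'learns' by replaying phrases users typed earlier."""

    def __init__(self):
        self.learned = []  # phrases picked up from users, stored unfiltered

    def chat(self, user_message):
        # Reply with a previously learned phrase, if any.
        reply = random.choice(self.learned) if self.learned else "Hi!"
        # Note: no moderation or filtering before the phrase is stored,
        # so anything a user says can come back out later.
        self.learned.append(user_message)
        return reply

bot = EchoLearnerBot()
print(bot.chat("hello"))     # nothing learned yet, so the bot says "Hi!"
print(bot.chat("anything"))  # replies with a phrase learned from a user
```

The missing ingredient, of course, is a filter between what users type and what the bot is allowed to repeat.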
“It was horrible,” said Hilary Mason, CEO and founder of Fast Forward Labs.
She spoke about the app at a recent media and tech conference to illustrate her point that much of the work on machine learning and artificial intelligence has happened outside the mainstream tech community.
“We have woken up to the fact that we have an opportunity, and in fact a responsibility, to address these problems using the best we have to offer,” she said. “What does armor look like? What do we build as a community to support truth?”
Mason was speaking at Machines + Media, a conference hosted by the NYC Media Lab and centered on the future of artificial intelligence and machine learning in media.
The Future of Machine Learning
Mason said new technical capabilities to personalize, generate and filter content will transform the media.
While these tools will “augment our capabilities to eliminate cognitive drudgery in a lot of the work that we do,” they may also have terrifying capabilities, she said.
“So when we think about, will it eliminate jobs, will it create jobs, I’m thinking of it much more around, what will media even look like when we have full penetration of these technologies and how quickly can we adapt to them?” she said.
She said the process of getting there will likely be chaotic. But it will be a kind of chaos that requires more creativity, more risk-taking and more experimentation.
Amanda Stent, a natural language processing researcher at Bloomberg, said data scientists want to work with journalists to create mutually beneficial solutions.
“I see the newsroom working with data science to create very forward-looking, fast and accurate journalism, both text and multimedia,” she said. “I think it’s incredibly exciting and really helps people who are consumers of media to make better decisions and be better informed.”
John Borthwick, the CEO of Betaworks, said that humans need to begin to discuss the ethical implications of machine learning and to understand how machines interact with human experiences.
“Our sense as human beings that we have this ability to be able to do things that machines can’t do is going to be challenged, and we need to start thinking about this,” he said.
Borthwick pointed to the fact that political bots had a significant impact on how Americans thought about the 2016 presidential candidates and how they voted.
“That is the most fundamental thing we have in our society, the electoral process,” he said. “The fact that the news system was rewired through that process, I think should say to us, this is happening now. We’ve got to start all the re-training and all the thinking and discussion about the ethics needs to start now.”
Borthwick said there’s a dangerous tendency to believe in a “techno-utopia” of the future, which has caused us to ignore unintended consequences of technology, including the way it’s impacted the news industry.
Platforms and Publishers
During a panel discussion about the relationship between platforms and publishers, Cornell Tech Professor of Law James Grimmelmann described that relationship as symbiotic: each needs the other, he said.
“The platforms, in the long run, care much more about publishers in general than any one company,” he said. “In fact, they’re best off when there’s a huge number of interchangeable people all using the platform to get their messages out. Because then none of them have negotiating leverage against the platform.”
Platforms encourage media companies to compete with each other, leaving none of them with much power, he said.
And, Grimmelmann said, by law, platforms are immune from liability for the content posted on them.
If platforms were liable for every harmful thing users said, they couldn’t exist, according to Grimmelmann. But that gives platforms immense discretion. And it also means they don’t need to take any steps to curb the proliferation of fake news.
And “general lies” are protected under freedom of speech laws, he said.
Gilad Lotan, the head of data science for BuzzFeed, pointed out that some platforms have begun to take steps to address fake news.
“Technology won’t solve the problems, but some tools may help mitigate some of the issue,” he said.
Newsroom Best Practices
Marc Lavallee, the executive director of the New York Times’ Story[X], said that evolving technology challenges the current value system of journalism, which is built around winning Pulitzers and other awards.
“There are people who think more about journalism as a process and are open to all kinds of new tools and opportunities to be able to have that same impact, but in a way that incorporates anything they can do to deliver that kind of result,” he said. “Five years down the road, is a journalist someone who is essentially tuning a bot? Yeah, I think so.”
Jeremy Gilbert, the director of strategic initiatives for the Washington Post, said algorithms have freed up time for journalists. Algorithms are also used to show readers different content, such as regional stories, depending on their location.
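That kind of geotargeting can be sketched simply. A hypothetical example (the story list, region codes and function name below are illustrative, not the Post's actual system): readers see national stories plus whatever matches their region.

```python
# Illustrative story pool; None marks a national story shown to everyone.
STORIES = [
    {"headline": "City council passes transit bill", "region": "NYC"},
    {"headline": "Drought update for growers", "region": "CA"},
    {"headline": "Election results roundup", "region": None},
]

def stories_for(reader_region):
    """Return national stories plus any matching the reader's region."""
    return [s for s in STORIES
            if s["region"] is None or s["region"] == reader_region]

for story in stories_for("NYC"):
    print(story["headline"])
```

In practice the reader's region would come from a geolocation lookup on the request, but the selection logic is the same filter shown here.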
Andrew Montalenti, the chief technology officer of Parse.ly, said journalists would be wise to ask themselves what the most unique, non-commodity thing they can do is.
“Because, honestly, that’s the question software engineers ask themselves,” he said. “When we do our work, we write code that is unique. […] And we make sure we don’t spend a lot of time on tools that have already been built and for which there is already automation in place to take care of that problem. And I think that will happen in every industry.”
Something that should never be automated is a publication’s editorial voice, Beth Loughney, the founder of Zorroa, said.
Loughney said she subscribes to both the Washington Post and the New York Times.
“I subscribe because they are different in their editorial focus,” she said. “They are reporting often on the same facts, but they are taking a different approach to it. That’s something I think you cannot automate away.”
Bianca Fortis is the associate editor at MediaShift, a founding member of the Transborder Media storytelling collective and a social media consultant. Follow her on Twitter @biancafortis.