
How to Get Social Media Platforms to Support Private Speech

There are many problems with using commercial technology platforms to host democratic, social, or activist content and communications. These problems came up in multiple sessions at the recent National Conference on Media Reform. There are also obvious reasons to continue using these platforms (audience reach, most notably), and so we do. Some activist efforts that silo communications on more open but relatively unknown platforms strike me as irresponsible if the goal is to reach as many people as possible (but this is a fine line). The more I think about this issue, the more I see potential solutions, and a future, in working with the platform providers to build some degree of flexibility into their products and policies.

[Photo: The spot on the carpet reserved for public ranting at #NCMR13.]

Social media giants have no immediately obvious incentive to participate in such a compromise. First of all, supporting individual humans doesn’t scale at anywhere near the order of magnitude they seek with their software. This model of customer support is perhaps best illustrated by Google, where serious and eminently solvable problems are routed through static FAQ pages or, if you’re lucky, a forum page where a Google developer or superuser might stumble across your concern and provide some hint of illumination as to its origin, or some hope of forthcoming resolution.

Policies dictating which user-generated content crosses the line are designed to protect good taste, not free speech. These commercial operations want to maintain a culture that delivers the most revenue, which often translates to offending the fewest users. As Rebecca MacKinnon noted in discussing her book, “Consent of the Networked,” it’s something of a lucky accident that Twitter has been as good as it has on free speech (and maybe we shouldn’t rely on lucky accidents).

Minimalist design values also demand that product managers strike unnecessary features and superfluous checkboxes from the user experience. Technology companies test this stuff and can show you the very real, very empirical costs of those additional features. But if we think creatively, there are solutions that can scale without expensive back-and-forth with users.

A golden exception, and maybe one of the few, is YouTube’s partnership with WITNESS. At NCMR, I got to sit on a panel with WITNESS’s Human Rights Channel curator, Madeleine Bair. WITNESS was founded on the idea that a planet populated by billions of people with billions of network-connected cameras would reshape our discovery and consumption of media documenting human rights violations.

WITNESS is also one of the few advocacy organizations I’m aware of that has successfully convinced a major social media platform (the second most popular search engine in the world) to take note of its work and alter policies to support its dissemination. For a while, when videos documenting the graphic reality of human rights violations were uploaded to YouTube, they were soon taken down for violating YouTube’s (generally understandable) policies meant to ensure the platform is used in good taste. WITNESS had developed its own video hosting site, but, as in so many other cases, YouTube’s crazy reach and engineering advantage surpassed the homegrown effort when it came to reaching people.

Educate the People Who Make the Decisions

On our conference panel, Bair pointed to the need to help educate technology companies on the issue. WITNESS has apparently done a great job of this; in addition to the policy revision, YouTube linked to the organization’s Cameras Everywhere report when it announced its new facial blur feature. But for most of the social organizations I know, the challenge is finding a human being to talk to in the first place.

I don’t know the specifics of how WITNESS got YouTube on board, but I’m guessing it involved conversations with human rights champions within the company itself.

Update: Last year, Sameer Padania wrote up his firsthand experience at WITNESS developing the partnership with YouTube. It’s basically a much more informed version of this post. Go read it.

The major hurdle for other advocacy organizations seeking similar agreements is that talking to activists all day doesn’t scale for these companies. And when we say scale, remember, we’re talking about billions of posts a day and, in the case of video, more footage than any number of commercially supported human beings will ever be able to review. To begin to make sense of this torrent, YouTube has introduced algorithms that can automatically determine whether a video of a song is a concert recording, a living-room cover, or a music video. The company is now using humans to improve and refine how these algorithms rank hard-for-computers topics like humor and cuteness. As algorithms sort more and more of our content, a little transparency would go a long way.

In an email afterwards, Bair said she sees hope for finding ways to work with these companies to help them out of the delicate situations they increasingly find themselves in:

Of course, there need to be allies within that entity to invite you to the table, but I think as communication companies like YouTube, Facebook, Twitter, etc. are finding themselves in the middle of critical international debates about the freedom of speech, protection of human rights, and safety of activists, they would rather make informed decisions, which is where advocates can find an opportunity to engage in discussions about the potential impacts of those decisions.

Companies Respond More Quickly to Press than to Users

Mothers have organized for years against Facebook’s “community guidelines,” which initially banned photos of breastfeeding. If the areola is visible in a photo, Facebook takes the photo down and can even ban the user who posted it. After rounds of negative publicity, the company now aspires to support such photos, but it still makes mistakes regulating decency across the billions of photos hosted on its site.

Consensus Is Easier to Reach on Some Issues than Others

We’re probably going to have more luck winning platform and policy tweaks that support universally agreeable issues like human rights than ones that touch divisive topics. Making an issue universally agreeable is a larger battle (consider the sea change in Americans’ opinion on gay marriage, driven largely by cultural and interpersonal shifts). In a recent story on this topic, NPR quoted law professor Jeffrey Rosen on the problematic practice of regulating speech based on community norms:

“If Facebook had existed in the 1970s, Rosen says, rules like these could have easily made organizing around, say, gay rights difficult or impossible. He says by definition, transgressive movements, at their founding, are going to offend people.”

But I see reason to hope for progress if we can encourage policy flexibility on the part of technology companies, education from social activists, and creativity from both.

Features that Embed Values into the Tech Itself

We can also work to encourage proactive, values-supporting features, rather than just fight restrictive content policies. An example of this is Flickr’s and YouTube’s support, if not promotion, of Creative Commons licenses. The Creative Commons site has a variety of its own media search engines, but in my communications work, YouTube’s and Flickr’s pools of CC-licensed content cannot be beat. The thing is, many amateur producers are more than happy to share the media they’ve produced; that’s often why they’re posting it on the Internet in the first place. Whether or not they know what Creative Commons licenses are, how restrictive the formal copyright system is, or that they can change Flickr’s and YouTube’s default settings when they upload media directly affects the size of the pool of reusable content. Facebook’s decision not to support such licenses makes getting permission to use any of the gajillions of photos posted there trickier.

Both Sides Should Work with Realities

The world is complicated, and if our speech is going to occur on platforms run by private corporations rather than in public squares, we’re going to have to push these companies to respect the complicated realities of the world we live in. It’s time to start thinking about the carrots and sticks at our disposal, while bearing in mind the crazy scale at which these platforms operate.

P.S. MIT’s Nate Matias helpfully points out the Global Network Initiative, which seeks to help ICT companies navigate this space.

P.P.S. WITNESS is in the running for a Webby for their Voices of the Genocide project. Vote for them!

Matt Stempeck is a Research Assistant at the Center for Civic Media at the MIT Media Lab. He has spent his career at the intersection of technology and social change, mostly in Washington, D.C. He has advised numerous non-profits, startups, and socially responsible businesses on online strategy. Matt’s interested in location, games, online tools, and other fun things. He’s on Twitter @mstem.

This post originally appeared on the MIT Center for Civic Media blog.
