How New Technology Like AI, Drones and Big Data Can Limit the First Amendment

    by Jodie Gil and Vern Williams
    October 11, 2017
    A DJI Inspire drone hovers over campus at Southern Connecticut State University. Photo by Vern Williams

    Evolving technology is prompting new First Amendment challenges. As you prepare materials for your media law, ethics or First Amendment courses, here are six issues to consider adding to the discussion.

    Facial recognition

    The public is becoming more comfortable with having computers track their faces. Facebook introduced facial recognition with “tagged” photos in 2010, and now Apple is using the technology as a security measure on its latest iPhone.

    However, paired with machine learning and large databases, facial recognition could lead to dangerous profiling.


    For example, Stanford researchers recently determined that facial recognition software could predict whether an individual was gay, as reported by The Washington Post. Software at that level of accuracy threatens the right not to divulge one’s sexual orientation, one of the most basic rights when considering free speech.

    Surveillance cameras used in concert with facial recognition – either by law enforcement or corporations – also could cause people to rethink where and when they assemble and protest. Take, for example, the Churchix facial recognition software that tracks attendance at religious services by scanning the crowd and running a check in a database of faces. The company advertises the software can be used by “event managers who want to track event attendance, or by anyone who wants to identify known guests from live or recorded video.”
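Churchix does not publish its matching method, but face-recognition attendance systems generally work by reducing each face to a numeric "embedding" vector and comparing new faces against a database of known ones. A minimal sketch of that general approach, with invented names, vectors and threshold:

```python
import math

# Hypothetical sketch of embedding-based face matching, the general
# technique behind attendance-tracking software. All names, vectors
# and the threshold below are invented for illustration.

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(face_embedding, database, threshold=0.9):
    """Return the best-matching name, or None if no match clears the threshold."""
    best_name, best_score = None, threshold
    for name, known in database.items():
        score = cosine_similarity(face_embedding, known)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy database of two "known guests"
database = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.8, 0.5],
}

print(identify([0.88, 0.12, 0.21], database))
```

The threshold is the civil-liberties pressure point: set it low enough and the system will "recognize" bystanders who were never in the database at all.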

    Artificial intelligence

    As artificial intelligence improves, algorithms are developing their own speech. Does a robot or a Twitter bot have First Amendment rights?


    Consider news stories written by algorithms. The Washington Post used Heliograf, a news bot, to cover the Rio Olympics in summer 2016, and again during the 2016 election to cover more than 500 Senate, House and gubernatorial races. Does the same press protection afforded human journalists extend to Heliograf?
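News bots like Heliograf are commonly described as template-driven: structured data is slotted into pre-written sentence patterns. A minimal sketch of that idea (the template and race data are invented, not the Post’s actual system):

```python
# Hypothetical sketch of template-driven story generation, the approach
# generally attributed to news bots like Heliograf. The template and
# the race record below are invented for illustration.

TEMPLATE = ("{winner} defeated {loser} in the {state} {office} race, "
            "taking {pct:.1f} percent of the vote.")

def write_story(race):
    """Fill the sentence template from a structured result record."""
    return TEMPLATE.format(**race)

race = {
    "winner": "Jane Doe",
    "loser": "John Roe",
    "state": "Ohio",
    "office": "Senate",
    "pct": 52.4,
}

print(write_story(race))
# → Jane Doe defeated John Roe in the Ohio Senate race, taking 52.4 percent of the vote.
```

Even in this toy form, the legal question is visible: the "speech" is generated by code from data, with no human composing the sentence.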

    There are also computer programs that interact with humans, such as Microsoft’s “Tay,” which in 2016 was designed to chat with and learn from millennials. The only problem: Internet users instead turned Tay into a racist troll. That kind of speech is protected for humans, but what about for non-humans?

    Legal scholars Toni M. Massaro and Helen Norton note that other non-humans, such as corporations, have already been extended First Amendment rights. That was solidified in the 2010 Citizens United v. FEC Supreme Court decision. But those corporations are still led by humans. With computers and programs, many questions remain.


    Doxing

    Doxing is the revealing of personal information without the person’s consent. It has been an integral part of the digital environment since the early 1990s. One early example was Neal Horsley’s “Nuremberg Files” website, on which he published the names and addresses of abortion providers. Often cited as one of the first instances of doxing, the site is credited with inciting violence toward some of the doctors listed.

    With the advent of the smartphone in 2007, doxing ascended to a new level. A recent example illustrates how far it has come: many of the pictures taken during the Charlottesville protests were used to identify participants as white nationalists, and at least one person named on Twitter lost his job as a direct result.

    Community censorship has been around for as long as the country itself, and some might celebrate the fact that an individual espousing hateful views has been justifiably punished. However, the same Twitter feed misidentified another individual, damaging the reputation of someone not involved.

    Digital technology has given everyone the ability to exercise freedom of the press as a First Amendment right. The journalistic practice of verification, however, is often ignored by doxers, to the detriment of that freedom.


    Drones

    The law is pretty clear that pictures can be taken while standing on public property. But what about the air above? As drones become more prevalent tools for newsgathering, this question keeps arising.

    The FAA has classified public and commercial airspace, but federal law pertaining to flying over someone else’s property is meager at best, and there are few state statutes or regulations pertaining to the issue.

    One of the most cited examples is United States v. Causby, a 1946 Supreme Court case in which flights 83 feet above a farm were found to intrude on private airspace. That ruling, however, was never meant to determine flight space for flying cameras.

    Equally concerning, from a First Amendment perspective, are content-based restrictions on drone photography. One such issue came up in New Jersey, where a proposed law would have made it illegal to take drone photos of “critical infrastructure.”

    The debate often focuses on safety issues, such as drones damaging the power grid or causing other harm. But new drones are much smaller and have obstacle-avoidance capabilities, so the safety factor may soon be moot. Drones are already being used for infrastructure inspection in several industries. It seems an intrusion on the rights of journalists, then, to preclude rather than simply regulate their use.

    Internet of Things

    Now that your cell phone, television, computer and even refrigerator can be listening devices, legal questions are mounting about the data these devices collect.

    For example, in early 2017, Amazon argued that sound recordings from one customer’s Echo are protected by the First Amendment. The Echo, through a virtual assistant named Alexa, listens to voice commands in order to perform searches, play music and complete other tasks. The information and recordings gathered during this process were requested by Arkansas lawyers for use in a murder trial.

    Amazon eventually released the data with the defendant’s consent. However, the company’s initial arguments against the release claimed that Alexa has First Amendment rights, too.

    “Alexa’s decision about what information to include in its response, like the ranking of search results, is ‘constitutionally protected opinion’ that is ‘entitled to full constitutional protection,’” the lawyers wrote, citing a 2003 case dealing with search engine results.

    Amazon’s lawyers also cautioned against the chilling effects of government access to search history.

    “At the heart of the First Amendment protection is the right to browse and purchase expressive materials anonymously, without fear of government discovery,” Amazon’s attorneys wrote.

    Big data

    It’s not just data collection from listening devices and facial recognition that prompts concerns. Information collected over time gains power once combined, and the risks grow with big data collection of cell phone, shopping, travel and communication transactions.

    This is not a new concern. In the 1989 case DOJ v. Reporters Committee for Freedom of the Press, the Supreme Court ruled that information in FBI rap sheets should not be released to the public because doing so could violate personal privacy. Individually, all the information in a rap sheet is public; once combined, the court ruled, it becomes something different.
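The court’s point is essentially about data joins: records that are harmless on their own can be linked on a common key into a revealing profile that none of the sources contained alone. A toy illustration, with invented records:

```python
# Invented records illustrating how separately public data, once joined
# on a common key (here, a name), forms a dossier no single source held.
arrests   = {"J. Smith": "1998 arrest, disorderly conduct"}
addresses = {"J. Smith": "12 Elm St."}
employers = {"J. Smith": "Acme Corp."}

def build_dossier(name, *sources):
    """Combine every record keyed to the same name into one profile."""
    return {name: [s[name] for s in sources if name in s]}

print(build_dossier("J. Smith", arrests, addresses, employers))
```

Big data systems do this at scale, which is why aggregation, not any single record, is where the privacy harm arises.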

    Interestingly, the Reporters Committee decision focused on the release, not the collection, of dossiers. But groups like the Electronic Frontier Foundation are concerned with the collection, too. In early 2017, EFF urged data brokers – companies that collect and sell personal data – to protect information from government request.

    EFF and others are supporting a proposal in California to prevent local government agencies from releasing data to the federal government when the information could be used to create lists or registries of people based on religion or ethnicity.

    Brave New World

    The rights guaranteed in the First Amendment to assemble, publish and speak freely are being truncated by omnipresent technology. The fear of losing one’s livelihood, or even one’s life, for speaking out, joining a group or practicing the “wrong” type of medicine narrows the conversation needed to sustain a healthy democracy.

    As case law develops around these issues, it’s important to help students critically think about the implications of evolving technology.

    Jodie Gil and Vern Williams are assistant professors of journalism at Southern Connecticut State University. 
