
    How Bots Are Threatening Online Discourse

    by John Gray
    December 11, 2017
    A scene from the Robotville exhibition at the Science Museum in London (Photo by Oli Scarff/Getty Images)

    The 2016 US Presidential election elevated the issue of social media bots and “fake news” to an unprecedented level of attention. Yet for all of the headlines, it’s an issue that is both complex and, in many ways, misunderstood.

    Social media bots (automated software designed to perform specific tasks, such as retweeting or liking specific content on Twitter) are a problem because of the part they play in spreading misinformation. What’s still lost in the commentary, however, is that simply proving bots’ existence, let alone purging them from social platforms, is extremely difficult. As these fake entities mesh deeply into the fabric of cyberspace, the biggest threat they pose is turning it into a place devoid of human discourse.


    Since the beginning of 2017, my team at Mentionmapp Analytics has been tracking bots, sockpuppets (real people operating behind fake profiles, usually promoting specific points of view), and the flow of misinformation using our network visualization application.


    These are the key lessons we’ve learned so far:

    • There’s no doubt that people with ill intent are highly creative.
    • Technology alone (e.g. trying to create “good” bots to confront “bad” bots) is unlikely to solve the problem.
    • Bot personas are evolving at a rapid pace. Creators are honing their craft daily to fool both human readers and the platforms’ detection systems.
    • Bot network operators will adjust their tactics to stay in the game. There’s more to spreading misinformation than just high volume or distributing it like spam. Manipulating and inflating social metrics is an important way to game platform algorithms; for instance, some profiles exist simply to “like” certain tweets (a sketch of one such check follows this list).
    • Bots are being deployed by interests across the full political spectrum.
    • It’s also more than just a Russian issue. Bots are employed by other states, their proxies, and sometimes people simply trying to make a buck.
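    One simple version of that metric-inflation check can be sketched in a few lines: flag accounts whose activity is almost nothing but likes. This is an illustrative sketch, not our production tooling; the profile fields and thresholds are assumptions.

```python
# Sketch: flag profiles whose activity is almost entirely "likes",
# a pattern consistent with accounts that exist to inflate metrics.
# The profile fields and thresholds are illustrative assumptions.

def looks_like_amplifier(profile: dict,
                         like_ratio_threshold: float = 0.95,
                         min_actions: int = 200) -> bool:
    """Return True if a profile's activity is overwhelmingly likes."""
    likes = profile.get("likes_given", 0)
    tweets = profile.get("tweets", 0)
    total = likes + tweets
    if total < min_actions:
        return False  # too little activity to judge either way
    return likes / total >= like_ratio_threshold

# Example: an account with 4,900 likes and only 12 tweets gets flagged.
print(looks_like_amplifier({"likes_given": 4900, "tweets": 12}))  # True
```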

    The major trends

    Over the course of the past year, we have observed and documented the proliferation of bots in online conversations. From political conversations around the Dutch, British, and German elections to the anti-vaccination debate in the US, seeing the mechanisms of misinformation at work firsthand has been troubling.

    The major bot and sockpuppet patterns and trends we’ve observed include the following (a rough detection sketch follows the list):

    • Clusters of profiles retweeting or liking content within a tight time window of the original post.
    • New profiles, with few followers and little account activity, receiving retweets in the hundreds (or thousands).
    • Computer-generated profile display names and handles (with stolen profile photos, mismatched genders, and a variety of seemingly unlikely language used given the person’s stated location).
    • Automated behaviors that allow profiles to generate responses to conversations at speeds and volumes only computers could achieve.
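    Here is a rough sketch of how the first two patterns might be screened for programmatically, assuming each retweet record carries timestamps, the retweeting account’s age, and its follower count. The field names and thresholds are hypothetical.

```python
# Sketch: screens for the first two patterns above, over hypothetical
# retweet records. Field names and thresholds are assumptions, not
# Twitter's actual API schema.
from datetime import timedelta

def burst_retweets(retweets: list, window=timedelta(minutes=5)) -> list:
    """Pattern 1: retweets landing within a tight window of the
    original post, suggesting coordinated amplification."""
    return [rt for rt in retweets
            if rt["retweeted_at"] - rt["original_posted_at"] <= window]

def new_account_amplifiers(retweets: list,
                           max_age_days: int = 30,
                           max_followers: int = 50) -> list:
    """Pattern 2: retweets from young, low-follower accounts that
    are nonetheless amplifying content by the hundreds."""
    return [rt for rt in retweets
            if rt["account_age_days"] <= max_age_days
            and rt["followers"] <= max_followers]
```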

    Overall the goals of social media bots are to distract, influence, or suppress everyday online conversations. In the case of sockpuppets, it’s the deliberate deception and misrepresentation of the digital self that’s cause for concern. Sockpuppets are the equivalent of paying protesters to show up to your rally: the optics suggest it’s a cause that the public cares about and supports, when it’s really a front to manipulate perceptions and opinions about an issue.

    #Geertwilders and #FakeNews

    The political events of late 2016 pushed my team to start thinking about digital communications in a different light, and to consider how we could apply our Twitter visualization business in a completely new way. We’ve transitioned from building a niche Twitter marketing tool to researching and identifying the wide variety of bots operating on Twitter and understanding their behaviors.

    We began our first case study in February 2017, tracking the hashtag #Geertwilders in the lead-up to the Dutch general election. We uncovered a group of 26 fake profiles acting as amplifiers in an orchestrated fashion. The profiles shared near-identical “fingerprints,” such as join date, follower-to-following ratio, and tweets-to-likes ratio, and all of them retweeted one specific tweet connected to @abermans, a profile no longer on Twitter. It was our first clear picture of computational propaganda in action.
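    A simplified version of that fingerprint matching could look like the sketch below: bucket profiles by join date and coarsely rounded ratios, then surface any bucket with many members. The field names, rounding granularity, and minimum cluster size are illustrative assumptions.

```python
# Sketch: group profiles by a coarse behavioral "fingerprint"
# (join date plus rounded ratios); large buckets of near-identical
# profiles are candidates for coordinated amplification.
# Field names and thresholds are illustrative assumptions.
from collections import defaultdict

def fingerprint(profile: dict) -> tuple:
    follow_ratio = profile["followers"] / max(profile["following"], 1)
    tweet_like_ratio = profile["tweets"] / max(profile["likes_given"], 1)
    # Round aggressively so near-identical profiles collide.
    return (profile["joined"],
            round(follow_ratio, 1),
            round(tweet_like_ratio, 1))

def suspicious_clusters(profiles: list, min_size: int = 10) -> dict:
    buckets = defaultdict(list)
    for p in profiles:
        buckets[fingerprint(p)].append(p["handle"])
    return {fp: handles for fp, handles in buckets.items()
            if len(handles) >= min_size}
```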

    We’ve also seen bots use the hashtag #FakeNews to intimidate and harass journalists and activists. On August 28, 2017, five ProPublica journalists and the Open Society Foundations saw their accounts overwhelmed by a massive bot attack.

    Watching one tweet generate over 13,000 retweets and likes in the course of four hours was both a valuable learning experience for us and deeply disheartening. By the time the tweet had drawn more than 21,000 retweets and likes, barely eight hours later, the incident had removed any doubt that real online human discourse is under threat. Below is a small snapshot of what we saw, and the tweet that started the deluge.

    The first tweet in the ProPublica #FakeNews bot attack (Image courtesy John Gray)

    Data visualization of the #FakeNews bot attack against ProPublica (Image courtesy John Gray)
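    For a sense of scale, 21,000 engagements in eight hours works out to roughly 44 per minute, sustained hour after hour, a pace organic audiences rarely hold. A velocity check along those lines is one simple screen; the cutoff below is an illustrative assumption.

```python
# Sketch: engagement velocity as a coarse screen for amplification.
# The cutoff is an illustrative assumption, not a published threshold.

def engagement_velocity(engagements: int, hours: float) -> float:
    """Engagements (retweets + likes) per minute."""
    return engagements / (hours * 60)

# The ProPublica attack: ~21,000 retweets and likes in ~8 hours.
rate = engagement_velocity(21_000, 8)
print(f"{rate:.1f} engagements per minute")  # ~43.8, sustained

SUSPICIOUS_RATE = 20  # assumed cutoff for a typical account's reach
print(rate > SUSPICIOUS_RATE)  # True
```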

    Is there a solution?

    The online information ecosystem is messy for many reasons. Easy-to-access software automation tools, mixed with open platforms of distribution, make for a complex situation. In Pew Research Center’s study The Future of Truth and Misinformation Online, 51 percent of those surveyed (my team included) were not optimistic about the situation improving over the next 10 years. Still, we aren’t discouraged from working to be part of a solution.
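    Part of what makes those automation tools so troubling is how little code a bot actually requires. The sketch below, which assumes the classic tweepy 3.x API and placeholder credentials, shows the skeleton of a retweet bot in about a dozen lines; it is illustrative only.

```python
# Sketch: the skeleton of a simple retweet bot, to show how accessible
# automation is. Assumes the classic tweepy 3.x API; the credentials
# and hashtag are placeholders.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Retweet recent tweets matching a hashtag, a few at a time.
for status in tweepy.Cursor(api.search, q="#SomeHashtag").items(10):
    try:
        api.retweet(status.id)
    except tweepy.TweepError:
        pass  # already retweeted, deleted, or rate-limited
```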

    One part of the solution is to facilitate conversation around the issue. Drawing on more than one hundred investigations, and out of a commitment to being part of an informed conversation, we wrote the ebook “Ecosystem of Fake: Bots, Misinformation, and Distorted Realities,” which looks at the trajectory of online information as it has moved through the “post-truth” and “post-fact” eras.

    Bots are like crabgrass: they are impossible to eliminate completely. The bot occupation of cyberspace is a significant problem and still an emerging conversation. With more awareness, discussion, and action around the problem, we can begin cleaning up the havoc created by software automation and those who sow misinformation in our public conversations.

    John Gray is CEO & co-founder of Mentionmapp Analytics Inc. and a freelance writer. He investigates Twitter interactions unfolding between real people, sockpuppets, and bots, with an eye to how misinformation is impacting our socio-political discourse. John has a Bachelor of Applied Science (Communications) and a B.A. (English), both from Simon Fraser University.


