An unintended consequence of the ubiquity of the Internet, particularly of social media, is the rise of the troll. In some cases trolls post comments of such unbelievable vitriol that, made in person and in public, they might lead to arrest and a psych evaluation. The vitriol then turns viral and the helpless target is bombarded with hate speech. But you can't simply suspend the rights of trolls. Speech is protected, at least up to a point, in many countries, and few of us could honestly claim never to have indulged in a heated response to a post or email. We may not be as vile as the worst offenders, but we share some of their traits.
In fact, theories of what drives trollish behavior seem to be in flux. The accepted wisdom is that many of these people are socially awkward misfits (particularly teens and young adults) working out aggression through the anonymity of the Internet. But recent research suggests that many trolls are proud of their opinions, which they feel reflect social norms they want to defend. They are quick to anger, and in that state perhaps less aware of crossing lines in self-expression, but mostly they are happy to be identified and to garner credit among like-minded thinkers for their vigorous defense of those norms. Clickbait and echo chambers certainly play on this all-too-human weakness.
So “outing” trolls won’t necessarily help, and since none of us is perfect we ought to recognize that we too might be tempted to indulge in trollish behavior. Perhaps it would be preferable to block bad posts rather than bad posters, which requires some way to recognize those posts and to decide how to respond. Social media providers are working on various systems along these lines. Twitter, one of the most visible platforms for troll attacks, has an interesting approach in Periscope, which depends on users rather than machine learning to decide whether a comment is abusive or offensive. As soon as one reader reports a comment, Periscope polls a randomly selected jury of other users viewing the same content on whether they also find it offensive or abusive. If found guilty, the commenter is put in a one-minute timeout and their comments are disabled. Repeat offenders are muted outright. A nice approach, depending on human rather than artificial intelligence, and (I would think) difficult to game given the random jury selection.
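To make the flash-jury idea concrete, here is a minimal sketch of how such a flow might be wired up. It is purely illustrative; the class names, jury size, and sanction rules are my own assumptions, not anything Twitter has published.

```python
import random
from dataclasses import dataclass

@dataclass
class User:
    name: str
    strikes: int = 0
    muted: bool = False
    timeout_seconds: int = 0

@dataclass
class Comment:
    author: User
    text: str
    hidden: bool = False

JURY_SIZE = 5         # assumed number of jurors polled per report
TIMEOUT_SECONDS = 60  # first offense: a one-minute timeout

def moderate_reported_comment(comment, viewers, is_abusive_vote):
    """Poll a random jury of current viewers; sanction the commenter on a majority verdict."""
    jury = random.sample(viewers, min(JURY_SIZE, len(viewers)))
    guilty_votes = sum(1 for juror in jury if is_abusive_vote(juror, comment))
    if guilty_votes > len(jury) / 2:
        comment.hidden = True
        comment.author.strikes += 1
        if comment.author.strikes == 1:
            comment.author.timeout_seconds = TIMEOUT_SECONDS  # first offense
        else:
            comment.author.muted = True                       # repeat offender
        return "abusive"
    return "ok"
```

The point of the random, per-report jury is that an attacker cannot easily stack the vote: they would have to control a large fraction of the viewers of that specific content at that specific moment.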
Then again, Twitter may not have moved fast enough. According to Jim Cramer, Salesforce.com may have walked away from an acquisition in part because of the public perception of hatred associated with Twitter traffic. That should be a reminder to other social platforms: it's not just about being morally righteous, it's also about company valuation.
At Google, a group called Jigsaw has developed (and no doubt continues to develop) a capability called Conversation AI. This is a machine-learning approach trained on 17 million comments on New York Times stories, with moderator flags on offensive/abusive comments, plus data from Wikipedia discussion logs flagged through a crowd-sourced service. Google claims it can now match the judgments of a human panel with ~90% certainty and a ~10% false-positive rate. Not bad, but I'm pretty sure those rates need to improve quite a bit to reach reasonable 1st Amendment standards. Meanwhile, Google is planning continued trials with the NYT and Wikipedia.
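For flavor, here is a toy sketch of the kind of supervised toxicity classifier Conversation AI represents: train on moderator-labeled comments, then score new text. It uses scikit-learn and a tiny stand-in dataset; Jigsaw's actual models and training corpora are of course far larger and different in detail.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in for the millions of moderator-labeled comments (1 = flagged as abusive).
comments = [
    "thanks for the thoughtful article",
    "you are an idiot and should be banned",
    "interesting point, though I disagree",
    "go crawl back under your rock, loser",
]
labels = [0, 1, 0, 1]

# Bag-of-words features plus a linear classifier: a simple baseline, not Jigsaw's model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Probability a new comment would be flagged; a real deployment would tune the
# decision threshold to trade false positives against missed abuse.
print(model.predict_proba(["what an idiot"])[0][1])
```

The accuracy and false-positive figures quoted above are exactly where such a threshold bites: loosen it and you miss abuse, tighten it and you censor legitimate speech.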
An interesting sidebar here is that Conversation AI was inspired in part by work Riot Games has done on moderating player behavior in its massively multiplayer League of Legends world. Riot Games uses machine learning to analyze conversations that have led to players being banned, and from that analysis it can show players in real time which aspects of their comments are offensive or abusive. According to the company, providing this feedback has led to a 92% drop in offending behavior, which to me is an indicator that nipping the problem in the bud may be more effective than post-facto censorship.
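Here is a hypothetical sketch of that "nip it in the bud" loop: score a chat message before it is sent and, if it looks abusive, tell the player why rather than banning them after the fact. The word list is only a placeholder for a trained classifier like the one sketched above; nothing here comes from Riot's actual system.

```python
# Stand-in term list; a production system would use a trained model instead.
ABUSIVE_TERMS = {"idiot", "loser", "trash"}

def pre_send_check(message: str):
    """Return (ok, feedback) before the message is posted to chat."""
    flagged = [w.strip(".,!?") for w in message.lower().split()
               if w.strip(".,!?") in ABUSIVE_TERMS]
    if flagged:
        return False, "This may come across as abusive (flagged: %s). Edit before sending?" % ", ".join(flagged)
    return True, ""

ok, feedback = pre_send_check("You played like trash, loser")
if not ok:
    print(feedback)  # shown to the player in real time, before the message posts
```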
Facebook doesn't seem to be as active in this area (at least publicly), perhaps because you connect mostly to friends and can unfollow or unfriend anyone who offends you. It does have some capability to detect a related problem: someone impersonating your account with the same name and profile. It is also testing methods to detect intimate images posted as revenge porn. In both cases, the potential victim is notified but must choose to have action taken (to avoid the problems of purely automated responses).
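As a sketch of that notify-then-confirm pattern (my own illustration, not Facebook's code, and the photo-hash comparison is an assumed stand-in for whatever matching they actually do): the system detects a likely impersonator but takes no action until the potential victim explicitly asks for it.

```python
from dataclasses import dataclass

@dataclass
class Account:
    user_id: int
    name: str
    photo_hash: str  # e.g. a perceptual hash of the profile photo (assumed)

def looks_like_impersonation(original: Account, candidate: Account) -> bool:
    return (candidate.user_id != original.user_id
            and candidate.name == original.name
            and candidate.photo_hash == original.photo_hash)

def handle_detection(original, candidate, notify, victim_confirms, take_down):
    """Notify the potential victim; act only if they explicitly ask for it."""
    if looks_like_impersonation(original, candidate):
        notify(original, candidate)               # tell the potential victim
        if victim_confirms(original, candidate):  # no purely automated takedown
            take_down(candidate)
```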
This seems like an area where collaboration between providers is needed, perhaps even more than in domains like general AI. We could even dream that similar methods might encourage a general rise in the level of civility in online debate, out of which all kinds of wonderful things might happen (sane and effective government, to pick just one random example). Details on the Google Jigsaw and Riot Games work come from this Wired article. A Twitter Periscope article can be found HERE, an article on the characterization of trolls HERE, and what I could find on Facebook's work in this area HERE.