I’ve been working as a moderator for two years now, and the attacks on Caroline Criado-Perez and my realisation that Twitter doesn’t have an easy ‘report abuse’ mechanism made me think about exactly how reporting might work on Twitter. And then, since I’m here, I’ve whacked out some ponderings on moderation in general. Because, y’know.
At the moment if you want to report an abusive user you have to fill in a massive form. I mean, it’s huge. It wasn’t easy to find, either. That might be fine if you’ve got one person who’s repeatedly attacking you, but if you’re experiencing a sustained campaign of abuse, it’s just not practical.
There’s a page in Twitter’s help section that says you can report abuse from the individual tweet page by clicking on the ‘more’ option, but this doesn’t appear for me. Presumably it’s what Twitter’s Tony Wang is referring to when he says the company is trialling new ways to report.
God, yes. Happens all the time. I see a lot of comments about football being reported by fans of rival teams. But it’s all in the implementation. A fully or partially automated system is definitely open to abuse because machines can’t appreciate nuance, and anything that triggers an automatic, temporary suspension (which I believe happens at the moment) is just asking to be abused by anyone who wants to attack an account proffering an opposing point of view.
If I were developing this, it would be managed entirely by humans. If you report a tweet or an account it would vanish from your timeline so you don’t have to look at it any more (this is what Twitter is already trialling) but it stays live. A human being then assesses the tweet against Twitter’s published guidelines and makes a judgement on whether a suspension or total ban is needed.
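To make that concrete, here’s a rough sketch of the flow I have in mind (in Python, with invented names and structures – this is an illustration of the idea, not anything Twitter actually runs): the reported tweet is hidden from the reporter only, and every report waits for a human decision.

```python
# A minimal sketch of the report-then-human-review flow described above.
# All names and structures here are invented for illustration.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Report:
    tweet_id: int
    reporter_id: int

@dataclass
class ModerationQueue:
    pending: deque = field(default_factory=deque)
    hidden_from: dict = field(default_factory=dict)  # tweet_id -> set of reporter ids

    def file_report(self, report: Report) -> None:
        # The tweet disappears from the reporter's timeline only; it stays
        # live for everyone else until a human has looked at it.
        self.hidden_from.setdefault(report.tweet_id, set()).add(report.reporter_id)
        self.pending.append(report)

    def review(self, decide):
        # 'decide' stands in for the human moderator: given a tweet id, they
        # judge it against the published guidelines and return
        # 'no action', 'suspend' or 'ban'. Nothing happens automatically.
        decisions = []
        while self.pending:
            report = self.pending.popleft()
            decisions.append((report.tweet_id, decide(report.tweet_id)))
        return decisions
```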
Note that I say Twitter’s published guidelines. They’re there so that everyone knows where they stand, and assessments can only be made against them. That goes for every site, everywhere. From what I’ve seen, some of the threats against Criado-Perez would fall foul of the current guidelines (“You may not publish or post direct, specific threats of violence against others.”), though the guidelines aren’t very detailed1. You might not like the guidelines (Facebook’s stupid ‘no breastfeeding photos’ rule is one example) but that’s another issue2.
And moderators aren’t idiots. A malicious report is obvious and I don’t act on it. It doesn’t matter if 1,000 people report the same comment, if it doesn’t break the guidelines it stays. I’d rather 1,000 people didn’t report the comment because that creates a lot of work, but I’m not going to cave under sheer numbers. I’d also rather wade through 999 false reports than not have a decent mechanism for one that’s genuine.
I’ve seen a concern that an easy ‘report abuse’ button on Twitter could be taken up by celebrities looking to sic their followers on someone. But if implemented intelligently, the system could actually spell the end of that kind of abuse of power. If I get a load of reports in a short timeframe, it’s fairly obvious they’ve been sparked by something specific. If a constantly reported tweet was directed at someone famous, it’s the work of a few minutes to check that celebrity’s timeline and see if they unleashed the mob. In such a situation I think a short sleb suspension would be in order; banning people for malicious reporting does happen, and I’ve done it.
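For what it’s worth, the ‘load of reports in a short timeframe’ check doesn’t need to be anything clever. A crude sketch (thresholds plucked out of the air, and note it only flags things for a human to look at – it never acts on its own):

```python
# Rough illustration only: flag tweets whose reports arrive in a suspicious
# burst, so a human can check whether a celebrity set their followers loose.
from collections import defaultdict

def flag_report_spikes(reports, window_seconds=600, threshold=50):
    """reports: iterable of (tweet_id, unix_timestamp) pairs."""
    by_tweet = defaultdict(list)
    for tweet_id, ts in reports:
        by_tweet[tweet_id].append(ts)

    flagged = []
    for tweet_id, times in by_tweet.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            # Shrink the window until it spans at most window_seconds.
            while times[end] - times[start] > window_seconds:
                start += 1
            if end - start + 1 >= threshold:
                flagged.append(tweet_id)
                break
    return flagged
```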
Yup. That’s what happens if you want decent moderation. Sorry, Twitter, you’re going to have to increase your staff. And sorry, Twitter users, you’ll probably have to put up with more adverts to pay for them.
And anyway, Twitter currently has the capacity to suspend accounts because an avatar breaks a trademark (as a friend of mine found out). If it can do that, it can adequately deal with abuse as well.
Isn’t it? In my job, if I see something that’s a clear and credible threat I pass it up the management chain and it could potentially end up with the police. If the tweets sent to Criado-Perez containing her address and a threat to rape her had been posted somewhere I was working, I would have immediately flagged them. The same goes if someone seems to be a credible suicide risk. I’m here to protect as well as moderate – partly because I’ve been trained to have knowledge of certain laws and am perhaps better positioned to spot breaches than your average Joe. I don’t see why Twitter should put all the responsibility onto its users.
It is just a form. But as I said above, it’s not practical if you have more than a couple of accounts to report. I’ve never experienced a tirade of abuse directed at my personal accounts, but I have at work.
It started with a trickle of messages into the software I was using to moderate a Facebook page. They were all the same, clearly a copy and paste job being directed from somewhere else. But within a minute there were 100 messages waiting to be processed in the queue, then within five minutes there were 1,000. And it wasn’t slowing down. I dealt with it quickly because a) I’m a pro, b) I had the right tools at my disposal and c) none of it was directed at me personally. I cannot imagine having to report each one in a laborious process, without colleagues to call on, in my own time, and having to assess and explain in detail why each horrible comment directed at me broke the guidelines.
And I’m having genuine difficulty thinking of a website or social media platform that doesn’t have some form of one-click reporting. If you create a platform you have to accept some responsibility for the safety and peace of mind of your users.
I don’t know; find some kind of common ground? Ask Facebook how it does it? It’s not like this stuff hasn’t been worked out by other platforms. Edit: to be clear, I’m talking about Twitter asking Facebook how it’s worked out complying with basic legal requirements across various countries, not how Facebook moderates.
[The section that follows has apparently been read by some people as me advocating the following be applied to Twitter. Hells, no! It’d be unworkable and massively undesirable. The following is more of a background to me, as an agency moderator, working on company sites and pages, which all have more restrictive guidelines than the much more laissez faire basic platform guidelines of Twitter and Facebook et al.]
Not if you employ decent people. You wouldn’t believe the amount of stuff I’ve let stand even though I found it foul and deeply offensive. If it passes site guidelines it doesn’t get touched; if a moderator doesn’t abide by this rule they won’t be in the job very long.
99 times out of 100, this won’t happen because companies are now savvy enough to realise this is brand suicide. If there genuinely are no negative comments, then the company is stupid and you probably shouldn’t do business with them. Or they’re the most amazing company and nobody’s ever had a problem. Hahahahaha. I’m kidding. Someone, somewhere, will always have a problem that the best place to air is Facebook.
Have you? Or have you seen people complaining that their comment got removed ‘for no reason at all’? When in actual fact (and when I see people complaining while I’m working, I always check back) the comments got deleted for very, very obvious reasons. If someone is insistent that their comment got removed because it somehow reflected badly on the site, look around for a moment and see how many other negative comments there are. I’ll bet there are loads, but none of them is calling the CEO a fuckwit.
You have no protected human right to heap abuse on anyone or to say ‘motherfucker’ on Facebook. Grow up.
OK, let’s talk this one out. If you consider it censorship to remove comments containing severe swearing (moderators genuinely have a list of permitted swearwords; e.g., ‘arse but not arsehole’, ‘one shit but no more’), then OK, it’s censorship, but no more so than the TV watershed. You may be posting in a public space, but if you’re posting on someone else’s page or site it’s still ‘owned’ by them, and they have a right to set the tone of the discussion.
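If you’re wondering what such a list looks like in practice, here’s a toy version (in Python; the words and limits are examples I’ve made up, not any real client’s rules):

```python
# Toy per-word rules of the 'arse but not arsehole, one shit but no more' kind.
import re

RULES = {
    "arsehole": {"allowed": False},           # never
    "arse":     {"allowed": True},            # fine as a whole word
    "shit":     {"allowed": True, "max": 1},  # one is tolerated, more isn't
}

def passes_swear_rules(comment: str) -> bool:
    text = comment.lower()
    for word, rule in RULES.items():
        # \b keeps 'arse' from matching inside 'arsehole'.
        count = len(re.findall(rf"\b{re.escape(word)}\b", text))
        if count and not rule["allowed"]:
            return False
        if count > rule.get("max", float("inf")):
            return False
    return True
```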
More contentious may be the removal of mindless comments like “[product] is shit” or “[name of competitor] FTW!”. This is a circumstance where those complaining that their negative comment got removed have some grounding, but not a huge amount. Firstly, it’s easy trolling. You have a problem with a company? Fine, write it out properly. But mainly these types of comments can get removed because of what happens afterwards. Generally 20 other, equally immature, commenters leap in with inventive suggestions of how the original poster can go fuck him- or herself. It derails a conversation and is unpleasant to read. Someone on Twitter suggested that keeping social media a ‘pleasant’ place sounds a bit Stepford. Maybe. But I’ve seen the alternative and I prefer it this way.
Another major reason for ‘censorship’ is legal grounds. Your comment may be deleted because it’s breaching copyright or trademark law, contempt of court law (not many people understand this one; as a general rule, if a trial is ongoing, don’t write a comment saying “hang him” because you’ve just assumed guilt), or falls into the category of illegal hate speech. In those circumstances you should be pleased your comment got removed; we, as moderators, just saved you from getting your ass sued à la Lord McAlpine.
Twitter won’t be moderating on this kind of scale though; and nobody is going to ban an account for general use of ‘cunt’. Though if all an account is doing is calling people cunts, that could be enough for a ban if it’s reported. And that needs a human being to make a judgement.
OK. You’re entitled to your opinion. But may I humbly suggest that if you feel this strongly that you should be able to express yourself on whatever subject and in whatever manner you choose, then perhaps the Waitrose Facebook page3 is not the place for you?
Yes. But as I’ve already said, moderators aren’t stupid. Trolls, on the other hand, often are. It is usually so obvious when an account is a sockpuppet or secondary account; when I read this New Statesman article about the gamification of trolling (in essence: trolls are proud of their behaviour so want duplicate accounts to be recognisable) I nodded in agreement so much I was in danger of pulling a neck muscle. Usernames are similar; email addresses used to sign up are variations on a theme; syntax, spelling, arguments are all very familiar. I worked on one site where one user had at least 30 aliases; when we banned him on a Tuesday I’d be waiting for him to reappear on the Wednesday. Eventually he got bored and pissed off.
That kind of thing is easier to keep track of when you’re working on a small site. As for Twitter, they’d be foolish not to have some kind of database where they can check suspicions about duplicate accounts. However, creating a duplicate account isn’t in itself necessarily grounds for banning – though if new accounts are being created specifically to abusively troll, they’ll break the guidelines pretty quickly. My advice would be to report on the basis of rule breaking and add suspicions of a troll returning as a secondary matter; if you report just because, after a couple of tweets, you think your troll has returned, you’ll get yourself a reputation for malicious reporting.
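As an illustration of the ‘variations on a theme’ point above, here’s a crude similarity check (names and the cutoff are invented) of the sort that could prompt a human to look closer – it would never be grounds for a ban on its own:

```python
# Toy sketch: flag new usernames that look like variations on previously
# banned ones. A match is a hint for a moderator, not a verdict.
from difflib import SequenceMatcher

def likely_aliases(candidate, banned_names, cutoff=0.8):
    """Return previously banned usernames that closely resemble the candidate."""
    base = candidate.lower().rstrip("0123456789_")  # ignore trailing digits/underscores
    matches = []
    for name in banned_names:
        known = name.lower().rstrip("0123456789_")
        if SequenceMatcher(None, base, known).ratio() >= cutoff:
            matches.append(name)
    return matches

# e.g. a fresh 'TrollGuy3' would flag earlier bans of 'trollguy' and 'troll_guy_2'
print(likely_aliases("TrollGuy3", ["trollguy", "troll_guy_2", "nice_user"]))
```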
Added 30 July: I forgot to say that of course, spammers set up different accounts as soon as the originals are taken down. But I don’t see anyone saying ‘oh, we should just block and ignore them, they’re entitled to advertise their fake Viagra pills if they want’.
I’m dead inside.
1. I’ve often thought it would be helpful for many websites to expand on their published guidelines. I’ve worked on several sites where guidance to moderators runs to several pages, but the only guidance to users is a couple of sentences of impenetrable legalese. Then they wonder at users getting pissed off when they don’t understand why comments get removed.
2. I’ve seen that Twitter have apparently rejected a report of “I will rape you when I get the chance” as it doesn’t violate their rules. I guess they’re only looking for what we’d call ‘specific, direct threats’; in other words, they want a time and a place, or some other indication that the user genuinely intends to rape the target. I’m hoping that this is because Twitter doesn’t currently have the resources to properly investigate this kind of abuse and not because it doesn’t think it’s their job to act. On any project I’ve worked on, a comment like this would be immediately blocked and the user potentially banned for unacceptable abuse. I’d also be interested to know if the police have a lower threshold for triggering an investigation than Twitter do for banning. If so: Twitter, you have a problem.
3. I do not work for Waitrose or its Facebook page.
Did I miss any points? Put them in the comments and I’ll see if I have any background knowledge that might help.