Twitter reporting and general points on moderation

I’ve been working as a moderator for two years now, and the attacks on Caroline Criado-Perez and my realisation that Twitter doesn’t have an easy ‘report abuse’ mechanism made me think about exactly how reporting might work on Twitter. And then, since I’m here, I’ve whacked out some ponderings on moderation in general. Because, y’know.

Reporting abuse on Twitter

At the moment if you want to report an abusive user you have to fill in a massive form. I mean, it’s huge. It wasn’t easy to find, either. That might be fine if you’ve got one person who’s repeatedly attacking you, but if you’re experiencing a sustained campaign of abuse, it’s just not practical.

There’s a page in Twitter’s help section that says you can report abuse from the individual tweet page by clicking on the ‘more’ option, but this doesn’t appear for me. Presumably it’s what Twitter’s Tony Wang is referring to when he says the company is trialling new ways to report.

Won’t a ‘report abuse’ button be abused?

God, yes. Happens all the time. I see a lot of comments about football being reported by fans of rival teams. But it’s all in the implementation. A fully or partially automated system is definitely open to abuse because machines can’t appreciate nuance, and anything that triggers an automatic, temporary suspension (which I believe happens at the moment) is just asking to be abused by anyone who wants to attack an account proffering an opposing point of view.

If I were developing this, it would be managed entirely by humans. If you report a tweet or an account, it would vanish from your timeline so you don’t have to look at it any more (this is what Twitter is already trialling), but it stays live. A human being then assesses the tweet against Twitter’s published guidelines and makes a judgement on whether a suspension or total ban is needed.
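
To make that concrete, here’s a very rough sketch of the flow I mean – Python, with entirely made-up names, and obviously nothing like whatever Twitter actually runs. The report hides the tweet from the reporter only, the tweet stays live for everyone else, and a human picks the report up from a queue:

```python
# A minimal, hypothetical sketch of the report flow described above:
# the reported tweet is hidden from the reporter only, stays live for
# everyone else, and lands in a queue for a human moderator to assess
# against the published guidelines.
from dataclasses import dataclass, field

@dataclass
class Report:
    tweet_id: str
    reporter_id: str
    reason: str

@dataclass
class ModerationQueue:
    pending: list = field(default_factory=list)
    hidden_for: dict = field(default_factory=dict)  # tweet_id -> set of reporter ids

    def report(self, report: Report) -> None:
        # Hide the tweet from the reporter's own timeline immediately...
        self.hidden_for.setdefault(report.tweet_id, set()).add(report.reporter_id)
        # ...but the tweet itself stays live; a human makes the final call later.
        self.pending.append(report)

    def visible_to(self, tweet_id: str, viewer_id: str) -> bool:
        # Everyone except the people who reported it still sees the tweet.
        return viewer_id not in self.hidden_for.get(tweet_id, set())
```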

Note that I say Twitter’s published guidelines. They’re there so that everyone knows where they stand, and assessments can only be made against them. That goes for every site, everywhere. From what I’ve seen, some of the threats against Criado-Perez would breach the current guidelines (“You may not publish or post direct, specific threats of violence against others.”), though the guidelines aren’t very detailed [1]. You might not like the guidelines (Facebook’s stupid ‘no breastfeeding photos’ is one example) but that’s another issue [2].

And moderators aren’t idiots. A malicious report is obvious and I don’t act on it. It doesn’t matter if 1,000 people report the same comment: if it doesn’t break the guidelines, it stays. I’d rather 1,000 people didn’t report the comment because that creates a lot of work, but I’m not going to cave under sheer numbers. I’d also rather wade through 999 false reports than not have a decent mechanism for one that’s genuine.
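
Incidentally, the 1,000-reports problem is largely a plumbing issue: if reports are grouped by tweet, the moderator still only makes one decision however many people piled in. A toy illustration (again hypothetical, not anyone’s real system):

```python
# Collapse any number of reports of the same tweet into one review task.
# The report count is carried along for context, but the decision is made
# against the guidelines, never against the number of reports.
from collections import defaultdict

def build_review_items(reports):
    """`reports` is a list of dicts like {"tweet_id": ..., "reporter_id": ...}."""
    grouped = defaultdict(list)
    for report in reports:
        grouped[report["tweet_id"]].append(report)
    return [
        {"tweet_id": tweet_id, "report_count": len(items)}
        for tweet_id, items in grouped.items()
    ]
```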

I’ve seen a concern that an easy ‘report abuse’ button on Twitter could be taken up by celebrities looking to sic their followers onto someone. But if implemented intelligently, the system could actually spell the end of that kind of abuse of power. If I get a load of reports in a short timeframe it’s fairly obvious it’s been sparked by something specific. If a tweet directed at someone famous is constantly reported, it’s the work of a few minutes to check that celebrity’s timeline and see if they unleashed the mob. In such a situation I think a short sleb suspension would be in order; banning people for malicious reporting does happen and I’ve done it.
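
That ‘load of reports in a short timeframe’ signal is simple enough to sketch too – something like a sliding window per tweet. The threshold and window here are plucked out of the air, and the output is only ever a flag for a human to look at, never an automatic suspension:

```python
# Hypothetical pile-on detector: if one tweet attracts many reports within a
# few minutes, flag it so a human can check whether someone famous pointed
# their followers at the author.
from collections import deque

def spike_detector(window_seconds=300, threshold=50):
    timestamps = deque()  # report times (in seconds) for a single tweet

    def record(report_time):
        timestamps.append(report_time)
        # Drop anything older than the window.
        while timestamps and report_time - timestamps[0] > window_seconds:
            timestamps.popleft()
        return len(timestamps) >= threshold  # True -> flag for human review

    return record

# Usage:
# detector = spike_detector()
# detector(report_time)  -> True once 50 reports land inside five minutes
```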

This is going to need a lot of people

Yup. That’s what happens if you want decent moderation. Sorry, Twitter, you’re going to have to increase your staff. And sorry, Twitter users, you’ll probably have to put up with more adverts to pay for them.

And anyway, Twitter currently has the capacity to suspend accounts because an avatar infringes a trademark (as a friend of mine found out). If it can do that, it can adequately deal with abuse as well.

It’s not Twitter’s job to be the police

Isn’t it? In my job, if I see something that’s a clear and credible threat I pass it up the management chain and it could potentially end up with the police. If the tweets sent to Criado-Perez containing her address and a threat to rape her had been posted somewhere I was working, I would have immediately flagged that. The same goes if someone seems to be a credible suicide risk. I’m here to protect as well as moderate – partly because I’ve been trained to have knowledge of certain laws and am perhaps better positioned to spot breaches than your average Joe. I don’t see why Twitter should put all the responsibility onto its users.

What’s the problem with the current system? It’s only a form

It is just a form. But as I said above, it’s not practical if you have more than a couple of accounts to report. I’ve never experienced a tirade of abuse into my personal accounts, but I have for work.

It started with a trickle of messages into the software I was using to moderate a Facebook page. They were all the same, clearly a copy and paste job being directed from somewhere else. But within a minute there were 100 messages waiting to be processed in the queue, then within five minutes there were 1,000. And it wasn’t slowing down. I dealt with it quickly because a) I’m a pro, b) I had the right tools at my disposal, and c) none of it was directed at me personally. I cannot imagine having to report each one in a laborious process, without colleagues to call on, in my own time, and having to assess and explain in detail why each horrible comment directed at me broke the guidelines.

And I’m having genuine difficulty thinking of a website or social media platform that doesn’t have some form of one-click reporting. If you create a platform you have to accept some responsibility for the safety and peace of mind of your users.

Twitter’s international, how is it supposed to deal with different countries’ laws?

I don’t know; find some kind of common ground? Ask Facebook how it does it? It’s not like this stuff hasn’t been worked out by other platforms. Edit: to be clear, I’m talking about Twitter asking Facebook how it’s worked out complying with basic legal requirements across various countries, not how Facebook moderates.

[The section that follows has apparently been read by some people as me advocating that the following be applied to Twitter. Hells, no! It’d be unworkable and massively undesirable. The following is more of a background to me, as an agency moderator, working on company sites and pages, which all have more restrictive guidelines than the laissez-faire basic platform rules of Twitter, Facebook et al.]

And more generally… moderators will be swayed by their own personal beliefs

Not if you employ decent people. You wouldn’t believe the amount of stuff I’ve let stand even though I found it foul and deeply offensive. If it passes site guidelines it doesn’t get touched; if a moderator doesn’t abide by this rule they won’t be in the job very long.

What about companies deleting comments they don’t like?

99 times out of 100, this won’t happen because companies are now savvy enough to realise this is brand suicide. If a page genuinely has no negative comments at all, the company is probably deleting them – which is stupid, and you probably shouldn’t do business with them. Or they’re the most amazing company and nobody’s ever had a problem. Hahahahaha. I’m kidding. Someone, somewhere, will always have a problem that the best place to air is Facebook.

No, I’ve seen it happen!

Have you? Or have you seen people complaining that their comment got removed ‘for no reason at all’? When in actual fact (and when I see people complaining while I’m working, I always check back) the comments got deleted for very, very obvious reasons. If someone is insistent that their comment got removed because it somehow reflected badly on the site, look around for a moment and see how many other negative comments there are. I’ll bet there are loads, but none of them is calling the CEO a fuckwit.

Moderation is censorship and is against my human rights

You have no protected human right to heap abuse on anyone or to say ‘motherfucker’ on Facebook. Grow up.

Moderation is definitely censorship though

OK, let’s talk this one out. If you consider removing comments containing severe swearing to be censorship (moderators genuinely have a list of permitted swearwords; e.g. ‘arse but not arsehole’, ‘one shit but no more’) then OK, it’s censorship, but no more so than the TV watershed. You may be posting in a public space, but if you’re posting on someone else’s page or site it’s still ‘owned’ by them and they have a right to set the tone of the discussion.
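
If you’re wondering what such a list looks like in practice, here’s a toy version – the words and allowances are invented for illustration, not any client’s real list:

```python
# Toy swear-list check: whole-word matching so 'arse' doesn't also catch
# 'arsehole', and a per-comment allowance so one 'shit' passes but two don't.
# The rules here are made up for illustration.
import re

RULES = {
    "arse": 1,      # allowed, up to once per comment
    "arsehole": 0,  # never allowed
    "shit": 1,      # one is fine, more is not
}

def passes_swear_rules(comment: str) -> bool:
    words = re.findall(r"[a-z']+", comment.lower())
    for word, allowance in RULES.items():
        if words.count(word) > allowance:
            return False
    return True

# passes_swear_rules("well that's a load of shit")  -> True
# passes_swear_rules("shit shit shit")              -> False
# passes_swear_rules("you utter arsehole")          -> False
```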

More contentious may be the removal of mindless comments like “[product] is shit” or “[name of competitor] FTW!”. This is a circumstance where those complaining that their negative comment got removed have some grounding, but not a huge amount. Firstly, it’s easy trolling. You have a problem with a company? Fine, write it out properly. But mainly these types of comments can get removed because of what happens afterwards. Generally 20 other, equally immature, commenters leap in with inventive suggestions of how the original poster can go fuck him- or herself. It derails a conversation and is unpleasant to read. Someone on Twitter suggested that keeping social media a ‘pleasant’ place sounds a bit Stepford. Maybe. But I’ve seen the alternative and I prefer it this way.

Another major reason for ‘censorship’ is legal grounds. Your comment may be deleted because it breaches copyright or trademark law, or contempt of court law (not many people understand this one; as a general rule, if a trial is ongoing, don’t write a comment saying “hang him” because you’ve just assumed guilt), or falls into the category of illegal hate speech. In those circumstances you should be pleased your comment got removed; we, as moderators, just saved you from getting your ass sued à la Lord McAlpine.

Twitter won’t be moderating on this kind of scale, though, and nobody is going to ban an account for general use of ‘cunt’. Though if all an account is doing is calling people cunts, that could be enough for a ban if it’s reported. And that needs a human being to make a judgement.

I don’t care, it’s all censorship

OK. You’re entitled to your opinion. But may I humbly suggest that if you feel this strongly that you should be able to express yourself on whatever subject and in whatever manner you choose, then perhaps the Waitrose Facebook page [3] is not the place for you?

If a troll is banned, won’t they just set up a different account?

Yes. But as I’ve already said, moderators aren’t stupid. Trolls, on the other hand, often are. It is usually so obvious when an account is a sockpuppet or secondary account; when I read this New Statesman article about the gamification of trolling (in essence: trolls are proud of their behaviour so want duplicate accounts to be recognisable) I nodded in agreement so much I was in danger of pulling a neck muscle. Usernames are similar; email addresses used to sign up are variations on a theme; syntax, spelling, arguments are all very familiar. I worked on one site where one user had at least 30 aliases; when we banned him on a Tuesday I’d be waiting for him to reappear on the Wednesday. Eventually he got bored and pissed off.

That kind of thing is easier to keep track of when you’re working on a small site. As for Twitter, if they don’t have some kind of database where they can check suspicions about duplicate accounts, they’re being foolish. However, creating a duplicate account in itself isn’t necessarily grounds for banning – though if new accounts are being created specifically to abusively troll they’ll break the guidelines pretty quickly. My advice would be to report on the basis of rule breaking and add suspicions of a troll returning as a secondary matter; if you report just because, after a couple of tweets, you think your troll has returned, you’ll get yourself a reputation for malicious reporting.
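
Those ‘variations on a theme’ signals are easy to sketch as well: strip the digits and punctuation out of usernames or sign-up addresses and compare what’s left. Purely illustrative – a high score is a prompt for a human to look closer, never grounds for an automatic ban:

```python
# Hypothetical 'variations on a theme' check for suspected sockpuppets:
# normalise handles by dropping digits and separators, then compare.
import re
from difflib import SequenceMatcher

def normalise(handle: str) -> str:
    return re.sub(r"[\d._-]+", "", handle.lower())

def similarity(handle_a: str, handle_b: str) -> float:
    return SequenceMatcher(None, normalise(handle_a), normalise(handle_b)).ratio()

# similarity("troll_guy", "trollguy99")  -> 1.0 (same theme, worth a look)
# similarity("troll_guy", "happybaker")  -> low score, probably unrelated
```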

Added 30 July: I forgot to say that of course, spammers set up different accounts as soon as the originals are taken down. But I don’t see anyone saying ‘oh, we should just block and ignore them, they’re entitled to advertise their fake Viagra pills if they want’.

How do you cope with this stuff on a daily basis?

I’m dead inside.

———————————

1. I’ve often thought it would be helpful for many websites to expand on their published guidelines. I’ve worked on several sites where guidance to moderators runs to several pages, but the only guidance to users is a couple of sentences of impenetrable legalese. Then they wonder at users getting pissed off when they don’t understand why comments get removed.

2. I’ve seen that Twitter has apparently rejected a report of “I will rape you when I get the chance” on the grounds that it doesn’t violate their rules. I guess they’re only looking for what we’d call ‘specific, direct threats’; in other words, they want a time and a place, or some other indication that the user genuinely intends to rape the target. I’m hoping that this is because Twitter doesn’t currently have the resources to properly investigate this kind of abuse and not because it doesn’t think it’s their job to act. On any project I’ve worked on, a comment like this would be immediately blocked and the user potentially banned for unacceptable abuse. I’d also be interested to know if the police have a lower threshold for triggering an investigation than Twitter does for banning. If so: Twitter, you have a problem.

3. I do not work for Waitrose or its Facebook page.

Did I miss any points? Put them in the comments and I’ll see if I have any background knowledge that might help. 

18 responses to “Twitter reporting and general points on moderation”

  1. Odtaa July 30, 2013 at 1:24 pm

    I reckon it would be quite easy for Twitter to set up a team of volunteer moderators to do the initial assessment. Obviously people would have to be checked when they started and double moderated now and again to check for bias.

    The heavier, nastier stuff would then be passed on to Twitter’s moderators.

    In fact, if mixed in with some training, it could be used as an internship to provide people with professional experience at managing websites/online communications etc.

    • John July 30, 2013 at 2:10 pm

      With over 550 million users? Any moderation that involves human interaction won’t be easy.

      • Odtaa July 30, 2013 at 4:47 pm

        I totally agree that it will be difficult. It’s also not realistic for Twitter to employ enough moderators – again because of the volume of users.

        As Rachel explains, many – I suspect the majority – of the requests for moderation are in fact not valid. However, when it comes to serious threats, something ought to be done.

        Setting up a system of volunteers to do an initial sifting of complaints would take quite a bit of pressure off Twitter and could make some sort of reporting viable. It could be made a positive – with people getting practical experience to further their careers, or ancients like me who would be happy to do a stint once a week.

        I don’t want Twitter to decline – which it could if the trolling gets out of hand.

        Personally I would be happy to volunteer for a shift a week to help protect others.

        I was heavily targeted on a couple of bulletin boards a few years back by some extreme right-wingers – I was arguing for the EU and that somehow made me a Nazi. There were credible threats that they would ‘get me’ – I had no real support network – so it was safer to stop posting.

      • Nick S July 30, 2013 at 9:08 pm

        Nothing Twitter does is going to be easy any more. In exchange for dealing with this, it gets to be worth billions of dollars as a company. What we can say is that the tools that worked on a smaller scale (block/ignore/killfile) don’t work for distributed, pile-on abuse.

  2. Luca (@thefabricpress) July 30, 2013 at 1:42 pm

    In my view, a free speech defence is not compatible with anonymity. Insults between “consenting adults” — i.e. verified account holders — are a fair and fully accountable game, but I would happily make folks who choose to retain anonymous, non-verifiable accounts subject to automated (cheaper) temporary suspension. Freedom of speech, after all, is the right of real individuals, not avatars. And I would remind folks that in most recent cases, the victim’s real identity was transparent throughout, while the abuser remained an unidentified “troll” — for a while at least. That feels palpably unfair.

    • Meredith L Patterson (@maradydd) July 31, 2013 at 10:26 pm

      Your ignorance of the historical relationship between anonymity and free speech is disappointing. For instance, the proposed ratification in 1787 of what is now the United States constitution was an extremely contentious topic — and some of the most important discussion on it, both for and against, took place under cover of pseudonymity aided by the publishers of the newspapers (the Twitter of their day) that ran the pseudonymous authors’ letters. Even today, the writings of “Cato,” “Brutus,” and “Publius” are studied for their historical importance; we know them now as the Federalist Papers and the Anti-Federalist Papers. Earlier still, the literary tradition of the pasquinade — an anonymous satirical poem attributed to a particular statue — developed in the 1500s among inhabitants of Rome as a technique for speaking truth to power; this tradition continues to the modern day. Are these writers “real individuals,” or merely “avatars” in your framing?

      Now, you might argue that criticising authority or analysing political arguments are valuable pursuits, to be distinguished from calling people names or wishing harm upon them. But who is to be the judge of that value? The Framers certainly didn’t believe themselves competent to make that call, and so they erred on the side of essential liberty — the freedom to speak — rather than safety from ideas or speech that they or someone else might find objectionable.

      Have you ever needed to speak anonymously? Have you ever had something so controversial to say — or so potentially embarrassing, or so taboo-violating, or whatever — that the protection of a false name or no name at all is what makes or breaks your decision between speech and silence?

      If not, then you enjoy a privilege that you may not have previously been aware of. But a lack of awareness does not obviate your responsibility to check it.

  3. Michael July 30, 2013 at 2:10 pm

    Great piece. Very clear and helpful. Like the tone as well. Thanks

  4. David July 30, 2013 at 2:39 pm

    It’s great to see commentary from someone who has actually worked in a moderation role. Few discussions, whether for or against, are from those who have any experience of the practicalities of any such abuse report system.

    As a developer I’ve worked with moderation teams and designed moderation systems (mostly at Last.fm, if you’re familiar with the comment system there…). So I have some similar experience, though more on the systems side of things than actual enforcement.

    I mostly agree with your post, and appreciate some of the important caveats – like “more staff, more adverts” – that aren’t often getting discussed. But there are some points I’m unsure of…

    Mostly it boils down to scale. In terms of moderatable content Twitter runs at an enormous scale compared to all but a handful of other equally large internet companies. Mind bogglingly big even compared to large community sites.

    We can only speculate on the level of usage a “report abuse” button would get and the amount of staffing that would be required to deal with it (though to your side note about trademark infringement – it’s safe to assume those requests are orders of magnitude less frequent). I think the fact that the existing abuse reporting system is effectively hidden (well, far out of the way) to reduce usage shows that Twitter don’t have the staff to implement large scale moderation.

    There is a lot of frustration at Twitter for not doing something immediately, on the grounds that it’s not hard to add a button. As your post beautifully illustrates, the button is not the hard bit. Building a skilled team, with professional support and some level of well-designed automation, is what’s required to make such a system workable – even at relatively small scales.

    For Twitter to do this properly they need teams in multiple countries/languages (lest we only care about abuse in the English language) and operating across multiple time zones (so abuse is dealt with 24/7). Neither of those things is impossible, but they are harder, more difficult to plan and thus slower to commit to publicly.

    My general feeling on the whole thing is that, yes, good moderation is absolutely key to good communities, but defining and enforcing guidelines gets increasingly hard as the community grows. When you reach platform level size, like Twitter, your community is barely distinguishable from “people using the internet” and finding agreed guidelines from the community becomes borderline impossible.

    You either end up with vague guidelines that are hard to enforce and frustrate people with their “ineffectiveness” – or a system that itself is abused to silence the people it was supposed to protect. I don’t doubt that Twitter can manage a better system than they do, but it will take time and planning and managing of people’s expectations. There is no system that can appease all of Twitter’s users, even within the groups campaigning at the moment.

    Sorry for what has turned into a bit of a rant. This is an important topic, and your post has genuinely been one of the best I’ve read so far; it feels like a good place to make those points – short of publishing an entire piece myself.

  5. rachel bagelmouse July 30, 2013 at 5:59 pm

    Wow, this has been amazing! Thank you everyone.

    On scale – as it happens, I reported a comment to Facebook a couple of weeks ago. They responded within 24 hours, which I thought was impressive. Facebook is different from Twitter in that pages tend to be monitored by whoever set them up so hopefully anything majorly dodgy would get caught by them, but any comment can still be reported to the central team with a one-click-and-a-couple-of-radio-buttons system (and I bet they’re inundated). I have no idea how many people work on the Facebook abuse team but if Facebook can do it, Twitter can do it. They just need to work out how to pay for it (and if we want decent moderation, we’ll have to put up with that. It’s the trade-off).

    The moderating company I work for can provide 24/7 cover in multiple languages so it’s possible to put together, but you do need bloody good admin in place!

    And I completely agree with what David says about how creating the back-end team to deal with abuse is a big deal, and can’t be implemented quickly. Similarly, abuse reports will take time to process so we as users will have to accept something like the 24 hour turnaround I experienced with Facebook. I think that’s exceptionally good given the volume they must deal with.

    The guidelines are where it gets awkward and where you have to look to Twitter itself, as a company. Moderators can only work to the guidelines and implement them accurately and consistently. There’s a great article also doing the rounds by Susan Hall which I think might be better placed to address some of the legal responsibilities Twitter has to its users, and what it should update its guidelines to include.

    • bookmonstercats July 30, 2013 at 8:14 pm

      Rachel, I have just caught up with your comment about Susan Hall. I know her and have worked with her in one of the Manchester law firms. I can’t commend her highly enough for IP/ITC work.

  6. Huw Sayer July 30, 2013 at 5:59 pm

    Excellent (thank you @newsmary for sharing) – really well argued.

  7. Jonathan Schofield July 30, 2013 at 8:04 pm

    Great article with many home truths Twitter need to meet head-on at some level. Though I think @flayland nicely redefines the problem in a way that makes the solution more robust and pragmatic: http://flay.jellybee.co.uk/2013/07/panic-mode-my-proposal-to-curb-twitter.html?m=1

  8. bookmonstercats July 30, 2013 at 8:06 pm

    The inherent difficulty in policing such a big site shouldn’t be a reason for not doing it. I think it’s everybody’s duty to help police genuine trolling, such as Caroline C-P suffered (I signed her petition). Like Odtaa, I would sign up for a volunteer shift. I believe social media can be such a powerful force for good that the real trolls shouldn’t ruin it. I will re-tweet this blog.

  9. hollyhock140 July 30, 2013 at 9:45 pm

    A very informative piece, thank you. Would an “I am being serially abused” button work, which then enables humans to monitor and intervene? It would have to be clear that false alarms would be liable to sanction. The onus would still be on the abused to report, but only once. One-off abuse could still be dealt with by a simplified report form.

  10. Pingback: Two Must Read Articles on Proposals to Tackle Twitter Abuse | beyondclicktivism

  11. Pingback: On misogyny and Twitter | Time flies when you're having fun…

  12. J August 14, 2013 at 10:26 am

    I totally agree with the necessity of the house rules being clearly available for users on Facebook etc. I work for the same people you do, and it’s frustrating when people complain about something being deleted for profanity (for example) when, in fairness to them, they wouldn’t have known what the page rules were. I know that to some it would seem common sense not to swear in a ‘public space’ but not everyone’s ‘common sense’ is the same!
