I read a thread recently about Twitter's troubles with trolls:
https://www.buzzfeed.com/charliewarzel/a-honeypot-for-assholes-inside-twitters-10-year-failure-to-s?utm_term=.auJxL8YEk#.ltvDOyLZr
It's a big article; here's a tl;dr:
- Twitter wants to be popular, and people want a popular platform to interact on.
- Allowing free speech, even controversy and hate speech, attracts people who want their voice to be heard.
- Not everyone tolerates hate speech. Though trolls employ such speech for fun, many people genuinely mean what they say.
- Twitter's options:
  -- Enforce strict, strong moderation; popularity decreases.
  -- Do nothing; hate speech grows; popularity also decreases.
- Twitter chose the latter, and since it's difficult to draw a line between acceptable and unacceptable hate speech, the problems continue.
Here I'm not asking whether it's 'right' for Twitter to do so; it's a private website, so I think it can
do whatever it wants. Relevant video: https://www.youtube.com/watch?v=FmZbdaqGqlc
But you don't get popular (and hence rich) by doing whatever you want.
But we, as consumers who benefit from such services, should also try to empathise with the service providers and discuss whether it's reasonable for us to expect such a balance.
While replying you can keep the scope limited to MT, since that's where we are, but I'm considering the general case.
A good starting point would be to reconsider whether it's even reasonable to label speech as inappropriate at all. Is it possible, given the allowance we expect for our everyday understanding of free speech, to reach a common definition of 'inappropriate speech'?
I think the line between appropriate and inappropriate speech in such forums can be determined by reasoning. If
the reasons behind a comment aren't evident and we find it hurtful or inconvenient, we should be allowed
to report it to a review body maintained by the site. The body would then take the time to analyse the comment and
work out its reasoning; if it too cannot, it should politely approach the commenter to glean the thoughts
behind the comment, then return to the reporter with an answer and a decision on whether the site considers such
comments reasonable. If the body finds the comment objectionable, then the reporter, having been clearly informed of the
rationale behind the content, is given a choice: forgive and forget, or escalate to an automated system that deletes
the comment without fail and, depending on severity, takes further action against the offending user, like a mute or ban.
If the body finds the comment acceptable, the reporter can do nothing further.
The body needs to be culturally, socially, and intellectually diverse, in proportion to the site's user demographics,
to ensure democratic fairness.
This, of course, is the ideal case, and not all sites can afford such a thing, but they can try to implement it to some smaller extent; in MT's case, for example, by designating a mod as the body in charge of this.
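To make the proposed flow concrete, here's a minimal sketch of the decision logic, just to check that the process is well-defined. All names (Verdict, Action, handle_report, the 0-2 severity scale) are hypothetical and not from any real site's system.

```python
from enum import Enum, auto

class Verdict(Enum):
    ACCEPTABLE = auto()     # the body found a reasonable intent behind the comment
    OBJECTIONABLE = auto()  # the body could not justify the comment

class Action(Enum):
    DELETE = auto()  # comment removed without fail on escalation
    MUTE = auto()    # further action on the offending user
    BAN = auto()

def handle_report(verdict: Verdict, reporter_escalates: bool, severity: int) -> list:
    """Outcome of a report after the review body's verdict.

    severity is a hypothetical 0-2 scale deciding how far the
    automated system goes beyond deletion.
    """
    if verdict is Verdict.ACCEPTABLE:
        return []  # the reporter can do nothing further
    if not reporter_escalates:
        return []  # the reporter chose to forgive and forget
    actions = [Action.DELETE]  # escalation always removes the comment
    if severity >= 1:
        actions.append(Action.MUTE)
    if severity >= 2:
        actions.append(Action.BAN)
    return actions
```

Writing it out this way highlights that the human body only ever issues a verdict; every irreversible action is gated behind both that verdict and the reporter's explicit choice to escalate.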
Please share your views. I'm sure my idea has some flaws, or else at least the bigger sites, which have legal teams and the like, would have implemented it already.