In June 1993, a new radio station called Radio Télévision Libre des Mille Collines (RTLM) began broadcasting in Rwanda…
The station was rowdy and used street language – there were disc jockeys, pop music and phone-ins. Sometimes the announcers were drunk. It was designed to appeal to the unemployed, the delinquents and the gangs of thugs in the militia. “In a largely illiterate population, the radio station soon had a very large audience who found it immensely entertaining.” This entertainment led to the slaughter of some 800,000 people, many of them killed with machetes, in roughly 100 days.
Soon enough, the station was consistently repeating messages designed to deliberately “troll” – that is, to incite, to awaken a nefarious will to destroy, and to provoke the other party into angry retaliation. In fact, in fishing terminology, to troll means to trail a baited line slowly through the water in order to catch fish.
This has become the order of the day, only now it operates even more freely in the world of social media and the internet. Anyone who cares to engage is baited into reacting, in a bid to denigrate and slander them, and you would agree that this has snowballed into hate speech. In fact, in the potpourri of trolling, hate speech is a major component, and it carries a great deal of incitement, instigation and brazen blackmail.
A recent statistical analysis showed that women are the group most affected by trolling. The Guardian (UK) says over 40% of comments on articles written by women are abusive. The figures, although not rigorously verified, are even more dire in Nigeria, where regulators interviewed say they have recorded over 40 million instances of online hate speech in the country over the last 10 years.
The recent political imbroglio in Nigeria has further heightened trolling among various groups, and hate speech has become commonplace. In fact, the leader of the free world is not left out, with his recent support of white supremacists and his branding of liberal news outlets as “fake news”. It is said that over 40% of his tweets have been slander, trolling or hate speech. The shocking part of this debate is that there are no local laws that adequately define what constitutes hate speech or trolling, or how the people affected could seek redress.
Recently, the U.K. sent a stern warning, and yesterday the Acting President of Nigeria, Yemi Osinbajo, said, “the intimidation of a population by words or speech is an act of terrorism and will no longer be tolerated by the President Muhammadu Buhari administration”. But how far can this go?
In order to clamp down on hate speech, the internet giants have to be able to define it effectively. Here is how they have fared with that:
- Facebook defines the term “hate speech” as “direct and serious attacks on any protected category of people based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or disease”.
- Twitter does not provide its own definition, but simply forbids users to “publish or post direct, specific threats of violence against others.”
- YouTube clearly says it does not permit hate speech, which it defines as “speech which attacks or demeans a group based on race or ethnic origin, religion, disability, gender, age, veteran status and sexual orientation/gender identity.”
- Google makes special mention of hate speech in its User Content and Conduct Policy: “Do not distribute content that promotes hatred or violence towards groups of people based on their race or ethnic origin, religion, disability, gender, age, veteran status, or sexual orientation/gender identity.”
Until recently, those were the internet giants’ working definitions, but I was excited when, in May 2016, Facebook, Twitter, YouTube and Microsoft signed an EU code of conduct, announcing a set of standards for dealing with hate speech, including:
- A promise to review the majority of reports of illegal hate speech and remove the offending content within 24 hours
- Making users aware of what is banned by each company
- Training staff to better spot and respond to online hate speech.
Furthermore, in March 2017, Germany’s justice minister, Heiko Maas, proposed fining social media companies up to €50m for not responding quickly enough to reports of illegal content or hate speech.
- The law would require social media platforms to come up with ways to make it easy for users to report hateful content. Companies would have 24 hours to respond to “obviously criminal content”, or a week for more ambiguous cases.
These are measures, but they raise more questions than they inspire confidence. What happens to people who hide under the guise of “trolling”, intimidating others and subliminally spreading hate speech in the process? Aren’t disclaimers like “retweets do not equal endorsement”, the comment section and similar tools powerful enough to promote hate speech? Will internet trolls also be liable for their trolling?
In the Nigerian example, one wonders whether such a clampdown infringes on freedom of expression. Is this an easy way to muzzle the voice of the people, and will speech directed at the government be treated differently from speech directed at any other Nigerian?