Facebook and Stuff are allowing racism to flourish on their platforms
From August to October 2018, ActionStation ran a pilot project called Tauiwi Tautoko. Twenty volunteers were trained and supported to spend one hour per week in online forums, listening, engaging and discussing to find common ground, with the goal of educating, supporting and encouraging people to develop more caring, thoughtful and educated responses to Māori people, culture and language.
The volunteers recorded their responses in Google Forms and reflected on what worked well (and what didn’t) in the conversations. Those responses were collated in a spreadsheet and analysed by an independent researcher from a University of Otago research project (Putting Hope into Action: What inspires and sustains young people’s engagement in social movements?).
The analysis focused on the number of racist comments versus caring and supportive comments expressed in two online forums: Stuff and Facebook. Comments were coded against fifteen ‘racist themes’, which are detailed in the Methodology section.
Six Stuff articles with a total of 446 comments were analysed for the purpose of this research.
Sixty-six comments were positive and engaging, and there were 225 references to the fifteen racist themes. This does not mean that half the comments were racist: each occurrence of a theme was coded, and some comments contained material that could be coded under more than one theme. However, the number of references to the racist themes does provide an overall ‘feel’ for the material encountered in the Stuff comments, as the chart below shows. Only 15% of the comments were supportive, even with a dedicated group of Tauiwi Tautoko volunteers participating in the online forum.
The six Stuff articles that the Tauiwi Tautoko volunteers commented on also provide a small database for analysing trends in the extent and content of racism.
Our analysis found that, on average, for every kind comment, readers of Stuff comments will encounter nearly four times as many references to racist ways of thinking. Facebook users will encounter nearly five times as many.
Eleven Facebook posts were analysed for this research, with 4,011 comments reviewed and coded. As with the Stuff comments, a single Facebook comment could contain text coded across more than one of the racist themes, so the number of racist references reported below does not equate to the number of comments; it is a count of the number of times each racist theme was mentioned. Racist themes emerged 553 times (14%), compared with 124 supportive comments (3%). The majority of comments, 3,326 (83%), were neither explicitly positive nor racist. However, these comments should not necessarily be considered ‘neutral’: they were simply neither ‘kind and engaging’ nor racist as defined by the fifteen codes. Many of them were unpleasant, unkind or at best cheeky.
The analysis above suggests that comments on both Stuff and Facebook are much more likely to contain references to racist themes than to be supportive and caring. Additionally, because some posts contain text that can be considered ‘racist’ in more than one way under the fifteen themes, it was also important to make a comparison using a basic count of comments. This was done using the comments on one Stuff article and on one Facebook post (of the same Stuff article).
This basic count shows that in both Facebook and Stuff comments, people are twice as likely to encounter racist comments as kind and caring ones.
ActionStation, along with many other organisations, groups and individuals, is concerned not only that these racist views are held by some people, but that forums exist which allow, invite and support the airing of racist views and attacks. This occurs despite human rights legislation and censorship guidelines that explicitly condemn material which “degrades or dehumanises or demeans any person.” There is increasing academic and public unease about internet safety and its implications for society, which has resulted in guidelines for staying safe online and is brought to our attention annually with “Safer Internet Day” on the 5th of February. While this safety focus is often directed at protecting individuals from breaches of privacy or cyberbullying, it is increasingly acknowledged that exposure to racism has serious effects on health and wellbeing. It is therefore crucial that we also address online racial discrimination, both to keep people safe online and to promote the kind of society we would prefer to live in.
Both Stuff and Facebook have policies about comments, but these are not displayed alongside the comment sections and must be searched for. Facebook’s policy was published relatively recently (April 2018), and hate speech appears to be something Facebook has struggled with, particularly in drawing the line between hate speech and the expression of free speech. These boundaries need to be more clearly defined through policy and political action, so that inciting disharmony and hatred carries legal consequences.
Finally, the nineteen different articles and posts and 4,867 comments reviewed for this analysis revealed that the same names and pseudonyms come up time and time again, confirming the presence of online ‘trolls’. Although there are recommended ways to manage attacks from trolls, it seems incongruous that, rather than the trolls being restricted, the recipients of ‘trolling’ carry the burden of protecting themselves.
Recent research from New Zealand suggests that ‘successful’ trolling requires motivated trolls, reactive targets and the absence of capable guardians. This raises the question: are Facebook and Stuff capable guardians? Our research would suggest not.