
Misinformation spread online can fuel hate and damage democracy

According to The Washington Post, 2018 was the year of online hate.

Between July and September 2018, Facebook removed nearly three million pieces of hate speech, while YouTube removed more than 140,000 videos featuring "hateful or abusive" content.

Over the last two decades, social media corporations have risen to prominence with a “move fast and break things” attitude. While this is no longer Facebook’s company motto, the approach to building tech platforms it describes was widely adopted, and the results of that mentality remain.

During the 2016 US elections, Russian disinformation groups used Facebook, Twitter, YouTube (owned by Google) and Instagram (owned by Facebook) to target specific groups with curated content meant to anger them and drive their voting behaviour.

According to Oxford University:

“Russia’s Internet Research Agency (IRA) activities were designed to polarise the US public and interfere in elections by campaigning for African American voters to boycott elections or follow the wrong voting procedures in 2016…  for Mexican American and Hispanic voters to distrust US institutions; encouraging extreme right-wing voters to be more confrontational; and spreading sensationalist, conspiratorial, and other forms of junk political news and misinformation to voters across the political spectrum.”

More than 30 million people shared the IRA’s Facebook and Instagram posts with their friends and family between 2015 and 2017. This orchestrated manipulation of social media by Russian operatives, much of which aimed to divide the nation along racial lines, was later connected to an increase in racialised hate crimes (Center for the Study of Hate and Extremism, California State University).

In 2017, a violent and ultimately deadly neo-Nazi march in Charlottesville, USA, was planned, advertised, coordinated and paid for using PayPal, Facebook, and the gamer chat app Discord.

In 2018, misinformation spread through the Facebook-owned messaging platform WhatsApp incited at least 27 lynchings in India. Meanwhile, a United Nations human rights expert concluded Facebook had helped facilitate “acts of genocide” in Myanmar. Cambridge Analytica harvested personal data from millions of people’s Facebook profiles without their consent and used it for political purposes. A subsequent attack gave unknown hackers full access to 50 million people’s accounts, including private messages, location information and the ability to make posts.

These dangerous, and in some cases deadly, outcomes of the tech behemoths’ “move fast and break things” approach should be enough to force governments everywhere to take urgent action. Just as the internet has created immense positive value, it has also given new tools to people and groups who want to hack, silence, threaten, harass, intimidate, defame, or violently attack other people.

While there is limited research into the effects and reach of online mis/dis/mal-information in New Zealand, we are not immune to the threats that we’ve seen play out overseas.

Far-right fringe hate groups (largely based in the US) targeted French and German elections in attempts to influence their outcomes. The European Commission has told US tech giants Facebook, Google, Twitter, Mozilla and advertising businesses to intensify their actions against disinformation campaigns ahead of the European elections in May 2019, or face regulation.

The dangerous use of ‘dark ads’ (online ads which can only be seen by the people they are targeting) by bad faith actors has been well documented overseas. We have no system to document and oversee these ads in New Zealand. In the 2017 general election, political parties spent hundreds of thousands of dollars on Facebook advertising. There is no record of the content of those ads or who they were targeted at. There are also few, if any, restrictions or disclosure requirements on the type of audience being targeted in ads.

Coordinated anti-1080 campaigns facilitated by Facebook, often based on misinformation, are an example of the kind of momentum created when groups and individuals can find widespread support online.

We have also seen a troubling rise in the toxicity of online discussions. Politicians, journalists and public figures who appear in the media, as well as anyone who shares their views on current affairs, often become the focus of targeted online harassment and abuse campaigns; it’s especially bad for women.

This abuse could discourage people from taking part in the conversations that help shape public opinion and influence decisions, posing a real threat to the functioning of our democracy.

Giant tech corporations like Facebook, Google and Twitter must face the public and political scrutiny worthy of infrastructure that billions of people now rely on for employment, information and social interaction.

As noted in the InternetNZ discussion paper Platforms and misinformation: Where we are and how we got here:

“The Internet did not create disinformation. It did not create divisions between groups of people who are different, or have different ideologies about the world. It did, however, create new ways to spread this information, and much faster than we have ever seen before.”

This needs to be an area of focus for the New Zealand government.