Portfolio #other
Read time: 02'05''
19 July 2021
More action is needed to tackle racism on social media platforms
Unsplash © Bence Balla-Schottner


The Football Association (FA) has condemned the abuse targeted at Marcus Rashford, Jadon Sancho and Bukayo Saka after England lost the Euro 2020 final against Italy on penalties. The FA has called on the UK government to bring in new legislation and urged social media companies to ban abusers and to help bring about the prosecution of offenders.

Online abuse in any form is wholly unacceptable, and the manner in which it is being targeted at England football heroes Bukayo Saka, Marcus Rashford and Jadon Sancho, injecting racial discrimination into conversations about England's loss, is even worse. Incidents like this not only highlight how prevalent racism remains in football and in society at large, but also carry wider repercussions for diversity in sport.

While the ultimate responsibility for online abuse lies with the trolls who write and post it, social media firms need to do more to protect their users: historically, online platforms have been breeding grounds for hate speech. For example, women climate activists have reported receiving misogynistic messages on platforms such as Facebook when providing environmental expertise, while the Anti-Defamation League and the National Association for the Advancement of Colored People boycotted Facebook over its poor moderation of racial hate speech. TikTok's private Creator Marketplace, meanwhile, has been observed flagging phrases such as "supporting black people" and "supporting black voices" as inappropriate content while leaving messages of support for white supremacy untouched.

What’s being done?

Pressure is mounting on social media companies to tackle online hate. UK MPs have previously called for a legal duty of care to protect younger users online. To tackle abuse in gaming, Intel has developed an AI-driven speech recognition tool that lets users filter levels of hate speech; critics, however, argue this still places the onus on individuals rather than companies. Regarding the incident above, a Facebook employee claimed the racial abuse received by England's players was preventable.

Despite these failures, certain methods and technologies are showing potential against online hate. Google has released a free AI tool for recognising abusive images and videos, suggesting AI could assist moderators in identifying racial hate; it should be noted, however, that similar technology employed by Instagram to detect racist language and emojis failed to recognise examples when they were reported. Undoubtedly, a more concerted effort will be needed from governments and tech companies to tackle the wider issue.

What’s next?

Prime Minister Boris Johnson has met with social media firms to demand greater action in removing and preventing racial abuse online. In the aftermath of the match, however, Johnson and Home Secretary Priti Patel were widely accused of hypocrisy for failing to criticise fans who booed players taking the knee against racism. The UK's Online Safety Bill, meanwhile, stipulates that companies could be fined up to 10% of their turnover or $25M if they neglect to remove hateful content.

The bigger impact of content moderation

Calls for content moderation have grown amid climate and COVID-19 misinformation. According to MI5, curbing racist content online can also help defuse right-wing extremism in the UK.

Resources:

  • Report online hate to the UK police through True Vision.
  • Organisation support links.