6 July 2020

Where do conspiracy theories come from?

The human mind is wired to seek patterns, whether to identify potential threats, to aid learning or even to stay calm in a crisis; these patterns are everywhere. Ramsey theory states that if there are enough elements in a set or structure, a pattern will inevitably emerge within it. Together, these two rationales help explain where conspiracy theories come from: the human need to find links and meaning in text, events and disasters. In short, our brains.
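
To make that Ramsey-style intuition concrete, here is a small, self-contained sketch (our illustration, not anything from the companies discussed below): colour every pairing in a group with one of two colours and, once the group reaches six members, a single-coloured triangle, a "pattern", becomes impossible to avoid, whereas five members can still dodge it.

```python
# Brute-force check of the smallest Ramsey-style example: every 2-colouring of
# the pairs among 6 elements contains a single-coloured triangle, but not among 5.
from itertools import combinations, product

def has_monochromatic_triangle(n, colouring):
    """colouring maps each pair (i, j) with i < j to colour 0 or 1."""
    for a, b, c in combinations(range(n), 3):
        if colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]:
            return True
    return False

def every_colouring_has_triangle(n):
    edges = list(combinations(range(n), 2))
    # Try every possible 2-colouring of the n*(n-1)/2 pairs.
    for colours in product([0, 1], repeat=len(edges)):
        if not has_monochromatic_triangle(n, dict(zip(edges, colours))):
            return False  # found a colouring with no one-colour triangle
    return True

print(every_colouring_has_triangle(5))  # False: five elements are not enough
print(every_colouring_has_triangle(6))  # True: with six, the pattern is forced
```

The maths itself is beside the point; the moral is that given enough elements, some pattern is guaranteed to appear, whether or not anyone put it there.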

As long as there is news there will be fake news, and as long as there is fake news you can guarantee there will be conspiracy theories. The anguish and uncertainty of crises, especially societal crises, create an incentive to understand what is happening, which amplifies the spread of conspiracy theories. Often, these theories are based on small shreds of truth and centred on the dichotomy between those in power and the rest of us, locked in an endless battle of good versus evil.

2020, a tale of fact versus fiction

Since the start of the COVID-19 pandemic and the Black Lives Matter protests that erupted after the murder of George Floyd, conspiracy theorists have been having a field day; for many, the world right now feels just too dystopian to be the result of a series of coincidences. Whenever something huge happens, think 9/11, the 1969 moon landing or Princess Diana’s death in 1997, a trail of conspiracy theories materialises in its wake, fuelled by the idea that ‘there must be something more to this.’ The internet, and particularly social media, has made these theories inescapable and, to some, scarily legit. So what tech is being developed to combat this?

Fake News Guard is using tech to debunk fake news. Noting that “disinformation can be produced faster than it is fact-checked,” the company offers tools such as “a search engine,” “a track record feature, to show which newspapers and publications have put out fake stories in the past” and “an algorithm to help detect fake news in an unsupervised manner,” aimed at both individuals and organisations.
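
Fake News Guard has not published how its track record feature works, so the snippet below is only a hedged sketch of the basic idea: log fact-check verdicts per publisher and report what share of each publisher’s checked stories turned out to be false. The publisher names and verdicts are invented for illustration.

```python
# Hypothetical "track record" lookup: rank publishers by how often their
# past stories were flagged as false by fact-checkers. Toy data only.
from collections import Counter

fact_check_log = [
    ("daily-example.com", True),     # story was flagged as false
    ("daily-example.com", False),
    ("daily-example.com", True),
    ("trusted-gazette.org", False),
    ("trusted-gazette.org", False),
]

def track_record(log):
    """Share of each publisher's fact-checked stories that turned out false."""
    totals, flagged = Counter(), Counter()
    for publisher, was_false in log:
        totals[publisher] += 1
        flagged[publisher] += was_false  # True counts as 1, False as 0
    return {publisher: flagged[publisher] / totals[publisher] for publisher in totals}

print(track_record(fact_check_log))
# roughly 0.67 for daily-example.com, 0.0 for trusted-gazette.org
```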

Similarly, Untrue News is an open-source search engine that displays only fake news results when a topic is searched. It does this using “both automated and semi-automated natural language processing techniques to crawl and identify false and misleading news articles” in multiple languages, and it relies on the International Fact-Checking Network run by the nonprofit Poynter Institute to separate accurate news from fake news. By focusing solely on identifying false news, the company “fills a gap left by multipurpose tools.”
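
Untrue News does not detail its pipeline publicly, so this is only a rough sketch of the semi-automated step it describes: comparing crawled headlines against claims that fact-checkers have already rated false, so that only debunked stories surface in results. The claims, headlines and matching threshold below are illustrative assumptions, not the startup’s actual data or code.

```python
# Fuzzy-match crawled headlines against claims already rated false by
# fact-checkers, so only debunked stories are kept. Illustration only.
from difflib import SequenceMatcher

debunked_claims = [
    "5g towers spread the coronavirus",
    "drinking bleach cures covid-19",
]

def looks_debunked(headline, threshold=0.6):
    headline = headline.lower()
    return any(
        SequenceMatcher(None, headline, claim).ratio() >= threshold
        for claim in debunked_claims
    )

print(looks_debunked("New study: 5G towers spread the coronavirus"))  # True
print(looks_debunked("City council approves new bike lanes"))         # False
```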

Factmata uses tech to “extract claims being made by actors online [in the form of] journalists, Twitter users or Facebook profiles and highlights the common themes or narratives evolving,” says the company’s CEO, Dhruv Ghulati. “We also have separate algorithms which answer the question, ‘are these likely to be threatening, hateful, sexist, or propaganda content,’ and should they be removed from a network?” Factmata’s tech is designed for businesses and organisations alike, though the focus is on selling it to businesses.
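
As a hedged sketch of the kind of system Dhruv describes (not Factmata’s actual models), the question “is this text likely to be hateful, sexist or propaganda?” is usually framed as multi-label classification: one yes/no decision per label for each piece of text. The toy examples and labels below are invented purely for illustration.

```python
# Multi-label text classification sketch: one binary classifier per label,
# all sharing the same TF-IDF features. Toy data; scores are not meaningful.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

texts = [
    "they are subhuman and should be driven out",
    "our glorious leader alone can save the nation",
    "women cannot be trusted with serious work",
    "the council approved the new cycling budget",
]
labels = [["hateful"], ["propaganda"], ["sexist"], []]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(labels)  # one 0/1 column per label

model = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression()),
)
model.fit(texts, y)

# Per-label probabilities for a new piece of text, ordered as classes_.
print(binarizer.classes_)
print(model.predict_proba(["only our glorious leader can save the nation"]))
```

In practice such models are trained on large labelled datasets and combined with the claim-extraction step quoted above; the tiny training set here only shows the shape of the output.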

Speaking on the current need for this technology, Dhruv says, “we are going into a world where information will be the new battlefield where warfare takes place, from spreading lies, confusing populations, altering voter behaviour to changing stock prices.”

Faheem Nasir, Head of Social Media at Logically, explains how the company uses AI and algorithms “designed to identify the accuracy and credibility of content using a three-pronged approach” built on networks, metadata and content. The company’s tech is “built on a set of modular processes, capable of analysing boundless amounts of data,” turning that data into insight through machine learning algorithms.

“Once we’ve ingested and analysed the data, our clustering algorithms highlight similarities and disparities between similar data sets, such as articles from different publishers covering the same subject,” Faheem explains. They also offer a free app that “gathers credible news stories from across the political spectrum so that users can get all the facts, evaluate biases within news content and come to their own conclusions based on a plurality of sources.”
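
Logically’s clustering algorithms are proprietary, but the step Faheem describes, grouping articles from different publishers that cover the same subject, can be sketched in a few lines under stated assumptions: represent each article as a bag of weighted words and group the ones whose vectors point in similar directions. The headlines and the 0.3 similarity threshold are assumptions for illustration only.

```python
# Group articles covering the same subject by TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "Government announces new lockdown measures for the capital",
    "Capital placed under new lockdown measures, government says",
    "Tech giant unveils folding smartphone at annual event",
    "Annual event sees tech giant reveal its folding smartphone",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(articles)
similarity = cosine_similarity(vectors)

# Greedily group articles whose pairwise similarity clears a threshold.
THRESHOLD = 0.3  # assumption; real systems tune this on labelled data
clusters, assigned = [], set()
for i in range(len(articles)):
    if i in assigned:
        continue
    group = [i] + [
        j for j in range(i + 1, len(articles))
        if j not in assigned and similarity[i, j] >= THRESHOLD
    ]
    assigned.update(group)
    clusters.append(group)

for group in clusters:
    print([articles[k] for k in group])
# The two lockdown stories land together, as do the two smartphone stories.
```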

But what about freedom of speech?

It gets complicated, though, because as outlandish as conspiracy theories may seem, many of the people spreading them online genuinely believe them. Actors who set out to infiltrate people’s thinking and peddle political propaganda for ulterior motives are one problem, but there are also fears over the censoring of any information online. If online speech begins to be censored, how far will it go, and who decides what is and isn’t allowed to be said online?

Just before the pandemic started to kick off in the West, Time Magazine published an article stating that “freedom [of speech] is often abused, but – truly – whose fault is that? Is it Twitter’s fault if I lie about the news? It is my responsibility to exercise my rights responsibly.” But we can’t trust people to act responsibly; that’s the whole point. The line between censoring dangerous and offensive content and allowing free speech is a fine one to tread, and it is turning into an increasingly loaded debate.

Dhruv emphasises that the onus is on the individual to make sure they are only sharing fact-checked, truthful information. “Spreading lies or not doing your research on facts has negative externalities on others, as does being abusive and sexist. This discourse needs to be monitored if it is in the public sphere communication and not a private channel.”

Mattia, CEO of Fake News Guard, mentions how asking big tech companies to “introduce editorial control is a blow to the very core of their business model.” He goes on to say that “they are now trying to do this with algorithms and various techniques, but the technology is not fully there yet.” His start-up offers “tools that can help people to stay informed,” targeted at regular internet users, and is also creating “a data-driven hub” that users can turn to when they have doubts about a news story.

Likewise, Faheem mentions how “for many social media platforms, effectively fighting fake news is counterproductive to their business models. Whether it be through paid ads or someone sharing a suspicious post, social media and algorithm-based platforms are not only susceptible to the propagation of misinformation, they often profit from it.”

“The idea for Logically came after seeing the political polarisation that surrounded the Brexit and US Presidential campaigns, with both campaigns using highly targeted social media advertising to aid them. It was clear that the public both needed and wanted access to credible information they could reliably use to inform their decisions,” he continues. “Freedom of speech and expression are fundamental rights which should always be respected, but when it is to the detriment of public wellbeing, it is paramount that we develop comprehensive solutions to these complex problems.”

“It is known that there are vested-interest groups whose jobs are to produce fake news in order to push their agenda, be it political, environmental or else. It becomes excruciatingly difficult to tell truths from falsehoods,” says Dr. Vinicius Woloszyn from Untrue News.

“Look at the Brazilian election in 2018 that elected far-right Jair Bolsonaro as president. The far-right had huge teams (being investigated to this day) producing fake news non-stop, some of which was too ridiculous to be believed by most people. But it worked! And now Brazil (and the world if you consider the deforestation of the Amazon) is paying the price for not having acted early enough to avoid the spread of fake news,” he continues.

“Freedom of speech is not freedom to be racist, a bigot, misogynist; or freedom to spread lies and falsehoods. Untrue.news aims to empower people who don’t want to be deceived. We believe that only in an environment of truth, true freedom of speech can be fully exercised,” he states.

How is big tech responding to this?

Recently, Mark Zuckerberg stated that he doesn’t think “Facebook should be the arbiter of truth of everything that people say online.” WhatsApp, which is owned by Facebook, set up a COVID-19 fact-checking bot in March to fight fake news being spread about the pandemic. The messaging service also imposed restrictions on forwarding: a message that has already been forwarded five or more times can now only be forwarded on to one chat at a time.
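
Stated as code, that forwarding restriction is just a threshold rule. The sketch below is a toy restatement, not WhatsApp’s implementation, and the usual five-chat limit for ordinary messages is our assumption.

```python
# Toy restatement of the forwarding rule described above (not WhatsApp's code).
def max_forward_targets(times_already_forwarded: int) -> int:
    """How many chats a message may be forwarded to in one action."""
    FREQUENTLY_FORWARDED_THRESHOLD = 5  # "forwarded five or more times"
    ORDINARY_LIMIT = 5                  # assumed usual multi-chat limit
    if times_already_forwarded >= FREQUENTLY_FORWARDED_THRESHOLD:
        return 1
    return ORDINARY_LIMIT

print(max_forward_targets(2))  # 5: an ordinary message
print(max_forward_targets(7))  # 1: a frequently forwarded message
```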

Instagram, also owned by Facebook, has recently made efforts to remove COVID-19-related accounts from recommendations and the Explore page, as well as downranking false information in Stories unless it is posted by credible health organisations.

Twitter announced that it will remove tweets that promote harmful information relating to COVID-19, and it is trying to stop users from tweeting articles they have not read. Last year, Twitter acquired Fabula AI, a startup using deep learning to identify online fake news and disinformation.

Recently, Google announced that it would spend $6.5M on fighting fake news relating to COVID-19, with the money going to fact-checkers and nonprofit organisations around the world dedicated to combating misinformation online.

Finally, there is YouTube, which is owned by Google and is arguably the home of conspiracy theories: by one widely reported estimate, roughly one in four of its most viewed COVID-19 videos contained misleading information, and the platform has been accused of ‘actively promoting misinformation’ by a senior UK politician. Earlier this year, YouTube announced that it would use machine learning to recommend content from trusted news sources and human moderators to tackle fake video content. Susan Wojcicki, CEO of YouTube, said that removing certain creators from the platform would be going too far; instead, the company is opting to provide links under videos displaying accurate information on the topic at hand.

The battle for a free, fair and safe internet 

Striking a balance between keeping the internet the intriguing and opinionated hotbed that it is and tackling harmful misinformation will be one of the greatest challenges for big tech over the next decade.

Scandals such as Cambridge Analytica highlight that this issue demands action, because it fundamentally affects how we think and shapes the world we live in. At the same time, we should strive to retain the essence of social media, self-expression, while holding the big companies and advertisers that spread false stories accountable.

Speaking on the urgency of services such as Logically, Faheem states “we rely on information to make meaningful decisions that affect our lives, but the nature of the internet means that flawed news reaches more people faster than ever before. The effects of misinformation have shaken democracies, provoked public health epidemics and induced lethal violence.”

“We strive to nurture a fair and free internet, to take people out of their divisive filter bubbles and echo chambers and help make better sense of the world around us.”