The term “fake news” has become ubiquitous over the past two years. The Cambridge English dictionary defines it as “false stories that appear to be news, spread on the internet or using other media, usually created to influence political views or as a joke”.
As part of a global push to curb the spread of deliberate misinformation, researchers are trying to understand what drives people to share fake news and how its endorsement can propagate through a social network.
But humans are complex social animals, and technology misses the richness of human learning and interactions.
That’s why we decided to take a different approach in our research. We used the latest techniques from artificial intelligence to study how support for – or opposition to – a piece of fake news can spread within a social network. We believe our model is more realistic than previous approaches because the individuals in our model learn endogenously from their interactions with the environment rather than simply following prescribed rules. This approach allowed us to learn a number of new things about how fake news spreads.
The main takeaway from our research is that when it comes to preventing the spread of fake news, privacy is key. It is important to keep your personal data to yourself and to be cautious when providing information to large social media websites or search engines.
The most recent wave of technological innovations has brought us the data-centric web 2.0 and with it a number of fundamental challenges to user privacy and the integrity of news shared in social networks. But as our research shows, there’s reason to be optimistic that technology, paired with a healthy dose of individual activism, might also provide solutions to the scourge of fake news.
Modelling human behaviour
Existing literature models the spread of fake news in a social network in one of two ways.
In the first approach, people observe what their neighbours do and then use this information in a complicated calculation to optimally update their beliefs about the world.
The second approach assumes that people follow a simple majority rule: everyone does what most of their neighbours do.
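To make the contrast concrete, here is a toy sketch of the two rules in Python. The function names, probabilities and the little neighbourhood below are illustrative assumptions made for this article, not the actual models used in the literature.

```python
def majority_rule(my_action, neighbour_actions):
    """Approach two: do whatever most of your neighbours are doing."""
    shares = sum(neighbour_actions)
    if shares * 2 > len(neighbour_actions):
        return 1                      # share the story
    if shares * 2 < len(neighbour_actions):
        return 0                      # don't share
    return my_action                  # tie: stick with your previous action


def bayesian_update(prior_true, neighbour_actions,
                    p_share_if_true=0.8, p_share_if_false=0.3):
    """Approach one (very crudely): treat each neighbour's share/no-share
    as a noisy signal about whether the story is true and update beliefs."""
    belief = prior_true
    for shared in neighbour_actions:
        like_true = p_share_if_true if shared else 1 - p_share_if_true
        like_false = p_share_if_false if shared else 1 - p_share_if_false
        belief = like_true * belief / (like_true * belief + like_false * (1 - belief))
    return belief


neighbours = [1, 0, 1, 1]                 # 1 = that neighbour shared the story
print(majority_rule(0, neighbours))       # most neighbours shared, so share too
print(round(bayesian_update(0.5, neighbours), 2))  # updated belief the story is true
```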
But both approaches have their shortcomings. They cannot mimic what happens when someone’s mind is changed after several conversations or interactions.
Our research differed. We modelled humans as agents who develop their own strategies for updating their views on a piece of news given their neighbours’ actions. We then introduced an adversary that tried to spread fake news, and compared how effective the adversary was when he knew the strength of the other agents’ beliefs with how effective he was when he didn’t.
So in a real world example, an adversary determined to spread fake news might first read your Facebook profile to see what you believe, then tailor his disinformation to match your beliefs and increase the likelihood that you share the fake news he sends you.
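The sketch below is a cartoon of this idea rather than our actual model: agents share a story when it looks plausible enough, become more sceptical after being fooled, and an informed adversary frames the fake story close to each agent’s prior belief. Every class name and number in it is an illustrative assumption.

```python
import random


class Agent:
    """A toy agent: shares a story if it looks plausible enough, and
    learns (reinforcement-style) to raise the bar after being fooled."""

    def __init__(self, rng):
        self.prior = rng.random()     # how plausible this agent finds such claims
        self.threshold = 0.5          # learned bar for sharing

    def will_share(self, plausibility):
        return plausibility > self.threshold

    def learn(self, shared, story_was_fake, lr=0.1):
        if shared and story_was_fake:            # fooled: become more sceptical
            self.threshold = min(1.0, self.threshold + lr)
        elif not shared and not story_was_fake:  # too sceptical: relax a little
            self.threshold = max(0.0, self.threshold - lr)


def pitch(agent, knows_beliefs, rng):
    """How plausible the adversary manages to make the fake story look.
    Knowing the agent's prior lets him frame the story close to it."""
    return min(1.0, agent.prior + 0.2) if knows_beliefs else rng.random()


rng = random.Random(42)
for informed in (False, True):
    agents = [Agent(rng) for _ in range(1000)]
    label = "informed" if informed else "uninformed"
    for round_no in (1, 2):
        shares = 0
        for a in agents:
            shared = a.will_share(pitch(a, informed, rng))
            a.learn(shared, story_was_fake=True)
            shares += shared
        print(f"{label} adversary, round {round_no}: {shares} of 1000 agents share")
```

Even in this cartoon, the adversary who knows the agents’ priors fools noticeably more of them, and repeated exposure makes the agents more sceptical over time.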
We learnt a few new things about how fake news is spread. For example, we show that providing feedback about news that has been shared makes it easier for people to detect fake news.
Our work also suggests that artificially injecting a certain amount of fake news into a social network can train users to better spot fake news.
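As a rough illustration of these two findings (and only an illustration, with made-up parameters rather than results from our study), the toy simulation below shows how feedback on shared stories, and a larger dose of deliberately injected fakes, push agents’ sharing thresholds up so that fewer of them fall for a typical fake story.

```python
import random


def fooled_rate(rounds, fake_fraction, feedback, n_agents=500, lr=0.05, seed=0):
    """Toy simulation: agents share a story when it looks plausible enough.
    With feedback, neighbours flag shared fakes (raising the agent's bar)
    and mildly reward shared genuine stories (lowering it slightly)."""
    rng = random.Random(seed)
    thresholds = [0.5] * n_agents
    for _ in range(rounds):
        for i in range(n_agents):
            fake = rng.random() < fake_fraction
            # Fake stories are crafted to look plausible; real ones vary more.
            plausibility = rng.uniform(0.4, 0.9) if fake else rng.uniform(0.2, 1.0)
            shared = plausibility > thresholds[i]
            if feedback and shared:
                if fake:
                    thresholds[i] = min(1.0, thresholds[i] + lr)
                else:
                    thresholds[i] = max(0.0, thresholds[i] - lr / 10)
    # After training, how many agents would still share a typical fake story?
    typical_fake = 0.7
    return sum(typical_fake > t for t in thresholds) / n_agents


print("no feedback:          ", fooled_rate(50, fake_fraction=0.2, feedback=False))
print("feedback:             ", fooled_rate(50, fake_fraction=0.2, feedback=True))
print("feedback + more fakes:", fooled_rate(50, fake_fraction=0.5, feedback=True))
```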
Crucially, we can also use models like ours to come up with strategies on how to curb the spread of fake news.
There are three things we have learned from this research about what everyone can do to stop fake news.
Fighting fake news
Because humans learn from their neighbours, who learn from their neighbours, and so on, everybody who detects and flags fake news helps prevent its spread across the network. When we modelled how the spread of fake news can be prevented, we found the single best way was to allow users to provide feedback to their friends about a piece of news they shared.
Beyond pointing out fake news, you can also praise a friend when they share a well researched and balanced piece of quality journalism. Importantly, this praise can happen even when you disagree with the conclusion or political point of view expressed in the article. Studies in human psychology and reinforcement learning show that people adapt their behaviour in response to negative and positive feedback – particularly when this feedback comes from within their social circle.
The second big lesson was: keep your data to yourself.
The web 2.0 was built on the premise that companies offer free services in exchange for users’ data. Billions followed the siren’s call, turning Facebook, Google, Twitter, and LinkedIn into multi-billion dollar behemoths. But as these companies grew, more and more data was collected. Some estimate that as much as 90% of all the world’s data was created in the past few years alone.
Do not give your personal information away easily or freely. Whenever possible, use tools that are fully encrypted and that collect very little information about you. There is a more secure and more privacy-focused alternative for most applications, from search engines to messaging apps.
Social media sites don’t yet have privacy-focused alternatives. Luckily, the emergence of blockchain provides a new technology that could solve the privacy-profitability paradox. Instead of having to trust Facebook to keep your data secure, you can now put it on a decentralised blockchain designed to operate as a trustless environment.
Co-Pierre Georg, Senior Lecturer, African Institute for Financial Markets and Risk Management and Director, UCT Financial Innovation Lab, University of Cape Town; Christoph Aymanns, Assistant Professor, School of Finance, University of St. Gallen; and Jakob Foerster, Doctoral student, Artificial Intelligence and Machine Learning, University of Oxford.
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.