Navigating a World of Lies: Staying Media-Savvy in a World Full of Mis/Disinformation
By Feidias Psaras, Sciences Po—Menton
We obtain the overwhelming majority of our information from sources we perceive as authoritative—medical advice from doctors, the history of our country from our teachers, when to change the oil in our car from mechanics. The advent of the internet has created a space where information can be circulated, maintained, altered, and conjured up at unfathomable speed. For all its flaws, the internet—at least in countries where it is relatively open and accessible—has assuaged fears of an oppressive Orwellian dystopia in which a monopoly on information allows it to be freely withheld or rewritten.
But while Orwell might have been put at ease, Huxley has not; the internet gives rise to a dimensionless information-space, but one that comprises both good information and bad, the sheer volume of which is humanly impossible to sort. It has precipitated an era of post-truth, in which appeals to emotion and personal belief shape public opinion more powerfully than reliable facts. We are constantly faced with what is termed “problematic information”: information which is “inaccurate, misleading, inappropriately attributed, or altogether fabricated.” Such information can be either unintentionally false—misinformation—or intentionally false—disinformation.
Misinformation and disinformation matter because they exhibit higher “spreadability” online. A study using data from over 3 million tweets found that while truthful news is broadcast by more sources in total, the average piece of false information stemming from a single original source tends to exhibit longer retweet chains and to attract more users, both as viewers and through various forms of interaction. In less time than a truthful tweet needed, on average, to give rise to a chain ten retweets long, false tweets on average reached chains twice that length. Whereas sources conveying truthful and accurate information rarely reached more than 1,000 people, the top 1% of false retweet chains reached between 1,000 and 100,000 people. Misinformation is also most prevalent, and most successful, around politics: 45,000 of the 126,000 stories sampled were original false political posts, and they regularly reached 20,000 people faster than any other category of false information.
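To see why a small edge in per-view sharing compounds into much deeper chains, consider a minimal sketch in Python. It models a cascade as a toy branching process; the fan-out and retweet probabilities are hypothetical illustrations, not the study’s parameters or methodology.

```python
import random

def simulate_cascade(retweet_prob, followers_per_view=20, max_depth=20):
    """Simulate one retweet cascade as a simple branching process.

    Each current retweeter exposes the post to `followers_per_view`
    followers, each of whom retweets with probability `retweet_prob`.
    Returns the cascade's depth (length of the longest retweet chain).
    """
    frontier, depth = 1, 0  # start from the original poster
    while frontier > 0 and depth < max_depth:
        views = frontier * followers_per_view
        frontier = sum(random.random() < retweet_prob for _ in range(views))
        if frontier:
            depth += 1
    return depth

# Hypothetical rates: false content needs only a slightly higher per-view
# share rate for its cascades to run markedly deeper on average.
for label, rate in [("truthful", 0.04), ("false", 0.06)]:
    depths = [simulate_cascade(rate) for _ in range(2000)]
    print(f"{label}: mean depth {sum(depths) / len(depths):.1f}")
```

Because each generation of retweeters multiplies the previous one, a share rate that merely crosses the break-even point of one new retweeter per viewer generation yields cascades that run far deeper on average.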
It is clear, then, that false information spreads faster than truthful information for any subject. But what exactly are the reasons for this? The answer has to do with the nature of individual dispositions, the information itself, and the structures that enable the exchange of information in the first place.
Looking at the Information
For natural disasters, false news—most often in the form of misinformation—results from the inability of localized news sources to instruct citizens on how to behave. This reporting gap creates an incentive for individuals close to the disaster site to become primary sources of news. Anxiety, information ambiguity, and the sense of responsibility generated by personal involvement give rise to ‘improvised news’ which spreads through social networks online and obfuscates the nature of a crisis, thus slowing down rescue efforts and wasting precious time.
When it comes to scientific matters, disinformation is often spread specifically to communities which have some predisposition, whether religious or otherwise, to believe it. This is particularly prominent in matters of public health, with vaccines being a hotly contested topic. The now widely discredited 1998 paper by Andrew Wakefield investigating a possible link between the measles vaccine and autism was driven by major undisclosed financial incentives; at the time, Dr. Wakefield had been given approximately $700,000 by a lawyer preparing a lawsuit against a vaccine manufacturer on behalf of clients. Despite the eventual retraction of the article by The Lancet and Dr. Wakefield’s removal from the medical register, the study remains a cornerstone of still-active anti-vaccination advocacy groups. A flare-up of anti-vaccination misinformation was also witnessed during the COVID-19 pandemic, with the WHO Director-General noting: “We are not just fighting an epidemic; we are fighting an infodemic.”
More research has been carried out on the motivations, nature, and outcomes of false political information. Domestic political mis/disinformation might exist to denigrate opponents or to contribute to partisan rhetoric, whether to gain political capital or to express discontent, typically with the status quo. More intentional forms of disinformation include micro-targeting (the use of personal data to sort users into distinct ‘advertisable’ groups), impersonator accounts, and the fueling of polarization through instigators or trolls who intentionally post and spread controversial information. Oftentimes, the goal of political mis/disinformation is to undermine trust in existing institutions. It is also common for individuals to spread disinformation for comedic purposes or to gain internet clout.
AI threatens to exacerbate all forms of disinformation. While its large-scale use since the release of the new generation of AI tools in 2022 has been detected in only a handful of election processes—with election outcomes to date not observably deviating from predictive polls—its application in disinformation is certain to increase significantly in the coming years. AI-operated troll armies, which imitate human subtlety with machine-level tirelessness, will be able to spread more convincing disinformation far faster than current bot accounts. Impersonation could take the form not only of falsely attributed accounts, but also of ‘deepfakes’, which are becoming ever more convincing. While there are still specific ways to detect whether an image or video has been doctored, AI-generated audio has proved much more convincing, especially when supplemented with background noise.
AI’s ability to amplify the intensity and effectiveness of disinformation poses an important conundrum for legislative bodies in the EU, which seek to regulate such technologies in the interest of preserving the integrity of public discourse while keeping up in a competitive, innovation-driven economy. Mark Zuckerberg, the CEO of Meta, has stated that it is a shame to see the EU being ‘left behind’ on AI; his instruction to Meta’s software developers to direct resources away from the regulation-heavy bloc exemplifies this trade-off.
Looking at the Structure
But much of the spreadability of misinformation and disinformation comes down to the way the platforms where we get so much of our information—YouTube, X, Instagram, Facebook—present their content. These platforms generate revenue by maximizing watch-times, and therefore design their recommendation algorithms to reinforce viewers’ engagement habits. Although this strategy is relatively anodyne when it comes to entertainment, its application to political content means that users are surrounded in their digital environments by political messaging that fits pre-existing beliefs, in what is commonly called a “filter bubble.” It also means that, because of these positive-feedback algorithms, political orientations built on those beliefs are amplified over time; recommendation algorithms have previously been cited as a major factor in the radicalization of individuals. The lucrativeness of sensationalism, often at the cost of journalistic integrity, spurs the creation of ‘fake news’ organizations which spread false information in a news format to generate clicks and revenue. Although recent studies have indicated that the filter-bubble effect might be overstated, it remains an important notion for describing how our perception is structured in online spaces.
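The feedback dynamic is easy to reproduce. Below is a minimal sketch in Python of an engagement-weighted recommender; the topics, click rates, and exploration parameter are hypothetical, and real platform systems are vastly more complex.

```python
import random
from collections import Counter

def recommend(history: Counter, topics: list[str], explore: float = 0.1) -> str:
    """Pick the next post's topic in proportion to past engagement.

    With probability `explore`, show a random topic; otherwise sample
    topics weighted by how often the user engaged with them before.
    """
    if not history or random.random() < explore:
        return random.choice(topics)
    choices, weights = zip(*history.items())
    return random.choices(choices, weights=weights)[0]

topics = ["sports", "cooking", "partisan politics"]
history = Counter()
for _ in range(1000):
    topic = recommend(history, topics)
    # Hypothetical user model: partisan content gets clicked slightly
    # more often, and every click feeds back into the sampling weights.
    if random.random() < (0.6 if topic == "partisan politics" else 0.5):
        history[topic] += 1

print(history)  # partisan politics typically ends up dominating the feed
```

The point of the sketch is the positive feedback: a small difference in click-through compounds, because every click makes the same kind of content more likely to be shown again.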
The treatment of political content as a profit-making endeavor is most evident in the use of marketing tactics to drive political campaigning. The Cambridge Analytica scandal, which exposed a research project that used Facebook analytics to determine users’ political orientation from activity data, exemplifies the dangers of using private data for political-commercial purposes.
In recent years, the platforms themselves have become subject to politicization. After Twitter’s decision to ban Donald Trump, now the sitting U.S. President, over his incendiary tweets that spurred on the January 6th rioters, the platform moved to the center of the American political debate. Its subsequent acquisition and radical restructuring by Elon Musk, tech mogul and Trump’s current right-hand man, have led to the so-called ‘Twitter exodus’. Political forces that once operated within the same platform have thus fractured along political lines, with the emergence of right- and left-wing alternatives such as “Truth Social” and “Bluesky,” respectively. This segmentation of political discourse by platform marks a more formal, platform-mediated echo chamber that may allow dis/misinformation to spread more rampantly and deepen partisan divides.
Disinformation in a Geopolitical Context
An increasingly widespread phenomenon is the weaponization of disinformation by foreign state and state-aligned actors in pursuit of political aims. Certain tactical trends emerge in foreign-led disinformation operations. One involves domesticating narratives: ensuring that externally seeded content is taken up and reposted by local users in order to (a) bypass platform checks, (b) make the original source harder to detect, and (c) spread disinformation with minimal effort. Another important tactic is perception hacking: small attacks compromising electoral systems whose psychological effect outweighs their procedural one, creating the impression that the system is rigged and undermining voter trust in institutions.
In light of the Romanian Supreme Court’s annulment of the first round of votes, which had favoured the far-right pro-Russian candidate Calin Georgescu, on grounds of media manipulation on TikTok, the EU Commission has opened a formal investigation into the entertainment platform’s involvement in influencing the country’s political affairs. In the case of Georgescu, comments from the European far right that the EU Commission is prone to censorship, while feeding into already fallacious and over-inflated rhetoric, are not completely unjustified. Even if the EU case against TikTok does establish foreign election interference, that does not change the fact that the decision of Romania’s top court rendered meaningless the results of a democratic process that was itself carried out with integrity, and in turn undermined the country’s—and the Union’s—institutions.
Solutions
More than just a symptom of a personalized, attention-driven economy, misinformation and disinformation have become effective tools used at both the domestic and foreign levels to cloud political discourse, entrench partisanship, and undermine voter trust in democratic institutions. In light of this, an important question arises: how can policymakers address the corrosive effects of mis/disinformation on democratic institutions without themselves compromising the basic democratic tenet of freedom of speech? The answer requires a holistic approach, one drawing on behavioral science, journalism, education, and beyond.
On the platform level, there needs to be greater accountability for content. This includes more initiatives to sanction corporations for failing to manage disinformation, as with the EU investigation, and preemptive laws that strengthen moderation of on-platform content. It is important for platforms to institute fact-checking mechanisms that flag false information. One option is hiring professional fact-checkers, as Facebook has done, although with limited resources the fact-checking process is quite slow. Another approach, which Jang et al. term the corrective one, would be to grant fact-checking capabilities to users. While this creates the possibility of ‘fact-checking trolls’, a bottom-up approach with the right checks and balances is more likely to provide a faster and more dynamic method of curbing dubious information online. Still, both approaches share a problem: unreviewed content that carries no tag can be falsely interpreted as legitimate (the implied truth effect). This could be remedied with the inclusion of a third, ‘unverified’ tag, as sketched below. Another feature that has proven useful is limiting the “broadcastability” of any given actor. For messaging apps such as WhatsApp, which played a large role in the spread of disinformation during the Indian elections, reducing the number of users a message can be shared with from 250 to 5 has brought sharing down by 25%.
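To make the tagging idea concrete, here is a minimal sketch in Python of a three-tag scheme combined with a broadcast cap; the tag names and the cap value are hypothetical illustrations inspired by the measures above, not any platform’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical tag set: an explicit "unverified" state counters the implied
# truth effect, under which untagged posts read as implicitly legitimate.
TAGS = {"verified", "disputed", "unverified"}

FORWARD_LIMIT = 5  # cap on onward shares, in the spirit of WhatsApp's limit

@dataclass
class Post:
    text: str
    tag: str = "unverified"  # every post starts explicitly unverified
    forwards: int = 0

    def review(self, verdict: str) -> None:
        """Record a fact-check verdict ('verified' or 'disputed')."""
        if verdict not in TAGS:
            raise ValueError(f"unknown tag: {verdict}")
        self.tag = verdict

    def forward(self) -> bool:
        """Allow an onward share only while under the broadcast cap."""
        if self.forwards >= FORWARD_LIMIT:
            return False
        self.forwards += 1
        return True

post = Post("Breaking: miracle cure found!")
print(post.tag)                            # 'unverified' until reviewed
post.review("disputed")
print([post.forward() for _ in range(7)])  # shares 6 and 7 are refused
```

The design choice worth noting is that no post is ever untagged: the default ‘unverified’ label makes the absence of review visible instead of leaving it to be read as tacit endorsement.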
It is also important to focus on users themselves through awareness-raising and education programs. Workshops such as the ‘Satyameva Jayate’ initiative in the Kannur district of Kerala, which teaches schoolchildren to spot misinformation, might prove crucial to maintaining the integrity of public discourse online by instilling digital literacy in the coming generation of users. Even in everyday use, techniques such as the SIFT method (Stop, Investigate the source, Find better coverage, Trace to the source), proposed by digital literacy expert Mike Caulfield, can prove highly effective in assessing the validity of content and institutions online. It requires pausing when reading a new piece of information, especially one that makes a large claim about an event or person, checking the validity of the source through a third party, cross-checking the information, and finding the original source of the claim.
More resources should be devoted to fact-checking organizations, both by NGOs and at the state level. Initiatives such as EU vs Disinfo, run by the East StratCom Task Force and specializing in debunking disinformation in countries formerly controlled by the Soviet Union, are severely underfunded. This reduces the number of stories that get debunked and prevents strong networks of disinformation-battling institutions from forming in the EU. To remedy this, more funding should be dedicated to these organizations, and national offices should be established to counter disinformation at the regional level.
According to the World Economic Forum, misinformation is the biggest threat that governments will face in the coming year. Given this assessment, efforts to engage with the issue—by governments, policymakers, and users alike—cannot remain halfhearted. The way forward lies in increased and standardized media literacy, greater and more concentrated support for anti-dis/misinformation task forces, and clear preemptive state frameworks to regulate media companies’ inordinate power over our attention and opinions.
Cover photo taken from Flickr.