The Politics of Misinformation: Social Media, Polarization, and the Geopolitical Landscape in 2025
Trend Report 6 / January 2025
By Ceren Çetinkaya
According to the Global Web Index, as of October 2024, 64% of the global population actively uses social media, spending an average of 2 hours and 19 minutes on these platforms each day (Chaffey, 2024). This digital transformation has reshaped many domains of our lives, most notably the political sphere. Social media platforms, particularly X (formerly Twitter), TikTok, and Facebook, have become central hubs for political discourse. Their democratizing potential enables grassroots movements and empowers individuals to communicate with large audiences, bypassing traditional gatekeepers. However, this empowerment has come with significant trade-offs, including the proliferation of disinformation and misinformation and the reinforcement of ideological echo chambers, which together contribute to the polarization of society.
The World Economic Forum’s 2025 Global Risks Report identifies misinformation as the most critical challenge to political cohesion and societal trust over the next two years, particularly because of its capacity to fracture democratic institutions. As we enter 2025, X has established itself as a major venue for political discourse, especially for populist far-right movements, and has been accused of propagating misinformation. Recently, Meta CEO Mark Zuckerberg announced that Meta would remove third-party fact-checkers in the US and replace them with a crowd-sourced moderation service similar to the “community notes” feature on the rival platform X, on the grounds that “the fact-checkers are politically biased”. Given the documented effects of social media in conflicts such as the Rohingya humanitarian crisis in 2018, the consequences of changing platform policies and the tightening big tech–politics axis remain uncertain in a year in which ongoing conflicts seem unlikely to end soon. This report examines the geopolitical implications of misinformation in 2025 and calls for greater global attention to the role of social media in conflict areas.
Digital Transformation of Political Engagement in the Age of Misinformation
Social media has played a major role in political campaigns, especially in 2024, a year marked by an unusually high number of elections worldwide. Politicians increasingly harnessed social media’s narrative-shaping power to mobilize support. For example, during her campaign, Kamala Harris spent $113 million on Meta advertising—more than the GDP of some of the world’s smallest nations—and $4.5 million on TikTok influencers, while Donald Trump’s campaign allocated $17 million in total (Chaudhuri & Zhu, 2024). These figures underscore the critical role that digital platforms play in modern political engagement. However, the impact of these digital strategies extends beyond mere spending, as the social media ecosystem has become a powerful force in shaping public opinion (West, 2024). 2024 also marked a significant shift in the use of artificial intelligence (AI) in political campaigns, introducing new challenges and opportunities for voter engagement. AI-generated content, particularly in the form of "deepfakes" and other manipulated media, emerged as a potent tool for both political messaging and the spread of misinformation.
Social media platforms have become powerful catalysts for the proliferation of misinformation through two key mechanisms. First, sophisticated recommendation algorithms create "filter bubbles" that curate content aligned with users' existing views in order to maximize engagement. Second, users tend to connect with like-minded individuals, forming echo chambers that amplify and reinforce pre-existing beliefs (Rhodes, 2022). These dynamics cultivate directionally motivated reasoning, in which individuals interpret information in ways that support their preconceptions, cling to misinformation even when confronted with factual corrections, and become more politically polarized. This is a product of algorithms designed to give primacy to user engagement and profitability over the representation of diverse or critical perspectives. The persistence of these algorithms is intrinsically tied to the core business models of social media companies; platforms such as Facebook (now Meta) have built their financial success on these engagement-driven systems. Therefore, without significant external pressure—particularly in the form of regulatory intervention—these tech giants are unlikely to implement substantial changes to their algorithmic frameworks (Arguedas et al., 2022). As we navigate the complex landscape of digital political engagement, the need for a balanced approach that preserves the benefits of social media while mitigating its potential for harm has become increasingly apparent.
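To make the filter-bubble mechanism concrete, the sketch below is a deliberately simplified, hypothetical model of engagement-only ranking; it is not any platform's actual recommender, and the item pool, the "lean" scale, and the engagement model are all illustrative assumptions. It shows how, when the sole ranking objective is predicted engagement and engagement is assumed to rise with ideological proximity, the content a user is shown collapses into a narrow band around their existing position.

```python
# Minimal illustrative sketch (hypothetical, not any platform's actual code):
# ranking purely by predicted engagement narrows what a user sees to content
# that matches their existing views, i.e. a "filter bubble".
import random

random.seed(42)

# Hypothetical content pool; "lean" places each item on a -1.0 .. +1.0 spectrum.
ITEMS = [{"id": i, "lean": random.uniform(-1.0, 1.0)} for i in range(1000)]

def predicted_engagement(user_lean: float, item: dict) -> float:
    # Assumption: users engage most with items close to their own position.
    return 1.0 - abs(user_lean - item["lean"]) / 2.0

def rank_feed(user_lean: float, k: int = 20) -> list[dict]:
    # Pure engagement ranking: no diversity or accuracy term in the objective.
    return sorted(ITEMS,
                  key=lambda it: predicted_engagement(user_lean, it),
                  reverse=True)[:k]

if __name__ == "__main__":
    pool_leans = [it["lean"] for it in ITEMS]
    feed_leans = [it["lean"] for it in rank_feed(user_lean=0.3)]
    print(f"content pool spans {min(pool_leans):+.2f} .. {max(pool_leans):+.2f}")
    print(f"ranked feed spans  {min(feed_leans):+.2f} .. {max(feed_leans):+.2f}")
```

Running the sketch shows a content pool spanning the full spectrum while the ranked feed occupies only a thin slice around the user's own lean, which is the dynamic the engagement-first business model entrenches.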
Looking Ahead with Lessons from the Past: Conflict Management and Big Tech
As we move into 2025, the interrelation between media, big tech, and politics has continued to shape the global political landscape, with social media platforms wielding unprecedented influence. This influence was notably demonstrated by X’s role, and Elon Musk’s support, in Donald Trump’s recent election campaign and victory. In response, new regulations on social media platforms have emerged, and big tech’s impact on global politics is being reassessed.
On January 7th, Mark Zuckerberg announced that Meta would replace its third-party fact-checking program in the U.S. with a “Community Notes” system that allows users to flag misleading posts and add the necessary context to them (Kaplan, 2025). While this approach may be effective in some cases, critics fear it could empower vocal and well-organized groups to selectively shape narratives and promote alternative agendas (Ertuna, 2025). The decision was justified by claims of political bias among fact-checkers, but Nobel Peace Prize laureate Maria Ressa countered that journalists abide by professional ethics and values, highlighting their significant role in fact-checking (Milmo, 2025).
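For context on the safeguard at the center of this debate: X's published Community Notes ranking uses a "bridging" approach based on matrix factorization over rater histories, so that notes are shown only when raters who usually disagree both find them helpful. The toy sketch below is not that implementation, nor Meta's; it is only a simplified illustration of the bridging idea, and the function name, thresholds, and cluster labels are hypothetical.

```python
# Toy sketch of a "bridging" publication rule for crowd-sourced notes.
# Real systems infer viewpoint clusters from rating history; here clusters
# and thresholds are illustrative assumptions only.

def note_is_published(ratings: list[dict],
                      min_helpful_share: float = 0.7,
                      min_clusters: int = 2) -> bool:
    """ratings: list of {"cluster": str, "helpful": bool} entries from raters."""
    if not ratings:
        return False
    helpful_share = sum(r["helpful"] for r in ratings) / len(ratings)
    helpful_clusters = {r["cluster"] for r in ratings if r["helpful"]}
    # Publish only if the note has broad support AND that support spans
    # several distinct rater clusters (the "bridging" requirement).
    return helpful_share >= min_helpful_share and len(helpful_clusters) >= min_clusters

# A single vocal cluster, however unanimous, is not enough on its own:
print(note_is_published([{"cluster": "A", "helpful": True}] * 10))   # False
# Support that bridges two clusters clears the bar:
print(note_is_published([{"cluster": "A", "helpful": True}] * 6 +
                        [{"cluster": "B", "helpful": True}] * 4))    # True
```

Whether such a rule is enough to prevent coordinated groups from shaping narratives, as critics fear, depends on how clusters are identified and how aggressively the thresholds are set.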
The potential consequences of this shift are concerning, especially given Facebook’s past role in exacerbating conflicts, including the Rohingya crisis in Myanmar. The platform’s algorithms, which prioritize engagement, have been shown to amplify hate speech and misinformation, fueling ethnic violence. Facebook’s inadequate local-language moderation and delayed responses allowed harmful content to spread unchecked, drawing widespread condemnation. The Office of the High Commissioner for Human Rights (OHCHR) stated in a report that Facebook had been slow and ineffective in counteracting hate speech and incitement to violence online (OHCHR, 2018). Facebook admitted its failure to prevent the platform from being used to "incite offline violence" in Myanmar, acknowledging that its recommendation systems continued to spread harmful content even after certain individuals had been banned for hate speech (BBC, 2018).
Similar issues persist globally. In October 2024, Amnesty International reported that Philippine authorities had used social media, particularly Facebook and Instagram, to "red-tag" young activists, endangering lives and chilling dissent (Amnesty International, 2024). These examples highlight the urgent need for social media platforms to implement stronger measures to prevent the misuse of their services, protect users, and safeguard democratic processes around the world.
The implications of unchecked misinformation are clear: polarized public opinion, manipulation of information by state and non-state actors, and the undermining of democratic processes. These dynamics, which can paralyze communication and coordination in crisis situations, make conflict resolution and de-escalation more difficult. In 2025, as the digital landscape grows ever more complex, fighting misinformation is fast becoming one of the key global challenges, with false narratives dramatically escalating tensions and impeding peace efforts in conflict zones. The World Economic Forum’s Global Risks Report underscores the urgent need for broad, adaptive strategies that transcend national borders and technological platforms. Addressing this challenge will require sustained international collaboration, innovative regulatory approaches, and a collective commitment to preserving the integrity of public discourse in regions experiencing active conflict and geopolitical instability.