Disinformation and Digital Targeting in the Wake of Tripoli’s May 2025 Unrest

Executive Summary:

The violent escalation in Tripoli during May 2025 did not remain confined to the city’s streets. As armed confrontations unfolded between rival militias and security forces, Libya’s digital space became a parallel battlefield. This report examines how disinformation, online harassment, and digital targeting surged in the wake of the unrest, exposing the vulnerabilities of Libya’s online ecosystem and the dangers of unchecked information warfare.

Our findings reveal a coordinated surge of misleading and harmful content across platforms including Facebook, X (formerly Twitter), and Telegram. This content was frequently aimed at inflaming public sentiment, tarnishing reputations, and weaponising identity. In particular, false associations with militias, impersonation of media outlets, and the distribution of malware disguised as leaked content were common tactics used to sow confusion and fear.

The crisis was further compounded by an internet slowdown and partial shutdown. While some attributed the disruption to infrastructure damage amid the fighting, others raised concerns over deliberate restrictions imposed by authorities to suppress information flow and online mobilisation.

Amid this toxic information environment, Libyan civil society actors, fact-checkers, and independent voices mobilised quickly to counter harmful narratives and promote accurate, peace-oriented messaging. Their efforts highlight the resilience of local digital defenders but also underscore the pressing need for systemic support.

The events of May 2025 underscore how quickly online spaces can become tools of manipulation in fragile political environments. The report concludes with a set of urgent recommendations: improving digital literacy across society, increasing Arabic-language content moderation capacity, enhancing emergency response protocols on social platforms, and ensuring transparency in internet governance during crises.

Introduction:

In May 2025, the killing of Abdelghani al-Kikli—commonly known as “Ghaniwa”—a powerful militia commander in Tripoli, triggered intense street battles between rival factions nominally aligned with the Government of National Unity (GNU). But while the violence was most visible in Tripoli’s neighbourhoods, a less visible but equally potent conflict erupted online. Almost immediately, Libya’s social media landscape was inundated with disinformation campaigns, manipulated content, hate speech, and targeted harassment, revealing the strategic use of digital tools to shape narratives, discredit opponents, and exacerbate tensions.

This dual-track conflict—on the ground and in cyberspace—exposed the fragility of Libya’s digital information environment. Posts falsely implicating individuals in militia affiliations, impersonations of reputable media brands spreading malware, and repeated use of inflammatory language fueled mistrust and confusion among a population already navigating political instability and trauma.

Amid this digital chaos, users also experienced a major disruption in internet connectivity. While initial reports suggested that telecommunications infrastructure had been damaged during the clashes, others alleged that the blackout was a calculated move by authorities to curb dissent, delay mobilisation, and control the narrative, especially as public outcry and protests against the GNU intensified.

This report documents and analyses how the digital realm was weaponised before, during, and after the Tripoli unrest. It focuses on specific cases of online defamation, malicious ad campaigns, and viral misinformation, drawing lessons for platforms, policymakers, and civil society. In doing so, it offers a sobering look at the new frontlines of conflict—and the growing urgency to defend digital integrity in times of crisis.

Methodology

This study employed a mixed-methods investigative approach to analyse the digital manipulation and targeting tactics that unfolded during and immediately after the May 2025 Tripoli unrest. It focused on identifying and verifying harmful content, tracing the tactics of malicious actors, and assessing the human and digital impacts of disinformation campaigns in real time.

The methodology consisted of three interrelated phases:

1. Identification of Harmful Digital Activity

Multiple data sources were leveraged to detect and track disinformation and targeted online attacks:

  • Helpdesk Reports: Individuals directly affected by digital harassment or impersonation submitted cases through our helpdesk system. These real-time alerts offered critical insight into personal-level targeting, including incidents where people were falsely linked to militia activity.

  • Platform Monitoring and Alerts: Ongoing surveillance of Facebook and other platforms was conducted using keyword tracking, OSINT tools, and previously reported data on malicious pages. Special attention was paid to low-profile Facebook pages exhibiting abnormal behaviour, such as minimal follower counts but high ad spend or reposting rates (a simplified illustration of this heuristic follows this list).

  • Crowdsourced Intelligence: The research team collaborated with Meta’s safety and moderation teams, enabling verification and reporting of disinformation networks. Viral and recycled misinformation was tracked across multiple accounts to assess scope and amplification tactics.

  • Suspicious Link Analysis: Potentially harmful external URLs shared in ads or posts were systematically analysed. This included links leading to expired file-hosting services historically associated with malware campaigns.
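
The page-level heuristic described above can be expressed as a simple triage rule. The following Python sketch is illustrative only: the field names, thresholds, and example data are hypothetical assumptions, and the actual monitoring relied on OSINT tooling and previously reported page data rather than this exact logic.

```python
from dataclasses import dataclass

# Example only: domains of file hosts previously flagged for misuse.
KNOWN_BAD_HOSTS = {"files.fm"}

@dataclass
class PageSnapshot:
    name: str
    followers: int
    active_ads: int          # currently running sponsored posts
    reposts_per_day: float   # average rate of recycled content
    shared_hosts: set        # domains of external links shared by the page

def suspicion_flags(page, max_followers=5_000, min_ads=3, min_reposts=10.0):
    """Return the list of heuristics this page trips (empty if none)."""
    flags = []
    # Core pattern from the monitoring phase: tiny audience, heavy ad spend.
    if page.followers < max_followers and page.active_ads >= min_ads:
        flags.append("low followers but high ad activity")
    if page.reposts_per_day >= min_reposts:
        flags.append("abnormal repost rate")
    if page.shared_hosts & KNOWN_BAD_HOSTS:
        flags.append("links to previously flagged file hosts")
    return flags

# Hypothetical example:
page = PageSnapshot("example-page", followers=3_800, active_ads=5,
                    reposts_per_day=14.0, shared_hosts={"files.fm"})
print(suspicion_flags(page))
```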

2. Content Evaluation and Verification Criteria

To determine the severity and intent of each identified case, a structured evaluation framework was applied:

  • Source and Authenticity Verification: Reverse image search tools (e.g., InVID, Google Reverse Image Search) were used to confirm whether profile pictures, video content, or other media assets were manipulated, reused, or stolen (a complementary technique is sketched after this list).

  • Intent and Harm Assessment: Content was assessed for malicious purpose—e.g., attempts to incite violence, defame individuals, impersonate institutions, or distribute malware. This was essential to distinguish between misleading content and deliberate information attacks.

  • Engagement and Amplification Metrics: The study tracked how widely harmful posts were shared or promoted, particularly in cases where misinformation was reposted by multiple pages or artificially boosted through ads.

  • Platform Policy Violation Analysis: Content was reviewed in relation to platform rules on impersonation, targeted harassment, coordinated inauthentic behaviour, and malicious software distribution.

  • Partner Collaboration Logs: All cases flagged were formally submitted to Meta’s moderation team, and enforcement actions (e.g., content takedowns, page removals) were documented in our helpdesk system.
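
Verification in the study relied on reverse image search tools such as InVID and Google Reverse Image Search. As a complementary technique, reused or lightly edited images can also be detected locally with perceptual hashing; the sketch below uses the third-party Python libraries Pillow and imagehash and is an illustration under those assumptions, not the workflow the team used.

```python
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

def likely_same_image(path_a, path_b, max_distance=8):
    """Compare two images by perceptual hash (pHash).

    Small Hamming distances survive re-compression, resizing, and minor
    edits, which is typical of stolen profile pictures recirculated by
    fraudulent pages. The distance threshold is an assumption to tune.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance  # Hamming distance

# Hypothetical file names:
# likely_same_image("reported_post_image.jpg", "victim_profile_photo.jpg")
```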

3. Tools and Techniques Utilised

A blend of technical and investigative tools supported the research process:

  • Helpdesk Logging System: Used to collect reports, track case resolution, and store verification outcomes.

  • Malware Analysis Platforms: Suspicious files downloaded via reported links were tested using malware analysis sandboxes to detect malicious behaviour such as data extraction, spyware deployment, or backdoor installation (an illustrative triage sketch follows this list).

  • Open-Source Intelligence (OSINT): Public databases (e.g., UK Companies House) were consulted to verify the status of companies hosting malicious files or managing suspect domains.

  • Verification and Media Forensics Tools: Technologies like InVID and metadata scrapers were applied to assess the credibility of viral images and videos.
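
To make the triage step concrete, the sketch below hashes a downloaded file and queries a reputation service before any sandbox detonation. The use of VirusTotal's public v3 API here is an assumption chosen for illustration; the study's own analysis ran in dedicated malware sandboxes, and the file name and key in the example are hypothetical.

```python
# Requires: pip install requests
import hashlib
import requests

VT_FILES = "https://www.virustotal.com/api/v3/files/{}"

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def reputation_stats(path, api_key):
    """Return detection stats for a file's hash, or {} if never seen."""
    response = requests.get(VT_FILES.format(sha256_of(path)),
                            headers={"x-apikey": api_key})
    if response.status_code == 404:   # hash unknown to the service
        return {}
    response.raise_for_status()
    return response.json()["data"]["attributes"]["last_analysis_stats"]

# Hypothetical usage:
# print(reputation_stats("suspicious_download.rar", api_key="YOUR_KEY"))
# e.g. {'malicious': 41, 'suspicious': 2, 'undetected': 19, ...}
```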

This rigorous methodological approach enabled the team to document and respond to emerging threats in real time, while also laying the groundwork for long-term accountability and platform responsibility. By combining digital forensics, platform engagement, and user-centred reporting, the study offers a multi-layered perspective on the weaponisation of Libya’s digital space during conflict.

Digital Weaponisation in the May 2025 Tripoli Crisis

The May 2025 unrest in Tripoli was not only a battle between rival armed groups but also a digitally orchestrated campaign of fear, manipulation, and reputational harm. As the city erupted into violence following the killing of Abdelghani al-Kikli (“Ghaniwa”), Libya’s information space became saturated with coordinated online attacks that mirrored—and in some cases intensified—the street-level conflict.

This section explores how digital platforms were exploited during this crisis, through disinformation, impersonation, malware distribution, and identity-based harassment. Drawing from three documented case studies, the analysis reveals how Libya’s fragile digital ecosystem is increasingly being weaponised to destabilise, discredit, and endanger individuals and communities.

1. Identity-Based Harassment and Defamation

In the chaos of conflict, individuals found themselves falsely accused of affiliation with dismantled militias. In one documented case, a person working for the International Organization for Migration (IOM) was maliciously targeted on Facebook. A fraudulent page with fewer than 4,000 followers stole his profile image and falsely paired it with a known militia member's photo. The post not only defamed him but posed severe risks to his personal safety.

Such identity theft tactics exemplify how social media can be transformed into a tool of psychological warfare, exposing individuals to reputational damage, job loss, physical retaliation, and deep social ostracisation. What makes these attacks more dangerous is that they often originate from seemingly obscure Facebook pages that fly under the radar, yet have an outsized impact due to the volatility of the context.

2. Impersonation and Malware as a Vector for Disinformation

Another disturbing tactic observed during the unrest involved impersonating reputable Libyan media outlets to spread malware. Fraudulent Facebook pages disguised as credible news sources, particularly mimicking the branding of بوابة الوسط (Alwasat News), ran sponsored ads claiming to share leaked recordings from the conflict. These ads lured users into downloading compressed files hosted on Files.fm, a file-hosting platform whose operating company has been dissolved and which had previously been flagged for misuse.

Once downloaded, these files deployed malicious software that could compromise users’ devices, collect sensitive information, or install spyware. This tactic represents a dangerous convergence of two threats: the undermining of trust in legitimate media institutions and the weaponisation of curiosity in crisis moments to execute cyberattacks. It underscores how misinformation is no longer just a narrative; it can be executable and infectious.

3. Viral Misinformation Campaigns and Psychological Manipulation

Beyond individual targeting and technical deception, widespread misinformation campaigns aimed to exploit collective fear. Multiple Facebook pages, often unverified and operating in parallel, posted nearly identical content: inflammatory language, fabricated claims, and unverified battlefield updates. These posts spread rapidly, reinforcing false narratives and contributing to public panic.

This pattern reveals a deliberate use of echo chambers and virality to dominate the online discourse. By repeatedly reposting falsehoods across multiple platforms, bad actors engineered a sense of credibility and urgency. As misinformation accumulated, it became difficult for users—especially those without high digital literacy—to distinguish truth from propaganda, weakening social trust and fueling offline tensions.
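
One way such coordinated reposting can be surfaced is by measuring textual similarity between posts published by unrelated pages. The sketch below uses character-shingle Jaccard similarity; the threshold, page names, and example posts are hypothetical assumptions rather than data from the documented cases.

```python
from itertools import combinations

def shingles(text, k=5):
    """Character k-shingles over whitespace- and case-normalised text."""
    text = " ".join(text.split()).lower()
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def near_duplicates(posts, threshold=0.8):
    """Yield pairs of pages whose posts are near-identical."""
    signatures = {page: shingles(text) for page, text in posts.items()}
    for (p1, s1), (p2, s2) in combinations(signatures.items(), 2):
        if jaccard(s1, s2) >= threshold:
            yield p1, p2

# Hypothetical, paraphrased example posts:
posts = {
    "page_a": "Urgent: clashes spreading to new districts, stay indoors!",
    "page_b": "URGENT: clashes spreading to new districts, stay indoors",
    "page_c": "Road closures announced near the port this morning.",
}
print(list(near_duplicates(posts)))   # [('page_a', 'page_b')]
```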

Observed Patterns and Strategic Implications

Across the three cases, several patterns emerged:

  • Low-visibility Pages, High Impact: Many attacks originated from low-engagement accounts that nonetheless achieved significant influence through targeted ads and viral messaging.

  • Visual Disinformation: The strategic pairing of authentic and manipulated images proved highly effective in distorting perception and spreading defamation.

  • Exploitation of Platform Gaps: Gaps in content moderation in Arabic, combined with Libya’s limited institutional digital safeguards, allowed malicious actors to operate with impunity.

  • Weaponised Virality: Emotional manipulation and sensational framing helped weaponise public sentiment, turning fear into fuel for instability.

Conclusion

The May 2025 unrest in Tripoli revealed not only the physical dangers of armed conflict but also the growing risks of a fragmented and weaponised digital information space. The manipulation of online platforms to harass individuals, spread malware, and flood the public sphere with false or contradictory information represents a new dimension of warfare—one that targets perception, trust, and civic stability rather than territory alone.

The documented cases illustrate how seemingly minor actors—fraudulent Facebook pages, impersonated media brands, anonymous troll accounts—can cause outsized harm in moments of crisis. Their actions blurred the line between truth and falsehood, eroding public confidence in institutions, news sources, and even personal safety. In one striking example, a single piece of information was published, debunked, re-published as fact, and debunked again, creating such a confusing media environment that many no longer believed anything. In such an atmosphere, disinformation doesn't need to be convincing; it only needs to be overwhelming.

This digital fog of war had real-world consequences: reputations ruined, devices compromised, and trust fractured. And while local civil society actors and fact-checkers made commendable efforts to respond, the structural gaps in Libya’s information governance, content moderation, and platform accountability remain dangerously wide.

Call to Action:

To prevent future crises from spiralling into unchecked digital chaos, urgent, coordinated action is required across all sectors:

  • Social Media Platforms must expand Arabic-language content moderation, prioritise context-aware enforcement in conflict zones, and work transparently with local partners to flag and remove harmful content.

  • Civil Society and Media must be supported to scale fact-checking initiatives, develop digital literacy campaigns, and produce clear, accessible counter-narratives rooted in verified information.

  • Libyan Authorities and Regulators must commit to transparent internet governance, ensure connectivity is protected even during emergencies, and resist using shutdowns or surveillance as tools of control.

  • International Organisations and Donors should invest in digital safety infrastructure, support cross-sectoral collaboration on counter-disinformation, and champion the integration of digital rights into humanitarian and peacebuilding frameworks.

  • Citizens must be empowered through education and engagement to question, verify, and responsibly share information, especially in times of uncertainty.

Libya’s digital future must not be left vulnerable to manipulation and harm. Protecting that future requires treating information integrity not as a luxury, but as a fundamental pillar of peace, trust, and democratic resilience. The next conflict may not begin with gunfire; it may begin with a post, a video, or a lie that spreads faster than truth can catch it.

Disclaimer:

The case studies presented in this report are based on documented reports, helpdesk submissions, and verified monitoring during the May 2025 Tripoli unrest. While efforts have been made to ensure factual accuracy and protect the privacy of those affected, certain details, such as names, links, and identifiers, have been withheld or anonymised to ensure the safety and confidentiality of individuals. These cases are illustrative, not exhaustive, and reflect the broader patterns of digital manipulation and targeting observed during the crisis.

The analysis is intended for informational and research purposes and does not imply legal liability or attribution to specific actors unless explicitly confirmed.

Contributors:

This report was developed under the coordination and support of the Annir Initiative, in collaboration with a network of Libyan digital rights defenders, researchers, and civil society actors. Special thanks to the individuals and partners who contributed helpdesk data, verification support, and contextual analysis.

This work was made possible with technical input from digital security experts, open-source intelligence practitioners, and platform safety liaisons. The contributors extend their gratitude to all those working to protect the integrity, safety, and inclusiveness of Libya’s digital space.
