India’s Media Crackdown and a Comparison of Global Responses to Digital Misinformation
A screenshot of the homepage of the Pakistani news outlet Geo News from Wednesday, May 7.
The Integrity Project
India: A National Security Imperative
The Indian government has banned 16 Pakistani YouTube channels, citing their role in spreading provocative, communally sensitive content and misinformation targeted against India. The decision, announced Monday, comes just days after the April 22 terrorist attack in Pahalgam, Jammu and Kashmir, which left 26 civilians dead.
The Ministry of Home Affairs recommended the ban, stating that many of the channels were engaged in what it described as a coordinated effort to malign India's army, security forces, and government institutions. The banned platforms included major Pakistani media outlets such as Dawn News, Geo News, ARY News, Bol News, Samaa TV, and Suno News, along with popular individual content creators; together, the channels had amassed over 63 million subscribers.
According to reports from The Times of India and Hindustan Times, the government acted swiftly to block access to these channels from within India, with affected pages now displaying messages that the content is unavailable due to a government order related to national security concerns. Officials stressed that the channels were spreading "false narratives" that threatened to inflame communal tensions and undermine public order in the wake of the Pahalgam attack.
The government's digital crackdown is part of a broader set of diplomatic and security measures taken against Pakistan. In addition to the YouTube ban, India suspended visa services for Pakistani nationals, expelled Pakistani military attachés, halted operations under the Indus Waters Treaty, and closed the Attari land-transit post. Officials also issued a formal warning to the BBC for its coverage of the Pahalgam attack, criticizing its use of the word "militants" instead of "terrorists" when describing the attackers.
Pakistan, for its part, has denied any involvement in the Pahalgam incident and has called for an independent investigation into the attack. Pakistani officials accused India of escalating tensions unnecessarily and politicizing the tragedy for domestic gain.
The move reflects India's intensifying efforts to counter misinformation campaigns perceived as threats to national security, particularly from neighboring countries. It also highlights the growing role of digital media regulation as a key front in national security strategy, not only for India but for nations across the globe.
United States: Balancing Free Speech and Harms
In contrast to India’s approach, the United States has contended with the challenge of regulating online content within the framework of the First Amendment, which guarantees free speech. This constitutional protection limits the government's ability to directly control online content, leading to a reliance on legislation that targets specific harms.
One such legislative effort is the "Take It Down Act," passed by Congress in April 2025. This bipartisan bill criminalizes the non-consensual sharing of intimate images, including AI-generated deepfakes and revenge porn, and mandates that online platforms remove such content within 48 hours of a valid report. While the act has garnered support from tech companies and advocacy groups, some digital rights organizations have expressed concern over potential overreach and the risk of suppressing legitimate content.
Additionally, multiple Executive Branch agencies and programs tasked with coordinating and investigating disinformation threats at home and abroad have recently been shuttered amid concerns of overreach. The Disinformation Governance Board, created by the Department of Homeland Security in April 2022, was an advisory board intended to coordinate responses to disinformation threats, particularly from foreign actors such as Russia. Its activities were paused in May 2022 after intense backlash from detractors who accused it of threatening free speech; its director, Nina Jankowicz, resigned amid personal attacks, and an advisory council review prompted the board's dissolution.
The U.S. State Department's Global Engagement Center (GEC), established to combat foreign disinformation, closed in December 2024 after Congress declined to reauthorize it. Critics argued that the GEC's activities infringed on domestic political discourse, while supporters viewed it as essential for countering foreign propaganda. Although it was briefly reorganized into a successor entity, the Counter Foreign Information Manipulation and Interference Office, its permanent termination was announced in April 2025 by Secretary of State Marco Rubio.
The closures highlight a persistent pressure point in American politics around addressing misinformation and disinformation threats, both foreign and domestic. Critics warn that the current U.S. approach has left the country vulnerable to foreign disinformation campaigns by adversaries who spend billions annually on influence operations; supporters see that risk as worth bearing to avoid infringing more broadly on free speech rights.
European Union: Comprehensive Regulation through the Digital Services Act
The European Union has adopted a more systemic approach to online content regulation through the Digital Services Act (DSA), enacted in October 2022. Unlike the United States’ more “laissez-faire” approach, the DSA imposes strict obligations on online platforms, particularly Very Large Online Platforms (VLOPs), defined as platforms averaging 45 million or more monthly active users in the EU, to manage and moderate content responsibly. Key provisions include:
• Accountability for Illegal Content: Platforms must promptly remove illegal content and can face fines of up to 6% of their annual global revenue for non-compliance.
• Transparency in Algorithms: Companies are required to disclose how their algorithms work, particularly in content curation and moderation.
• User Reporting Tools: The DSA mandates user-friendly mechanisms for reporting illegal content and obliges platforms to cooperate with "trusted flaggers".
• Risk Assessments: Platforms must conduct regular assessments to identify and mitigate systemic risks, including the spread of disinformation.
Furthermore, the EU has integrated its previously voluntary Code of Practice on Disinformation into the DSA, making it legally binding. This move enhances the EU's ability to enforce measures against the spread of false information online.
Supporters of the EU's approach emphasize that the DSA prioritizes transparency, accountability, and the protection of democratic processes, setting a global benchmark for comprehensive digital regulation. Critics, however, argue that it risks undermining free expression by incentivizing platforms, fearful of liability, to over-correct and remove too much content, often through automated moderation tools.
Civil liberties groups have warned that the DSA effectively outsources censorship decisions to private companies without sufficient oversight. The law’s broad, sometimes vague definitions of “illegal content” are another point of contention: they vary across EU member states and can lead not only to inconsistent enforcement but to legal uncertainty.
Comparative Analysis: Divergent Paths
The strategies employed by India, the United States, and the European Union highlight differing priorities and legal frameworks in addressing online content moderation:
• India: Prioritizes national security and public order, employing swift and decisive action to remove content deemed harmful, but often with limited transparency or legal recourse.
• United States: Balances the protection of free speech with targeted legislation addressing specific harms, relying on the cooperation of private platforms for content moderation but leaving the public exposed to potentially malign misinformation campaigns.
• European Union: Implements comprehensive, legally binding regulations that hold platforms accountable for content moderation, emphasizing transparency and systemic risk management, but shifts day-to-day oversight onto platforms, which must navigate an often confusing mélange of definitions and priorities.
In an era when information warfare is increasingly waged through algorithms and viral content, each region’s strategy reveals deeper ideological commitments: India’s emphasis on sovereignty and national unity, the United States’ fierce protection of speech, and the EU’s technocratic push for structured oversight. As global platforms operate across these conflicting regimes, the tension between security, liberty, and accountability will only intensify, forcing democracies to ask not just how to moderate, but who gets to decide the terms.