
Brazil Supreme Court Redefines Platform Liability Under Marco Civil da Internet

Published: March 12, 2026

On February 10, 2026, Brazil's Supreme Federal Court (STF) issued a landmark ruling that fundamentally alters the nation's digital regulatory landscape by declaring Article 19 of the Marco Civil da Internet (Internet Civil Framework) partially unconstitutional. Enacted in 2014, Article 19 originally established a broad safe harbor for internet service providers, stating they could only be held civilly liable for third-party user content if they failed to comply with a specific, prior judicial order demanding its removal.

The STF's highly anticipated judgment dismantles this pure "notice-and-takedown" model, aligning Brazil closer to the proactive duty-of-care frameworks seen in the European Union's Digital Services Act (DSA). The Court reasoned that given the colossal volume of online content, requiring a judicial order for every instance of harmful speech creates an intolerable delay that results in irreparable harm. Consequently, the Court established a tiered liability regime based heavily on the specific nature and severity of the content in question.

Crucially, the Court carved out an exception for "evidently illicit content"—a category encompassing child sexual abuse material (CSAM), racism, hate speech, and incitement to the violent abolition of the democratic order. For this tier, platforms are legally presumed to have unequivocal knowledge of the material and are subject to a strict duty of care: they must proactively detect and diligently remove such content without waiting for user notifications or judicial orders. Systemic failure to do so results in direct civil liability.

Furthermore, the STF removed the safe harbor protections for "paid or boosted content" (including advertisements and monetized posts), holding platforms directly liable for unlawful material they profit from distributing. The ruling also imposes liability when platforms utilize "artificial distribution networks" (bot networks) to amplify illegal speech. However, the Court preserved the original judicial order requirement for private interpersonal communications (such as WhatsApp or email) and for complex, individually harmful content like defamation, attempting to balance the new duties with constitutional privacy and free speech protections. The decision marks a definitive shift toward a global model of digital diligence.

Source: Tech Policy


Lawzana Editorial Team


Last updated: March 12, 2026
