Section 230 Is the First Amendment for the Internet
The bipartisan campaign to destroy Section 230 isn’t about protecting kids or cracking down on Big Tech. It’s about concentrating control over who gets to speak online.
Technology journalist Taylor Lorenz just launched a multi-part video series on Section 230 that every American who uses the internet should watch. Her timing couldn’t be more urgent. In December 2025, Senators Lindsey Graham and Dick Durbin introduced the Sunset Section 230 Act with broad bipartisan co-sponsorship — Graham, Durbin, Grassley, Whitehouse, Hawley, Klobuchar, Blackburn, Blumenthal, Moody, and Welch — to repeal Section 230 entirely by January 2027. In the House, Representative Harriet Hageman introduced a companion sunset bill. The TAKE IT DOWN Act, signed into law by President Trump in May 2025, already carved out new exceptions. And a cascade of additional bills — the Deepfake Liability Act, the SCREEN Act, KOSA — aims to hollow out what remains of the law’s protections.
The politicians framing this as accountability for Big Tech are either confused about what Section 230 actually does, or they’re lying about it. Lorenz makes the case as clearly as anyone has: Section 230 isn’t a corporate shield. It’s the legal infrastructure that makes democratic speech online possible. And the campaign to destroy it is, at bottom, a campaign to decide who gets to speak — and who doesn’t.
I want to build on Lorenz’s arguments by connecting them to the constitutional principles I’ve been writing about in this Substack, to the promise of decentralized AI as a structural alternative, and to the specific actions we can take right now to defend what may be the most important 26 words in American law.
What Section 230 Actually Does — and Who It Actually Protects
Lorenz’s video opens with a history lesson that most Americans never received: the story of CompuServe and Prodigy in the early 1990s, and how two contradictory court rulings created an impossible legal trap that Section 230 was written to resolve.
The short version: CompuServe took a hands-off approach to its forums and was ruled not liable for defamatory user posts. Prodigy tried to moderate — removing spam and illegal content to create a family-friendly environment — and was ruled more liable precisely because it moderated. The message to every internet company was perverse and clear: if you try to clean up your platform at all, you become legally responsible for everything on it. The safest strategy was to ignore abuse entirely.
Section 230, authored by Republican Chris Cox and Democrat Ron Wyden, resolved this with two principles. First, platforms that host user speech aren’t the speaker of that speech — the user is. If you threaten someone over the phone, we don’t sue the phone company. Second, platforms can moderate in good faith — removing spam, illegal content, harassment — without that moderation making them liable for everything they miss.
That’s it. Those are the two ideas. And they passed the House 420 to 4.
Here’s what Lorenz drives home that most critics miss entirely: Section 230 doesn’t primarily protect Meta and Google. It protects you. When you forward an email, share a link, leave a restaurant review, post in a Discord server, or comment on a news article, Section 230 is what prevents you from being sued for it. When a small forum operator runs a community for people recovering from addiction, Section 230 is what keeps a single frivolous lawsuit from bankrupting them overnight. When Wikipedia’s volunteers build an encyclopedia, Section 230 is what makes that legally possible.
Repeal Section 230, and the entities best positioned to survive are the trillion-dollar companies with armies of lawyers and AI-powered moderation systems. Everyone else — small platforms, nonprofit communities, independent creators, individual users — gets crushed. As the Electronic Frontier Foundation put it just days ago, marking Section 230’s 30th anniversary: “Repealing Section 230 would only cement the status of Big Tech monopolies.” The ACLU has been equally direct: “If this vital protection is taken away, people could find their voices censored, especially when talking about ideas that are under political attack today: race and racism, sexuality, and gender justice.”
This is why Facebook was one of the first companies to endorse FOSTA-SESTA back in 2018. Big Tech doesn’t fear Section 230 reform. It welcomes it — because compliance costs that are trivial for a trillion-dollar company are existential for any competitor.
Madison’s Warning: Faction and the Concentration of Speech
James Madison would have recognized this dynamic instantly. In Federalist No. 10, Madison identified the concentration of power among factions as the gravest threat to republican government. His solution wasn’t to eliminate factions — that would require eliminating liberty itself. Instead, he designed structural mechanisms to prevent any single faction from dominating.
Section 230 functions as precisely this kind of structural protection in the digital sphere. It distributes the capacity for speech across the entire population rather than concentrating it in the hands of those who can afford legal compliance. Before Section 230, publishing at scale required institutional backing — a newspaper, a broadcast license, a publishing house. After Section 230, anyone with an internet connection could reach an audience. That democratization of speech is arguably the most Madisonian development in American communications since the founding.
Repealing Section 230 would reverse this entirely. It would re-concentrate speech governance in the hands of the wealthiest platforms and the most litigious actors. Small communities, independent voices, marginalized groups — exactly the speakers Madison’s framework was designed to protect from majoritarian suppression — would be silenced first.
In Federalist No. 51, Madison wrote that “ambition must be made to counteract ambition” — that the structure of government must set competing powers against each other so no single entity could dominate. The current bipartisan assault on Section 230 does the opposite. It removes a structural check on concentrated power and hands the governance of online speech to whichever institutions — government agencies, corporate legal departments, or well-funded litigants — can most aggressively shape the new liability landscape.
Consider what Lorenz highlights about the FOSTA-SESTA precedent. When Congress carved out the first real exception to Section 230 in 2018, platforms didn’t respond with more nuanced moderation. Craigslist shut down its entire personals section overnight. Sex workers who used online platforms to screen clients and avoid trafficking lost those safety tools. A Government Accountability Office report later found that FOSTA had been used in just a single federal prosecution. The law didn’t protect vulnerable people — it made them more vulnerable, while giving platforms every incentive to censor broadly rather than risk liability.
That’s not a policy failure. That’s a structural inevitability. When you make platforms liable for user speech, they don’t moderate more carefully. They moderate more aggressively and more crudely, because the cost of over-censorship is zero while the cost of under-censorship is a lawsuit.
The Surveillance Business Model Is the Problem — Not the Speech Protection
In my earlier piece on techno-feudalism, I wrote about Section 230’s role in enabling the surveillance economy. The surveillance business model — comprehensive behavioral tracking, data extraction without compensation, algorithmic manipulation for engagement — is a genuine crisis. But Section 230 didn’t create that business model. The absence of data privacy law, antitrust enforcement, and digital property rights did.
Section 230 says platforms aren’t liable for user speech. It says nothing about data harvesting, behavioral prediction, or the monetization of surveillance. Conflating the speech protection with the surveillance economy is exactly the confusion that politicians exploit when they tell you that repealing Section 230 will “crack down on Big Tech.” It won’t. It will eliminate the legal framework that allows Big Tech’s competitors to exist while leaving the surveillance business model completely intact.
The right targets are structural: comprehensive federal data privacy legislation that treats behavioral surveillance as the Fourth Amendment violation it is. Antitrust enforcement that breaks up the platform monopolies extracting rent from digital serfs. The digital property rights framework I proposed in Part 2 of my series on decentralized AI, which would establish data ownership, mandate compensation for AI training, and require meaningful consent for surveillance. These are the structural interventions that would actually redistribute digital power — the Madisonian approach, building checks and balances rather than tearing down the one structural protection ordinary users still have.
Decentralized AI: The Technical Separation of Powers
This is where the constitutional framework meets technical architecture. Madison designed a system where power was distributed across competing institutions — executive, legislative, judicial — so that no single entity could dominate. The digital equivalent isn’t a single law or regulation. It’s a technical infrastructure that distributes power by design.
Decentralized AI protocols represent this possibility. Instead of a handful of corporations controlling the means of intelligence production — training data, compute resources, model weights, inference infrastructure — decentralized systems distribute these capabilities across networks of participants who each maintain sovereignty over their own data and contributions.
Think of it as the difference between a feudal estate and a town of independent landowners. In the feudal model, one lord controls all the land, all the tools, all the surplus — and conducts comprehensive surveillance of every serf. That’s our current platform economy. In the distributed model, each participant owns their productive assets, contributes to shared infrastructure voluntarily, and retains control over their own information. That’s what cryptographic protocols, blockchain verification, and decentralized compute networks make technically feasible.
The crucial insight is that you need both the legal framework (Section 230 for speech, plus the digital property rights and privacy protections I’ve proposed) and the technical infrastructure (decentralized protocols that make surveillance capitalism structurally obsolete). One without the other is insufficient. Legal protections without technical alternatives leave us dependent on the same concentrated platforms. Technical alternatives without legal protections leave those alternatives vulnerable to regulatory capture and litigation.
The Current Threat — And What You Can Do About It
The legislative assault on Section 230 is accelerating on multiple fronts simultaneously. Here’s what’s on the table right now:
The Sunset Section 230 Act (S. 3546): Introduced by Graham and Durbin with bipartisan co-sponsorship, this bill would repeal Section 230 outright by January 2027. The EFF calls it “grievances masquerading as legislation, not serious policy” — the bill proposes no replacement framework and offers no answer to the fundamental question of what legal standard should govern intermediary liability.
The Sunset To Reform Section 230 Act (H.R. 6746): Rep. Hageman’s House companion, which would sunset Section 230 by December 2026 — less than a year from now — forcing reauthorization under whatever political conditions prevail.
TAKE IT DOWN Act expansions: Already signed into law, the Act requires platforms to remove flagged content within 48 hours with no meaningful safeguards against bad-faith removal requests. New proposals would condition Section 230 protections on platforms meeting vaguely defined “duty of care” standards — precisely the kind of ambiguous mandate that encourages blanket censorship.
The SCREEN Act and KOSA: Age verification and “child safety” bills that would effectively require identity verification for internet access, eliminate anonymous speech, and give government officials authority to determine what content is appropriate — powers that, as the ACLU has warned, would be wielded by whoever holds executive authority.
As Lorenz points out, the irony of handing these censorship tools to the Trump administration while claiming to “protect kids” should be lost on no one.
Here’s what you can do:
Contact your representatives directly. Tell them you understand what Section 230 actually does and that you oppose the Sunset Section 230 Act (S. 3546 and H.R. 6746). Tell them that destroying Section 230 would consolidate Big Tech’s monopoly power, not challenge it. The website whatissection230.org, maintained by Fight for the Future, has scripts and contact tools. BadInternetBills.com tracks the full slate of problematic legislation, and StopOnlineIDChecks.org focuses on the identity verification threat.
Support the organizations fighting this. The Electronic Frontier Foundation, the ACLU, Fight for the Future, and the Woodhull Freedom Foundation are doing essential work with minimal resources. Lorenz’s video series itself is entirely self-funded — she couldn’t find a single corporate sponsor for a series defending the law that allegedly benefits corporations. That should tell you everything about who Section 230 actually serves.
Demand the real reforms. When your representatives claim to want “accountability” for Big Tech, ask them why they aren’t pursuing comprehensive data privacy legislation, antitrust enforcement, algorithmic transparency requirements, or digital property rights. These are the interventions that would actually constrain corporate power without destroying the legal infrastructure that protects ordinary users.
Support decentralized alternatives. Use and advocate for platforms, protocols, and tools that distribute power rather than concentrate it. The technical infrastructure for a post-surveillance internet exists — it needs users, developers, and political support to scale.
The Stakes
Lorenz closes her video with a point that deserves repeating: every single civil liberties organization in the country — the EFF, the ACLU, LGBTQ advocacy groups, reproductive justice organizations, library associations, journalism groups — is sounding the alarm about the assault on Section 230. These are the organizations that exist to protect the rights of the most vulnerable people in our society. They are unanimous that weakening Section 230 would devastate the communities they serve.
Meanwhile, the politicians leading the charge can’t explain how the internet works. Justice Kagan’s comment during Gonzalez v. Google — “We’re a court. We really don’t know about these things” — is funny until you remember that these are the people deciding the legal future of the most important communications infrastructure in human history.
Section 230 isn’t perfect. The internet isn’t perfect. But Section 230 is the structural protection that makes democratic speech online possible — just as the First Amendment is the structural protection that makes democratic speech in the physical world possible. You don’t repeal the First Amendment because people say terrible things. You build better institutions, better norms, better tools. You address the actual problems — surveillance capitalism, monopoly power, the absence of digital rights — with structural solutions that distribute power rather than concentrate it.
Madison understood this. Cox and Wyden understood this in 1996. Lorenz understands this now. The question is whether enough of us will understand it before Congress destroys the legal foundation of the open internet — and hands the wreckage to the very corporations and political actors who wanted it gone in the first place.
Eric Forst is the founder of Civics Nation and co-founder of Blocksee, Kayeh AI, and Gracepoint Solutions. With more than 30 years of experience as a technology executive, teacher, and entrepreneur, Forst spent a decade helping build the surveillance marketing infrastructure he now works to dismantle. His forthcoming book, “Terms of Service: Reclaiming Human Agency in an Age of Digital Surveillance,” argues for constitutional and technical frameworks that distribute digital power rather than concentrate it.
Civics Nation is a reader-supported publication. To receive new posts and support this work, consider becoming a free or paid subscriber.