Cloudflare Blocks AI Crawlers by Default and Tests Paywall for Web Scrapers
Cloudflare will automatically block AI bots from scraping websites and is piloting a tool that lets top publishers charge for access.

Image Source: ChatGPT-4o
Cloudflare will now block known AI web scrapers by default and offer a pricing tool that lets some publishers charge AI companies for access to their content.
Blocking AI Crawlers by Default
Cloudflare, one of the internet’s largest infrastructure providers, announced Tuesday that it will now block known AI crawlers by default. The move is aimed at stopping bots from “accessing content without permission or compensation,” according to the company.
Domain owners setting up new sites on Cloudflare will be asked whether they want to allow AI bots to scrape their content. The default setting blocks them, but publishers can opt in if they choose.
The update builds on Cloudflare’s existing bot detection systems, which already allowed websites to block AI crawlers—even those that ignore robots.txt protocols. These scrapers are identified using Cloudflare’s internal list of known AI bots.
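The distinction matters because robots.txt is purely advisory: a crawler only honors it if it chooses to check. A minimal sketch of that gap, using Python’s standard-library parser (the bot name and URL here are illustrative, not drawn from Cloudflare’s bot list):

```python
# robots.txt is advisory -- a crawler must opt in to consulting it,
# which is why server-side blocking like Cloudflare's is needed for
# bots that ignore the protocol. "ExampleAIBot" is a made-up name.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler checks before fetching and gets turned away...
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False

# ...but a non-compliant crawler simply never calls can_fetch() at all,
# and a bot not named in the file is allowed by default.
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```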
Introducing “Pay Per Crawl”
As part of its broader strategy, Cloudflare is also piloting a “Pay Per Crawl” program. The feature allows a select group of publishers and content creators to set prices for AI companies that want to scrape their sites.
AI firms can browse available pricing and decide whether to register and pay for access—or decline and walk away. The goal, according to Cloudflare, is to support “quality content used the right way — with permission and compensation.”
This feature is currently limited to a group of high-profile publishers but may expand in the future.
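Cloudflare has described Pay Per Crawl as a price-quote-and-accept exchange: a crawler either pays the publisher’s asking price or walks away. A hypothetical sketch of that negotiation, modeled on the HTTP 402 “Payment Required” status code (the header names `crawler-price` and `crawler-max-price` are illustrative, not Cloudflare’s documented API):

```python
# Hypothetical pay-per-crawl negotiation sketch. Header names are
# illustrative assumptions, not Cloudflare's actual interface.

def handle_crawl(request_headers: dict, price_usd: float) -> tuple:
    """Return (status, headers, body) for an AI crawler's request."""
    offered = request_headers.get("crawler-max-price")
    if offered is None:
        # No payment offered: quote a price instead of serving content.
        return 402, {"crawler-price": f"{price_usd:.4f}"}, ""
    if float(offered) >= price_usd:
        # Offer covers the asking price: serve the page, record the charge.
        return 200, {"crawler-charged": f"{price_usd:.4f}"}, "<html>article</html>"
    # Offer too low: the crawler can raise its bid or walk away.
    return 402, {"crawler-price": f"{price_usd:.4f}"}, ""

status, headers, _ = handle_crawl({}, 0.01)
print(status, headers)  # 402 {'crawler-price': '0.0100'}
```

The key design point is that content is never served until both sides agree on a price, which is what makes the scheme a consent mechanism rather than just a toll.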
A Changing Landscape for Online Publishing
Cloudflare has been developing tools to push back against AI scrapers since 2023. While early efforts focused on robots.txt-based blocking, recent updates go further—targeting non-compliant bots and adding new deterrents, such as routing bots into an “AI Labyrinth” to slow them down.
Major publishers including The Associated Press, The Atlantic, Fortune, Stack Overflow, and Quora have backed Cloudflare’s new crawler restrictions. The shift comes amid rising concerns that AI tools are diverting traffic from original content platforms.
“People trust the AI more over the last six months, which means they’re not reading original content,” Cloudflare CEO Matthew Prince said at a recent Axios Live event.
Transparency and Control for Site Owners
Cloudflare is also working with AI companies to verify and label their crawlers more clearly. Scrapers will now be encouraged to disclose their intent—whether they’re collecting data for training, inference, or search—and domain owners can review that information before granting access.
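A site owner’s side of that arrangement can be pictured as a small policy table keyed on the crawler’s declared purpose. This is a hypothetical sketch, assuming purposes are disclosed as the article describes; the policy values and fallback are illustrative:

```python
# Hypothetical per-purpose access policy for declared crawler intent.
# The purposes (training, inference, search) come from the article;
# the allow/block choices below are an example, not a recommendation.

POLICY = {"search": "allow", "inference": "allow", "training": "block"}

def decide(declared_purpose):
    # Undeclared or unrecognized purposes fall back to the default block,
    # mirroring Cloudflare's new block-by-default stance.
    return POLICY.get(declared_purpose, "block")

print(decide("search"))    # allow
print(decide("training"))  # block
print(decide(None))        # block
```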
In a press release, Prince emphasized the importance of protecting original content online.
“Original content is what makes the Internet one of the greatest inventions in the last century, and we have to come together to protect it,” he said. “AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate.”
What This Means
Cloudflare’s new policy reflects a broader tension between content creators and AI developers over how data is sourced and monetized. By making AI crawler blocking the default and testing pricing tools like Pay Per Crawl, Cloudflare is moving toward a consent-based content ecosystem—one where creators decide how and when their work can be used.
For publishers already licensing content to AI companies—such as through formal agreements or partnerships—this policy may serve as an added layer of control. It allows them to manage unauthorized scraping more effectively, even as they maintain relationships with AI firms on their own terms.
This approach could shift the dynamics of how large language models are trained and how platforms structure access to high-quality, human-generated material. For publishers, it’s a potential path toward reclaiming control and compensation in a web increasingly shaped by generative AI.
As the rules of web publishing evolve, tools like these signal a shift toward a more accountable internet—one where creators don’t just publish, but also have a say in how their work is used.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.