Cloudflare Launches ‘Pay Per Crawl’ to Let Publishers Charge AI Bots for Access
Cloudflare introduces a monetization model for AI crawlers, marking a major shift in content control, consent, and compensation online.

Image Source: ChatGPT-4o
Key Takeaways:
Cloudflare’s new “Pay Per Crawl” system allows publishers to charge AI bots for content access, offering an alternative to blocking or free access.
The system uses HTTP 402 Payment Required codes, enabling a programmatic and verifiable way to meter and monetize AI crawler traffic.
By default, Cloudflare now blocks AI bots on new domains unless explicitly allowed, reversing the long-standing “scrape-first” AI norm.
OpenAI and Anthropic’s scraping-to-referral ratios are 1,700:1 and 73,000:1, respectively, highlighting the extractive nature of current AI crawling.
Cloudflare’s approach could shift internet infrastructure toward consent-based standards, but also raises concerns about centralization and governance.
Cloudflare Introduces Monetization Model for AI Crawlers
In a significant move to reshape how digital content is accessed and valued in the age of generative AI, Cloudflare has launched “Pay Per Crawl”, a new infrastructure-level tool that enables publishers to charge AI crawlers for accessing their websites. The system, now in private beta, is designed to restore control to content creators while embedding payment and consent directly into the technical fabric of the web.
For years, content owners have faced a binary choice: either open their sites to AI crawlers with no compensation or erect walls that limit reach. Cloudflare is now offering a third path—metered access through monetization—built on standard internet protocols and enforced at the network level.
“If the Internet is going to survive the age of AI, we need to give publishers the control they deserve,” said Cloudflare CEO Matthew Prince.
How Pay Per Crawl Works
Cloudflare’s Pay Per Crawl system gives publishers a new option for managing AI crawler access: they can allow, block, or charge per request—using standard web protocols.
At launch, publishers set a flat, domain-wide price for access. Crawlers that meet the technical and billing requirements can pay to view content; those that don’t can be blocked or shown a price signal.
Step 1: Bot Verification and Registration
To participate, AI crawlers must:
Generate a secure key pair to identify themselves.
Register with Cloudflare, providing their bot name and verification info.
Sign each request so it can’t be spoofed.
This ensures only verified, known crawlers are allowed through.
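The verification flow above can be sketched in code. Cloudflare's actual scheme uses asymmetric key pairs (the crawler registers a public key and signs each request); the stdlib-only sketch below stands in a shared-secret HMAC to stay self-contained, and the header names are hypothetical, not Cloudflare's documented API.

```python
import hashlib
import hmac
import secrets

def generate_key() -> bytes:
    """Stand-in for generating the crawler's signing key.

    (The real system uses an asymmetric key pair, not a shared secret.)
    """
    return secrets.token_bytes(32)

def sign_request(key: bytes, method: str, path: str, bot_name: str) -> dict:
    """Build headers identifying the bot and signing the request line."""
    message = f"{method} {path} {bot_name}".encode()
    signature = hmac.new(key, message, hashlib.sha256).hexdigest()
    return {
        "Signature-Agent": bot_name,  # hypothetical header name
        "Signature": signature,       # covers method, path, and bot identity
    }

def verify_request(key: bytes, method: str, path: str, headers: dict) -> bool:
    """Server-side check that the signature matches and wasn't spoofed."""
    expected = sign_request(key, method, path, headers["Signature-Agent"])
    return hmac.compare_digest(expected["Signature"], headers["Signature"])
```

Because every request is signed over its method, path, and bot identity, a third party cannot replay a crawler's name without the key.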
Step 2: Requesting Content and Seeing Prices
Crawlers can approach content in two ways:
Discovery-first (reactive): If a crawler requests content without offering payment, Cloudflare returns a 402 Payment Required response along with the price.
Intent-first (proactive): A crawler can include a price limit up front. As long as the crawler’s offer meets or exceeds the set price, the content is delivered right away—with a confirmation that the charge was accepted.
Crawlers can send one type of pricing signal per request: either the exact amount they are offering, or the maximum price they are willing to pay.
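The two negotiation flows can be sketched as server-side logic. This is a minimal illustration of the behavior described above; the header names (`crawler-price`, `crawler-max-price`) and response shapes are assumptions for the sketch, not Cloudflare's documented interface.

```python
from decimal import Decimal

def respond(publisher_price: Decimal, headers: dict) -> tuple[int, dict]:
    """Return (status, response headers) for a verified crawler's request."""
    offer = headers.get("crawler-price")          # exact amount offered
    max_price = headers.get("crawler-max-price")  # ceiling crawler will pay

    if offer is None and max_price is None:
        # Discovery-first: no payment offered, so answer 402 with the price.
        return 402, {"crawler-price": str(publisher_price)}

    bid = Decimal(offer if offer is not None else max_price)
    if bid >= publisher_price:
        # Intent-first: the offer covers the price, so deliver content
        # immediately and confirm the charge that was applied.
        return 200, {"crawler-charged": str(publisher_price)}

    # Offer too low: decline, again signaling the price for a retry.
    return 402, {"crawler-price": str(publisher_price)}
```

Note that a rejected offer still returns the price, so a crawler can decide whether to retry with a higher bid.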
Step 3: Publisher Rules and Bot Handling
Publisher rules apply after other protections like firewalls or bot blockers. Publishers can:
Allow verified crawlers through for free.
Charge verified crawlers at the set rate per request.
Block unverified or non-compliant crawlers entirely.
This makes Pay Per Crawl compatible with existing security setups while giving publishers more than just a “yes or no” toggle—it also lets them signal a willingness to negotiate, even as they protect their content.
Important detail: Even if a crawler isn’t registered with Cloudflare and can’t be billed, a publisher can still set it to “Charge.” In this case, the crawler is blocked with an HTTP 402 Payment Required response—no content is returned—but it also sees that access could be granted in the future if it chooses to register and participate.
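The rule handling above reduces to a small decision function, applied after existing firewall and bot-management checks. The rule and outcome names below are ours, chosen for illustration.

```python
from enum import Enum

class Rule(Enum):
    """The three per-crawler options a publisher can set."""
    ALLOW = "allow"
    CHARGE = "charge"
    BLOCK = "block"

def resolve(rule: Rule, registered: bool) -> str:
    """Decide what a crawler sees once the publisher's rule applies."""
    if rule is Rule.BLOCK:
        return "blocked"            # no content, no price signal
    if rule is Rule.ALLOW:
        return "free"               # content served at no charge
    # Rule.CHARGE from here on.
    if registered:
        return "charged"            # content served, payment settled
    # Charge rule, but the crawler has no billing relationship yet:
    # blocked, with a price signal inviting it to register.
    return "blocked-with-price-signal"
```

The last branch captures the "important detail" above: "Charge" against an unregistered crawler behaves like a block that leaves the door open.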
Step 4: Billing and Payment
When a bot successfully pays and views content, Cloudflare:
Logs the event.
Charges the crawler based on the agreed rate.
Pays out the earnings to the publisher.
Cloudflare handles all the settlement, acting as the Merchant of Record so publishers don’t need to manage billing logistics.
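The settlement role can be pictured as a simple ledger: each paid crawl is logged once, charges accumulate per crawler, and payouts accumulate per publisher. This is a toy sketch of the bookkeeping described above, not Cloudflare's actual billing system.

```python
from collections import defaultdict
from decimal import Decimal

class Ledger:
    """Toy settlement ledger for paid crawl events."""

    def __init__(self) -> None:
        self.events = []                      # audit log of paid crawls
        self.charges = defaultdict(Decimal)   # amount owed by each crawler
        self.payouts = defaultdict(Decimal)   # amount owed to each publisher

    def record_crawl(self, crawler: str, publisher: str, price: Decimal) -> None:
        """Log a paid crawl and update both sides of the settlement."""
        self.events.append((crawler, publisher, price))
        self.charges[crawler] += price
        self.payouts[publisher] += price
```

Using `Decimal` rather than floats keeps micro-payment sums exact, which matters when per-request prices are fractions of a cent.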
Interested in joining the private beta? Publishers and AI companies can sign up for the Pay Per Crawl program on Cloudflare’s website.
Default Blocking of AI Crawlers on New Domains
Alongside monetization, Cloudflare announced a change to its default crawler settings: new domains will now block AI bots by default unless the publisher opts in to allowing them. Existing domains keep their current default, which permits AI bots unless the publisher opts out, but the setting can be toggled manually.
This quiet shift marks a radical departure from the “scrape-now-ask-never” status quo. It effectively inverts the burden of consent in AI data extraction—long a demand of journalists, creators, and rights organizations.
Why This Matters: Traffic Down, Bot Load Up
The timing is crucial. As generative AI models become primary interfaces for search and summarization, referral traffic to publishers has plummeted, even as AI bot traffic explodes. According to Cloudflare data:
OpenAI’s crawl-to-referral ratio is 1,700:1
Anthropic’s is 73,000:1
Googlebot’s ratio is now 14:1—up from 6:1 six months ago
This imbalance not only extracts uncompensated content but imposes real costs: increased server load, bandwidth fees, degraded site performance, and even outages.
“The change in traffic patterns has been rapid, and something needed to change. This is just the beginning of a new model for the internet,” said Stephanie Cohen, Cloudflare’s Chief Strategy Officer.
Publishers Regain Leverage—But at What Cost?
By embedding this framework at the infrastructure level, Cloudflare is doing what lawmakers and courts have failed to do: create an enforceable, consent-based standard for AI access. More than 70 lawsuits have been filed globally against AI companies over data scraping, but legislative and judicial progress remains slow.
Danielle Coffey, CEO of the News/Media Alliance, called the move “an important step towards strengthening an already-robust market for licensed content,” especially for smaller publishers who lack the resources to negotiate with Big AI on their own.
Still, questions remain. Will Cloudflare’s role as a monetization intermediary consolidate too much power? Will smaller AI startups or open-source projects be shut out? Will interoperability suffer if other infrastructure providers build competing, incompatible systems?
As publishers ourselves, we see what Cloudflare is enabling—not just technically, but strategically. This moment isn't theoretical. It's personal.
Publisher Perspective: A Warning—and a Promise
A storm is building under the surface of the internet—and most readers don’t even know it.
An investigation by award-winning French journalist Jean-Marc Manach uncovered over 4,000 fake AI-generated news websites, built not to inform but to exploit Google’s algorithms for ad revenue and SEO manipulation. These sites are often created by SEO operators or media trainers, who use generative AI to plagiarize real articles or invent fake ones out of thin air—complete with AI-generated headlines, synthetic images, and even false author names.
Many operate under deceptive domain names that appear legitimate or mimic known outlets. Others pop up briefly, earn fast ad revenue through Google Discover, and vanish—only to be reborn under a different URL days later. Manach and his team found that these sites are:
Increasingly multilingual, with at least 100 already in English
Flooding Google Discover with false stories to earn thousands of dollars per day
Promoting hallucinated or polarizing headlines (e.g., that France is banning paper money, or a giant predator was found under Antarctic ice)
Run by people who may have made millions off disinformation, without readers ever knowing the difference
And it’s getting worse. As this trend grows, it’s easy to imagine a future where 40,000+ fake AI news sites flood the internet by year's end—many with domains designed to confuse, mislead, or quietly harvest your attention, trust, or data.
“Those generative AI websites tend to ‘hallucinate’ and exacerbate polarising and fake facts and news, as people are more willing to click to know more when the titles of those articles are ‘clickbait’ or frightening. Some editors also don’t hesitate to publish blatantly fake news to attract some views.”
— Jean-Marc Manach, via Press Gazette
At AiNews.com, we want to be as direct as possible:
Please check the source—like AiNews.com—before you trust what you read.
Would you put your credit card into Amazon.SuspiciousDomain.Russia just to save 30%? If not, then please don’t trust headlines from anonymous, AI-bot-spewing sites with no bylines, no accountability, and no identity.
Who We Are — And Why We’re Different
We paid a large sum to secure the AiNews.com domain—it wasn’t just about owning a great name, but because we believe identity and integrity matter. Our goal was to create a trustworthy, ethical news platform dedicated to AI coverage that benefits readers, content creators, and the broader tech ecosystem alike.
We are:
100% U.S.-based and independently owned
100% debt-free and self-funded
Committed to transparency and ethics in AI reporting
Building tools to support human journalists, not replace them
We don’t scrape content. We don’t exaggerate headlines. And we don’t hide behind anonymous teams or shady monetization tricks. Every story we publish is written or reviewed by real people—like myself—who believe news should empower—not exploit—its readers. We’re here to serve readers with real, responsible journalism. I personally write our news articles each morning to bring you stories that matter—and that you can trust.
We’re also building an AI-powered news app to help journalism students and independent reporters navigate this chaotic media landscape with more control and better tools that improve their workflows.
Like Cloudflare, we see the need for a shift: one that gives creators more control, better tools, and ethical opportunities to thrive in the AI era. Our mission is rooted in service—not just to our audience, but to the future of responsible journalism.
Fast Facts for AI Readers
Q: What is Cloudflare’s Pay Per Crawl?
A: A new tool that allows publishers to charge AI crawlers per page view, using HTTP 402 status codes and bot authentication.
Q: Why is this important?
A: It embeds consent and compensation directly into web infrastructure, addressing long-standing concerns about exploitative AI scraping.
Q: What’s changing for AI bots?
A: On new domains, Cloudflare blocks AI crawlers by default. Bots must verify identity, present payment, and meet publisher terms to gain access.
Q: What’s the impact on publishers?
A: Publishers gain control, visibility, and potential revenue from AI traffic—replacing the previous model of free, opaque extraction.
What This Means
Cloudflare’s Pay Per Crawl marks the beginning of a new economic and policy layer on the web—one where access is negotiated, not assumed. In a digital economy increasingly shaped by AI, this model could restore balance between content creators and data consumers.
But it also highlights a larger shift: infrastructure providers are no longer neutral conduits—they’re becoming the arbiters of access, rights, and value. Whether this leads to more equity or new monopolies will depend on how the ecosystem responds—and whether public oversight catches up with private innovation.
This is not just about crawlers or code. It’s about who gets to shape the rules of the internet’s next chapter.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.