Changelog

New updates and improvements at Cloudflare.

  1. R2 Data Catalog, a managed Apache Iceberg catalog built into R2, now removes unreferenced data files during automatic snapshot expiration. This improvement reduces storage costs and eliminates the need to run manual maintenance jobs to reclaim space from deleted data.

    Previously, snapshot expiration only cleaned up Iceberg metadata files such as manifests and manifest lists. Data files that were no longer referenced by active snapshots remained in R2 storage until you manually ran remove_orphan_files or expire_snapshots through an engine like Spark. This required extra operational overhead and left stale data files consuming storage.

    Snapshot expiration now handles both metadata and data file cleanup automatically. When a snapshot is expired, any data files that are no longer referenced by retained snapshots are removed from R2 storage.

    Terminal window
    # Enable catalog-level snapshot expiration
    npx wrangler r2 bucket catalog snapshot-expiration enable my-bucket \
      --older-than-days 7 \
      --retain-last 10

    To learn more about snapshot expiration and other automatic maintenance operations, refer to the table maintenance documentation.

  1. Workflows now provides additional context inside step.do() callbacks and supports returning ReadableStream to handle larger step outputs.

    Step context properties

    The step.do() callback receives a context object with new properties alongside attempt:

    • step.name — The name passed to step.do()
    • step.count — How many times a step with that name has been invoked in this instance (1-indexed)
      • Useful when running the same step in a loop.
    • config — The resolved step configuration, including timeout and retries with defaults applied
    TypeScript
    type ResolvedStepConfig = {
      retries: {
        limit: number;
        delay: WorkflowDelayDuration | number;
        backoff?: "constant" | "linear" | "exponential";
      };
      timeout: WorkflowTimeoutDuration | number;
    };

    type WorkflowStepContext = {
      step: {
        name: string;
        count: number;
      };
      attempt: number;
      config: ResolvedStepConfig;
    };
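
    To make the semantics concrete, the mock below imitates step.do() retry behavior so you can see how attempt resets on each invocation while step.count grows across invocations of the same step name. This is an illustrative sketch only, not the Workflows runtime.

    ```typescript
    // Illustrative mock of step.do() context semantics (not the real runtime).
    type StepContext = {
      step: { name: string; count: number };
      attempt: number;
    };

    // Tracks how many times each step name has been invoked (1-indexed).
    const counts = new Map<string, number>();

    async function mockStepDo<T>(
      name: string,
      fn: (ctx: StepContext) => Promise<T>,
      retries = 3,
    ): Promise<T> {
      const count = (counts.get(name) ?? 0) + 1; // step.count: per-name invocation counter
      counts.set(name, count);
      for (let attempt = 1; ; attempt++) {
        try {
          // attempt starts at 1 and increments only when fn throws
          return await fn({ step: { name, count }, attempt });
        } catch (err) {
          if (attempt >= retries) throw err;
        }
      }
    }

    // First invocation of "sync-batch" sees step.count === 1; attempt resets per call.
    const first = await mockStepDo("sync-batch", async (ctx) => ctx.step.count);
    ```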

    ReadableStream support in step.do()

    Steps can now return a ReadableStream directly. Although non-stream step outputs are limited to 1 MiB, streamed outputs support much larger payloads.

    TypeScript
    const largePayload = await step.do("fetch-large-file", async () => {
      const object = await env.MY_BUCKET.get("large-file.bin");
      return object.body;
    });

    Note that streamed outputs are still considered part of the Workflow instance storage limit.

  1. The Container logs page now displays related Worker and Durable Object logs alongside container logs. This co-locates all relevant log events for a container application in one place, making it easier to trace requests and debug issues.

    Container logs page showing Worker and Durable Object logs alongside container logs

    You can filter to a single source when you need to isolate Container, Worker, or Durable Object output.

    For information on configuring container logging, refer to How do Container logs work?.

  1. Pay-as-you-go customers can now monitor usage-based costs and configure spend alerts through two new features: the Billable Usage dashboard and Budget alerts.

    Billable Usage dashboard

    The Billable Usage dashboard provides daily visibility into usage-based costs across your Cloudflare account. The data comes from the same system that generates your monthly invoice, so the figures match your bill.

    The dashboard displays:

    • A bar chart showing daily usage charges for your billing period
    • A sortable table breaking down usage by product, including total usage, billable usage, and cumulative costs
    • Ability to view previous billing periods

    Usage data aligns to your billing cycle, not the calendar month. The total usage cost shown at the end of a completed billing period matches the usage overage charges on your corresponding invoice.

    To access the dashboard, go to Manage Account > Billing > Billable Usage.

    Screenshot of the Billable Usage dashboard in the Cloudflare dashboard

    Budget alerts

    Budget alerts allow you to set dollar-based thresholds for your account-level usage spend. You receive an email notification when your projected monthly spend reaches your configured threshold, giving you proactive visibility into your bill before month-end.

    To configure a budget alert:

    1. Go to Manage Account > Billing > Billable Usage.
    2. Select Set Budget Alert.
    3. Enter a budget threshold amount greater than $0.
    4. Select Create.

    Alternatively, configure alerts via Notifications > Add > Budget Alert.

    Create Budget Alert modal in the Cloudflare dashboard

    You can create multiple budget alerts at different dollar amounts. The notifications system automatically deduplicates alerts if multiple thresholds trigger at the same time. Budget alerts are calculated daily based on your usage trends and fire once per billing cycle when your projected spend first crosses your threshold.

    Both features are available to Pay-as-you-go accounts with usage-based products (Workers, R2, Images, etc.). Enterprise contract accounts are not supported.

    For more information, refer to the Usage based billing documentation.

  1. When a Cloudflare Worker intercepts a visitor request, it can dispatch additional outbound fetch calls called subrequests. By default, each subrequest generates its own log entry in Logpush, resulting in multiple log lines per visitor request. With subrequest merging enabled, subrequest data is embedded as a nested array field on the parent log record instead.

    What's new

    • New subrequest_merging field on Logpush jobs — Set "merge_subrequests": true when creating or updating an http_requests Logpush job to enable the feature.
    • New Subrequests log field — When subrequest merging is enabled, a Subrequests field (array<object>) is added to each parent request log record. Each element in the array contains the standard http_requests fields for that subrequest.

    Limitations

    • Applies to the http_requests (zone-scoped) dataset only.
    • A maximum of 50 subrequests are merged per parent request. Subrequests beyond this limit are passed through unmodified as individual log entries.
    • Subrequests must complete within 5 minutes of the visitor request. Subrequests that exceed this window are passed through unmodified.
    • Subrequests that do not qualify appear as separate log entries — no data is lost.
    • Subrequest merging is being gradually rolled out and is not yet available on all zones. Contact your account team with questions or to have it enabled for your zone.
    • For more information, refer to Subrequests.
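
    If a downstream system still expects one log line per request, a merged record can be flattened back out client-side. This is a hedged sketch: only the Subrequests field comes from this release; the other field names are illustrative stand-ins for standard http_requests fields.

    ```typescript
    // Illustrative record shape: ClientRequestURI/EdgeResponseStatus stand in
    // for the standard http_requests fields; Subrequests is the new field.
    type HttpRequestLog = {
      ClientRequestURI: string;
      EdgeResponseStatus: number;
      Subrequests?: HttpRequestLog[];
    };

    // Flatten a merged record back into one row per request.
    function flattenMerged(record: HttpRequestLog): HttpRequestLog[] {
      const { Subrequests = [], ...parent } = record;
      return [parent, ...Subrequests];
    }
    ```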
  1. This week's release introduces a new detection for a Remote Code Execution (RCE) vulnerability in Apache ActiveMQ (CVE-2026-34197) and an updated signature for Magento 2 - Unrestricted File Upload. Alongside these detections, we are continuing our work on rule refinements to provide deeper security insights for our customers.

    Key Findings

    • Apache ActiveMQ (CVE-2026-34197): A vulnerability in Apache ActiveMQ allows an unauthenticated, remote attacker to execute arbitrary code. This flaw occurs during the processing of specially crafted network packets, leading to potential full system compromise.

    • Magento 2 - Unrestricted File Upload - 2: This is a follow-up enhancement to our existing protections for Magento and Adobe Commerce.

    Impact

    Successful exploitation of these vulnerabilities could allow unauthenticated attackers to execute arbitrary code or gain full administrative control over affected servers. We strongly recommend applying official vendor patches for Apache ActiveMQ and Magento to address the underlying vulnerabilities.

    Continuous Rule Improvements

    We are continuously refining our managed rules to provide more resilient protection and deeper insights into attack patterns. To ensure an optimal security posture, we recommend consistently monitoring the Security Events dashboard and adjusting rule actions as these enhancements are deployed.

    | Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
    | --- | --- | --- | --- | --- | --- | --- |
    | Cloudflare Managed Ruleset | | N/A | Command Injection - Generic 8 - uri | Log | Block | This is a new detection. Previous description was "Command Injection - Generic 8 - uri - Beta" |
    | Cloudflare Managed Ruleset | | N/A | Command Injection - Generic 8 - body | Disabled | Disabled | Rule metadata description refined. Previous description was "Command Injection - Generic 8" (ID: ) |
    | Cloudflare Managed Ruleset | | N/A | Command Injection - Generic 8 - body - Beta | Disabled | Disabled | This is a new detection. This rule is merged into the original rule "Command Injection - Generic 8 - body" (ID: ) |
    | Cloudflare Managed Ruleset | | N/A | MySQL - SQLi - Executable Comment - Body | Block | Block | Rule metadata description refined. Previous description was "MySQL - SQLi - Executable Comment" (ID: ) |
    | Cloudflare Managed Ruleset | | N/A | MySQL - SQLi - Executable Comment - Beta | Log | Block | This is a new detection. This rule is merged into the original rule "MySQL - SQLi - Executable Comment - Body" (ID: ) |
    | Cloudflare Managed Ruleset | | N/A | MySQL - SQLi - Executable Comment - Headers | Log | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | MySQL - SQLi - Executable Comment - URI | Log | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | Magento 2 - Unrestricted file upload - 2 | Log | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | Apache ActiveMQ - Remote Code Execution - CVE:CVE-2026-34197 | Log | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | SQLi - Sleep Function - Beta | Log | Block | This is a new detection. This rule is merged into the original rule "SQLi - Sleep Function" (ID: ) |
    | Cloudflare Managed Ruleset | | N/A | SQLi - Sleep Function - Headers | Log | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | SQLi - Sleep Function - URI | Log | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | SQLi - Probing - uri | Log | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | SQLi - Probing - header | Log | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | SQLi - Probing - body | Disabled | Disabled | This is a new detection. This rule is merged into the original rule "SQLi - Probing" (ID: ) |
    | Cloudflare Managed Ruleset | | N/A | SQLi - Probing 2 | Disabled | Disabled | This rule had duplicate detection logic and has been deprecated. |
    | Cloudflare Managed Ruleset | | N/A | SQLi - UNION in MSSQL - Body | Disabled | Disabled | This rule has been renamed to differentiate from "SQLi - UNION in MSSQL" (ID: ) and contains updated rule logic. |
    | Cloudflare Managed Ruleset | | N/A | SQLi - UNION - 3 | Disabled | Disabled | This rule had duplicate detection logic and has been deprecated. |
    | Cloudflare Managed Ruleset | | N/A | XSS, HTML Injection - Embed Tag - URI | Disabled | Disabled | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | XSS, HTML Injection - Embed Tag - Headers | Log | Block | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | XSS, HTML Injection - IFrame Tag - Src and Srcdoc Attributes - Headers | Log | Disabled | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | XSS, HTML Injection - Link Tag - Headers | Log | Disabled | This is a new detection. |
    | Cloudflare Managed Ruleset | | N/A | XSS, HTML Injection - Link Tag - URI | Disabled | Disabled | This is a new detection. |

  1. | Announcement Date | Release Date | Release Behavior | Legacy Rule ID | Rule ID | Description | Comments |
     | --- | --- | --- | --- | --- | --- | --- |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | PostgreSQL - SQLi - COPY - Beta | This is a new detection. This rule will be merged into the original rule "PostgreSQL - SQLi - COPY" (ID: ) |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | PostgreSQL - SQLi - COPY - Headers | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | PostgreSQL - SQLi - COPY - URI | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - Destructive Operations | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - AND/OR MAKE_SET/ELT - Beta | This is a new detection. This rule will be merged into the original rule "SQLi - AND/OR MAKE_SET/ELT" (ID: ) |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - AND/OR MAKE_SET/ELT - Headers | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - AND/OR MAKE_SET/ELT - URI | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - Common Patterns - Beta | This is a new detection. This rule will be merged into the original rule "SQLi - Common Patterns" (ID: ) |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - Common Patterns - Headers | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - Common Patterns - URI | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - Equation - Beta | This is a new detection. This rule will be merged into the original rule "SQLi - Equation" (ID: ) |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - Equation - Headers | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - Equation - URI | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - AND/OR Digit Operator Digit - Beta | This is a new detection. This rule will be merged into the original rule "SQLi - AND/OR Digit Operator Digit" (ID: ) |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - AND/OR Digit Operator Digit - Headers | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - AND/OR Digit Operator Digit - URI | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - Benchmark Function - Beta | This is a new detection. This rule will be merged into the original rule "SQLi - Benchmark Function" (ID: ) |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - Benchmark Function - Headers | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - Benchmark Function - URI | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - Comparison - Beta | This is a new detection. This rule will be merged into the original rule "SQLi - Comparison" (ID: ) |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - Comparison - Headers | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - Comparison - URI | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - String Concatenation - Body - Beta | This is a new detection. This rule will be merged into the original rule "SQLi - String Concatenation - Headers" (ID: ) |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - String Concatenation - Headers | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - String Concatenation - URI | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - SELECT Expression - Beta | This is a new detection. This rule will be merged into the original rule "SQLi - SELECT Expression" (ID: ) |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - SELECT Expression - Headers | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - SELECT Expression - URI | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - ORD and ASCII - Beta | This is a new detection. This rule will be merged into the original rule "SQLi - ORD and ASCII" (ID: ) |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - ORD and ASCII - Headers | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | SQLi - ORD and ASCII - URI | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | XSS, HTML Injection - Object Tag - Body (beta) | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | XSS, HTML Injection - Object Tag - Headers (beta) | This is a new detection. |
     | 2026-04-21 | 2026-04-27 | Log | N/A | | XSS, HTML Injection - Object Tag - URI (beta) | This is a new detection. |
  1. Binary frames received on a WebSocket are now delivered to the message event as Blob objects by default. This matches the WebSocket specification and standard browser behavior. Previously, binary frames were always delivered as ArrayBuffer. The binaryType property on WebSocket controls the delivery type on a per-WebSocket basis.

    This change has been active for Workers with compatibility dates on or after 2026-03-17, via the websocket_standard_binary_type compatibility flag. We should have documented this change when it shipped but didn't. We're sorry for the trouble that caused. If your Worker handles binary WebSocket messages and assumes event.data is an ArrayBuffer, the frames will arrive as Blob instead, and a naive instanceof ArrayBuffer check will silently drop every frame.

    To opt back into ArrayBuffer delivery, assign binaryType before calling accept(). This works regardless of the compatibility flag:

    JavaScript
    const resp = await fetch("https://example.com", {
      headers: { Upgrade: "websocket" },
    });
    const ws = resp.webSocket;
    // Opt back into ArrayBuffer delivery for this WebSocket.
    ws.binaryType = "arraybuffer";
    ws.accept();
    ws.addEventListener("message", (event) => {
      if (typeof event.data === "string") {
        // Text frame.
      } else {
        // event.data is an ArrayBuffer because we set binaryType above.
      }
    });

    If you are not ready to migrate and want to keep ArrayBuffer as the default for all WebSockets in your Worker, add the no_websocket_standard_binary_type flag to your Wrangler configuration file.
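
    Alternatively, a handler can be written to tolerate either default. The helper below is a sketch, not part of the Workers API: it normalizes a message payload whether it arrives as a string, Blob, or ArrayBuffer, so the same code works on both sides of the compatibility-date change.

    ```typescript
    // Normalize a WebSocket message payload regardless of binaryType.
    // Returns null for text frames so they can be handled separately.
    async function toBytes(
      data: string | Blob | ArrayBuffer,
    ): Promise<Uint8Array | null> {
      if (typeof data === "string") return null; // text frame
      // Blob under the new default; ArrayBuffer under the old one.
      const buf = data instanceof ArrayBuffer ? data : await data.arrayBuffer();
      return new Uint8Array(buf);
    }
    ```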

    This change has no effect on the Durable Object hibernatable WebSocket webSocketMessage handler, which continues to receive binary data as ArrayBuffer.

    For more information, refer to WebSockets binary messages.

  1. The new Network session analytics dashboard is now available in Cloudflare One. This dashboard provides visibility into your network traffic patterns, helping you understand how traffic flows through your Cloudflare One infrastructure.

    Cloudflare One Network Session Analytics

    What you can do with Network session analytics

    • Analyze geographic distribution: View a world map showing where your network traffic originates, with a list of top locations by session count.
    • Monitor key metrics: Track session count, total bytes transferred, and unique users.
    • Identify connection issues: Analyze connection close reasons to troubleshoot network problems.
    • Review protocol usage: See which network protocols (TCP, UDP, ICMP) are most used.

    Dashboard features

    • Summary metrics: Session count, bytes total, and unique users
    • Traffic by location: World map visualization and location list with top traffic sources
    • Top protocols: Breakdown of TCP, UDP, ICMP, and ICMPv6 traffic
    • Connection close reasons: Insights into why sessions terminated (client closed, origin closed, timeouts, errors)

    How to access

    1. Log in to Cloudflare One.
    2. Go to Zero Trust > Insights > Dashboards.
    3. Select Network session analytics.

    For more information, refer to the Network session analytics documentation.

  1. Logpush has traditionally been great at delivering Cloudflare logs to a variety of destinations in JSON format. While JSON is flexible and easily readable, it can be inefficient to store and query at scale.

    With this release, you can now send your logs directly to Pipelines to ingest, transform, and store them in R2 as Parquet files or Apache Iceberg tables managed by R2 Data Catalog. This makes the data footprint more compact and lets you query your logs instantly with R2 SQL or any other query engine that supports Apache Iceberg or Parquet.

    Transform logs before storage

    Pipelines SQL runs on each log record in-flight, so you can reshape your data before it is written. For example, you can drop noisy fields, redact sensitive values, or derive new columns:

    INSERT INTO http_logs_sink
    SELECT
      ClientIP,
      EdgeResponseStatus,
      to_timestamp_micros(EdgeStartTimestamp) AS event_time,
      upper(ClientRequestMethod) AS method,
      sha256(ClientIP) AS hashed_ip
    FROM http_logs_stream
    WHERE EdgeResponseStatus >= 400;

    Pipelines SQL supports string functions, regex, hashing, JSON extraction, timestamp conversion, conditional expressions, and more. For the full list, refer to the Pipelines SQL reference.

    Get started

    To configure Pipelines as a Logpush destination, refer to Enable Cloudflare Pipelines.

  1. R2 SQL is Cloudflare's serverless, distributed, analytics query engine for querying Apache Iceberg tables stored in R2 Data Catalog.

    R2 SQL now supports functions for querying JSON data stored in Apache Iceberg tables, an easier way to parse query plans with EXPLAIN FORMAT JSON, and querying tables without partition keys stored in R2 Data Catalog.

    JSON functions extract and manipulate JSON values directly in SQL without client-side processing:

    SELECT
      json_get_str(doc, 'name') AS name,
      json_get_int(doc, 'user', 'profile', 'level') AS level,
      json_get_bool(doc, 'active') AS is_active
    FROM my_namespace.sales_data
    WHERE json_contains(doc, 'email')

    For a full list of available functions, refer to JSON functions.

    EXPLAIN FORMAT JSON returns query execution plans as structured JSON for programmatic analysis and observability integrations:

    Terminal window
    npx wrangler r2 sql query "${WAREHOUSE}" "EXPLAIN FORMAT JSON SELECT * FROM logpush.requests LIMIT 10;"
    ┌──────────────────────────────────────┐
    plan
    ├──────────────────────────────────────┤
    {
      "name": "CoalescePartitionsExec",
      "output_partitions": 1,
      "rows": 10,
      "size_approx": "310B",
      "children": [
        {
          "name": "DataSourceExec",
          "output_partitions": 4,
          "rows": 28951,
          "size_approx": "900.0KB",
          "table": "logpush.requests",
          "files": 7,
          "bytes": 900019,
          "projection": [
            "__ingest_ts",
            "CPUTimeMs",
            "DispatchNamespace",
            "Entrypoint",
            "Event",
            "EventTimestampMs",
            "EventType",
            "Exceptions",
            "Logs",
            "Outcome",
            "ScriptName",
            "ScriptTags",
            "ScriptVersion",
            "WallTimeMs"
          ],
          "limit": 10
        }
      ]
    }
    └──────────────────────────────────────┘

    For more details, refer to EXPLAIN.
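
    Because the plan is plain JSON, it can be consumed programmatically. The walker below is a sketch: the node shape mirrors the example output above and may not cover every node type the engine emits.

    ```typescript
    // Minimal plan-node shape, based on the EXPLAIN FORMAT JSON example above.
    type PlanNode = {
      name: string;
      bytes?: number;
      children?: PlanNode[];
    };

    // Sum `bytes` across the plan tree, e.g. to track scanned data per query.
    function totalScannedBytes(node: PlanNode): number {
      const own = node.bytes ?? 0;
      const kids = node.children ?? [];
      return own + kids.reduce((sum, child) => sum + totalScannedBytes(child), 0);
    }
    ```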

    Unpartitioned Iceberg tables can now be queried directly, which is useful for smaller datasets or data without natural time dimensions. For tables with more than 1000 files, partitioning is still recommended for better performance.

    Refer to Limitations and best practices for the latest guidance on using R2 SQL.

  1. @cf/moonshotai/kimi-k2.6 is now available on Workers AI, in partnership with Moonshot AI for Day 0 support. Kimi K2.6 is a native multimodal agentic model from Moonshot AI that advances practical capabilities in long-horizon coding, coding-driven design, proactive autonomous execution, and swarm-based task orchestration.

    Built on a Mixture-of-Experts architecture with 1T total parameters and 32B active per token, Kimi K2.6 delivers frontier-scale intelligence with efficient inference. It scores competitively against GPT-5.4 and Claude Opus 4.6 on agentic and coding benchmarks, including BrowseComp (83.2), SWE-Bench Verified (80.2), and Terminal-Bench 2.0 (66.7).

    Key capabilities

    • 262.1k token context window for retaining full conversation history, tool definitions, and codebases across long-running agent sessions
    • Long-horizon coding with significant improvements on complex, end-to-end coding tasks across languages including Rust, Go, and Python
    • Coding-driven design that transforms simple prompts and visual inputs into production-ready interfaces and full-stack workflows
    • Agent swarm orchestration scaling horizontally to 300 sub-agents executing 4,000 coordinated steps for complex autonomous tasks
    • Vision inputs for processing images alongside text
    • Thinking mode with configurable reasoning depth
    • Multi-turn tool calling for building agents that invoke tools across multiple conversation turns

    Differences from Kimi K2.5

    If you are migrating from Kimi K2.5, note the following API changes:

    • K2.6 uses chat_template_kwargs.thinking to control reasoning, replacing chat_template_kwargs.enable_thinking
    • K2.6 returns reasoning content in the reasoning field, replacing reasoning_content
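
    As a migration aid, the sketch below builds a request body reflecting these two changes. Only the model ID and the chat_template_kwargs.thinking / reasoning field names come from this entry; everything else is illustrative.

    ```typescript
    type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

    // Build a K2.6-style request body. K2.5's chat_template_kwargs.enable_thinking
    // is replaced by chat_template_kwargs.thinking.
    function buildK26Request(messages: ChatMessage[], thinking: boolean) {
      return {
        model: "@cf/moonshotai/kimi-k2.6",
        messages,
        chat_template_kwargs: { thinking },
      };
    }

    // Read reasoning output: K2.6 uses `reasoning`; K2.5 used `reasoning_content`.
    function readReasoning(message: {
      reasoning?: string;
      reasoning_content?: string;
    }): string | undefined {
      return message.reasoning ?? message.reasoning_content;
    }
    ```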

    Get started

    Use Kimi K2.6 through the Workers AI binding (env.AI.run()), the REST API at /ai/run, or the OpenAI-compatible endpoint at /v1/chat/completions. You can also use AI Gateway with any of these endpoints.

    For more information, refer to the Kimi K2.6 model page and pricing.

  1. Cloudflare's network now supports redirecting verified AI training crawlers to canonical URLs when they request deprecated or duplicate pages. When enabled via AI Crawl Control > Quick Actions, AI training crawlers that request a page with a canonical tag pointing elsewhere receive a 301 redirect to the canonical version. Humans, search engine crawlers, and AI Search agents continue to see the original page normally.

    This feature leverages your existing <link rel="canonical"> tags; no additional configuration is required beyond enabling the toggle. It is available on Pro, Business, and Enterprise plans at no additional cost.

    Refer to the Redirects for AI Training documentation for details.

  1. AI Crawl Control now includes new tools to help you prepare your site for the agentic Internet—a web where AI agents are first-class citizens that discover and interact with content differently than human visitors.

    Content Format insights

    The Metrics tab now includes a Content Format chart showing what content types AI systems request versus what your origin serves. Understanding these patterns helps you optimize content delivery for both human and agent consumption.

    Directives tab (formerly Robots.txt)

    The Robots.txt tab has been renamed to Directives and now includes a link to check your site's Agent Readiness score.

    Refer to our blog post on preparing for the agentic Internet for more on why these capabilities matter.

  1. You can now achieve higher cache HIT rates and reduce origin load for origins hosted on public cloud providers with Smart Tiered Cache. By setting a cloud region hint for your origin, Cloudflare selects the optimal upper-tier data center for that cloud region, funneling all cache MISSes through a single location close to your origin.

    Previously, Smart Tiered Cache could not reliably select an optimal upper tier for origins behind anycast or regional unicast networks commonly used by cloud providers. Origins on AWS, GCP, Azure, and Oracle Cloud would fall back to a multi-upper-tier topology, resulting in lower cache HIT rates and more requests reaching your origin.

    How it works

    Set a cloud region hint (for example, aws/us-east-1 or gcp/europe-west1) for your origin IP or hostname. Smart Tiered Cache uses this hint along with real-time latency data to select a primary upper tier close to your cloud region, plus a fallback in a different location for resilience.

    • Supported providers: AWS, GCP, Azure, and Oracle Cloud.
    • All plans: Available on Free, Pro, Business, and Enterprise plans at no additional cost.
    • Dashboard and API: Configure from Caching > Tiered Cache > Origin Configuration, or use the API and Terraform.

    Get started

    To get started, enable Smart Tiered Cache and set a cloud region hint for your origin in the Tiered Cache settings.

  1. Radar adds three new features to the AI Insights page, expanding visibility into how AI bots, crawlers, and agents interact with the web.

    Adoption of AI agent standards

    The AI Insights page now includes an adoption of AI agent standards widget that tracks how websites adopt agent-facing standards. The data is filterable by domain category and updated weekly on Mondays. This data is also available through the Agent Readiness API reference.

    Screenshot of the adoption of AI agent standards chart

    URL Scanner reports now include an Agent readiness tab that evaluates a scanned URL against the criteria used by the Agent Readiness score tool.

    Screenshot of the URL Scanner agent readiness tab

    For more details, refer to the Agent Readiness blog post.

    Markdown for Agents savings

    A new savings gauge shows the median response-size reduction when serving Markdown instead of HTML to AI bots and crawlers. This highlights the bandwidth and token savings that Markdown for Agents provides.

    Screenshot of the Markdown for Agents savings gauge

    For more details, refer to the Markdown for Agents API reference.

    Response status

    The new response status widget displays the distribution of HTTP response status codes returned to AI bots and crawlers. Results are groupable by individual status code (200, 403, 404) or by category (2xx, 3xx, 4xx, 5xx).

    The same widget is available on the detail page of each verified AI bot, for example Google.

    Screenshot of the response status distribution widget

    Explore all three features on the Cloudflare Radar AI Insights page.

  1. AI Search instances created after today work differently: new instances come with built-in storage and a vector index, so you can upload a file, have it indexed immediately, and search it right away.

    Additionally, new Workers bindings are now available for AI Search. The new namespace binding lets you create and manage instances at runtime, and a cross-instance search API lets you query across multiple instances in one call.

    Built-in storage and vector index

    All new instances now come with built-in storage, which lets you upload files directly using the Items API or the dashboard. No R2 buckets to set up, no external data sources to connect first.

    TypeScript
    const instance = env.AI_SEARCH.get("my-instance");
    // upload and wait for indexing to complete
    const item = await instance.items.uploadAndPoll("faq.md", content);
    // search immediately after indexing
    const results = await instance.search({
      messages: [{ role: "user", content: "onboarding guide" }],
    });

    Namespace binding

    The new ai_search_namespaces binding replaces the previous env.AI.autorag() API provided through the AI binding. It gives your Worker access to all instances within a namespace and lets you create, update, and delete instances at runtime without redeploying.

    JSONC
    // wrangler.jsonc
    {
      "ai_search_namespaces": [
        {
          "binding": "AI_SEARCH",
          "namespace": "default",
        },
      ],
    }

    TypeScript
    // create an instance at runtime
    const instance = await env.AI_SEARCH.create({
      id: "my-instance",
    });

    For migration details, refer to Workers binding migration. For more on namespaces, refer to Namespaces.

    Within the new AI Search binding, you also have access to a Search and Chat API at the namespace level. Pass an array of instance IDs and get one ranked list of results back.

    TypeScript
    const results = await env.AI_SEARCH.search({
      messages: [{ role: "user", content: "What is Cloudflare?" }],
      ai_search_options: {
        instance_ids: ["product-docs", "customer-abc123"],
      },
    });

    Refer to Namespace-level search for details.

  1. AI Search now supports hybrid search and relevance boosting, giving you more control over how results are found and ranked.

    Hybrid search combines vector (semantic) search with BM25 keyword search in a single query. Vector search finds chunks with similar meaning, even when the exact words differ. Keyword search matches chunks that contain your query terms exactly. When you enable hybrid search, both run in parallel and the results are fused into a single ranked list.

    You can configure the tokenizer (porter for natural language, trigram for code), keyword match mode (and for precision, or for recall), and fusion method (rrf or max) per instance:

    TypeScript
    const instance = await env.AI_SEARCH.create({
      id: "my-instance",
      index_method: { vector: true, keyword: true },
      fusion_method: "rrf",
      indexing_options: { keyword_tokenizer: "porter" },
      retrieval_options: { keyword_match_mode: "and" },
    });

    Refer to Search modes for an overview and Hybrid search for configuration details.
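To make the fusion step concrete, here is a minimal reciprocal rank fusion (RRF) sketch in TypeScript. The constant `k = 60` and the toy result lists are illustrative assumptions; this is not AI Search's internal scoring, only the general shape of the `rrf` method:

```typescript
// Reciprocal rank fusion (RRF): merge two ranked lists by giving each
// document a score of 1 / (k + rank) per list it appears in, then
// sorting by the summed score.
type Ranked = string[]; // document IDs, best first

function rrf(vector: Ranked, keyword: Ranked, k = 60): string[] {
  const scores = new Map<string, number>();
  for (const list of [vector, keyword]) {
    list.forEach((id, i) => {
      // rank is 1-indexed; documents in both lists accumulate score
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  // highest fused score first
  return [...scores.keys()].sort((a, b) => scores.get(b)! - scores.get(a)!);
}

// "b" wins: it ranks near the top of both lists, so its scores add up.
const fused = rrf(["a", "b", "c"], ["b", "d", "a"]);
```

Documents that appear in both lists are rewarded even when neither list ranks them first, which is why hybrid search can surface results that pure vector or pure keyword search would miss.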

    Relevance boosting

    Relevance boosting lets you nudge search rankings based on document metadata. For example, you can prioritize recent documents by boosting on timestamp, or surface high-priority content by boosting on a custom metadata field like priority.

    Configure up to 3 boost fields per instance or override them per request:

    TypeScript
const results = await env.AI_SEARCH.get("my-instance").search({
  messages: [{ role: "user", content: "deployment guide" }],
  ai_search_options: {
    retrieval: {
      boost_by: [
        { field: "timestamp", direction: "desc" },
        { field: "priority", direction: "desc" },
      ],
    },
  },
});

    Refer to Relevance boosting for configuration details.
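How a boost interacts with base relevance can be pictured as a simple rerank: normalize the metadata field across the result set and nudge each chunk's score by it. This is an illustrative model only; the `weight` value and the scoring formula are assumptions, not AI Search internals:

```typescript
// Illustrative rerank: bump a chunk's base relevance score using a
// metadata field, e.g. prefer newer documents ("timestamp", "desc").
type Chunk = { id: string; score: number; meta: Record<string, number> };
type Boost = { field: string; direction: "asc" | "desc" };

function applyBoosts(chunks: Chunk[], boosts: Boost[], weight = 0.1): Chunk[] {
  return chunks
    .map((c) => {
      let boosted = c.score;
      for (const b of boosts) {
        const v = c.meta[b.field];
        if (v === undefined) continue;
        // normalize the field to [0, 1] across the result set
        const values = chunks.map((x) => x.meta[b.field] ?? 0);
        const min = Math.min(...values);
        const range = Math.max(...values) - min || 1;
        let norm = (v - min) / range;
        if (b.direction === "asc") norm = 1 - norm;
        boosted += weight * norm;
      }
      return { ...c, score: boosted };
    })
    .sort((a, b) => b.score - a.score);
}

// A slightly less relevant but much newer chunk overtakes an older one.
const ranked = applyBoosts(
  [
    { id: "old", score: 0.8, meta: { timestamp: 1_600_000_000 } },
    { id: "new", score: 0.75, meta: { timestamp: 1_700_000_000 } },
  ],
  [{ field: "timestamp", direction: "desc" }],
);
```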

  1. Artifacts is now in private beta. Artifacts is Git-compatible storage built for scale: create tens of millions of repos, fork from any remote, and hand off a URL to any Git client. It provides a versioned filesystem for storing and exchanging file trees across Workers, the REST API, and any Git client, running locally or within an agent.

    You can read the announcement blog to learn more about what Artifacts does, how it works, and how to create repositories for your agents to use.

    Artifacts has three API surfaces:

    • Workers bindings (for creating and managing repositories)
    • REST API (for creating and managing repos from any other compute platform)
    • Git protocol (for interacting with repos)

For example, you can use the Workers binding to create a repo and read back its remote URL:

    TypeScript
// Create a thousand, a million, or ten million repos: one for every agent,
// every upstream branch, or every user.
const created = await env.PROD_ARTIFACTS.create("agent-007");
const remote = (await created.repo.info())?.remote;

    Or, use the REST API to create a repo inside a namespace from your agent(s) running on any platform:

    Terminal window
curl --request POST "https://artifacts.cloudflare.net/v1/api/namespaces/some-namespace/repos" \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"name":"agent-007"}'
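The same request can be issued from TypeScript with `fetch()`. The endpoint, headers, and body mirror the curl example above; how you supply the API token is up to your platform, so the placeholder below is an assumption:

```typescript
// Repo creation via the Artifacts REST API, as fetch() inputs.
// Replace the placeholder with a real Cloudflare API token.
const apiToken = "<CLOUDFLARE_API_TOKEN>";

const url =
  "https://artifacts.cloudflare.net/v1/api/namespaces/some-namespace/repos";

const init = {
  method: "POST",
  headers: {
    Authorization: `Bearer ${apiToken}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ name: "agent-007" }),
};

// const res = await fetch(url, init); // send when ready
```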

    Any Git client that speaks smart HTTP can use the returned remote URL:

    Terminal window
    # Agents know git.
    # Every repository can act as a git repo, allowing agents to interact with Artifacts the way they know best: using the git CLI.
    git clone https://x:${REPO_TOKEN}@artifacts.cloudflare.net/some-namespace/agent-007.git

    To learn more, refer to Get started, Workers binding, and Git protocol.

  1. Workflows limits have been raised to the following:

| Limit | Previous | New |
| --- | --- | --- |
| Concurrent instances (running in parallel) | 10,000 | 50,000 |
| Instance creation rate | 100/second per account | 300/second per account, 100/second per workflow |
| Queued instances per Workflow ¹ | 1 million | 2 million |

    These increases apply to all users on the Workers Paid plan. Refer to the Workflows limits documentation for more details.

    Footnotes

    1. Queued instances are instances that have been created or awoken and are waiting for a concurrency slot.

  1. We are renaming Browser Rendering to Browser Run. The name Browser Rendering never fully captured what the product does. Browser Run lets you run full browser sessions on Cloudflare's global network, drive them with code or AI, record and replay sessions, crawl pages for content, debug in real time, and let humans intervene when your agent needs help.

    Along with the rename, we have increased limits for Workers Paid plans and redesigned the Browser Run dashboard.

    We have 4x-ed concurrency limits for Workers Paid plan users:

• Concurrent browsers per account: 30 → 120
    • New browser instances: 30 per minute → 1 per second
    • REST API rate limits: recently increased from 3 to 10 requests per second

    Rate limits across the limits page are now expressed in per-second terms, matching how they are enforced. No action is needed to benefit from the higher limits.

    The redesigned dashboard now shows every request in a single Runs tab, not just browser sessions but also quick actions like screenshots, PDFs, markdown, and crawls. Filter by endpoint, view target URLs, status, and duration, and expand any row for more detail.

    Browser Run dashboard Runs tab with browser sessions and quick actions visible in one list, and an expanded crawl job showing its progress

    We are also shipping several new features:

    • Live View, Human in the Loop, and Session Recordings - See what your agent is doing in real time, let humans step in when automation hits a wall, and replay any session after it ends.
    • WebMCP - Websites can expose structured tools for AI agents to discover and call directly, replacing slow screenshot-analyze-click loops.

    For the full story, read our Agents Week blog Browser Run: Give your agents a browser.

  1. When browser automation fails or behaves unexpectedly, it can be hard to understand what happened. We are shipping three new features in Browser Run (formerly Browser Rendering) to help:

    Live View

    Live View lets you see what your agent is doing in real time. The page, DOM, console, and network requests are all visible for any active browser session. Access Live View from the Cloudflare dashboard, via the hosted UI at live.browser.run, or using native Chrome DevTools.

    Human in the Loop

    When your agent hits a snag like a login page or unexpected edge case, it can hand off to a human instead of failing. With Human in the Loop, a human steps into the live browser session through Live View, resolves the issue, and hands control back to the script.

    Today, you can step in by opening the Live View URL for any active session. Next, we are adding a handoff flow where the agent can signal that it needs help, notify a human to step in, then hand control back to the agent once the issue is resolved.

    Browser Run Human in the Loop demo where an AI agent searches Amazon, selects a product, and requests human help when authentication is needed to buy

    Session Recordings

Session Recordings records DOM state so you can replay any session after it ends. Enable recordings by passing recording: true when launching a browser. After the session closes, view the recording in the Cloudflare dashboard under Browser Run > Runs, or retrieve it via the API using the session ID. Next, we are adding the ability to inspect DOM state and console output at any point during the recording.

    Browser Run session recording showing an automated browser navigating the Sentry Shop and adding a bomber jacket to the cart

    To get started, refer to the documentation for Live View, Human in the Loop, and Session Recording.

  1. Browser Run (formerly Browser Rendering) now supports WebMCP (Web Model Context Protocol), a new browser API from the Google Chrome team.

    The Internet was built for humans, so navigating as an AI agent today is unreliable. WebMCP lets websites expose structured tools for AI agents to discover and call directly. Instead of slow screenshot-analyze-click loops, agents can call website functions like searchFlights() or bookTicket() with typed parameters, making browser automation faster, more reliable, and less fragile.

    Browser Run lab session showing WebMCP tools being discovered and executed in the Chrome DevTools console to book a hotel

    With WebMCP, you can:

    • Discover website tools - Use navigator.modelContextTesting.listTools() to see available actions on any WebMCP-enabled site
    • Execute tools directly - Call navigator.modelContextTesting.executeTool() with typed parameters
    • Handle human-in-the-loop interactions - Some tools pause for user confirmation before completing sensitive actions
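The typed-tool idea behind those calls can be illustrated with a small registry sketch. This is a stand-in for explanation only: the real API lives on `navigator.modelContextTesting` in Chrome beta, and the `searchFlights` tool and its parameters below are hypothetical examples, not any real site's API:

```typescript
// Illustrative sketch of WebMCP's typed tools: a site registers named
// tools with typed parameters, and an agent calls them directly instead
// of clicking through the UI.
type Tool<P, R> = { name: string; run: (params: P) => R };

const tools = new Map<string, Tool<any, any>>();

function registerTool<P, R>(tool: Tool<P, R>): void {
  tools.set(tool.name, tool);
}

function executeTool<R>(name: string, params: unknown): R {
  const tool = tools.get(name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool.run(params);
}

// Hypothetical site-defined tool with typed parameters.
registerTool({
  name: "searchFlights",
  run: (p: { from: string; to: string }) => [`${p.from}->${p.to} 09:15`],
});

// The agent calls the function directly: no screenshots, no clicking.
const flights = executeTool<string[]>("searchFlights", {
  from: "AUS",
  to: "LIS",
});
```

Because parameters are typed and discoverable, the agent knows exactly what each tool expects before calling it, which is what removes the screenshot-analyze-click loop.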

    WebMCP requires Chrome beta features. We have an experimental pool with browser instances running Chrome beta so you can test emerging browser features before they reach stable Chrome. To start a WebMCP session, add lab=true to your /devtools/browser request:

    Terminal window
    curl -X POST "https://api.cloudflare.com/client/v4/accounts/{account_id}/browser-rendering/devtools/browser?lab=true&keep_alive=300000" \
    -H "Authorization: Bearer {api_token}"

    Combined with the recently launched CDP endpoint, AI agents can also use WebMCP. Connect an MCP client to Browser Run via CDP, and your agent can discover and call website tools directly. Here's the same hotel booking demo, this time driven by an AI agent through OpenCode:

    Browser Run Live View showing an AI agent navigating a hotel booking site in real time

    For a step-by-step guide, refer to the WebMCP documentation.

  1. Cloudflare Access now supports independent multi-factor authentication (MFA), allowing you to enforce MFA requirements without relying on your identity provider (IdP). With per-application and per-policy configuration, you can enforce stricter authentication methods like hardware security keys on sensitive applications without requiring them across your entire organization. This reduces the risk of MFA fatigue for your broader user population while adding additional security where it matters most.

    This feature also addresses common gaps in IdP-based MFA, such as inconsistent MFA policies across different identity providers or the need for additional security layers beyond what the IdP provides.

    Independent MFA supports the following authenticator types:

    • Authenticator application — Time-based one-time passwords (TOTP) using apps like Google Authenticator, Microsoft Authenticator, or Authy.
    • Security key — Hardware security keys such as YubiKeys.
    • Biometrics — Built-in device authenticators including Apple Touch ID, Apple Face ID, and Windows Hello.

    Configuration levels

    You can configure MFA requirements at three levels:

| Level | Description |
| --- | --- |
| Organization | Enforce MFA by default for all applications in your account. |
| Application | Require or turn off MFA for a specific application. |
| Policy | Require or turn off MFA for users who match a specific policy. |

    Settings at lower levels (policy) override settings at higher levels (organization), giving you granular control over MFA enforcement.
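The precedence rule can be sketched as a small resolver: each level may require MFA, turn it off, or stay unset, and the most specific configured level wins. This is a sketch of the override order only, not Access's implementation; defaulting to "off" when nothing is configured is an assumption:

```typescript
// Most specific configured level wins: policy > application > organization.
type MfaSetting = "required" | "off" | undefined;

function effectiveMfa(
  organization: MfaSetting,
  application: MfaSetting,
  policy: MfaSetting,
): "required" | "off" {
  // fall through to the next-broader level when a setting is unset
  return policy ?? application ?? organization ?? "off";
}

// Org-wide MFA with one app exempted:
const appLevel = effectiveMfa("required", "off", undefined);
// A sensitive policy re-enables it regardless of the app setting:
const policyLevel = effectiveMfa("required", "off", "required");
```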

    User enrollment

    Users enroll their authenticators through the App Launcher. To help with onboarding, administrators can share a direct enrollment link: <your-team-name>.cloudflareaccess.com/AddMfaDevice.

    To get started with Independent MFA, refer to Independent MFA.

  1. Agent Lee adds Write Operations and Generative UI

    We are excited to announce two major capability upgrades for Agent Lee, the AI co-pilot built directly into the Cloudflare dashboard. Agent Lee is designed to understand your specific account configuration, and with this release, it moves from a passive advisor to an active assistant that can help you manage your infrastructure and visualize your data through natural language.

    Take action with Write Operations

    Agent Lee can now perform changes on your behalf across your Cloudflare account. Whether you need to update DNS records, modify SSL/TLS settings, or configure Workers routes, you can simply ask.

    To ensure security and accuracy, every write operation requires explicit user approval. Before any change is committed, Agent Lee will present a summary of the proposed action in plain language. No action is taken until you select Confirm, and this approval requirement is enforced at the infrastructure level to prevent unauthorized changes.
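The approval gate described above can be pictured as a two-phase flow: propose a change with a plain-language summary, then commit only on explicit confirmation. This is an illustrative sketch of the pattern, not Agent Lee's implementation:

```typescript
// Two-phase write: the agent proposes a change with a summary, and
// nothing is applied until the user confirms.
type Proposal<T> = { summary: string; apply: () => T };

function proposeChange<T>(summary: string, apply: () => T): Proposal<T> {
  return { summary, apply };
}

function commit<T>(proposal: Proposal<T>, userConfirmed: boolean): T | null {
  // the gate: no confirmation, no side effect
  return userConfirmed ? proposal.apply() : null;
}

// Hypothetical DNS store standing in for the real account state.
const records: string[] = [];
const proposal = proposeChange(
  "Add an A record for blog.example.com pointing to 192.0.2.10",
  () => {
    records.push("blog.example.com A 192.0.2.10");
    return records.length;
  },
);

commit(proposal, false); // declined: records stays empty
commit(proposal, true); // confirmed: the record is created
```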

    Example requests:

    • "Add an A record for blog.example.com pointing to 192.0.2.10."
    • "Enable Always Use HTTPS on my zone."
    • "Set the SSL mode for example.com to Full (strict)."

    Visualize data with Generative UI

    Understanding your traffic and security trends is now as easy as asking a question. Agent Lee now features Generative UI, allowing it to render inline charts and structured data visualizations directly within the chat interface using your actual account telemetry.

    Example requests:

    • "Show me a chart of my traffic over the last 7 days."
    • "What does my error rate look like for the past 24 hours?"
    • "Graph my cache hit rate for example.com this week."

    Availability

    These features are currently available in Beta for all users on the Free plan. To get started, log in to the Cloudflare dashboard and select Ask AI in the upper right corner.

    To learn more about how to interact with your account using AI, refer to the Agent Lee documentation.