Cloudflare Python Workers
Status: Beta (requires python_workers compatibility flag)
Runtime: Pyodide (Python 3.12+ compiled to WebAssembly)
Package Versions: workers-py@1.7.0, workers-runtime-sdk@0.3.1, wrangler@4.58.0
Last Verified: 2026-01-21
Quick Start (5 Minutes)
1. Prerequisites
Ensure you have installed Python 3.12+ and uv; every command below is run through uv.
2. Initialize Project
```bash
# Create project directory
mkdir my-python-worker && cd my-python-worker

# Initialize Python project
uv init

# Install pywrangler
uv tool install workers-py

# Initialize Worker configuration
uv run pywrangler init
```
3. Create Entry Point
Create src/entry.py:
```python
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response("Hello from Python Worker!")
```
4. Configure wrangler.jsonc
```jsonc
{
  "name": "my-python-worker",
  "main": "src/entry.py",
  "compatibility_date": "2025-12-01",
  "compatibility_flags": ["python_workers"]
}
```
5. Run Locally
```bash
uv run pywrangler dev
# Visit http://localhost:8787
```
6. Deploy
```bash
uv run pywrangler deploy
```
Migration from Pre-December 2025 Workers
If you created a Python Worker before December 2025, you were limited to built-in packages. With pywrangler (Dec 2025), you can now deploy with external packages.
Old Approach (no longer needed):
```python
# Limited to built-in packages only
# Could only use httpx, aiohttp, beautifulsoup4, etc.
# Error: "You cannot yet deploy Python Workers that depend on
# packages defined in requirements.txt [code: 10021]"
```
New Approach (pywrangler):
```toml
# pyproject.toml
[project]
dependencies = ["fastapi", "any-pyodide-compatible-package"]
```
```bash
uv tool install workers-py
uv run pywrangler deploy  # Now works!
```
Historical Timeline:
- April 2024 - Dec 2025: Package deployment completely blocked
- Dec 8, 2025: Pywrangler released, enabling package deployment
- Jan 2026: Open beta with full package support
See: Package deployment issue history
Core Concepts
WorkerEntrypoint Class Pattern
As of August 2025, Python Workers use a class-based pattern (not global handlers):
```python
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Access bindings via self.env
        value = await self.env.MY_KV.get("key")

        # Parse request
        url = request.url
        method = request.method

        return Response(f"Method: {method}, URL: {url}")
```
Accessing Bindings
All Cloudflare bindings are accessed via self.env:
```python
class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # D1 Database
        result = await self.env.DB.prepare("SELECT * FROM users").all()

        # KV Storage
        value = await self.env.MY_KV.get("key")
        await self.env.MY_KV.put("key", "value")

        # R2 Object Storage
        obj = await self.env.MY_BUCKET.get("file.txt")

        # Workers AI
        response = await self.env.AI.run("@cf/meta/llama-2-7b-chat-int8", {
            "prompt": "Hello!"
        })

        return Response("OK")
```
Supported Bindings:
- D1 (SQL database)
- KV (key-value storage)
- R2 (object storage)
- Workers AI
- Vectorize
- Durable Objects
- Queues
- Analytics Engine
See Cloudflare Bindings Documentation for details.
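As a sketch, the binding names used in the example above (`MY_KV`, `DB`, `MY_BUCKET`, `AI`) would be declared in `wrangler.jsonc`; the IDs and names below are placeholders, not values from this document:

```jsonc
// Sketch: declaring the bindings referenced via self.env (placeholder IDs)
{
  "kv_namespaces": [
    { "binding": "MY_KV", "id": "<your-kv-namespace-id>" }
  ],
  "d1_databases": [
    { "binding": "DB", "database_name": "my-db", "database_id": "<your-d1-id>" }
  ],
  "r2_buckets": [
    { "binding": "MY_BUCKET", "bucket_name": "my-bucket" }
  ],
  "ai": { "binding": "AI" }
}
```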
Request/Response Handling
```python
from workers import WorkerEntrypoint, Response
from js import URL
import json

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Parse JSON body
        if request.method == "POST":
            body = await request.json()
            return Response(
                json.dumps({"received": body}),
                headers={"Content-Type": "application/json"}
            )

        # Query parameters (JS constructors are called via .new through the FFI;
        # searchParams.get returns None when the parameter is absent)
        url = URL.new(request.url)
        name = url.searchParams.get("name") or "World"

        return Response(f"Hello, {name}!")
```
Scheduled Handlers (Cron)
```python
from workers import handler

@handler
async def on_scheduled(event, env, ctx):
    # Runs on the cron schedule
    print(f"Cron triggered at {event.scheduledTime}")

    # Do work...
    await env.MY_KV.put("last_run", str(event.scheduledTime))
```
Configure in wrangler.jsonc:
```jsonc
{
  "triggers": {
    "crons": ["*/5 * * * *"] // Every 5 minutes
  }
}
```
Python Workflows
Python Workflows enable durable, multi-step automation with automatic retries and state persistence.
Why Decorator Pattern?
Python Workflows use the @step.do() decorator pattern because Python does not easily support anonymous callbacks (unlike JavaScript/TypeScript which allows inline arrow functions). This is a fundamental language difference, not a limitation of Cloudflare's implementation.
JavaScript Pattern (doesn't translate):
```javascript
await step.do("my step", async () => {
  // Inline callback
  return result;
});
```
Python Pattern (required):
```python
@step.do("my step")
async def my_step():
    # Named function with decorator
    return result

result = await my_step()
```
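To illustrate the mechanics only (this toy code is NOT Cloudflare's implementation), a `step.do`-style decorator can be mimicked in plain Python: the decorator receives the named coroutine where JavaScript would receive an inline callback.

```python
# Toy re-implementation of a step.do-style decorator, for illustration only.
import asyncio

def do(name):
    def decorator(fn):
        async def wrapper(*args, **kwargs):
            # A real workflow runtime would checkpoint the result here,
            # keyed by `name`; this toy version just runs the function.
            return await fn(*args, **kwargs)
        wrapper.step_name = name
        return wrapper
    return decorator

@do("my step")
async def my_step():
    return "done"

print(asyncio.run(my_step()))  # -> done
print(my_step.step_name)       # -> my step
```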
Source: Python Workflows Blog
Concurrency with asyncio.gather
Pyodide captures JavaScript promises (thenables) and proxies them as Python awaitables. This enables Promise.all-equivalent behavior using standard Python async patterns:
```python
import asyncio

@step.do("step_a")
async def step_a():
    return "A"

@step.do("step_b")
async def step_b():
    return "B"

# Concurrent execution (like Promise.all)
results = await asyncio.gather(step_a(), step_b())
# results = ["A", "B"]
```
Why This Works: JavaScript promises from workflow steps are proxied as Python awaitables, allowing standard asyncio concurrency primitives.
Source: Python Workflows Blog
Basic Workflow
```python
from workers import WorkflowEntrypoint, WorkerEntrypoint, Response, fetch

class MyWorkflow(WorkflowEntrypoint):
    async def run(self, event, step):
        # Step 1
        @step.do("fetch data")
        async def fetch_data():
            response = await fetch("https://api.example.com/data")
            return await response.json()

        data = await fetch_data()

        # Step 2: Sleep
        await step.sleep("wait", "10 seconds")

        # Step 3: Process
        @step.do("process data")
        async def process_data():
            return {"processed": True, "count": len(data)}

        result = await process_data()
        return result


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Create workflow instance
        instance = await self.env.MY_WORKFLOW.create()
        return Response(f"Workflow started: {instance.id}")
```
DAG Dependencies
Define step dependencies for parallel execution:
```python
class MyWorkflow(WorkflowEntrypoint):
    async def run(self, event, step):
        @step.do("step_a")
        async def step_a():
            return "A done"

        @step.do("step_b")
        async def step_b():
            return "B done"

        # step_c waits for both step_a and step_b
        @step.do("step_c", depends=[step_a, step_b], concurrent=True)
        async def step_c(result_a, result_b):
            return f"C received: {result_a}, {result_b}"

        return await step_c()
```
Workflow Configuration
```jsonc
{
  "compatibility_flags": ["python_workers", "python_workflows"],
  "compatibility_date": "2025-12-01",
  "workflows": [
    {
      "name": "my-workflow",
      "binding": "MY_WORKFLOW",
      "class_name": "MyWorkflow"
    }
  ]
}
```
Package Management
pyproject.toml Configuration
```toml
[project]
name = "my-python-worker"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "beautifulsoup4",
    "httpx"
]

[dependency-groups]
dev = [
    "workers-py",
    "workers-runtime-sdk"
]
```
Supported Packages
Python Workers support:
- Pure Python packages from PyPI
- Pyodide packages (pre-built for WebAssembly)
HTTP Clients
Only async HTTP libraries work:
```python
# ✅ WORKS - httpx (async)
import httpx

async def get_with_httpx():
    async with httpx.AsyncClient() as client:
        return await client.get("https://api.example.com")

# ✅ WORKS - aiohttp
import aiohttp

async def get_with_aiohttp():
    async with aiohttp.ClientSession() as session:
        async with session.get("https://api.example.com") as response:
            return await response.json()

# ❌ DOES NOT WORK - requests (sync)
import requests  # Will fail!
```
Requesting New Packages
Request support for new packages at: https://github.com/cloudflare/workerd/discussions/categories/python-packages
FFI (Foreign Function Interface)
Access JavaScript APIs from Python via Pyodide's FFI:
JavaScript Globals
```python
from workers import WorkerEntrypoint
from js import fetch, console, Response as JSResponse

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Use JavaScript fetch
        response = await fetch("https://api.example.com")
        data = await response.json()

        # Console logging
        console.log("Fetched data:", data)

        # Return a JavaScript Response (JS constructors are called via .new)
        return JSResponse.new("Hello!")
```
Type Conversions
Important: to_py() is a METHOD on JavaScript objects, not a standalone function. Only to_js() is a function.
```python
from js import Object
from pyodide.ffi import to_js

# ❌ WRONG - ImportError!
# from pyodide.ffi import to_py
# python_data = to_py(js_data)

# ✅ CORRECT - to_py() is a method on the JS proxy
async def fetch(self, request):
    data = await request.json()  # Returns a JS object
    python_data = data.to_py()   # Convert to a Python dict

    # Convert a Python dict to a JavaScript object
    python_dict = {"name": "test", "count": 42}
    js_object = to_js(python_dict, dict_converter=Object.fromEntries)

    # Use in a Response
    return Response(to_js({"status": "ok"}))
```
Source: GitHub Issue #3322 (Pyodide maintainer clarification)
Known Issues Prevention
This skill prevents 11 documented issues:
Issue #1: Legacy Handler Pattern
Error: TypeError: on_fetch is not defined
Why: Handler pattern changed in August 2025.
```python
# ❌ OLD (deprecated)
@handler
async def on_fetch(request):
    return Response("Hello")

# ✅ NEW (current)
class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response("Hello")
```
Issue #2: Sync HTTP Libraries
Error: RuntimeError: cannot use blocking call in async context
Why: Python Workers run async-only. Sync libraries block the event loop.
```python
# ❌ FAILS
import requests
response = requests.get("https://api.example.com")

# ✅ WORKS
import httpx

async def get_data():
    async with httpx.AsyncClient() as client:
        return await client.get("https://api.example.com")
```
Issue #3: Native/Compiled Packages
Error: ModuleNotFoundError: No module named 'numpy' (or similar)
Why: Only pure Python packages and packages with pre-built Pyodide wheels work; arbitrary native C extensions are not supported.
Solution: Use Pyodide-compatible alternatives or check Pyodide packages.
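One defensive pattern (a sketch, not taken from the Cloudflare docs) is to guard the import and fall back to pure Python when a package is unavailable in the runtime; `numpy` here is just an illustration, so check the Pyodide package list for what is actually available:

```python
# Sketch: guard an optional dependency and fall back to pure Python.
try:
    import numpy as np
except ModuleNotFoundError:
    np = None

def mean(values):
    """Average a list, using numpy when present."""
    if np is not None:
        return float(np.mean(values))
    return sum(values) / len(values)

print(mean([1, 2, 3]))
```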
Issue #4: Missing Compatibility Flags
Error: Error: Python Workers require the python_workers compatibility flag
Fix: Add to wrangler.jsonc:
```jsonc
{
  "compatibility_flags": ["python_workers"]
}
```
For Workflows, also add "python_workflows".
Issue #5: I/O Outside Workflow Steps
Error: Workflow state not persisted correctly
Why: All I/O must happen inside @step.do for durability.
```python
# ❌ BAD - fetch outside step
response = await fetch("https://api.example.com")

@step.do("use data")
async def use_data():
    return await response.json()  # response may be stale on retry

# ✅ GOOD - fetch inside step
@step.do("fetch and use")
async def fetch_and_use():
    response = await fetch("https://api.example.com")
    return await response.json()
```
Issue #6: Type Serialization Errors
Error: TypeError: Object of type X is not JSON serializable
Why: Workflow step return values must be JSON-serializable.
Fix: Convert complex objects before returning:
```python
from datetime import datetime

@step.do("process")
async def process():
    # Convert datetime to string
    return {"timestamp": datetime.now().isoformat()}
```
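A broader sketch of the same idea (the `Report` dataclass below is hypothetical, purely for illustration): flatten a structured object into JSON-safe primitives before returning it from a step.

```python
# Sketch: flatten a hypothetical dataclass into JSON-serializable primitives.
import json
from dataclasses import dataclass, asdict
from datetime import datetime

@dataclass
class Report:
    created: datetime
    count: int

def to_serializable(report: Report) -> dict:
    d = asdict(report)
    d["created"] = d["created"].isoformat()  # datetime -> ISO-8601 string
    return d

payload = to_serializable(Report(created=datetime(2026, 1, 1), count=3))
print(json.dumps(payload))
```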
Issue #7: Cold Start Performance
Note: Python Workers have higher cold starts than JavaScript. With Wasm memory snapshots (Dec 2025), heavy packages like FastAPI and Pydantic now load in ~1 second (down from ~10 seconds previously), but this is still roughly 20x slower than JavaScript Workers (~50ms).
Performance Numbers (as of Dec 2025):
- Before snapshots: ~10 seconds for FastAPI/Pydantic
- After snapshots: ~1 second (10x improvement)
- JavaScript equivalent: ~50ms
Mitigation:
- Minimize top-level imports
- Use lazy loading for heavy packages
- Consider JavaScript Workers for latency-critical paths
- Wasm snapshots automatically improve cold starts (no config needed)
Source: Python Workers Redux Blog | InfoQ Coverage
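The lazy-loading mitigation can be sketched as follows; `json` stands in for a genuinely heavy package, and the point is only that the import runs on first use rather than at cold start:

```python
# Sketch: defer a heavy import until the first request that needs it.
_heavy = None

def get_heavy():
    global _heavy
    if _heavy is None:
        import json  # runs once, on first call, not at module load
        _heavy = json
    return _heavy

# First call pays the import cost; later calls reuse the cached module.
assert get_heavy() is get_heavy()
```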
Issue #8: Package Installation Failures
Error: Failed to install package X
Causes:
- Package has native dependencies
- Package not in Pyodide ecosystem
- Network issues during bundling
Fix: Check package compatibility, use alternatives, or request support.
Issue #9: Dev Registry Breaks JS-to-Python RPC
Error: Network connection lost when calling Python Worker from JavaScript Worker
Source: GitHub Issue #11438
Why It Happens: Dev registry doesn't properly route RPC calls between separately-run Workers in different terminals.
Prevention:
```bash
# ❌ Doesn't work - separate terminals
# Terminal 1: npx wrangler dev   (JS worker)
# Terminal 2: npx wrangler dev   (Python worker)
# Result: "Network connection lost" error

# ✅ Works - single wrangler instance
npx wrangler dev -c ts/wrangler.jsonc -c py/wrangler.jsonc
```
Run both workers in a single wrangler instance to enable proper RPC communication.
Issue #10: HTMLRewriter Memory Limit with Data URLs
Error: TypeError: Parser error: The memory limit has been exceeded
Source: GitHub Issue #10814
Why It Happens: Large inline data: URLs (>10MB) in HTML trigger parser memory limits. This is NOT about response size—10MB plain text works fine, but 10MB HTML with embedded data URLs fails. Common with Python Jupyter Notebooks that use inline images for plots.
Prevention:
```python
# ❌ FAILS - HTMLRewriter triggered on notebook HTML with data: URLs
response = await fetch("https://origin.example.com/notebook.html")
return response  # Crashes if HTML contains large data: URLs

# ✅ WORKS - Stream directly or use text/plain
response = await fetch("https://origin.example.com/notebook.html")
headers = {"Content-Type": "text/plain"}  # Bypass parser
return Response(await response.text(), headers=headers)
```
Workarounds:
- Avoid HTMLRewriter on notebook content (stream directly)
- Pre-process notebooks to extract data URLs to external files
- Use a `text/plain` content-type to bypass the parser
Issue #11: PRNG Cannot Be Seeded During Initialization
Error: Deployment fails with a user error
Source: Python Workers Redux Blog
Why It Happens: Wasm snapshots don't support PRNG initialization before request handlers. If you call pseudorandom number generator APIs (like random.seed()) during module initialization, deployment FAILS.
Prevention:
```python
import random

# ❌ FAILS deployment - module-level PRNG call
random.seed(42)

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response(str(random.randint(1, 100)))

# ✅ WORKS - PRNG calls inside handlers
class Default(WorkerEntrypoint):
    async def fetch(self, request):
        random.seed(42)  # Initialize inside the handler
        return Response(str(random.randint(1, 100)))
```
Only call PRNG functions inside request handlers, not at module level.
Best Practices
Always Do
- Use the `WorkerEntrypoint` class pattern
- Use async HTTP clients (httpx, aiohttp)
- Put all I/O inside workflow steps
- Add the `python_workers` compatibility flag
- Use `self.env` for all bindings
- Return JSON-serializable data from workflow steps
Never Do
- Use sync HTTP libraries (requests)
- Use native/compiled packages
- Perform I/O outside workflow steps
- Use the legacy `@handler` decorator for fetch
- Expect JavaScript-level cold start times
Framework Note: FastAPI
FastAPI can work with Python Workers but with limitations:
```python
from fastapi import FastAPI
from workers import WorkerEntrypoint
import asgi

app = FastAPI()

@app.get("/")
async def root():
    return {"message": "Hello from FastAPI"}

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Route the incoming request through the ASGI app
        return await asgi.fetch(app, request, self.env)
```
Limitations:
- Async-only (no sync endpoints)
- No WSGI middleware
- Beta stability
See Cloudflare FastAPI example for details.
Official Documentation
- Python Workers Overview
- Python Workers Basics
- How Python Workers Work
- Python Packages
- FFI (Foreign Function Interface)
- Python Workflows
- Pywrangler CLI
- Pyodide Package List
Dependencies
```json
{
  "workers-py": "1.7.0",
  "workers-runtime-sdk": "0.3.1",
  "wrangler": "4.58.0"
}
```
Note: Always pin versions for reproducible builds. Check PyPI workers-py for latest releases.
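For example, a pinned dev dependency group in `pyproject.toml` might look like this (versions taken from the table above):

```toml
# Pin the dev tooling for reproducible builds
[dependency-groups]
dev = [
    "workers-py==1.7.0",
    "workers-runtime-sdk==0.3.1",
]
```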
Production Validation
- Cloudflare changelog: Dec 8, 2025 (Pywrangler + cold start improvements)
- workers-py 1.7.0: Latest stable (Jan 2026)
- Python Workflows beta: Aug 22, 2025
- Handler pattern change: Aug 14, 2025
Compatibility Date Guidance:
- Use `2025-12-01` for new projects (latest features, including pywrangler improvements)
- Use `2025-08-01` only if you need to match older production Workers