7 min read

TypeScript API on Cloudflare Workers with Azure

Building a serverless API that runs at the edge and connects to Azure services.

I built a serverless TypeScript API that runs on Cloudflare Workers at the edge, authenticates users through Azure AD, and stores data in Azure PostgreSQL. Mixing cloud providers like this isn’t the path of least resistance - each service assumes you’re using their ecosystem - but it let me pick the best tool for each job.

Why Mix Providers?

Cloudflare Workers give you edge compute: your code runs in data centres around the world, close to wherever your users are. For an API, this means lower latency than a single-region deployment. The serverless model also means no servers to manage, automatic scaling, and pay-per-request pricing.

But the organisation I was building for was already on Azure. Their identity provider was Azure AD (now Entra ID), and they wanted data to stay in Azure’s Australian regions for compliance. So the architecture became: edge compute from Cloudflare, identity from Azure AD, data from Azure PostgreSQL.

This combination wasn’t documented anywhere. Cloudflare’s examples assume you’re using their own database products. Azure’s examples assume you’re running on Azure compute. Making them work together required understanding each service well enough to bridge the gaps.

The Workers Environment

Cloudflare Workers aren’t a full Node.js runtime. They run on V8 isolates - the same JavaScript engine Chrome uses - but without the Node.js standard library. No fs module, no net module, no child_process. If your code depends on Node built-ins, it won’t work.

This constrained environment has benefits: startup time is measured in milliseconds, and isolates are cheap enough that Cloudflare can spin one up for each request. But it means you need Workers-compatible libraries or polyfills.

For the API framework, I used Hono - a lightweight router that works in Workers, Deno, Bun, and Node. It’s similar to Express in feel but designed for the edge runtime constraints. Routing, middleware, and request/response handling all work as expected.
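
To make that concrete, here is a minimal sketch of the shape of a Hono app on Workers. The routes and the binding names (DATABASE_URL, JWKS_CACHE, the Azure IDs) are illustrative assumptions, not the project’s real ones:

import { Hono } from 'hono';

// Illustrative env bindings; the real project's environment shape differs.
type Bindings = {
  DATABASE_URL: string;    // Azure PostgreSQL connection string (a Wrangler secret)
  JWKS_CACHE: KVNamespace; // KV namespace used later to cache Azure AD signing keys
  AZURE_TENANT_ID: string;
  AZURE_CLIENT_ID: string;
};

const app = new Hono<{ Bindings: Bindings }>();

// Simple health check
app.get('/health', (c) => c.json({ ok: true }));

// Route with a path parameter; the handler returns JSON, Express-style
app.get('/users/:id', async (c) => {
  const id = c.req.param('id');
  return c.json({ id });
});

// Workers expect a default export with a fetch handler, which Hono provides
export default app;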

TypeScript compilation required Wrangler, Cloudflare’s CLI tool. It bundles your code with esbuild, handles the Workers-specific configuration, and deploys to Cloudflare’s network. The development experience was good - wrangler dev gives you a local server that simulates the Workers environment.

One thing to watch: execution time is limited. Workers have CPU time limits (10-50ms depending on plan) and wall-clock time limits (30 seconds). If your code does heavy computation or makes slow external requests, you can hit these limits. For a typical API that validates input, queries a database, and returns JSON, it’s plenty.

Azure AD Integration

Azure AD handles user authentication and authorisation. Users log in through Microsoft’s OAuth flow, and the API receives a JWT access token with their identity and roles.

Validating tokens in Workers required care. The typical approach with JWTs is to fetch the provider’s public keys (the JWKS endpoint), cache them, and use them to verify token signatures. In Workers, you don’t have persistent state between requests, so I needed to handle the key fetching on each request - with caching in Cloudflare’s KV store to avoid hitting Azure’s JWKS endpoint constantly.

The validation flow (a code sketch follows the list):

  1. Extract the Bearer token from the Authorization header
  2. Decode the JWT header to get the kid (key ID)
  3. Check KV cache for that key; if missing, fetch from Azure’s JWKS endpoint and cache it
  4. Verify the signature using the public key
  5. Check standard claims: issuer matches Azure AD, audience matches my app, token isn’t expired
  6. Extract custom claims for role-based access
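
A minimal sketch of that flow, assuming the jose library (the post doesn’t prescribe a JWT library) and the illustrative Bindings type from the Hono sketch above:

import { decodeProtectedHeader, importJWK, jwtVerify, type JWK, type JWTPayload } from 'jose';

async function getSigningKey(kid: string, env: Bindings) {
  // Step 3: check KV first so we don't hit Azure's JWKS endpoint on every request
  const cached = await env.JWKS_CACHE.get<JWK>(kid, 'json');
  if (cached) return importJWK(cached, 'RS256');

  // Cache miss: fetch the tenant's JWKS and find the key matching the kid
  const jwksUrl = `https://login.microsoftonline.com/${env.AZURE_TENANT_ID}/discovery/v2.0/keys`;
  const res = await fetch(jwksUrl);
  const { keys } = (await res.json()) as { keys: JWK[] };
  const jwk = keys.find((k) => k.kid === kid);
  if (!jwk) throw new Error('Unknown signing key');

  await env.JWKS_CACHE.put(kid, JSON.stringify(jwk), { expirationTtl: 3600 });
  return importJWK(jwk, 'RS256');
}

export async function validateToken(authHeader: string, env: Bindings): Promise<JWTPayload> {
  // Steps 1-2: extract the token and read the kid from its header
  const token = authHeader.replace(/^Bearer\s+/i, '');
  const { kid } = decodeProtectedHeader(token);
  if (!kid) throw new Error('Token has no kid');

  const key = await getSigningKey(kid, env);

  // Steps 4-5: verify the signature and the standard claims in one call
  const { payload } = await jwtVerify(token, key, {
    issuer: `https://login.microsoftonline.com/${env.AZURE_TENANT_ID}/v2.0`,
    audience: env.AZURE_CLIENT_ID,
  });

  // Step 6: payload.roles carries the app roles assigned in Azure AD
  return payload;
}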

Azure AD supports app roles - you define roles in the app registration, assign users to roles, and the roles appear in the token. I used this for basic RBAC: some endpoints required “admin”, others just needed “user”. The middleware checked the required role before the route handler executed.
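
The middleware itself can be a small Hono factory. This sketch builds on the validateToken helper above; the role names and routes are mine, not the project’s:

import { createMiddleware } from 'hono/factory';

// Rejects the request unless the Azure AD token carries the required app role.
const requireRole = (role: string) =>
  createMiddleware<{ Bindings: Bindings }>(async (c, next) => {
    const authHeader = c.req.header('Authorization');
    if (!authHeader) return c.json({ error: 'Missing token' }, 401);

    try {
      const payload = await validateToken(authHeader, c.env);
      const roles = (payload.roles as string[] | undefined) ?? [];
      if (!roles.includes(role)) return c.json({ error: 'Forbidden' }, 403);
    } catch {
      return c.json({ error: 'Invalid or expired token' }, 401);
    }
    await next();
  });

// "admin" guards destructive endpoints; "user" is enough for reads
app.delete('/projects/:id', requireRole('admin'), (c) => c.body(null, 204));
app.get('/projects/:id', requireRole('user'), (c) => c.json({ id: c.req.param('id') }));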

For more granular permissions, I extended Azure AD’s built-in roles with custom logic. The token tells me who the user is and their high-level role; my database tells me what specific resources they can access.

Connecting to PostgreSQL

This was the hard part. PostgreSQL uses a TCP-based protocol. Traditional serverless environments (AWS Lambda, Azure Functions) can connect to databases because they have outbound TCP capability. Workers historically couldn’t - they were HTTP-only.

That changed with Cloudflare’s TCP Sockets API. Workers can now open TCP connections, which means PostgreSQL connections are possible.

I used @neondatabase/serverless - a Postgres client designed for edge environments. It uses the TCP Sockets API under the hood and handles the Postgres wire protocol. From my code’s perspective, it looks like any other Postgres client:

import { neon } from '@neondatabase/serverless';

const sql = neon(env.DATABASE_URL);
const result = await sql`SELECT * FROM users WHERE id = ${userId}`;

But there are constraints (a sketch of the resulting per-request pattern follows this list):

  • No connection pooling across requests. Each Worker invocation is isolated, so you can’t maintain a persistent connection pool like you would in a long-running server. Every request opens a new connection. This adds latency and database connection overhead.
  • Connection establishment takes time. The TCP handshake plus TLS negotiation plus Postgres authentication adds hundreds of milliseconds to the first query. Subsequent queries on the same connection are faster, but if your request only makes one query, that overhead dominates.
  • Connection limits matter more. If you have a burst of traffic, each request opens its own connection. Azure PostgreSQL has connection limits (varies by tier). I had to size the database tier to handle peak concurrent connections.
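
In practice that means opening and closing a connection inside each handler. Roughly, using the package’s node-postgres-style Client (the post’s snippet uses the neon() helper; this variant, and any extra driver configuration Azure needs, are assumptions on my part):

import { Client } from '@neondatabase/serverless';

// Per-request pattern: open a connection, run the query, close it before returning.
app.get('/users/:id', requireRole('user'), async (c) => {
  const client = new Client(c.env.DATABASE_URL);
  await client.connect(); // TCP + TLS + Postgres auth happens here, on every request

  try {
    const { rows } = await client.query(
      'SELECT id, name, email FROM users WHERE id = $1',
      [c.req.param('id')],
    );
    if (rows.length === 0) return c.json({ error: 'Not found' }, 404);
    return c.json(rows[0]);
  } finally {
    // No pool to return to: the connection ends with the request
    await client.end();
  }
});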

One mitigation: Cloudflare offers Hyperdrive, a managed connection pooler that sits between Workers and your database. It maintains persistent connections from Cloudflare’s side, so Workers get the benefit of connection reuse without managing it themselves. I evaluated this but ended up not using it for this project - the added complexity wasn’t worth it for the traffic levels involved.

Azure PostgreSQL Configuration

Azure PostgreSQL needs to accept connections from Cloudflare’s IP ranges. By default, Azure’s firewall blocks external connections. I had to:

  1. Enable public access to the database (private endpoints weren’t an option with Workers)
  2. Add firewall rules for Cloudflare’s IP ranges (they publish these)
  3. Require TLS for all connections
  4. Use certificate verification on the client side

This is less secure than a private VNet deployment where the database has no public exposure. It’s a trade-off of the edge architecture - your code runs on Cloudflare’s network, which isn’t your VNet. Defence in depth through TLS, strong credentials, and minimal firewall rules mitigates the risk.

I also enabled Azure’s threat detection and auditing to monitor for suspicious connection patterns.

Putting It Together

A typical request flow:

  1. Request arrives at Cloudflare’s edge, routed to a Worker
  2. Worker extracts and validates the Azure AD token
  3. Worker checks roles/permissions for the requested endpoint
  4. Worker opens a TCP connection to Azure PostgreSQL
  5. Worker executes the query, gets results
  6. Worker formats the response and returns JSON
  7. Connection closes when the Worker completes

Latency was acceptable but not amazing. The edge routing is fast, but the database round-trip to Azure’s Australian region adds 50-100ms. For read-heavy workloads, caching in Cloudflare’s KV or Cache API would help. For this project, the latency was fine.
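
I didn’t build that caching layer for this project, but as a sketch of what a read-through cache with the Workers Cache API could look like (loadReportFromPostgres is a hypothetical stand-in for the database round-trip):

// Hypothetical read-through cache for a GET endpoint; not part of the actual project.
app.get('/reports/:id', requireRole('user'), async (c) => {
  const cache = caches.default;
  const cacheKey = new Request(c.req.url, { method: 'GET' });

  // Serve from Cloudflare's edge cache when possible
  const hit = await cache.match(cacheKey);
  if (hit) return hit;

  const data = await loadReportFromPostgres(c); // stand-in for the 50-100ms database round-trip
  const response = c.json(data);
  response.headers.set('Cache-Control', 'max-age=60'); // short TTL keeps reads reasonably fresh

  // Store a copy without delaying the response
  c.executionCtx.waitUntil(cache.put(cacheKey, response.clone()));
  return response;
});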

What I Learned

Mixing ecosystems is possible but not easy. Each cloud provider optimises for their own services. Going cross-provider means reading a lot of documentation, finding edge cases that aren’t covered, and testing carefully. It works, but budget extra time for integration issues.

Edge compute has real constraints. The Workers environment is different from Node.js. Libraries that assume Node won’t work. Database connections are expensive per-request. Execution time is limited. These constraints are fine for many use cases but mean edge compute isn’t a drop-in replacement for traditional servers.

Connection management matters at scale. The naive “new connection per request” pattern works for low traffic but becomes a bottleneck as load increases. Understanding your database’s connection limits and planning accordingly is important.

This project also reminded me of my early serverless work, when I was building my first Stripe integration and every step was unfamiliar. The uncertainty feels similar when you’re combining services in ways the documentation doesn’t cover. You have to piece together understanding from multiple sources and figure out what actually works through experimentation.
