Sessions and Cookies with Go and Nuxt 3

Why I moved from JWTs to session-based auth, and what I learned building a social OAuth system.

I’d been reaching for JWTs on every project that needed authentication. It’s an easy default - tokens are self-contained, stateless, and well-understood. But I realised I didn’t really understand the alternative. What are the actual trade-offs with session-based auth? When would you choose one over the other?

To find out, I built a social OAuth system from scratch. Go backend, Nuxt 3 frontend, session-based auth with cookies. Here’s what I learned.

Why Sessions Instead of JWTs?

The JWT pitch is compelling: the server doesn’t need to store anything. The token contains the user’s identity and claims, signed so it can’t be tampered with. Verify the signature, trust the payload. Scales horizontally because there’s no shared session state.

But there are downsides. You can’t really revoke a JWT - once it’s issued, it’s valid until it expires. You can maintain a blacklist, but now you’re storing state anyway. The token payload is base64-encoded, not encrypted, so anything in there is readable client-side. And JWTs can get large if you embed too many claims, adding overhead to every request.

Sessions flip the model. The client gets an opaque identifier (just a random string), and the server stores the actual session data. To revoke access, you delete the session. The client never sees the user’s roles or permissions. The trade-off is you need somewhere to store sessions - usually Redis or a database.

Neither approach is universally better. JWTs make sense when you need stateless auth across many services and can tolerate delayed revocation. Sessions make sense when you need immediate revocation and want to keep claims server-side. I wanted to actually build both to internalise the trade-offs instead of just knowing them abstractly.

The OAuth Flow

Social OAuth adds another layer. Instead of managing passwords yourself, you delegate authentication to Google, GitHub, or whoever. The flow goes:

  1. User clicks “Sign in with Google”
  2. You redirect them to Google with your client ID and a callback URL
  3. User authenticates with Google
  4. Google redirects back to your callback URL with an authorization code
  5. Your server exchanges that code for tokens (server-to-server, not through the browser)
  6. You use the access token to fetch user info from Google’s API
  7. You create or update the user in your database
  8. You create a session and send back a session cookie

The tricky part is steps 2 through 4. You need to include a state parameter when redirecting to Google, and verify that it matches when Google redirects back. This prevents CSRF attacks where an attacker tricks you into linking your account to their Google account.

I implemented Google and GitHub providers. Each has slightly different endpoints and response formats, but the flow is the same. Go’s golang.org/x/oauth2 package handles most of the plumbing - you configure the client ID, secret, and scopes, and it generates the redirect URLs and handles the token exchange.

Session Management in Go

The session itself is just a struct: user ID, roles, CSRF token, created timestamp, expiry. I serialized it to JSON and stored it in Redis with the session ID as the key. Redis handles expiry natively - set a TTL when you write the session, and it disappears automatically.

type Session struct {
    UserID    string    `json:"userId"`
    Roles     []string  `json:"roles"`
    CSRFToken string    `json:"csrfToken"`
    CreatedAt time.Time `json:"createdAt"`
    ExpiresAt time.Time `json:"expiresAt"`
}

The session ID needs to be unpredictable - I used crypto/rand to generate 32 bytes of randomness, then base64-encoded it. Predictable session IDs are a classic vulnerability; if an attacker can guess valid session IDs, they can hijack sessions.
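That generation step is small enough to show in full — a minimal sketch of the crypto/rand approach described above (the function name is my own):

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// newSessionID returns 32 bytes of cryptographic randomness, base64-encoded.
// That's 256 bits of entropy — far too large a space to guess or brute-force.
func newSessionID() (string, error) {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(b), nil
}

func main() {
	id, err := newSessionID()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(id)) // 43 characters: 32 raw bytes, unpadded base64
}
```

RawURLEncoding keeps the ID cookie- and URL-safe with no padding characters to escape.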

On each request, middleware reads the session cookie, looks up the session in Redis, and attaches the user to the request context. If the session doesn’t exist or is expired, the user is unauthenticated. This lookup happens on every request, which is why Redis (or something equally fast) matters.

Getting cookies right involves more settings than you’d expect:

  • Name: Something like session_id. Doesn’t really matter, but keep it consistent.
  • Value: The session ID itself.
  • Path: Set to / so it’s sent with every request to your domain.
  • Domain: Usually omit this to use the current host. Set it explicitly if you need cookies shared across subdomains.
  • MaxAge: How long until the browser deletes the cookie. I used 24 hours.
  • HttpOnly: Set this. It prevents JavaScript from reading the cookie, which blocks a whole class of XSS attacks that try to steal session tokens.
  • Secure: Set this in production. It means the cookie only gets sent over HTTPS. In local development you might need to disable it unless you’re running local TLS.
  • SameSite: Set to Lax or Strict. This controls whether the cookie is sent with cross-site requests. Strict is safest but can break some legitimate flows (like clicking a link to your site from an email). Lax is a good default.

One gotcha: if your frontend and backend are on different origins (different ports count as different origins), you need to handle CORS carefully. The cookie won’t be sent with cross-origin requests unless you set credentials: 'include' on the fetch and configure the backend to allow credentials from that origin.

CSRF Protection

Even with SameSite cookies, you should implement CSRF tokens for state-changing requests. The idea: each session has a random token stored server-side. When rendering forms or making requests that change state (POST, PUT, DELETE), include this token. The server verifies the token matches the session.

An attacker on a different site can’t read your CSRF token (same-origin policy prevents it), so they can’t forge valid requests. They can trigger a request to your server, but without the CSRF token, it’ll be rejected.

I stored the CSRF token in the session and sent it to the frontend as a header on authenticated responses. The frontend then includes it in an X-CSRF-Token header on subsequent requests. The middleware checks that the header matches the session’s token.

Role-Based Access Control

Beyond just “is the user logged in,” I added role-based permissions. Users have roles (admin, user, etc.), and routes can require specific roles. This is pretty standard, but implementing it myself clarified how the pieces fit together.

Roles are stored in the session, loaded from the database on login. Middleware checks if the session’s roles include the required role for the route. This means role changes don’t take effect until the user’s session is refreshed or they log in again - a trade-off of caching roles in the session.

For more dynamic permissions, you’d check against the database on each request instead of caching in the session. More accurate, but more database load.

The Frontend: Nuxt 3

Nuxt 3 handled the frontend. The auth state lives in a Pinia store - user info, loading state, and methods to log in/out and fetch the current user.

On app initialization, I call a /api/me endpoint to check if there’s a valid session. If so, populate the store with user info. If not, the user is logged out. The cookie is sent automatically with the request (assuming credentials are configured correctly).

Login is just a redirect to the backend’s OAuth initiation endpoint. The backend handles the Google redirect, creates the session, sets the cookie, then redirects back to the frontend. By the time the frontend loads, the cookie is already set.

Logout makes a request to the backend to invalidate the session (delete it from Redis), and the backend clears the cookie in the response. The frontend then clears its local state.

Local Development with Docker

Running PostgreSQL and Redis locally used to mean installing them natively and hoping the versions matched production. Docker Compose makes this trivial:

services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"

docker compose up and you’ve got both services running. Same versions every time, isolated from the host, easy to reset by deleting the volumes. When I eventually deploy this, the production config points to managed Redis and PostgreSQL instead, but the application code doesn’t change.

What I Took Away

The main value was understanding when each approach makes sense. JWTs aren’t wrong, but they’re not always right either. If I need to revoke access immediately - say, when a user’s account is compromised or they leave an organization - sessions make that trivial. If I’m building a distributed system where services need to verify identity without hitting a central session store, JWTs work better.

Building the OAuth flow from scratch also demystified it. The protocol looks complex in diagrams, but it’s really just a series of redirects and server-to-server token exchanges. Understanding each step makes debugging OAuth issues much easier.

And honestly, Go was a pleasure for this kind of backend work. The standard library handles HTTP well, the type system catches mistakes, and the explicit error handling (annoying as it can be) made me think carefully about failure cases in the auth flow.
