You can get OAuth 2.0 “working” in a day.
You can spend the next five years fixing what that first implementation broke.
If you have ever chased a mysterious token leak, tried to reason about refresh token rotation, or wondered whether your SPA should still use the implicit flow, you already know this: designing secure authorization flows is the hard part of OAuth, not the syntax of the requests.
In plain language, OAuth 2.0 is a protocol that lets one application call another application’s API with the user’s permission, without sharing the user’s password. An authorization flow is the step by step dance between browser, app, authorization server, and API that turns “user clicked Allow” into “we now have a token with the right scope and lifetime.”
This article is about designing that dance so it holds up under real attackers, real scale, and real product constraints.
To write it, I leaned heavily on the people who spend their lives modeling attacks on OAuth deployments. Aaron Parecki, OAuth specialist and editor of several related drafts, consistently recommends treating the authorization code flow with PKCE as the default for all modern apps, including SPAs, and avoiding the implicit flow entirely. Vittorio Bertocci, identity architect and host of “Identity, Unlocked”, describes OAuth 2.1 as the codification of years of hard lessons, where PKCE is mandatory and legacy flows like implicit and password are removed rather than merely discouraged. Daniel Fett, security researcher and coauthor of the OAuth Security Best Current Practice, has shown how attacks like code injection and mix up are mitigated when you pair PKCE with strict redirect URI validation. Put those perspectives together and you get a clear direction: if you design your flows to follow the latest OAuth 2.0 Security Best Current Practice (now RFC 9700) and the upcoming OAuth 2.1, you are much safer than any “tutorial from 2015” implementation.
Let’s turn that into a concrete, step by step playbook.
Why OAuth 2.0 Authorization Flows Are Hard To Get Right
OAuth looks simple when you see the diagrams. One arrow to the authorization server, one arrow back with a code, then a token. In reality you are balancing at least four moving pieces:
- A browser or native app you do not fully control.
- An authorization server that speaks spec language, not your product requirements.
- Resource servers (APIs) with their own lifetimes, scopes, and constraints.
- Attackers with surprising creativity.
The original threat model for OAuth 2.0, documented in RFC 6819, lists dozens of attack classes, from token leakage via referrer headers and browser history, to code substitution at the redirect, to open redirectors that turn into token exfiltration. Many of these are not obvious if you only look at the happy path.
On top of that, remnants of the original spec still float around in blog posts. You will see the implicit flow recommended for SPAs, or the resource owner password credentials flow used as a “temp hack.” Current guidance is clear: those flows are insecure by modern standards, and both are removed in OAuth 2.1.
So the hard part is not “how do I send an authorization request,” it is how do I choose flows, parameters, and token lifetimes that match my platform and threat model.
What Secure OAuth 2.0 Looks Like Today
If you only remember one thing, make it this: “OAuth 2.0 done securely” today mostly means “OAuth 2.1 plus RFC 9700 practices, even if your software still says 2.0.”
That usually implies:
- Use Authorization Code + PKCE (S256) for user facing apps, including SPAs and native apps.
- Use Client Credentials for machine to machine calls, never for end user login.
- Never use Implicit or Password flows in new designs.
- Require exact redirect URI matching, no wildcards or loose matching.
- Treat PKCE as mandatory for authorization code, regardless of client type.
- Keep access tokens short lived, use refresh tokens with rotation and reuse detection, and scope tokens tightly.
We will unpack all of that. First, you design the flow itself.
Step 1: Choose The Right Flow For Your Application
Before you write a single line of code, label your client:
- Is it a confidential client (can keep a secret, for example a backend service)?
- Or a public client (cannot keep a secret, for example an SPA or mobile app)?
Then pick the flow that current best practice recommends for that combination.
Here is a compact map you can use during design reviews.
| App type | Client type | Recommended flow | Notes |
|---|---|---|---|
| Server rendered web | Confidential | Authorization Code + PKCE | Backend exchanges code, cookies for session |
| SPA in browser | Public | Authorization Code + PKCE | No implicit; use redirect to AS, not hidden iframe |
| Native mobile / desktop | Public | Authorization Code + PKCE | Use system browser plus custom URI scheme |
| Machine to machine API | Confidential | Client Credentials | No end user, service identity only |
| TV / console devices | Public | Device Authorization Grant | User completes flow on secondary device |
This table reflects the direction of OAuth 2.0 for Native Apps, Browser Based Apps, and the Security BCP, not just personal taste.
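If you want this map in executable form, for example to assert against in design-review tooling, it can be encoded as a small lookup. The names below are illustrative, not any provider’s API:

```python
# Hypothetical lookup mirroring the flow-selection table above.
# Keys are (app type, client type); values are the recommended flow.
FLOW_MAP = {
    ("server_web", "confidential"): "authorization_code_pkce",
    ("spa", "public"): "authorization_code_pkce",
    ("native", "public"): "authorization_code_pkce",
    ("machine_to_machine", "confidential"): "client_credentials",
    ("device", "public"): "device_authorization_grant",
}

def recommended_flow(app_type: str, client_type: str) -> str:
    """Return the recommended OAuth flow, or raise for unmapped combinations."""
    try:
        return FLOW_MAP[(app_type, client_type)]
    except KeyError:
        raise ValueError(f"no recommended flow for {app_type}/{client_type}") from None
```

The point of failing loudly on unmapped combinations is that a new client type should trigger a design discussion, not silently inherit a flow.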
A quick worked example:
Suppose you have a JavaScript SPA that calls your API. Old tutorials might recommend the implicit flow. Current guidance says: treat the SPA as a public client, use Authorization Code with PKCE, do a full page redirect to the authorization server, and complete the code to token exchange in the SPA using PKCE.
That one change removes an entire class of token leakage issues through URL fragments, browser history, and referrer headers.
Step 2: Lock Down The Authorization Request
Once you know which flow you need, the first place attackers will poke is the authorization request itself.
Use PKCE correctly
PKCE looks simple, but the details matter:
- Generate a high entropy code_verifier per transaction.
- Derive a code_challenge with S256, never plain.
- Send code_challenge and code_challenge_method in the authorization request.
- Send the original code_verifier only to the token endpoint when you exchange the code.
RFC 9700 and the Security BCP are explicit that PKCE is required for authorization code and that S256 is the only recommended method, because methods that expose the verifier weaken the protection against interception and replay.
If an attacker manages to steal an authorization code but not the code_verifier, they still cannot redeem it. That is the entire point.
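The verifier and challenge derivation takes only a few lines. This is a sketch of the RFC 7636 S256 method using the standard library; in practice your OAuth client library should do this for you:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a per-transaction code_verifier and its S256 code_challenge."""
    # 32 random bytes -> a 43-character base64url verifier,
    # within RFC 7636's required 43-128 character range.
    code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # S256: code_challenge = BASE64URL(SHA256(ASCII(code_verifier))), no padding.
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return code_verifier, code_challenge
```

The challenge goes in the authorization request; the verifier stays on the client until the token exchange.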
Enforce strict redirect URI validation
Open or loosely matched redirect URIs are one of the classic ways to steal codes and tokens.
Modern guidance requires:
- Exact string matching between the registered redirect URI and the one in the request, no partial or prefix matches.
- No user controlled query parameters in the registered URI, unless you very carefully validate them.
- No open redirectors in your redirect chain.
When Daniel Fett and others described the updated attacker model behind RFC 9700, they highlighted scenarios where an attacker controls part of the redirect target and captures the authorization code before the real client ever sees it. Strict comparison plus PKCE is how you break those chains.
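A strict matcher is almost trivially simple, and that is the point: anything cleverer is attack surface. A sketch, with illustrative URIs:

```python
def redirect_uri_allowed(registered: list[str], requested: str) -> bool:
    """Exact string comparison only: no prefix, wildcard, or case folding."""
    return requested in registered

# Illustrative registration; real values come from client registration.
REGISTERED = ["https://app.example.com/callback"]
```

With this rule, `https://app.example.com/callback/../evil`, `https://app.example.com.attacker.net/callback`, and even a benign-looking extra query parameter are all rejected outright.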
Bind the response to the original request
Two parameters are used here:
- state: a CSRF token that the client generates and validates. Prevents cross site request forgery and mix up between parallel flows.
- nonce: for OpenID Connect login, binds the ID token to the original request and reduces replay risk.

Generate both with high entropy, store them in a secure context (for example an httpOnly cookie, or encrypted storage in native apps), and validate them on return. If they do not match, discard the response, even if the code looks valid.
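Generation and validation can be sketched with the standard library; the function names here are illustrative:

```python
import hmac
import secrets

def new_state() -> str:
    """High-entropy, single-use value, generated per authorization request."""
    return secrets.token_urlsafe(32)

def state_matches(stored: str, returned: str) -> bool:
    """Constant-time comparison; on mismatch, discard the whole response."""
    return hmac.compare_digest(stored, returned)
```

The same pattern applies to nonce, except the returned value is read out of the validated ID token rather than the redirect parameters.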
Step 3: Protect Tokens At Rest And In Transit
The whole reason you went through OAuth is to get tokens. If the tokens are not handled carefully, everything else is theater.
Set lifetimes that match real world usage
A useful mental model:
- Access tokens are like hotel key cards. Short lived, cheap to revoke by waiting them out, and easy to replace.
- Refresh tokens are like the booking record in the hotel system. Longer lived, very sensitive, and tightly audited.
A worked example with numbers:
- You have a mobile app used daily by 10,000 users.
- You set the access token lifetime to 10 minutes and the refresh token lifetime to 24 hours.
- A user with the app open triggers a refresh roughly every 10 minutes, six per hour. If a typical session lasts an hour, that is about 6 refresh requests per user per day, or 60,000 per day.
- Even in the worst case, every user keeping the app open around the clock, that is 144 refreshes per user, about 1.44 million refresh requests per day.
For a modern authorization server that is not a scary number, and it buys you a simple revocation story: after 24 hours, all sessions require reauthentication anyway.
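The capacity math generalizes, and it is worth re-running whenever lifetimes change. A throwaway helper, assuming every active minute needs a valid access token:

```python
def daily_refresh_load(users: int, access_ttl_minutes: int, active_hours_per_day: float) -> int:
    """Estimate refresh-token requests per day for a given access token lifetime."""
    refreshes_per_user = (active_hours_per_day * 60) / access_ttl_minutes
    return int(users * refreshes_per_user)
```

Plugging in the example above: 10,000 users with 10-minute tokens and one active hour gives 60,000 requests a day; the always-on worst case gives 1.44 million.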
OAuth BCP guidance leans toward short lived access tokens plus refresh tokens rather than very long lived access tokens, especially for browser based apps and mobile.
Use refresh token rotation and reuse detection
If you issue refresh tokens, you should:
- Rotate the refresh token on each use, issuing a new one while invalidating the old.
- Detect when an old refresh token is used after rotation, which signals possible theft.
- Revoke the session or step up authentication in that case.
This pattern is recommended in modern OAuth 2.0 security guidance and is increasingly part of managed identity providers.
Store tokens in the right place
For SPAs in the browser:
- Prefer in memory storage for access tokens, not localStorage, to reduce exposure to XSS.
- Use httpOnly, sameSite cookies to store session identifiers when the backend holds tokens, instead of storing tokens in the browser at all.
For native apps:
- Use the platform secure storage (Keychain, Keystore, etc.).
- Do not write access tokens into logs or crash reports.
For server side apps:
- Keep tokens in encrypted storage or memory scoped to the session.
- Never expose them back to the browser unless absolutely required.
And everywhere:
- Use TLS for every hop, including between internal services. It is part of the basic assumptions in the OAuth threat model that the transport channel is protected.
Step 4: Design Scopes, Consent, And UX Together
A technically perfect OAuth flow that scares users into clicking “Allow all” is not really secure.
Use scopes for least privilege
Treat scopes as your contract between clients and APIs:
- Avoid single “god scopes” that unlock everything.
- Use small, task oriented scopes such as payments:read or profile:email.
- Version or deprecate scopes over time rather than reusing them for new permissions.
RFC 9700 explicitly ties scope design to risk management, because smaller scopes reduce the blast radius when tokens are stolen.
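On the resource server side, least privilege reduces to a subset check: every scope the endpoint requires must appear in the token’s granted scopes. A sketch, with illustrative scope names:

```python
def scopes_sufficient(granted: set[str], required: set[str]) -> bool:
    """Reject any request whose token is missing a required scope."""
    return required <= granted  # set containment: required is a subset of granted
```

Keeping this check centralized, one function per service rather than ad hoc string comparisons per endpoint, makes scope deprecation and renaming far less painful later.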
Make consent screens meaningful
You control the text. Use it.
- Group scopes into human understandable descriptions.
- Indicate which access is needed for core functionality versus optional features.
- For sensitive actions (payments, data export), consider step up consent with a fresh authorization flow and higher assurance.
You can also align scopes with business tiers. For example, a “Basic” plan might never receive the admin:* scopes at all, which simplifies security reviews.
Remember OpenID Connect for login
If you use OAuth for “login with X”, you probably want OpenID Connect on top:
- You get an ID token that carries the authenticated user’s identifier.
- You can validate it locally using the provider’s JWKs without a separate API call.
Trying to treat access tokens as identity proof without OIDC semantics tends to create subtle security bugs, especially around token audience and lifetime.
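To make the audience and lifetime point concrete, here is a minimal sketch of the claim checks an OIDC client performs. It deliberately assumes the token’s signature was already verified against the provider’s JWKs by a proper JOSE library; claim checks without signature verification are worthless:

```python
import time

def id_token_claims_valid(claims: dict, issuer: str, client_id: str, expected_nonce: str) -> bool:
    """Minimal ID token claim checks: issuer, audience, expiry, nonce."""
    aud = claims.get("aud")
    audiences = [aud] if isinstance(aud, str) else (aud or [])  # aud may be str or list
    return (
        claims.get("iss") == issuer
        and client_id in audiences
        and claims.get("exp", 0) > time.time()
        and claims.get("nonce") == expected_nonce
    )
```

Note that the audience check is exactly what a bare access token does not give you: nothing in an access token says it was minted for your client to use as login proof.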
Step 5: Operationalize Security In Production
The last piece is what you do after the first version ships.
Log and monitor the right signals
At minimum, track:
- Unusual spikes in failed token exchanges or invalid_grant errors.
- Repeated use of old refresh tokens after rotation.
- Consent screens with very high or very low acceptance rates.
Several real incidents that informed RFC 9700 and the Security BCP were caught because operators noticed anomalies in authorization and token logs first.
Automate key and secret hygiene
For JWT access tokens or ID tokens:
- Rotate signing keys regularly and advertise them through a JWKS endpoint.
- Use kid headers so resource servers can pick the correct key.
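Key selection by kid is a small piece of logic worth getting right: an unknown or missing kid should mean refusing to verify, never falling back to “try every key”. A standard-library sketch (a real resource server would hand the selected key to its JOSE library):

```python
import base64
import json

def select_verification_key(token: str, jwks_keys: dict[str, object]) -> object:
    """Pick the key named by the JWT's kid header from a kid -> key map."""
    header_b64 = token.split(".")[0]
    header_b64 += "=" * (-len(header_b64) % 4)  # restore stripped base64url padding
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    try:
        return jwks_keys[header["kid"]]
    except KeyError:
        raise ValueError("unknown or missing kid; refuse to verify") from None
```

During rotation, the JWKS endpoint serves both the old and new keys for an overlap window, so resource servers keyed by kid never see a verification gap.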
For client secrets:
- Treat them as real credentials, with rotation processes, not as configuration constants.
Modern providers and libraries help, but you still need a playbook that your team follows.
Build on mature libraries and providers
Unless you are an identity vendor, rolling your own authorization server is rarely a good idea.
Use:
- A managed identity provider, or
- A battle tested open source server that explicitly follows RFC 9700 and the OAuth 2.1 drafts.
Then keep an eye on the OAuth 2.1 spec progress and vendor security advisories. The protocol landscape does evolve, and you want your flows to evolve with it rather than lag five years behind.
FAQ
Do I have to migrate everything to OAuth 2.1 to be secure?
You do not need the label; you need the practices. If your flows already use Authorization Code with PKCE, avoid implicit and password grants, enforce exact redirect matching, and follow RFC 9700, you are effectively operating in an OAuth 2.1 style.
Is it ever acceptable to use the implicit flow now?
Current best practice is simple: no, do not use it in new designs. It is removed entirely from OAuth 2.1 drafts and is strongly discouraged in the Security BCP because of known token leakage paths in browsers.
How do I pick between opaque and JWT access tokens?
Opaque tokens with introspection keep logic in the authorization server and reduce risk if token contents are misunderstood. JWTs can reduce round trips and are common for cross service architectures, but they require stricter key management and audience checking. RFC 9700 discusses both and does not mandate one or the other; it cares more about lifetime and audience than format.
Where should I start if my current system is legacy and messy?
Start with the highest value, highest risk flows. For example, migrate SPAs from implicit to authorization code with PKCE, shorten access token lifetimes, and add refresh token rotation. Then move on to cleaning up scopes and redirect URIs. The migration guides for OAuth 2.1 provide practical checklists that you can adapt.
Honest Takeaway
Designing secure OAuth 2.0 authorization flows is not about memorizing every RFC. It is about making a handful of structural choices correctly: pick modern flows, use PKCE everywhere, lock down redirect URIs, handle tokens like real credentials, and build on components that track the evolving specs for you.
If you get those foundations right, most of the scary attack diagrams in the threat model become either impossible or very hard to pull off in your environment. You will still spend time tuning lifetimes, scopes, and UX, but you will be doing it from a solid baseline instead of trying to retrofit security into a fragile flow.
Treat this as a living design, not a one time project. Every new client, API, or product feature is another chance to revisit your OAuth flows and keep them aligned with where the standards, and the attackers, are going.