Stale-while-revalidate, ISR, or runtime cache for the product catalog?
Catalog pages drive 71% of organic traffic. The candidate cache strategies range from 'always fresh and slow' to 'always fast and slightly stale'.
- Ticket: WEB-431
- Decider: Tech lead · storefront
- Team: DTC commerce, 14 engineers, ~12k SKUs, traffic skewed to 200 hero pages
The blocker
Why this stalled long enough to need a brief.
- TTFB on hot product pages spiked to 850ms during a sale; SEO traffic dropped two days later.
- The cache-invalidation runbook runs six pages in Confluence, and only two engineers can follow it.
- Marketing keeps asking 'why does the price take 10 minutes to update?' — the answer is buried in three different layers.
Options on the table
Each one was a real proposal, not a strawman.
- (a) Cache Components + cacheLife + targeted updateTag on price/inventory writes (picked)
Single mental model, framework-native, and 'invalidate this exact thing' is one function call from the write path.
- (b) Edge runtime cache with manual SWR via Vercel Runtime Cache API
More control, but every team needs to learn a new API. Solves a problem we don't actually have outside of two pages.
- (c) Keep ISR with revalidate=60, accept the staleness window
Simplest, but the marketing pain is real and 60-second staleness on price during a sale is a customer-facing bug.
The memo
Why we picked Cache Components + cacheLife + targeted updateTag on price/inventory writes.
We pick (a). The Cache Components + tag-based invalidation model lets the write path say `updateTag('product:' + sku)` once, and every render that depends on that SKU re-renders. No cron, no manual purge, no Confluence.
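The invalidation model is easiest to see stripped of the framework. A minimal in-memory sketch of tag-based eviction (illustrative only; in the real app Next.js's `updateTag` does this for us, and the keys, tags, and values here are made up):

```typescript
// Minimal model of tag-based cache invalidation: each cached render
// carries the tags it depends on; updating a tag evicts exactly those
// entries and nothing else.
type Entry = { value: string; tags: Set<string> };

class TagCache {
  private entries = new Map<string, Entry>();

  set(key: string, value: string, tags: string[]): void {
    this.entries.set(key, { value, tags: new Set(tags) });
  }

  get(key: string): string | undefined {
    return this.entries.get(key)?.value;
  }

  // The write path calls this once per affected tag; every dependent
  // render is dropped, so the next request re-renders with fresh data.
  updateTag(tag: string): void {
    for (const [key, entry] of this.entries) {
      if (entry.tags.has(tag)) this.entries.delete(key);
    }
  }
}

// A price change on one SKU evicts that product page but leaves the
// collection page (tagged only at collection level) untouched.
const cache = new TagCache();
cache.set('/products/sku-123', '<product page>', ['product:sku-123', 'collection:shoes']);
cache.set('/collections/shoes', '<collection grid>', ['collection:shoes']);
cache.updateTag('product:sku-123');
```

This is the whole mental model: the write path names what changed, and the cache works out what to drop.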
Concrete cacheLife buckets: hero pages (`stale: 5m, revalidate: 30m`), long-tail pages (`stale: 1h, revalidate: 6h`), price/inventory (`stale: 0, revalidate: on-write`). These are explicit and grep-able from a single config.
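Those buckets map onto named cacheLife profiles. A config sketch (the profile names are ours; the exact config location and key shape depend on the Next.js version, so treat this as an assumption to verify against the release we ship on):

```typescript
// next.config.ts — named cacheLife profiles in one grep-able place.
// Values are seconds; `stale`/`revalidate` follow the memo's buckets.
import type { NextConfig } from 'next';

const config: NextConfig = {
  experimental: {
    cacheLife: {
      hero:     { stale: 300,  revalidate: 1800 },  // 5m stale, 30m revalidate
      longTail: { stale: 3600, revalidate: 21600 }, // 1h stale, 6h revalidate
      // price/inventory has no timer profile: those components are tagged
      // per SKU and invalidated by updateTag from the write path instead.
    },
  },
};

export default config;
```

Components opt in with `cacheLife('hero')` or `cacheLife('longTail')`, so the bucket a page lives in is visible at the call site, not inferred from behavior.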
Risk: cache key explosion if we tag too granularly. Mitigation: cap at SKU + collection + storefront-wide; never tag per user, per session, or per query string.
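The cardinality cap can be enforced mechanically rather than by review. A hypothetical guard (the helper name and prefix list are assumptions, not an existing API):

```typescript
// Tagging policy from the memo: SKU, collection, and storefront-wide
// tags only. Per-user, per-session, and per-query tags are rejected at
// the call site, so cardinality stays bounded by catalog size.
const ALLOWED_TAG_PREFIXES = ['product:', 'collection:', 'storefront'];

function assertAllowedTag(tag: string): string {
  const ok = ALLOWED_TAG_PREFIXES.some((p) => tag.startsWith(p));
  if (!ok) {
    throw new Error(`cache tag "${tag}" violates the cardinality policy`);
  }
  return tag;
}
```

Wrapping every `cacheTag`/`updateTag` call in this guard turns "never tag per user" from a convention into a thrown error.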
What actually happened
Followed up roughly 30 days later.
Shipped behind a flag 2026-04-08, fully on by 2026-04-15. Hot-page TTFB went from 850ms p75 to 95ms p75.
Marketing's 'price update lag' complaints stopped. The actual lag is now whatever Stripe + the warehouse take to confirm the write — single-digit seconds.
Cache key cardinality is well-bounded; we never crossed 14k tags despite the 12k SKUs. The collection-level tag absorbs most of the long tail.
The other doors
The arguments we didn't take, preserved.
- (b) Edge runtime cache with manual SWR via Vercel Runtime Cache API
More control, but every team needs to learn a new API. Solves a problem we don't actually have outside of two pages.
- (c) Keep ISR with revalidate=60, accept the staleness window
Simplest, but the marketing pain is real and 60-second staleness on price during a sale is a customer-facing bug.