---
title: Cross-Request LRU Caching
impact: HIGH
impactDescription: caches across requests
tags: server, cache, lru, cross-request
---

## Cross-Request LRU Caching

`React.cache()` only memoizes within a single request. For data shared across sequential requests (the user clicks button A, then button B), use an LRU cache.
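For contrast, a minimal sketch of the per-request version (`db` here is an assumed shared Prisma client; the import path is illustrative):

```typescript
import { cache } from 'react'
import { db } from '@/lib/db' // assumed shared Prisma client

// Deduplicates calls only within the current request's render pass;
// every new request starts with an empty memo.
export const getUserOnce = cache(async (id: string) =>
  db.user.findUnique({ where: { id } })
)
```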

**Implementation:**

```typescript
import { LRUCache } from 'lru-cache'
import { db } from '@/lib/db' // assumed shared Prisma client; path is illustrative

const cache = new LRUCache<string, any>({
  max: 1000, // cap entries to bound memory use
  ttl: 5 * 60 * 1000 // evict entries after 5 minutes
})

export async function getUser(id: string) {
  const cached = cache.get(id)
  if (cached) return cached

  const user = await db.user.findUnique({ where: { id } })
  if (user) cache.set(id, user) // skip caching misses: a cached null would never be returned above anyway
  return user
}

// Request 1: DB query, result cached
// Request 2: cache hit, no DB query
```

Use when sequential user actions hit multiple endpoints needing the same data within seconds.
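For instance, a minimal sketch of the first endpoint in such a sequence (assuming a Next.js App Router route handler; the file path and the `getUser` import path are illustrative):

```typescript
// app/api/profile/route.ts — hit when the user clicks button A
import { getUser } from '@/lib/get-user' // the cached helper above

export async function GET(request: Request) {
  const id = new URL(request.url).searchParams.get('id') ?? ''
  const user = await getUser(id) // first call: DB query, result cached
  return Response.json(user)
}
```

A second endpoint (say, `/api/settings`) calling `getUser(id)` seconds later is served from the cache instead of the database.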

**With Vercel's [Fluid Compute](https://vercel.com/docs/fluid-compute):** LRU caching is especially effective because multiple concurrent requests can share the same function instance, so the in-memory cache persists across requests without external storage like Redis.

**In traditional serverless:** Each invocation runs in its own isolated instance, so an in-memory cache is rarely reused; consider Redis for cross-process caching.
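A minimal sketch of that variant, assuming the `ioredis` client (the key prefix and 5-minute TTL mirror the LRU config above):

```typescript
import Redis from 'ioredis'
import { db } from '@/lib/db' // assumed shared Prisma client

const redis = new Redis(process.env.REDIS_URL!)

export async function getUser(id: string) {
  const cached = await redis.get(`user:${id}`)
  if (cached) return JSON.parse(cached)

  const user = await db.user.findUnique({ where: { id } })
  if (user) {
    // 'EX' sets the TTL in seconds, matching the 5-minute LRU ttl above
    await redis.set(`user:${user.id}`, JSON.stringify(user), 'EX', 300)
  }
  return user
}
```

The trade-off is a network round trip per lookup, but the cache survives cold starts and is shared across all instances.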

Reference: [https://github.com/isaacs/node-lru-cache](https://github.com/isaacs/node-lru-cache)