When many of us started building web applications in the 1990s and early 2000s, the "dynamic website" was the holy grail. Every request ran through a Perl or, later, PHP script, dozens of database queries fired as soon as we discovered MySQL, and the page was rebuilt on the fly. The promise was seductive: millisecond-accurate content, fully dynamic pages, and the feeling that your website was "alive", serving fresh content on every reload - we saw dozens of page updates within a single hour.
Two decades later, most of us have learned: this approach is a very bad idea. Not just because of nostalgia for simpler times, but because it fundamentally misuses resources and scales poorly. It also ignores the basic design principles behind HTTP and the many useful features that have been carefully crafted into the protocol.

The Sheer Load
Dynamic generation on every request means every read triggers full database activity. This is wasteful, since in nearly all cases, content changes far slower than it is read. Consider a small business or an association website: their event calendar might change once per week, yet the page could be read hundreds of times per day. Why hammer your database for every read?
Typical read/write ratios are heavily skewed toward reads - easily 1000:1, often far more. For such scenarios it makes much more sense to update the site when writing and serve static, pre-generated pages for reads. An Apache httpd or nginx server can handle tens of thousands of static requests per second on cheap commodity hardware. By contrast, even 10 dynamic requests per second can start to saturate a weak CMS, consuming CPU and database I/O (this is actually what happened to the server this page is hosted on - not because of this static page, but because of another website running on the same hardware, which partly motivated this article).

Consistency and Scalability
Once your project grows, central dynamic generation becomes not just inefficient but impossible to scale. Modern systems are built on the principles of:
- Statelessness: Each request should be independent and can be handled by any application entry point. This allows horizontal scaling without any need to provide consistency between nodes. It is one of the key properties of scalable systems - and it's what modern clouds are extremely good and flexible at.
- No central session store: We have all done it - start a PHP session on every new request, set a session cookie, and store data on the server, using the cookie to refer to that local state (most of the time because it was cool, or because we could display some unique bit of information per user that was more of a gimmick than a necessity). We were taught that this is the right way, since clients can manipulate cookies and servers cannot rely on client state (which is true). But state like this kills scalability. Sessions should live on the client, for example protected against manipulation by modern cryptographic means (a minimal signed-cookie sketch follows below), or be distributed in other ways. This is also how HTTP was designed.
- Eventual consistency: At scale there is no global, immediate consistency - you simply cannot have it, and this is not negotiable. Updates propagate, but for any system beyond a handful of nodes, strict synchrony is fundamentally out of reach. Even the ordering of events cannot be guaranteed across distributed systems - physics itself prevents this. Recall that relativity teaches us that simultaneity is observer-dependent; trying to impose a strict ordering across more than two observers is a fool's errand.
Thus, aiming for central, synchronous consistency is misguided. Designing for eventual consistency and static delivery of precomputed content is the path to resilience. In practice, the illusion of realtime content can be offered through event brokers or pub/sub systems, where clients subscribe to updates as they propagate. These updates arrive without strong guarantees of ordering or consistency, but they are usually good enough in practice - allowing static updates and eventual propagation to coexist with a responsive user experience.
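To make the client-side session idea from the list above concrete, here is a minimal sketch - assuming a Node.js runtime, a server-side secret taken from the environment, and an invented payload shape - of an HMAC-signed cookie value that any stateless node can verify without a shared session store:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = process.env.SESSION_SECRET ?? "change-me"; // assumed to come from the environment

// Serialize the session payload and append an HMAC tag so the client can hold
// the state while the server can still detect tampering.
function signSession(payload: Record<string, unknown>): string {
  const body = Buffer.from(JSON.stringify(payload)).toString("base64url");
  const tag = createHmac("sha256", SECRET).update(body).digest("base64url");
  return `${body}.${tag}`;
}

// Verify the tag and return the payload, or null if the cookie was altered.
function verifySession(cookie: string): Record<string, unknown> | null {
  const [body, tag] = cookie.split(".");
  if (!body || !tag) return null;
  const expected = createHmac("sha256", SECRET).update(body).digest("base64url");
  const a = Buffer.from(tag);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString("utf8"));
}

// Usage: any stateless node can check the cookie without a central session store.
const cookie = signSession({ user: "alice", roles: ["editor"] });
console.log(verifySession(cookie)); // { user: 'alice', roles: [ 'editor' ] }
```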

Caching: Exploit Time Scales
Caching is the practical compromise between freshness and scalability. For most content - and most content is quasi-static - the exact moment of update doesn't matter:
- News pages: readers don't care whether an article is 5-30 minutes old; usually even somewhat longer timescales do not really matter.
- Associations or small businesses: event schedules can easily tolerate a delay of hours or even days.
- Personal web pages: visitors often come by only once a month; a delay of days in reaching them simply does not matter.
ETags and Cache-Control headers let you exploit this, in combination with proxy servers, CDNs and clients that use conditional request headers like If-Modified-Since or If-None-Match (a minimal sketch follows below). A 30-minute TTL can cut load by many orders of magnitude, and for many static cases you can cache for days or weeks. Static generation takes this even further: the write path updates a file, and the delivery path serves that file directly - two totally independent pipelines, a complete separation of concerns. During development we are often tempted to set very low timeouts just to always see the latest changes, but this is not how production should work. One can always actively reload content when needed, flushing caches. Experienced developers know that, with a disciplined workflow, you rarely need a totally consistent view in which scripts, static page files and layout files all expire at the same moment; that is simply unrealistic. Good developers handle such asynchrony gracefully, while believing in seamless, synchronized reloads of the entire system is a mark of amateur design.
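To make the header mechanics tangible, here is a minimal sketch - assuming a Node.js runtime and a pre-generated file at an illustrative path - of serving a static page with an ETag and a Cache-Control TTL, answering repeat requests with 304 Not Modified:

```typescript
import { createServer } from "node:http";
import { readFileSync } from "node:fs";
import { createHash } from "node:crypto";

const FILE = "./public/index.html"; // illustrative path to a pre-generated page

createServer((req, res) => {
  const body = readFileSync(FILE);
  // Content-derived validator; an mtime-based variant would work just as well.
  const etag = `"${createHash("sha256").update(body).digest("hex").slice(0, 16)}"`;

  res.setHeader("ETag", etag);
  res.setHeader("Cache-Control", "public, max-age=1800"); // 30-minute TTL for clients and proxies

  // A client or proxy that already holds this version gets a body-less 304.
  // (Simplified comparison: a full implementation would parse the header list.)
  if (req.headers["if-none-match"] === etag) {
    res.writeHead(304).end();
    return;
  }
  res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" }).end(body);
}).listen(8080);
```

A reverse proxy or CDN in front of such a server then absorbs most of the remaining traffic on its own.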
There are, however, rare situations where even professionals require low reload times - and even then they have to account for the non-synchrony of different resources' states. In those cases this is carefully planned upfront: short timeouts are set, time is allowed for them to propagate, and only then are changes rolled out. After a brief period the timeouts are raised again. This is often employed when deploying new mechanisms such as DNS signing algorithm rollovers. But it is a last-resort strategy and should usually be avoided.

Security and Attack Surface
Dynamic CMS systems often expose outdated software with a broad attack surface. Each request is funneled through layers of middleware - the PHP interpreter or a similar runtime, custom scripts, and the database backend - each introducing potential vulnerabilities and each exposed to every request from any untrusted source. By contrast, serving static files is far more robust: the web server simply reads from disk or distributed storage, a process with very few exploitable points. Static file serving also allows a clear separation between the delivery system and the editing/authoring environment. Even if a frontend node is compromised, the underlying content remains intact - you patch the issue, redeploy, and in the meantime your authors can continue working without disruption.
Modern CMS Without Dynamic Per-Request Overhead
This doesn't mean you have to give up the convenience of a CMS. Modern static site generators (Hugo, Jekyll, etc.) let you author in Markdown or even WYSIWYG editors, then compile to static HTML. Even some "dynamic" CMSs now offer "static export" features, where content is generated dynamically at write-time but served statically at read-time. Still, one should carefully consider whether in-browser editing is really necessary. More often than not it turns into a hassle rather than a feature - reducing performance and providing an editing experience that is rarely as pleasant as the alternatives, even if it seems like a cool idea at first glance. Another downside is the requirement of a working network connection at all times. Collaborative solutions add even more fragility: they depend on stable bandwidth, low latency, and minimal packet loss - and even then they frequently suffer from inconsistent updates and lost edits, as most of us have experienced with "modern" collaborative tools. These problems simply don't exist with proper offline-first editing workflows that only push changes once you are done.
The result: best of both worlds. A smooth editorial experience for writers, but fast, secure, cache-friendly delivery for readers.
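As a sketch of the write-time/read-time split described above - with invented directory names, a trivial layout, and pre-rendered HTML fragments standing in for what Hugo or Jekyll would produce from Markdown - the whole "build" can be as small as this:

```typescript
import { mkdirSync, readdirSync, readFileSync, writeFileSync } from "node:fs";
import { join, basename } from "node:path";

const CONTENT_DIR = "content"; // illustrative: one pre-rendered HTML fragment per page
const OUTPUT_DIR = "public";   // what the web server or CDN actually delivers

// Wrap a content fragment in a shared layout; this runs at write time only.
function renderPage(title: string, fragment: string): string {
  return `<!doctype html>
<html><head><meta charset="utf-8"><title>${title}</title></head>
<body><main>${fragment}</main></body></html>`;
}

mkdirSync(OUTPUT_DIR, { recursive: true });
for (const file of readdirSync(CONTENT_DIR)) {
  if (!file.endsWith(".html")) continue;
  const fragment = readFileSync(join(CONTENT_DIR, file), "utf8");
  const page = renderPage(basename(file, ".html"), fragment);
  writeFileSync(join(OUTPUT_DIR, file), page);
}
// The read path never touches this script: it serves public/ as plain files.
```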
Hybrid in Practice: Static-First, Serverless/Edge
Static-first doesn't mean "never dynamic." It means move the dynamic parts off the hot path. When a page is mostly stable, serve it statically; when a tiny slice truly must be dynamic, push that slice into serverless functions or edge functions that run outside the page render pipeline.
Good fits for serverless/edge:
- Forms & webhooks: contact forms, newsletter signups, payment webhooks - short, authenticated handlers that don't touch the page template.
- Personalized fragments: a user menu, unread counts, "items in cart", license status - fetched as JSON and stitched in at the edge or, if optional JavaScript support is available in the client, directly in the client.
- Search & autocomplete: query an index (or a precomputed JSON/SQLite bundle), return results via a function; keep the landing pages static.
- On-demand regeneration: recalculate and re-publish a single page when data changes (for example, webhook-triggered), while all other reads remain static (see the sketch after this list).
- Rate-limited utilities: image proxying, signed URL issuance, small data transformations.
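As a minimal sketch of the on-demand regeneration case from the list above - with an invented webhook payload, a made-up shared-secret header, and a placeholder rebuild step - such a handler might look like this:

```typescript
import { createServer } from "node:http";
import { writeFileSync } from "node:fs";

const WEBHOOK_SECRET = process.env.WEBHOOK_SECRET ?? "change-me"; // assumed shared secret

// Placeholder for the real build step: in practice this would invoke the static
// site generator for exactly one affected page.
function rebuildPage(slug: string, content: string): void {
  writeFileSync(`public/${slug}.html`, `<!doctype html><main>${content}</main>`);
}

createServer((req, res) => {
  if (req.method !== "POST" || req.headers["x-webhook-secret"] !== WEBHOOK_SECRET) {
    res.writeHead(403).end();
    return;
  }
  let raw = "";
  req.on("data", (chunk) => (raw += chunk));
  req.on("end", () => {
    const { slug, content } = JSON.parse(raw); // payload shape is invented for the example
    if (typeof slug !== "string" || !/^[a-z0-9-]+$/.test(slug)) {
      res.writeHead(400).end();
      return;
    }
    rebuildPage(slug, content); // every other page stays untouched and static
    res.writeHead(202).end("regenerated");
  });
}).listen(9000);
```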
You can split the typical pattern into two independent paths - the read path and the write path:
- Read path: the CDN serves static HTML/CSS/JS. The page optionally fetches small JSON fragments from serverless endpoints (cached with short TTLs or stale-while-revalidate). No per-request HTML rendering (see the sketch after this list).
- Write path: authors (or upstream systems) change content. A build hook (continuous integration, queue, or webhook) generates or re-generates only the affected pages. The CDN cache is surgically purged using surrogate keys or path-based invalidation - or one just waits for cache timeouts and the natural information propagation.
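A minimal sketch of the read path's optional enhancement: the static HTML already contains a usable fallback, and a small script - assuming an illustrative /api/cart-count endpoint and element id - replaces it with a fresher JSON fragment only if the call succeeds:

```typescript
// The static page already ships a usable fallback, e.g.
//   <span id="cart-count">see cart</span>
// so the information stays accessible without any client-side code.
const FRAGMENT_URL = "/api/cart-count"; // illustrative serverless endpoint

async function hydrateCartCount(): Promise<void> {
  const el = document.getElementById("cart-count");
  if (!el) return;
  try {
    // Bound the dynamic footprint: one small JSON call, aborted after two seconds.
    const res = await fetch(FRAGMENT_URL, { signal: AbortSignal.timeout(2000) });
    if (!res.ok) return;                // on failure the static fallback simply stays
    const { count } = await res.json(); // payload shape is assumed for the example
    el.textContent = String(count);
  } catch {
    // Timeouts and network errors leave the static page fully functional.
  }
}

hydrateCartCount();
```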
A few major design tips so it scales:
- Keep functions stateless and idempotent. Store state in durable services; pass identity via signed cookies/JWTs rather than server sessions.
- Cache aggressively at the edge. Return Cache-Control and ETag headers, and consider stale-while-revalidate for a snappy UX under load (see the sketch after these tips).
- Bound the dynamic footprint. Prefer 1-3 tiny JSON calls over one templated dynamic page. Measure cold-start and p95 latency.
- Plan for failure modes. If a function times out, the page still works - only the small dynamic fragment is missing or shows a cached value.
- Mind costs & limits. Serverless is great for spiky traffic; watch egress, invocation counts, time limits, and concurrency caps.
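As a sketch of the edge-caching tip above - with an invented endpoint, invented data, and illustrative TTLs - a tiny fragment handler can hand most of the caching work to the CDN via its response headers:

```typescript
import { createServer } from "node:http";

// Illustrative data source; in practice this might read a precomputed JSON bundle
// or query a small index, but never render full page HTML.
function latestEvents(): Array<{ title: string; date: string }> {
  return [{ title: "Monthly meetup", date: "2030-06-01" }];
}

createServer((_req, res) => {
  res.writeHead(200, {
    "Content-Type": "application/json; charset=utf-8",
    // Shared caches may serve this for 60 s and keep serving a stale copy for
    // another 10 minutes while revalidating in the background.
    "Cache-Control": "public, s-maxage=60, stale-while-revalidate=600",
  });
  res.end(JSON.stringify(latestEvents()));
}).listen(8787);
```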
What not to do: put full page HTML rendering inside a function for every read. That merely recreates the 90s mistake on newer infrastructure. Use functions as narrow dynamic sidecars, not as a page factory. And do not rely on JavaScript in the browser; always provide a fallback (you can sacrifice some propagation speed and some usability, but the information should still be accessible without any active code execution on the client).
Even sites that are often presented as some of the best-performing high-traffic WordPress instances, such as the New York Times, are in fact largely static. Large portals like these don't use WordPress as the user-facing frontend; they usually run it in headless mode. WordPress manages the content and gives editors their editing environment, while a separate rendering backend - distinct from WordPress - generates the actual frontend pages. This happens on different timescales for different article types: interactive, near-realtime rendering fetched from the frontend for breaking news; pre-rendered static HTML, published via content delivery networks on a schedule and at publish time, for evergreen articles (which resembles a classical static website); plus incremental and on-demand regeneration for pages that are mostly static but can be re-generated when content changes.
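A minimal sketch of that headless pattern - assuming a WordPress instance exposing the standard REST API under /wp-json/wp/v2/posts, a placeholder site URL, and a trivial layout - is a build step that fetches the posts and writes one static HTML file per post:

```typescript
import { mkdirSync, writeFileSync } from "node:fs";

const WP_URL = "https://example.com"; // placeholder for the headless WordPress instance

// Shape of the fields we use from the standard WordPress REST API.
interface WpPost {
  slug: string;
  title: { rendered: string };
  content: { rendered: string };
}

async function exportPosts(): Promise<void> {
  const res = await fetch(`${WP_URL}/wp-json/wp/v2/posts?per_page=100`);
  const posts = (await res.json()) as WpPost[];

  mkdirSync("public/posts", { recursive: true });
  for (const post of posts) {
    const html = `<!doctype html><html><head><meta charset="utf-8">
<title>${post.title.rendered}</title></head>
<body><article>${post.content.rendered}</article></body></html>`;
    writeFileSync(`public/posts/${post.slug}.html`, html);
  }
}

// Run from a build hook or on a schedule; readers only ever see the static output.
exportPosts().catch((err) => { console.error(err); process.exit(1); });
```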

Conclusion
The 90s beginner's dream of millisecond-accurate dynamic websites has aged poorly. For most organizations - small businesses, associations, newspapers, etc. - dynamic generation on every request makes no sense.
Static or cache-heavy delivery is not only faster and more resource-efficient, it is more scalable, secure, and resilient. The future of the web lies not in rebuilding every page every time, but in serving static truths and letting updates propagate when they truly matter. For true realtime propagation you should rely on dedicated pub/sub systems, not the WWW itself, since the web was never designed for strict realtime guarantees. And importantly: static-first should not be mistaken for an "old school" approach. On the contrary, it aligns with the most modern web stacks - from Jamstack architectures to global CDNs - proving that efficiency, scalability and resilience are timeless design choices.