Nyquist's Rule: Why I Always Plan for Double Capacity

The most valuable lesson I learned from signal processing: To capture reality, you must sample at twice the rate. This rule applies everywhere—from server capacity to disaster recovery.

January 9, 2026
5 min read

There's a theorem that all electronics and telecommunications engineers know by heart: the Shannon-Nyquist sampling theorem. In its simplest form, it says: To capture a signal accurately, you must sample at more than twice its highest frequency.

This principle, developed in the 1920s and mathematically proven in the 1940s, explains why CD audio is sampled at 44.1 kHz (just over twice the 20 kHz upper limit of human hearing) and why a 100 Hz signal needs more than 200 samples per second to be reconstructed without distortion.
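
You can watch the theorem bite in a few lines of numpy. A minimal sketch (the frequencies are illustrative, chosen to match the 100 Hz example above): sampled at 120 Hz, a 100 Hz sine produces exactly the same samples as a phase-inverted 20 Hz sine, so the original is unrecoverable.

```python
import numpy as np

f_signal = 100   # Hz, the signal we want to capture
t_end = 0.1      # observe 100 ms

def sample(f_s):
    """Sample a 100 Hz sine at rate f_s (samples per second)."""
    t = np.arange(0, t_end, 1 / f_s)
    return t, np.sin(2 * np.pi * f_signal * t)

# At 1 kHz (five times the Nyquist rate) the waveform is captured faithfully.
t_good, x_good = sample(1000)

# At 120 Hz (below the 200 Hz Nyquist rate) the signal aliases: the samples
# are identical to those of a phase-inverted 20 Hz sine (|100 - 120| = 20 Hz).
t_bad, x_bad = sample(120)
alias = -np.sin(2 * np.pi * 20 * t_bad)
print(np.allclose(x_bad, alias))  # True: the 100 Hz signal is gone
```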

Over my 20+ years in IT infrastructure, I've found that this principle applies far beyond signal processing. In fact, I've turned it into an operational rule: Always plan for double.

The Real-World Translation

When I design a system or plan capacity, I always ask: "What's the expected load?" Then I multiply by two.

This isn't pessimism—it's engineering reality, as the sketch after this list shows:

  • A server expected to handle 1,000 concurrent users? Provision for 2,000.
  • A UPS rated for 16 hours? Assume you'll need 32.
  • Network bandwidth calculated at 500 Mbps? Design for 1 Gbps.
  • Backup storage for 5 TB? Allocate 10 TB.
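
Here's that sketch: the rule reduced to a few lines of Python. The resource names and baseline figures are the illustrative numbers from the list above, not measurements from any real deployment.

```python
# The "plan for double" rule as code. Baselines are the illustrative
# numbers from the list above, not data from a real deployment.
NYQUIST_FACTOR = 2

expected_load = {
    "concurrent_users": 1_000,
    "ups_runtime_hours": 16,
    "bandwidth_mbps": 500,
    "backup_storage_tb": 5,
}

for resource, baseline in expected_load.items():
    print(f"{resource}: expect {baseline}, provision {baseline * NYQUIST_FACTOR}")
```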

Just as Nyquist proved that undersampling loses information irreversibly, undersizing infrastructure leads to failures that can't be recovered from in real time.

Examples from the Electronics World

The 2x rule shows up everywhere in electrical engineering:

Capacitor voltage ratings: If your circuit runs at 16V, you use a capacitor rated for at least 32V. Why? Because voltage spikes happen, and you want headroom.

Power supply sizing: A device drawing 100W gets a 200W power supply. The power supply runs cooler, lasts longer, and handles transients gracefully.

Cable ampacity: Electrical codes require conductors rated well above expected current draw—because real-world conditions are never ideal.

These aren't arbitrary safety margins. They're recognition that theoretical calculations describe ideal conditions, while infrastructure must survive reality.
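
As a toy illustration, here's how that margin thinking looks in code: pick the smallest standard capacitor rating that still gives 2x headroom. The helper is hypothetical, and the ratings list is a common subset rather than an exhaustive one.

```python
# Common standard capacitor voltage ratings (a subset, for illustration).
STANDARD_RATINGS_V = [6.3, 10, 16, 25, 35, 50, 63, 100, 160, 250, 400, 450]

def pick_capacitor_rating(operating_voltage, factor=2.0):
    """Smallest standard rating >= factor * operating voltage."""
    target = operating_voltage * factor
    for rating in STANDARD_RATINGS_V:
        if rating >= target:
            return rating
    raise ValueError(f"no standard rating covers {target} V")

# A 16 V rail needs >= 32 V of rating; the nearest standard value is 35 V.
print(pick_capacitor_rating(16))  # 35
```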

Single Points of Failure

The 2x rule leads naturally to eliminating Single Points of Failure (SPOF). Any component that exists only once is, by definition, below the Nyquist threshold for reliability.

Critical questions I ask for every system:

  • What happens if this disk fails? (RAID or replication)
  • What happens if this server fails? (Clustering or failover)
  • What happens if this datacenter fails? (Geographic redundancy)
  • What happens if this person is unavailable? (Documentation and cross-training)

Each "what if" should have an answer that doesn't involve "we're down until we fix it."

The Ankara Advantage

When I advise clients on disaster recovery, I often recommend Ankara as a secondary site for Istanbul-based operations. Why?

Seismic separation: Istanbul sits on major fault lines with significant earthquake risk. Ankara, 450 km away, sits on a different geological structure. A catastrophic Istanbul earthquake won't simultaneously affect Ankara.

Network diversity: Different fiber routes, different upstream providers, genuine path diversity.

Operational timezone: Same timezone, same business hours, easier coordination than an offshore DR site.

This is the Nyquist principle applied to geography: if Istanbul is your "primary sample," Ankara provides the "second sample" needed for reconstruction if the first is lost.

Capacity Planning in Practice

Here's how I apply the 2x rule to infrastructure planning:

Compute: If performance testing shows you need 8 CPU cores at peak, deploy 16. The extra capacity handles unexpected spikes and gives you room for growth.

Memory: Application profiling shows 32 GB RAM usage? Provision 64 GB. Memory is cheap; downtime isn't.

Storage I/O: Calculated IOPS requirement of 10,000? Design for 20,000. Storage bottlenecks are notoriously difficult to diagnose under pressure.

Network: Expected throughput of 2 Gbps? Deploy 10 Gbps links. Network upgrades are disruptive; overprovisioning is not.
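
A practical companion to 2x provisioning is an alert that fires when the doubled headroom starts being consumed, i.e., when sustained utilization crosses half of what you deployed. A minimal sketch; the 50% threshold follows directly from the 2x rule, and the numbers are illustrative:

```python
def headroom_alert(used, provisioned, threshold=0.5):
    """With 2x provisioning, utilization above 50% means the safety
    margin is being eaten: time to scale *before* saturation."""
    return used / provisioned > threshold

# 14 of 16 cores busy at peak: the doubled capacity is nearly exhausted.
print(headroom_alert(used=14, provisioned=16))  # True -> plan the upgrade now
```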

The Cost Objection

"But that's expensive!" Yes. It costs more upfront.

But consider the alternative costs:

  • Emergency hardware procurement at premium prices
  • Overtime for staff firefighting performance issues
  • Customer churn from degraded service
  • Reputation damage from outages
  • Opportunity cost of engineers debugging instead of building

In my experience, the "savings" from razor-thin provisioning are almost always consumed—with interest—by incident response costs.

When NOT to Apply 2x

The rule isn't absolute. Some situations call for different multipliers:

Development environments: 1x or even 0.5x is fine. You're testing functionality, not capacity.

Experimental projects: Right-size initially, scale when validated. Don't over-invest before product-market fit.

Predictable batch workloads: If you have perfect knowledge of demand (rare), you can optimize more tightly.

But for production systems serving customers? For infrastructure that must be reliable? 2x is the floor, not the ceiling.

The Mathematical Foundation

Nyquist's theorem isn't just a rule of thumb—it's mathematically provable. Sampling below the Nyquist rate causes aliasing: the original signal becomes unrecoverable, and you can't tell it apart from completely different signals that produce the same samples.
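
For reference, the standard statement in symbols:

```latex
% A band-limited signal x(t) with highest frequency f_max is exactly
% recoverable from its samples x[nT] if and only if
f_s = \frac{1}{T} > 2 f_{\max}
% and the Whittaker-Shannon formula then rebuilds it:
x(t) = \sum_{n=-\infty}^{\infty} x[nT]\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right)
% Below that rate, a tone at f is indistinguishable from its aliases at |f - k f_s|.
```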

In infrastructure terms: once you're overwhelmed, you can't recover gracefully. You can't "unsaturate" a network link or "uncorrupt" data lost to a disk failure. Prevention is the only strategy that works.

The engineers of the 1940s figured this out for signals. The principle applies just as rigorously to systems.

Conclusion

Every time I see a system designed to "just barely" handle expected load, I think of Nyquist. That design will fail—not might fail, will fail—because expected load is a theoretical average, and reality includes peaks, anomalies, and surprises.

Plan for double. Sleep better at night.

Shannon and Nyquist figured this out 80 years ago. The math hasn't changed. Neither should our approach to infrastructure planning.
