One Line of SQL, 150x Faster: The Power of Indexing

When I reduced a 30-second query to 0.2 seconds on an e-commerce site in 2004, I learned the power of database indexing firsthand. I've seen the same lesson in every project since.

January 10, 2026
5 min read

In 2004, I was working on an e-commerce site—one of Turkey's early entrants in online retail. The site was struggling. Page loads were slow, the database server CPU was perpetually maxed out, and customer complaints were mounting.

One query in particular was killing us. The product search page took 30 seconds to load. Thirty seconds. Users were abandoning their carts before they could even find products.

I added one line of SQL:

```sql
CREATE INDEX idx_products_category_name ON products(category_id, name);
```

Query time dropped to 0.2 seconds. A 150x improvement. Same hardware, same data, same query. Just one index.

That day taught me a lesson I've seen confirmed hundreds of times since: before you scale horizontally, before you add caching layers, before you rewrite the application—check your indexes.

Why Indexes Matter: O(n) vs O(log n)

Without an index, the database must examine every row to find matches. This is a full table scan—an O(n) operation where time grows linearly with data volume.

With a proper index, the database uses a B-tree structure to locate rows directly. This is O(log n)—time grows logarithmically with data volume.

The practical difference is dramatic:

| Rows | Full Scan (O(n)) | Index Scan (O(log n)) |
|---|---|---|
| 1,000 | 1,000 operations | ~10 operations |
| 100,000 | 100,000 operations | ~17 operations |
| 10,000,000 | 10,000,000 operations | ~23 operations |

As your data grows, the gap widens. That 30-second query in 2004? The table had grown to several hundred thousand products, and every search scanned them all.
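The index-scan column in the table is just the binary logarithm of the row count; the arithmetic can be checked in a couple of lines of Python:

```python
import math

# A balanced B-tree lookup touches roughly log2(n) entries,
# versus n for a full scan.
for rows in (1_000, 100_000, 10_000_000):
    print(f"{rows:>12,} rows: full scan ~{rows:,} ops, "
          f"index scan ~{round(math.log2(rows))} ops")
```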

The Composite Index Insight

A single-column index is good. A composite (multi-column) index is often better—if designed correctly.

The key insight: composite indexes work left-to-right. An index on (category_id, name) supports queries filtering on:

  • category_id alone ✓
  • category_id AND name ✓
  • name alone ✗ (index not used)

This is why my 2004 index worked: the product search filtered by category first, then searched within that category. The composite index matched the query pattern perfectly.

Design your indexes around your actual query patterns, not theoretical data structure.
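The left-prefix rule is easy to verify with SQLite's EXPLAIN QUERY PLAN. A sketch: the schema is hypothetical (a `price` column is added so the index doesn't cover the whole row), and SQLite's planner wording differs from MySQL's, but the rule itself is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, "
             "category_id INT, name TEXT, price REAL)")
conn.execute("CREATE INDEX idx_products_category_name "
             "ON products(category_id, name)")

def plan(sql):
    # Column 4 of EXPLAIN QUERY PLAN output holds the human-readable step
    return " | ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

p_cat  = plan("SELECT * FROM products WHERE category_id = 5")
p_both = plan("SELECT * FROM products WHERE category_id = 5 AND name = 'phone'")
p_name = plan("SELECT * FROM products WHERE name = 'phone'")

print(p_cat)   # SEARCH ... USING INDEX idx_products_category_name
print(p_both)  # SEARCH ... USING INDEX idx_products_category_name
print(p_name)  # SCAN products -- no left prefix, the index can't be used
```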

EXPLAIN: Your Diagnostic Tool

Every major database has an EXPLAIN command that shows how queries execute. Learning to read EXPLAIN output is perhaps the single highest-value skill for database performance.

```sql
EXPLAIN SELECT * FROM products WHERE category_id = 5 AND name LIKE 'phone%';
```

Key things to look for (field names from MySQL's EXPLAIN output; other databases expose the same information under different names):

  • type: "ALL" means full table scan (bad). "ref" or "range" means index usage (good).
  • rows: Estimated rows examined. Lower is better.
  • key: Which index is being used (if any).
  • Extra: "Using filesort" or "Using temporary" often indicate optimization opportunities.

I make it a habit to EXPLAIN every query that runs in a loop or serves user-facing pages. The few minutes invested often save hours of debugging later.
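"Using filesort" has a direct SQLite analog, "USE TEMP B-TREE FOR ORDER BY". A sketch with a hypothetical orders table, showing how an index on the ORDER BY column makes the sort step disappear from the plan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
             "customer_id INT, created_at TEXT)")

def plan(sql):
    # Column 4 of EXPLAIN QUERY PLAN output holds the human-readable step
    return " | ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders ORDER BY created_at"

plan_before = plan(query)  # must sort through a temporary B-tree
conn.execute("CREATE INDEX idx_orders_created_at ON orders(created_at)")
plan_after = plan(query)   # walks the index; rows come out pre-sorted

print(plan_before)  # SCAN orders | USE TEMP B-TREE FOR ORDER BY
print(plan_after)   # SCAN orders USING INDEX idx_orders_created_at
```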

Cross-Database Principles

The specific syntax varies, but indexing principles are universal:

MySQL:

```sql
CREATE INDEX idx_name ON table(column1, column2);
EXPLAIN SELECT ...;
```

PostgreSQL:

```sql
CREATE INDEX idx_name ON table(column1, column2);
EXPLAIN ANALYZE SELECT ...;
```

SQL Server:

```sql
CREATE INDEX idx_name ON table(column1, column2);
SET STATISTICS IO ON;
```

MongoDB:

```javascript
db.collection.createIndex({ field1: 1, field2: 1 });
db.collection.find({...}).explain("executionStats");
```

The syntax differs; the underlying B-tree mechanics are nearly identical. Master the concept once, apply it everywhere.

When Indexes Hurt

Indexes aren't free. Each index:

  • Consumes storage (sometimes substantially)
  • Slows down INSERT, UPDATE, DELETE operations
  • Requires maintenance (fragmentation over time)

The worst case: dozens of indexes on a table with heavy write volume. Every insert updates every index. I've seen tables where removing unused indexes improved write performance by 10x.

Rules of thumb:

  • Index columns that appear in WHERE, JOIN, and ORDER BY clauses
  • Avoid indexing columns with low cardinality (e.g., boolean flags)
  • Monitor index usage; remove indexes that queries never use
  • Consider covering indexes for read-heavy, critical queries
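A rough, machine-dependent way to feel the write penalty: time bulk inserts into an in-memory SQLite table with zero versus five single-column indexes (all names hypothetical). Absolute numbers will vary; the ratio is the point.

```python
import sqlite3
import time

def insert_time(n_indexes, n_rows=20_000):
    """Time n_rows inserts into a table carrying n_indexes secondary indexes."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, "
                 "a INT, b INT, c INT, d INT, e INT)")
    for col in ("a", "b", "c", "d", "e")[:n_indexes]:
        conn.execute(f"CREATE INDEX idx_{col} ON t({col})")
    rows = [(i, i, i, i, i) for i in range(n_rows)]
    start = time.perf_counter()
    with conn:  # one transaction, so we measure index maintenance, not commits
        conn.executemany("INSERT INTO t(a, b, c, d, e) VALUES (?, ?, ?, ?, ?)",
                         rows)
    return time.perf_counter() - start

bare = insert_time(0)
indexed = insert_time(5)
print(f"no indexes: {bare:.3f}s  five indexes: {indexed:.3f}s")
```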

The AI Era Doesn't Change the Fundamentals

Today, AI can help you write queries and suggest optimizations. ChatGPT can explain EXPLAIN output. GitHub Copilot can autocomplete CREATE INDEX statements.

This is wonderful—it lowers the barrier to entry.

But it doesn't change the fundamentals. O(n) is still slower than O(log n). B-trees still work the same way. Query planners still make the same decisions.

What's changed is accessibility. In 2004, understanding indexing required reading textbooks and trial-and-error. Today, you can have a conversation with an AI to build intuition faster. But you still need to understand what's happening.

The junior developer who learns "just add an index" without understanding why will eventually create indexes that make things worse. The principles matter.

Conclusion

That e-commerce site from 2004 is long gone. But the lesson remains fresh every time I see a slow query in a production system.

Before you add servers: check your indexes. Before you add caching: check your indexes. Before you rewrite in a "faster" language: check your indexes.

One line of SQL, 150x improvement. Twenty years later, the same optimization is still available, still underutilized, still the highest-ROI performance fix in most systems.
