Introducing Complexity Estimator: See Exactly How Your Algorithm Scales
Complexity Estimator is a free browser tool that turns Big-O notation into real numbers. Pick a complexity class, enter your input size, and instantly see operation counts at 10x and 100x growth.
There's a gap between knowing Big-O notation and actually feeling what it means.
Most engineers know that O(n²) is worse than O(n log n). Fewer can tell you how much worse at a specific input size. The difference between "I know the theory" and "I can reason about this tradeoff" is usually a few concrete numbers.
Complexity Estimator bridges that gap. It's a free, browser-based tool that takes your complexity class and input size and shows you the actual operation counts — at your current scale and at 10x and 100x growth.
What It Does
Open complexity-estimator.jagodana.com, pick a Big-O class from the standard set (O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ), O(n!)), and enter your input size.
You immediately see:
- Current operations — the operation count at your input n
- At 10x input — what happens when your dataset grows by a factor of 10
- At 100x input — what happens at 100 times your current volume
All computed instantly, entirely in your browser. No server, no account, no network request.
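The math behind those three numbers is simple. Here's a minimal sketch of the kind of client-side calculation a tool like this performs (an assumption for illustration, not the tool's published source — the class names and `estimate` function are mine):

```javascript
// Map each Big-O class to a function of n. These are the idealized
// operation counts the tool's results correspond to.
const classes = {
  "O(1)":       n => 1,
  "O(log n)":   n => Math.log2(n),
  "O(n)":       n => n,
  "O(n log n)": n => n * Math.log2(n),
  "O(n^2)":     n => n ** 2,
  "O(n^3)":     n => n ** 3,
  "O(2^n)":     n => 2 ** n,
  "O(n!)":      n => { let f = 1; for (let i = 2; i <= n; i++) f *= i; return f; },
};

// Evaluate one class at the current input size, 10x, and 100x.
function estimate(className, n) {
  const ops = classes[className];
  return { current: ops(n), at10x: ops(10 * n), at100x: ops(100 * n) };
}
```

For example, `estimate("O(n log n)", 10000)` gives roughly 133,000 current operations, matching the figures below. (Beyond `Number.MAX_SAFE_INTEGER`, as the factorial and exponential classes quickly are, the results are approximations.)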
Why Numbers Beat Rules of Thumb
"O(n²) doesn't scale" is true but vague. Here's what it looks like with actual numbers:
Take n=10,000 (a modest dataset — 10k users, records, items).
- O(n log n) at n=10,000: ~133,000 operations. At 100x (n=1,000,000): ~20 million.
- O(n²) at n=10,000: 100,000,000 operations. At 100x (n=1,000,000): 1 trillion.
That's the gap between "will scale with some optimization" and "will bring your server to its knees." Both algorithms feel instant at n=100. The difference at n=1,000,000 is roughly 50,000x.
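You can reproduce that gap with a few lines of plain arithmetic, nothing specific to the tool:

```javascript
// The n log n vs. n^2 gap at one million items, computed directly.
const n = 1_000_000;
const nLogN = n * Math.log2(n); // roughly 20 million operations
const nSquared = n * n;         // exactly 1 trillion operations
const gap = nSquared / nLogN;   // roughly 50,000x
```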
Complexity Estimator shows you this before you build it.
When to Use It
Before picking an algorithm. You have two approaches. One is O(n log n), one is O(n²). Your current dataset is 50,000 records. Plug both in and see what 100x looks like before you commit.
During architecture discussions. "This endpoint scans the full table" lands differently when you can immediately show what that means at projected user volume.
When a feature starts slowing down. You suspect a quadratic relationship between input size and response time. Plug your current n into O(n²) and compare the output to your performance metrics. If they match, you have your answer.
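A quick way to test that suspicion is a doubling check: if doubling the input roughly quadruples the time, growth is likely quadratic. The timing numbers below are hypothetical, made up purely for illustration:

```javascript
// Hypothetical measurements: response time at doubling input sizes.
const samples = [
  { n: 1000, ms: 40 },
  { n: 2000, ms: 162 },
  { n: 4000, ms: 650 },
];

// For O(n^2), doubling n should roughly 4x the time (2^2 = 4).
const growth = samples.slice(1).map((s, i) => s.ms / samples[i].ms);
// Growth ratios near 4 at each doubling point to quadratic behavior.
```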
Preparing for technical interviews. Big-O analysis is central to algorithmic interviews. Having a feel for actual numbers — not just ordering — makes you sharper at the whiteboard.
Teaching algorithms. Showing students why you care about complexity class is much more effective when O(n!) at n=20 is 2.4 quintillion operations and O(n log n) at n=20 is 86. The numbers make the point.
Built for Speed, Not Complexity
The tool is intentionally minimal. One input, one output, instant results. There's no dashboard, no configuration, no account creation. You open it, enter your values, get your numbers, and close it.
Everything runs client-side in JavaScript. The network tab stays empty. There's nothing to install.
Try It
complexity-estimator.jagodana.com
Enter your n. See the scale. Stop guessing.