Every algorithm can perform differently depending on the input it receives. The Best Case is the fastest possible outcome (you got lucky). The Worst Case is the slowest possible outcome (worst luck). The Average Case is the typical outcome across all possible inputs. Three mathematical notations are commonly paired with these: Big-Omega (best), Big-Theta (average), and Big-O (worst). Strictly speaking, each notation describes a bound on growth (lower, tight, and upper, respectively) rather than a case, but this informal pairing is the convention most introductions use, and we will use it here.
Think of finding your keys in a messy room. Best Case: they are in your pocket (1 second). Average Case: you check a few spots — the table, the couch, the kitchen (a few minutes). Worst Case: they are in the very last place you look (hours).
| Case | What it means | Analogy |
| --- | --- | --- |
| Best Case | Luckiest input. Minimum steps possible. | Floor |
| Average Case | Typical input. Expected steps in the real world. | Middle |
| Worst Case | Unluckiest input. Maximum steps — the guarantee. | Ceiling |
```javascript
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i; // found it!
  }
  return -1; // not found
}

// BEST CASE: target is at index 0
// → 1 comparison → Ω(1)
//
// AVERAGE CASE: target is somewhere in the middle
// → ~n/2 comparisons → Θ(n)
//
// WORST CASE: target is at the very end, or not in the array
// → n comparisons → O(n)
//
// We say: "Linear Search is O(n)" because we plan for the worst.
```
```javascript
const names = ["Zoe", "Alice", "Bob", "Charlie", "Diana"];

// Searching for "Zoe": BEST CASE — found at index 0!
linearSearch(names, "Zoe"); // 1 step

// Searching for "Charlie": AVERAGE CASE — found at index 3
linearSearch(names, "Charlie"); // 4 steps

// Searching for "Eve": WORST CASE — not found, checked all 5
linearSearch(names, "Eve"); // 5 steps

// Same algorithm, same code, but different inputs
// give wildly different performance.
```
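You can verify those step counts instead of taking them on faith by instrumenting the search. A minimal sketch: `linearSearchSteps` is a hypothetical helper (not part of any library) that mirrors `linearSearch` but also reports how many comparisons it made.

```javascript
// Hypothetical helper: same loop as linearSearch,
// but it also counts how many comparisons were made.
function linearSearchSteps(arr, target) {
  let steps = 0;
  for (let i = 0; i < arr.length; i++) {
    steps++; // one comparison per loop iteration
    if (arr[i] === target) return { index: i, steps };
  }
  return { index: -1, steps }; // not found: compared against every element
}

const names = ["Zoe", "Alice", "Bob", "Charlie", "Diana"];

console.log(linearSearchSteps(names, "Zoe"));     // { index: 0, steps: 1 }  best
console.log(linearSearchSteps(names, "Charlie")); // { index: 3, steps: 4 }  near average
console.log(linearSearchSteps(names, "Eve"));     // { index: -1, steps: 5 } worst
```

Counting steps this way is a handy sanity check whenever a complexity claim feels abstract.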
- You can't control your input. Users send unpredictable data.
- Best case is luck, not engineering. You can't build a system on "hopefully the data is nice."
- Worst case is a promise: "No matter what happens, performance won't be worse than this."
- When someone says "complexity," they mean Big-O (worst case) unless stated otherwise.
Let's use a simple, concrete example everyone can relate to.
The "Find the Card" Game
I shuffle a deck of 52 cards and place them face down. You need to find the Ace of Spades by flipping cards one by one.
Best Case (Big-Omega / Ω)
You flip the FIRST card and it is the Ace of Spades! Done in 1 flip.
This is the luckiest possible outcome. We write it as Ω(1) — constant time.
In plain English: "The algorithm will take AT LEAST this many steps, no matter what." It is the floor — things can never be faster than this for this algorithm.
Average Case (Big-Theta / Θ)
On average, you will flip about half the deck — roughly 26 cards — before finding it.
This is the realistic, everyday scenario. We write it as Θ(n/2), which simplifies to Θ(n).
In plain English: "For most inputs, the algorithm takes roughly this many steps." It is the middle ground between lucky and unlucky.
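Where does "about half" come from? If the target is equally likely to be at any of the n positions, the expected number of checks is (1 + 2 + ... + n) / n = (n + 1) / 2. A quick sketch (the helper name is illustrative):

```javascript
// Average comparisons when the target is equally likely
// to be at any of the n positions: (1 + 2 + ... + n) / n.
function averageComparisons(n) {
  let total = 0;
  for (let pos = 1; pos <= n; pos++) {
    total += pos; // it takes `pos` flips to reach position `pos`
  }
  return total / n; // equals (n + 1) / 2
}

console.log(averageComparisons(52)); // 26.5 (about half the deck)
```

For 52 cards that gives 26.5 flips, which is why "roughly 26" is the right everyday estimate.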
Worst Case (Big-O / O)
The Ace of Spades is the very LAST card. You flip all 52 cards.
This is the unluckiest possible outcome. We write it as O(n) — linear time.
In plain English: "The algorithm will NEVER take more than this many steps." It is the ceiling — the guarantee.
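All three cases can also be observed by playing the game many times. A sketch, assuming a standard Fisher-Yates shuffle; over many trials the minimum number of flips trends to 1 (best case), the maximum to 52 (worst case), and the average to about 26.5:

```javascript
// Simulate the "find the Ace of Spades" game: shuffle 52 cards,
// flip from the top, count flips until card 0 (the ace) appears.
function flipsToFindAce() {
  const deck = Array.from({ length: 52 }, (_, i) => i); // card 0 is the ace
  for (let i = deck.length - 1; i > 0; i--) { // Fisher-Yates shuffle
    const j = Math.floor(Math.random() * (i + 1));
    [deck[i], deck[j]] = [deck[j], deck[i]];
  }
  return deck.indexOf(0) + 1; // flips needed = 1-based position of the ace
}

const trials = 100000;
let min = Infinity, max = 0, sum = 0;
for (let t = 0; t < trials; t++) {
  const flips = flipsToFindAce();
  min = Math.min(min, flips);
  max = Math.max(max, flips);
  sum += flips;
}
console.log({ min, max, avg: sum / trials });
// min trends toward 1, max toward 52, avg toward ~26.5
```

The simulation makes the floor, middle, and ceiling visible in one run.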
Why do we care most about Worst Case?
Because in production systems you cannot count on friendly inputs, for all the reasons listed earlier. That is why when someone says "the complexity is O(n)," they almost always mean the worst case. It is your safety net.
Quick Summary:
In real-world systems, you cannot control what data users send to your app. A search engine might get a query that matches the very first result (best case) or one buried at the very end (worst case). Engineers always plan for the Worst Case because you cannot rely on luck when millions of users depend on your system.
"In linear search, finding the target at index 0 is which case?"
Best Case — Ω(1). You found it on the very first check, pure luck!