\( \mathrm{O} \) and \( \Omega \)
What about big \( \mathrm{O} \)?
Yeah, a lot of this seems similar to CS 42/60, but back then we would have said \( \mathrm{O} \) instead of \( \Theta \).
Now is the perfect time to talk about it!
Big \( \mathrm{O} \) is also a way to say something about asymptotic complexity, but it means something different from \( \Theta \).
Big \( \mathrm{O} \)
For a function \( g(n) \), \( \mathrm{O}(g(n)) \) is a set of functions: all the functions \( f \) that grow more slowly than \( g \), or at a similar rate to \( g \), as \( n \to \infty \).
More specifically,
$$ f(n) \in \mathrm{O}(g(n)) \mathrm{\quad{}if\ and\ only\ if\quad{}} \lim_{n \to \infty} \frac{ f(n) }{ g(n) } < \infty $$
Whereas \( f(n) \in \Theta(g(n)) \) sort of means “\( f \) is similar to \( g \)”, \( f(n) \in \mathrm{O}(g(n)) \) sort of means “\( f \) is no worse than \( g \)”.
Saying it's “worse” is placing a value judgement on a function, which is weird.
Sure, but in computer science we're usually talking about costs, and we generally agree that costs growing more quickly is worse.
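One way to build intuition for this definition is to watch the ratio \( f(n)/g(n) \) numerically as \( n \) grows. Here's a minimal Python sketch (the helper name `ratio_table` is just for illustration); if the ratio levels off at a finite value, that's evidence, though not a proof, that the limit is finite:

```python
# Rough numerical intuition for the limit definition of big O:
# watch f(n)/g(n) as n gets large.

def ratio_table(f, g, ns=(10, 100, 1_000, 10_000, 100_000)):
    """Print f(n)/g(n) for increasingly large n."""
    for n in ns:
        print(f"n = {n:>7}:  f(n)/g(n) = {f(n) / g(n):.6f}")

# f(n) = 2n + 1 versus g(n) = n: the ratio settles near 2 (finite),
# which suggests 2n + 1 is in O(n).
ratio_table(lambda n: 2 * n + 1, lambda n: n)
```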
Example 1
Let's take the limit:
$$ \lim_{n \to \infty} \frac{ 2n + 1 }{ n } \\[3ex] = \lim_{n \to \infty} \frac{ 2n }{ n } \; + \; \lim_{n \to \infty} \frac{1}{n} \\[3ex] = 2. $$
Because \( 2 < \infty \), we can say that \( 2n + 1 \in \mathrm{O}(n) \).
Wouldn't we say that \( 2n + 1 \in \Theta(n) \)?
Sure, we could say that. That's also true!
These aren't mutually exclusive statements. They can both be true.
Just like \( x \leq 5 \) and \( x = 5 \) can both be true.
Of course, like \( x = 5 \), \( 2n + 1 \in \Theta(n) \) gives us more information, so if we know that's true, it's probably better to just say that!
Example 2
Let's take the limit:
$$ \lim_{n \to \infty} \frac{ 2n + 1 }{ n^2 } \\[3ex] = \lim_{n \to \infty} \frac{ 2n }{ n^2 } \; + \; \lim_{n \to \infty} \frac{ 1 }{ n^2 } \\[3ex] = 0 $$
Since \( 0 < \infty \), we can say that \( 2n + 1 \in \mathrm{O}(n^2) \).
Hay, wait. In CS 60 we would never have said that \( 2n + 1 \in \mathrm{O}(n^2) \)!
Sure. That's because it's not a very useful thing to say.
Still, now that we understand more fully what \( \mathrm{O} \) means, we can see that it is true!
Of course, because the limit is \( 0 \), we know that \( 2n + 1 \notin \Theta(n^2) \), but it is safe to say that it is “no worse than” quadratic.
It's like saying that paying for an all-you-can-eat buffet is “no worse than” paying for every side dish individually. I mean, it is better, 'cause you can eat MORE for the same price, but it's also no worse.
Example 3
Let's take the limit:
$$ \lim_{n \to \infty} \frac{ 2n + 1 }{1} \\[3ex] = \lim_{n \to \infty} (2n + 1) \\[3ex] = \infty. $$
So, we can say that \( 2n + 1 \notin \mathrm{O}(1) \).
That makes sense, because linear time is asymptotically worse than constant time. And \( \mathrm{O} \) means “no worse than”.
Now you're getting it!
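If you'd rather not take limits like these by hand, a computer algebra system can check them. Here's a small sketch using Python's SymPy library (assuming it's installed); it confirms the conclusions of all three examples:

```python
# Checking the three example limits symbolically with SymPy.
from sympy import symbols, limit, oo

n = symbols('n', positive=True)
f = 2 * n + 1

print(limit(f / n, n, oo))     # 2: finite and nonzero, so 2n+1 is in Theta(n)
print(limit(f / n**2, n, oo))  # 0: finite, so 2n+1 is in O(n^2) but not Theta(n^2)
print(limit(f / 1, n, oo))     # oo: infinite, so 2n+1 is NOT in O(1)
```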
What is \( \mathrm{O} \) For?
I feel like I understand how to use \( \Theta \), but when do we use \( \mathrm{O} \)?
Well, we use it whenever we want to say that the complexity is no worse than something else!
For example, let's revisit linear search…
On the previous page, we concluded that linear search has
- \( \Theta(1) \) best-case complexity and
- \( \Theta(n) \) worst-case complexity.
Depending on the input, the complexity could be of completely different orders!
So we can't just assign an order of complexity to linear search overall.
But one thing we could reasonably say is that the complexity of linear search is never worse than linear.
Big \( \mathrm{O} \) is how we say that with asymptotic notation: the complexity of linear search is \( \mathrm{O}(n) \).
We use \( \Theta \) when we want to say that two cost functions are asymptotically similar.
For example, “the worst-case complexity of linear search is \( \Theta(n) \)”.
And we use \( \mathrm{O} \) when we want to say that one cost function is upper-bounded by another in an asymptotic sense.
For example, “the complexity of linear search is \( \mathrm{O}(n) \)”.
The second statement leaves room for the possibility that the complexity overall might sometimes be better than linear, because it might actually be!
Whereas we know that the complexity is definitely linear in the worst case; it can't be better or worse.
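To make the linear-search cases concrete, here's a minimal Python sketch (not necessarily the exact code from the previous page), with the best and worst cases called out:

```python
# A minimal linear search, to make the best/worst cases concrete.
def linear_search(items, target):
    """Return the index of target in items, or None if it's absent."""
    for i, item in enumerate(items):
        if item == target:
            return i  # found it: stop early
    return None

data = [7, 3, 9, 4, 8]
linear_search(data, 7)  # best case: target is first, 1 comparison  -> Theta(1)
linear_search(data, 5)  # worst case: target absent, n comparisons  -> Theta(n)
# Either way, the loop runs at most n times: that's the O(n) claim.
```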
Quick Practice: Using \( \mathrm{O} \)
There's a very widely used sorting algorithm called “Quicksort” that's a classic example many people are familiar with. We'll use it to explore when and how \(\mathrm{O}\)-notation can be useful.
Hay! I don't think I know that one. I mean… I think I've heard the name, but I don't know how it works.
Me either.
Okay, let's have a little diversion to learn about it. (Even if you already know about Quicksort, you'll still need to click the link below, skim through the page, and answer a question about it just to be sure you know the key points.)
As you've seen from the visualization and discussion,
- In the best case, Quicksort takes linearithmic (\( n\log{n} \)) time.
- In the worst case, Quicksort takes quadratic time.
Honestly, I think it's misrepresenting things to say that the complexity of Quicksort is \( \mathrm{O}( n^2 ) \). It makes it sound like it's often quadratic, which it isn't.
Except when Duck implements it with a lousy pivot selection strategy. Then it is often quadratic.
Perhaps there's a valuable lesson there, too. Asymptotic analysis is useful, but it doesn't tell the whole story!
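To see how a lousy pivot choice produces the quadratic worst case, here's a minimal Python sketch of Quicksort that always picks the first element as its pivot. (This is just an illustration under that assumption; it's not the version from the linked page.)

```python
# A minimal Quicksort with a deliberately lousy pivot choice
# (always the first element).
def quicksort(items):
    if len(items) <= 1:
        return items
    pivot = items[0]  # lousy: always the first element
    smaller = [x for x in items[1:] if x < pivot]
    larger = [x for x in items[1:] if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

# On typical (shuffled) input, the partitions tend to be balanced,
# giving about n log n work. On already-sorted input, `smaller` is
# always empty, so the work is n + (n-1) + ... + 1, which is quadratic.
print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```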
Big \( \Omega \)
So, if \( \mathrm{O} \) means “no worse than”, is there a way to say “no better than”?
Yep! That's Big \( \Omega \) (the Greek capital letter omega).
\( \Omega \) is just the mirror image of \( \mathrm{O} \).
For a function \( g(n) \), \( \Omega( g(n) ) \) is a set of functions. It is all the functions \( f \) that grow faster than or at a similar rate to \( g \) as \( n \to \infty \).
More specifically,
$$ f(n) \in \Omega\left( g(n) \right) \mathrm{\quad{}if\ and\ only\ if\quad{}} \lim_{n \to \infty} \frac{ f(n) }{ g(n) } \; > \; 0. $$
Whereas \( f(n) \in \mathrm{O}(g(n)) \) sort of means “\( f \) is no worse than \( g \)”, \( f(n) \in \Omega(g(n)) \) sort of means “\( f \) is no better than \( g \)”.
Useful facts about \( \mathrm{O} \) and \( \Omega \)
- If \( f(n) \in \mathrm{O}(g(n)) \), then \( g(n) \in \Omega(f(n)) \) and vice versa.
- If \( f(n) \in \mathrm{O}(g(n)) \), and \( f(n) \in \Omega(g(n)) \), then \( f(n) \in \Theta(g(n)) \) and vice versa.
So, if you can work with \( \mathrm{O} \), then you can work with \( \Omega \).
You just have to flip things around!
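Here's a rough numerical sketch of that flip in Python. The `classify` helper and its thresholds are just illustrative heuristics (a real argument needs the actual limit): estimate \( f(n)/g(n) \) at one large \( n \), and a ratio near zero, near infinity, or in between suggests \( \mathrm{O} \) only, \( \Omega \) only, or \( \Theta \), respectively:

```python
# Heuristic: estimate the limit of f(n)/g(n) with one large n.
def classify(f, g, n=10**9):
    r = f(n) / g(n)
    if r < 1e-6:
        return "f in O(g) but not Omega(g)"  # limit looks like 0
    if r > 1e6:
        return "f in Omega(g) but not O(g)"  # limit looks infinite
    return "f in Theta(g), i.e., both O(g) and Omega(g)"

print(classify(lambda n: 2 * n + 1, lambda n: n))      # Theta
print(classify(lambda n: 2 * n + 1, lambda n: n ** 2)) # O only
print(classify(lambda n: 2 * n + 1, lambda n: 1))      # Omega only
```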
We're usually more interested in the question, “How bad can things get?” than, “How good can things be?”
For that reason, \( \Omega \) doesn't get as much use as \( \mathrm{O} \) or \( \Theta \), but it comes in handy sometimes.