Programming
This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.


I'm confused about Big O, Big Omega, and Big Theta notation. I understand that Big O gives you an upper bound on the growth rate of a function, Big Omega gives you a lower bound, and a function that has upper and lower bounds of the same order is Big Theta. Does this mean that Big O is the worst case, Big Omega is the best case, and Big Theta is the average?

That's the way I believed it to be, but then this confused me: Insertion Sort has a worst case of Big O(n^2), an average case of Big Theta(n^2), and a best case of Big Omega(n). Those are two different functions: one is quadratic (n^2) and the other is linear (n). Wouldn't you need to find separate lower and upper bounds for each of them?

Any help would be appreciated. This has been confusing me for a while.

Big Theta is not the average runtime. It only describes the case where the upper and lower bounds of a function are of the same order of growth. As far as I know there is no asymptotic notation specifically for the average runtime of a function.

You are correct that Big O describes the upper (worst case) bound and Big Omega describes the lower (best case) bound. Insertion Sort has an upper bound of O(n^2) and a lower bound of Omega(n). Since those two bounds are of different orders, no single Big Theta bound describes Insertion Sort's running time over all inputs, even though Insertion Sort does have an average runtime.
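To make those two bounds concrete, here is a quick sketch (my own code, not from any of the posts above) that counts the key comparisons insertion sort performs on a best-case (already sorted) and a worst-case (reverse-sorted) input:

```python
def insertion_sort_comparisons(a):
    """Run insertion sort on a copy of a and return the number of key comparisons."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison of key against a[j]
            if a[j] > key:
                a[j + 1] = a[j]       # shift the larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 100
best = insertion_sort_comparisons(range(n))           # already sorted
worst = insertion_sort_comparisons(range(n, 0, -1))   # reverse sorted
print(best, worst)  # prints 99 4950
```

Sorted input costs n-1 = 99 comparisons (linear), while reverse-sorted input costs n(n-1)/2 = 4950 (quadratic), matching the Omega(n) and O(n^2) bounds.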

Thanks for your reply. My book says that Insertion Sort is Big Theta of n^2 right after it says that it has a best case running time of n-1 comparisons, but it doesn't go on to explain why it's Big Theta of n^2. Is this simply because n(n-1)/2 can be bounded from below as well?
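The squeeze the question points at can be checked directly: the worst-case count n(n-1)/2 is at most n^2/2 and, for n >= 2, at least n^2/4, so the worst case is both O(n^2) and Omega(n^2), hence Theta(n^2). A quick sanity check (my own sketch, not from the book):

```python
# Verify that n(n-1)/2 is squeezed between (1/4)n^2 and (1/2)n^2 for n >= 2,
# which is exactly what Theta(n^2) requires (witness constants c1 = 1/4, c2 = 1/2).
for n in range(2, 10000):
    worst = n * (n - 1) // 2          # worst-case comparison count
    assert n * n / 4 <= worst <= n * n / 2
print("n(n-1)/2 is Theta(n^2)")
```

Note this Theta(n^2) applies to the worst case specifically, not to the running time over all inputs.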

I believe Big Omega is not the best case scenario. It is only notation for a lower bound on the running time: f(n) = Omega(g(n)) states that f(n) dominates g(n), up to a constant factor, for all sufficiently large n. Only if we take the bound with respect to the best-case input could we call it the best case.

Be careful not to confuse best/average/worst case with asymptotic bounds. Big O describes an asymptotic upper bound, but it can do so for any of the three cases. Big Omega can likewise describe an asymptotic lower bound in all three cases. Some of these bounds make less sense than others:

In the case of insertion sort: its best-case running time is O(n) and Omega(n), and therefore Theta(n); on sorted input it can go no faster than the n-1 comparisons needed to examine every element, and it does no more than that. Its average-case running time is O(n^2): when sifting element A[e] down into the sorted prefix A[1..e-1], it is on average compared with about half of those e-1 elements, which summed over all insertions gives roughly n^2/4 comparisons, and since n^2/4 is proportional to n^2 by a constant factor, insertion sort has an average-case running time of O(n^2). A lower bound for its average-case running time can be defined in a number of more complicated ways.
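That n^2/4 average can be observed empirically. A small sketch (my own code; the counter function and trial counts are my choices, not from the thread) averaging comparison counts over random permutations:

```python
import random

def count_comparisons(a):
    """Run insertion sort on a copy of a and return the number of key comparisons."""
    a = list(a)
    comps = 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comps += 1                # one comparison of key against a[j]
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comps

random.seed(1)                        # fixed seed so the experiment is repeatable
n, trials = 100, 200
avg = sum(count_comparisons(random.sample(range(n), n))
          for _ in range(trials)) / trials
print(avg, n * n / 4)                 # the average lands close to n^2/4 = 2500
```

The measured average sits near n^2/4, a constant factor away from the n(n-1)/2 worst case, so both are O(n^2).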

Also note that something that is (for example) O(n) is O(n^2) as well, but O(n) is more useful and descriptive.

Yes, O(n) would be a tighter bound and would better describe the runtime complexity of the algorithm. It also means the algorithm is o(n^2) (little-o of n squared). Just as little-o is the strict (non-tight) counterpart of Big-O, little-omega (ω) is the strict counterpart of Big-Omega (Ω).
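The tight/strict distinction comes down to the defining limit: f is o(g) exactly when f(n)/g(n) tends to 0 as n grows. For the linear best case against a quadratic, the ratio n/n^2 = 1/n does exactly that (a minimal illustration, not from any post above):

```python
# f(n) = n is o(n^2) because the ratio f(n)/g(n) = 1/n tends to 0.
# Contrast: f(n) = n is O(n) AND Omega(n), i.e. Theta(n) -- a tight bound,
# so n is NOT o(n): the ratio n/n would stay at 1, not shrink to 0.
for n in (10, 1000, 10**6):
    print(n, n / n**2)                # ratio shrinks toward 0
```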

Quote:

Originally Posted by MTK358

This thread is 7 years old!

The reason I replied was because the thread had a high search engine rating and asks a good question, but the answer lacked some details. Hopefully when people encounter this thread asking the same question, they will learn more. I hope this has not offended anyone.
