Introduction
In this post, I have tried to collect everything you need to know about algorithm complexity analysis, so that you can design better algorithms and answer interview questions about time and space complexity with confidence.
There are typically several different algorithms to solve a given computational problem. It is natural, then, to compare these alternatives. But how do we know if algorithm A is better than algorithm B?
Important Criteria: Time and Space
One important factor that determines the “goodness” of an algorithm is the amount of time it takes to solve a given problem. If algorithm A takes less time to solve the same problem than does algorithm B, then algorithm A is considered better.
Another important factor in comparing two algorithms is the amount of memory required to solve a given problem. The algorithm that requires less memory is considered better.
In this post, we will focus on time analysis, as it is the trickier of the two.
Common Asymptotic Notations
1. Big-Θ (Big-Theta) notation (describes the ‘average’ case scenario)

Big-Theta notation – algorithm complexity analysis
- When we say that a particular running time is Θ(g(n)), it means that once n gets big enough, the running time is at least k₁ · g(n) and at most k₂ · g(n) for some constants k₁ and k₂.
- For small values of n we do not care how the running time compares to k₁ · g(n) and k₂ · g(n).
- We are not restricted to g(n) = n; we can use n², n log n, or any other function as g(n).
- The functions most commonly used as g(n) in Θ notation are simple ones such as n, n log n, or n².
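As a quick illustration of the definition, here is a minimal worked example (the function and the constants k₁, k₂ are just one possible choice, not from the original post):

```latex
% Claim: 2n^2 + 3n = \Theta(n^2). Choose k_1 = 2 and k_2 = 3.
% Lower bound: 2n^2 <= 2n^2 + 3n holds for every n >= 0.
% Upper bound: 2n^2 + 3n <= 3n^2 holds exactly when 3n <= n^2, i.e. when n >= 3.
k_1 \cdot n^2 \;=\; 2n^2 \;\le\; 2n^2 + 3n \;\le\; 3n^2 \;=\; k_2 \cdot n^2
\qquad \text{for all } n \ge 3 .
```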
2. Big-O (Big Oh) notation (describes the ‘worst’ case scenario)

Big O notation – Algorithm complexity analysis
- We use the big-O notation for asymptotic upper bounds, since it bounds the growth of the running time from above for large enough input sizes.
- Big O notation gives only an upper bound on the running time; it says nothing about the minimum.
- Recall that we write f(n) = O(g(n)) to express the fact that f(n) grows no faster than g(n): there exist constants c > 0 and n₀ ≥ 0 such that f(n) ≤ c · g(n) for all n ≥ n₀.
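As a sanity check of this definition, here is a worked example (the function and the particular constants c and n₀ are just one possible choice, not from the original post):

```latex
% Claim: 3n^2 + 5n = O(n^2). Choose c = 4 and n_0 = 5.
% 3n^2 + 5n <= 4n^2 holds exactly when 5n <= n^2, i.e. when n >= 5.
3n^2 + 5n \;\le\; 4n^2 \;=\; c \cdot n^2 \qquad \text{for all } n \ge n_0 = 5,
\qquad \text{so } 3n^2 + 5n = O(n^2).
```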
3. Big-Ω (Big-Omega) notation (describes the ‘best’ case scenario)
- We use Big-Ω notation for asymptotic lower bounds: f(n) = Ω(g(n)) means there exist constants c > 0 and n₀ ≥ 0 such that f(n) ≥ c · g(n) for all n ≥ n₀.
- We use it to say that an algorithm will take at least this amount of time to run.
Why Big ‘O’ is preferred over other notations
All of these notations have a variety of applications in mathematics. However, in algorithm analysis we tend to focus on the worst-case time and space complexity. It tends to be more useful to know that the worst-case running time of a particular algorithm will grow at most as fast as a constant multiple of a certain function than to know that it will grow at least as fast as some other function. In other words, Big Omega is often not very useful for describing worst-case running times.
What about Big Theta, though? Indeed, it would be great to have a tight bound on the worst-case running time of an algorithm. Imagine that there is a complex algorithm for which we haven’t yet found a tight bound on the worst-case running time. In such a case, we can’t use Big Theta, but we can still use a loose upper bound (Big O) until a tight bound is discovered. It is common to see Big O being used to characterize the worst-case running time even for simple algorithms where a Big Theta characterization is easily possible. That is not technically wrong, because Big Theta is a subset of Big O: any function that is Θ(g(n)) is also O(g(n)).
We will not talk about the little-o and little-omega notations, as they require a strict inequality (< or >) and the ability to show that a valid n₀ exists for every valid choice of the constant c. This is not always easy to do.
For all the above reasons, it is most common to see Big O being used when talking about an algorithm’s time or space complexity.
A Comparison of Some Common Functions
It is easy to work with simple polynomials in n, but when the time complexity involves other types of functions, such as log n, you may find it hard to identify the “highest order term”. The following table lists some commonly encountered functions in ascending order of rate of growth. Any function can be given as Big O of any other function that appears later in this table.
| Order of growth | Function | Name |
|---|---|---|
| 1 | Any constant | Constant |
| 2 | log n | Logarithmic |
| 3 | log² n | Log-square |
| 4 | √n | Root-n |
| 5 | n | Linear |
| 6 | n log n | Linearithmic |
| 7 | n² | Quadratic |
| 8 | n³ | Cubic |
| 9 | n⁴ | Quartic |
| 10 | 2ⁿ | Exponential |
| 11 | eⁿ | Exponential |
| 12 | n! | n-Factorial |
The following graph visually shows some of the functions from the above table.
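If the graph is not visible, a quick way to build the same intuition is to print a few of these functions side by side. The following sketch is not from the original post; the values of n are arbitrary examples:

```java
// Prints a few values of common growth functions from the table above,
// so their relative growth rates are easy to compare.
class GrowthDemo {
    public static void main(String[] args) {
        int[] ns = {10, 20, 30}; // arbitrary example sizes
        System.out.println("    n     log n    n log n        n^2           2^n");
        for (int n : ns) {
            double logN = Math.log(n) / Math.log(2); // logarithm base 2
            System.out.printf("%5d %9.2f %10.2f %10d %13d%n",
                    n, logN, n * logN, (long) n * n, 1L << n);
        }
    }
}
```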
Quick math revision and hints that are useful in complexity analysis:
Here is a list of handy formulas which can be helpful when calculating the Time Complexity of an algorithm:
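The identities below are the standard ones typically used in this kind of analysis; this is a reconstruction, and the author’s original list may have differed:

```latex
1 + 2 + 3 + \dots + n = \frac{n(n+1)}{2} = O(n^2)
\qquad
1^2 + 2^2 + \dots + n^2 = \frac{n(n+1)(2n+1)}{6} = O(n^3)
1 + 2 + 4 + \dots + 2^k = 2^{k+1} - 1
\qquad
\log(ab) = \log a + \log b
\qquad
\log_a n = \frac{\log_b n}{\log_b a}
```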
General Tips
- Every time a list or array gets iterated over c × length times, it is most likely O(n) time.
- When you see a problem where the number of elements in the problem space gets halved each time, it will most probably be O(log n) runtime (see the binary-search sketch after this list).
- Whenever you have one loop nested inside another, the problem is most likely in quadratic, O(n²), time.
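To make the “halving” tip concrete, here is a minimal binary-search sketch (not from the original post): the search interval shrinks by half on every iteration, so the loop runs at most about log₂ n times.

```java
// Iterative binary search over a sorted array.
// Each iteration halves the remaining search interval -> O(log n) time.
static int binarySearch(int[] sorted, int target) {
    int lo = 0, hi = sorted.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;            // midpoint, written to avoid int overflow
        if (sorted[mid] == target) return mid;   // found it
        if (sorted[mid] < target) lo = mid + 1;  // discard the left half
        else hi = mid - 1;                       // discard the right half
    }
    return -1; // target is not in the array
}
```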
List of Important Complexities
1. Simple for-loop with an increment of size 1
```java
for (int x = 0; x < n; x++) {
    // statement(s) that take constant time
}
```
Running time Complexity Analysis
The loop body executes n times, and each iteration does a constant amount of work c, plus a constant amount of setup work, so the total running time is roughly c · n + c₀.
Dropping the leading constants leaves n.
Dropping lower order terms gives a final complexity of O(n).
2. For-loop with increments of size k
```java
for (int x = 0; x < n; x += k) {
    // statement(s) that take constant time
}
```
Running time Complexity Analysis
The loop counter jumps by k on every iteration, so the body executes about n/k times. Since k is a constant, n/k is still linear in n, and the complexity is O(n).
3. Simple nested For-loop
```java
for (int i = 0; i < n; i++) {
    for (int j = 0; j < m; j++) {
        // statement(s) that take(s) constant time
    }
}
```
Running time Complexity Analysis
The outer loop runs n times and, for each of those iterations, the inner loop runs m times, so the inner body executes n · m times in total: O(n · m), which becomes O(n²) when m = n.
4. Nested For-loop with dependent variables
```java
for (int i = 0; i < n; i++) {
    for (int j = 0; j < i; j++) {
        // statement(s) that take(s) constant time
    }
}
```
Running time Complexity Analysis
The inner loop runs i times for each value of i, so the inner body executes 0 + 1 + 2 + ... + (n - 1) = n(n - 1)/2 times. Dropping constants and lower order terms gives O(n²).
5. Nested For-loop With Index Modification
```java
for (int i = 0; i < n; i++) {
    i *= 2;
    for (int j = 0; j < i; j++) {
        // statement(s) that take(s) constant time
    }
}
```
Running time Complexity Analysis
Because i is doubled inside the outer loop (and then incremented), it roughly doubles on every pass, so the outer loop runs only about log₂ n times. The inner loop runs i times per pass, and those counts (2, 6, 14, 30, ...) form a geometric series whose sum is O(n), so the overall running time is O(n).
6. Loops with log(n) time complexity
```java
int i = 1;   // some positive constant
int n = 100; // some constant
int k = 2;   // some constant greater than 1
while (i < n) {
    i *= k;
    // statement(s) that take(s) constant time
}
```
Running time Complexity Analysis
i is multiplied by the constant k on every pass, so the loop stops after about log_k n iterations: O(log n). The base of the logarithm does not matter for Big O, because logarithms with different constant bases differ only by a constant factor.
Now let’s see some examples in Java to make sure we fully understand this.
1. Big O of a Nested Loop with Addition
```java
class NestedLoop {
    public static void main(String[] args) {
        int n = 10;        // 1 step --> Note: n can be anything. This is just an example
        int sum = 0;       // 1 step
        double pie = 3.14; // 1 step

        for (int var = 0; var < n; var = var + 3) {  // n/3 steps
            System.out.println("Pie: " + pie);       // n/3 steps
            for (int j = 0; j < n; j = j + 2) {      // (n/3 * n/2) steps
                sum++;                               // (n/3 * n/2) steps
                System.out.println("Sum = " + sum);  // (n/3 * n/2) steps
            }
        }
    }
}
```
Running time Complexity Analysis
Counting the steps: 3 constant steps for the declarations, n/3 steps each for the outer loop header and its print statement, and roughly (n/3) · (n/2) steps for each of the three statements in the inner loop. The dominant term is about n²/2; dropping constants and lower order terms gives O(n²).
2. Big O of a Nested Loop with Subtraction
```java
class NestedLoop {
    public static void main(String[] args) {
        int n = 10;        // O(1)
        int sum = 0;       // O(1)
        double pie = 3.14; // O(1)

        for (int var = n; var >= 1; var = var - 3) {  // O(n/3)
            System.out.println("Pie: " + pie);        // O(n/3)
            for (int j = n; j >= 0; j = j - 1) {      // O((n/3) * (n+1))
                sum++;                                // O((n/3) * (n+1))
            }
        } // end of outer for loop
        System.out.println("Sum: " + sum);            // O(1)
    } // end of main
} // end of class
```
Running time Complexity Analysis
The outer loop counts down from n in steps of 3, so it runs about n/3 times, and for each of those iterations the inner loop runs n + 1 times. The dominant term is (n/3) · (n + 1), which is roughly n²/3, so the complexity is O(n²).
3. Big O of Nested Loop with Multiplication
```java
class NestedLoop {
    public static void main(String[] args) {
        int n = 10;        // O(1)
        int sum = 0;       // O(1)
        double pie = 3.14; // O(1)
        int var = 1;       // O(1)

        while (var < n) {                        // O(log n)
            System.out.println("Pie: " + pie);   // O(log n)
            for (int j = 0; j < var; j++) {      // about 2n iterations in total
                sum++;                           // about 2n - 1 steps in total
            }
            var *= 2;                            // O(log n)
        } // end of while loop
        System.out.println("Sum: " + sum);       // O(1)
    } // end of main
} // end of class
```
Running time Complexity Analysis
The while loop doubles var each time, so it runs about log₂ n times. The inner for loop runs var times on each pass, and those counts (1 + 2 + 4 + ...) sum to about 2n - 1 over the whole run. The inner work dominates the log₂ n overhead of the outer loop, so the overall complexity is O(n).
4. Nested Loop with Multiplication (Basic)
```java
class NestedLoop {
    public static void main(String[] args) {
        int n = 10;        // O(1)
        int sum = 0;       // O(1)
        double pie = 3.14; // O(1)
        int var = 1;       // O(1)

        while (var < n) {                        // O(log3 n)
            System.out.println("Pie: " + pie);   // O(log3 n)
            for (int j = 1; j < n; j = j + 2) {  // O((log3 n) * (n/2))
                sum++;                           // O((log3 n) * (n/2))
            }
            var *= 3;                            // O(log3 n)
        } // end of while loop
        System.out.println("Sum: " + sum);       // O(1)
    } // end of main
} // end of class
```
Running time Complexity Analysis
The while loop multiplies var by 3 each time, so it runs about log₃ n times, and on every one of those passes the inner loop runs about n/2 times (j goes 1, 3, 5, ... up to n). The total work is about (log₃ n) · (n/2), which simplifies to O(n log n).
5. Nested Loop with Multiplication (Intermediate)
```java
class NestedLoop {
    public static void main(String[] args) {
        int n = 10;
        int sum = 0;
        int j = 1;
        double pie = 3.14;

        for (int var = 1; var < n; var += 3) {   // O(n/3)
            System.out.println("Pie: " + pie);   // O(n/3)
            j = 1;                               // O(n/3)
            while (j < n) {                      // O((n/3) * (log3 n))
                sum += 1;                        // O((n/3) * (log3 n))
                j *= 3;                          // O((n/3) * (log3 n))
            }
        }
        System.out.println("Sum: " + sum);       // O(1)
    }
}
```
Running time Complexity Analysis
The outer for loop runs about n/3 times. On every pass, j is reset to 1 and multiplied by 3 until it reaches n, so the inner while loop runs about log₃ n times. The total work is about (n/3) · (log₃ n), which is O(n log n).
6. Nested Loop with Multiplication (Advanced)
```java
class NestedLoop {
    public static void main(String[] args) {
        int n = 10;        // O(1)
        int sum = 0;       // O(1)
        double pie = 3.14; // O(1)

        for (int var = 0; var < n; var++) {      // O(n)
            int j = 1;                           // O(n)
            System.out.println("Pie: " + pie);   // O(n)
            while (j < var) {                    // O(n * log2(var))
                sum += 1;                        // O(n * log2(var))
                j *= 2;                          // O(n * log2(var))
            }
        } // end of for loop
        System.out.println("Sum: " + sum);       // O(1)
    } // end of main
} // end of class
```
Running time Complexity Analysis
The outer loop runs n times, and because j is reset to 1 on every pass, the inner while loop runs about log₂ var times for each value of var. The total is log 1 + log 2 + ... + log(n - 1) = log((n - 1)!), which is on the order of n log n, so the complexity is O(n log n).
7. Nested Loop with Multiplication (Pro)
```java
class NestedLoop {
    public static void main(String[] args) {
        int n = 10;        // O(1)
        int sum = 0;       // O(1)
        int j = 1;         // O(1) -- note: j is declared outside the loops and never reset
        double pie = 3.14; // O(1)

        for (int var = 0; var < n; var++) {      // O(n)
            System.out.println("Pie: " + pie);   // O(n)
            while (j < var) {                    // O(log n) in total across all outer iterations
                sum += 1;                        // O(log n) in total
                j *= 2;                          // O(log n) in total
            }
        } // end of for loop
        System.out.println("Sum: " + sum);       // O(1)
    } // end of main
} // end of class
```
Running time Complexity Analysis
The trap here is that j is declared outside the outer loop and is never reset, so across all n iterations of the outer loop the inner while loop body executes only about log₂ n times in total (j keeps doubling and quickly overtakes var). The outer loop therefore dominates, and the overall complexity is O(n), not O(n log n).
For more information about algorithm complexity analysis, please visit:
- HackerEarth practice time and space complexity
- Wikipedia Analysis of algorithms
- Wikipedia Time Complexity
- GeeksForGeeks Analysis of Algorithms | Big-O analysis