# Algorithm Complexity Analysis

## Introduction

In this post, I have tried to collect everything you need to know about algorithm complexity analysis, so that you can design better algorithms and answer interview questions about time and space complexity with confidence.

There are typically several different algorithms to solve a given computational problem. It is natural, then, to compare these alternatives. But how do we know if algorithm A is better than algorithm B?

## Important Criteria: Time and Space

One important factor that determines the “goodness” of an algorithm is the amount of time it takes to solve a given problem. If algorithm A takes less time to solve the same problem than does algorithm B, then algorithm A is considered better.

Another important factor in comparing two algorithms is the amount of memory required to solve a given problem. The algorithm that requires less memory is considered better.

In this post, we will focus on time analysis, as it is the trickier of the two.

# Common Asymptotic Notations

## 1. Big-$\Theta$ (Big-Theta) notation (describes a tight bound)


• When we say that a particular running time is $\Theta(n)$, it means that once $n$ gets big enough, the running time will be at least $k_1 \cdot n$ and at most $k_2 \cdot n$ for some constants $k_1$ and $k_2$.
• For small values of $n$, we do not care how the running time compares to $k_1 \cdot n$ and $k_2 \cdot n$.
• We are not restricted to $n$; we can use any function, such as $n^2$ or $n \log n$.
• In $\Theta$ notation we usually write $n \log n$ or $n \log_2 n$ interchangeably, since the base of the logarithm only changes the constant factor.
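As a concrete worked example (the function and constants below are my own, chosen for illustration), $f(n) = 3n + 4$ is $\Theta(n)$:

```latex
f(n) = 3n + 4
% lower bound: 3n \le 3n + 4 for every n \ge 1, so k_1 = 3
% upper bound: 3n + 4 \le 4n whenever n \ge 4, so k_2 = 4
3n \;\le\; f(n) \;\le\; 4n \qquad \text{for all } n \ge 4
\Rightarrow f(n) = \Theta(n)
```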

## 2. Big-O (Big Oh) notation (describes the ‘worst’ case scenario)


• We use the big-O notation for asymptotic upper bounds, since it bounds the growth of the running time from above for large enough input sizes.
• Big-O gives only a maximum on the running time, without saying anything about the minimum.
• Recall that we write $f(n) = O(g(n))$ to express the fact that $f(n)$ grows no faster than $g(n)$: there exist constants $N$ and $c > 0$ such that for all $n \ge N$, $f(n) \le c \cdot g(n)$.
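For instance (a toy function of my own, not from the definition itself), $f(n) = 5n + 3$ satisfies the definition with $c = 6$ and $N = 3$:

```latex
f(n) = 5n + 3
5n + 3 \;\le\; 6n = c \cdot g(n) \qquad \text{whenever } n \ge 3
\Rightarrow f(n) = O(n) \text{ with } c = 6,\; N = 3
```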

## 3. Big-$\Omega$ (Big Omega) notation (describes the ‘best’ case scenario)

• We use big-$\Omega$ notation for asymptotic lower bounds.
• We use it to say that an algorithm will take at least a given amount of time to run.

## Why Big ‘O’ is preferred over other notations

All of these notations have a variety of applications in mathematics. However, in algorithm analysis, we tend to focus on the worst-case time and space complexity. It tends to be more useful to know that the worst-case running time of a particular algorithm will grow at most as fast as a constant multiple of a certain function than to know that it will grow at least as fast as some other function. In other words, Big Omega is often not very useful for use with worst-case running time.

What about Big Theta, though? Indeed it would be great to have a tight bound on the worst-case running time of an algorithm. Imagine that there is a complex algorithm for which we haven’t yet found a tight bound on the worst-case running time. In such a case, we can’t use Big Theta, but we can still use a loose upper bound (Big O) until a tight bound is discovered. It is common to see Big O being used to characterize the worst-case running time even for simple algorithms where a Big Theta characterization is easily possible. That is not technically wrong, because any function that is $\Theta(g(n))$ is also $O(g(n))$: Big Theta is the stronger statement.

We will not talk about the little-o and little-omega notations, as they require a strict inequality (< or >) and the ability to show that there is a valid $N$ for *any* valid choice of $c$. This is not always easy to do.

For all the above reasons, it is most common to see Big O being used when talking about an algorithm’s time or space complexity.

## A Comparison of Some Common Functions

It is easy to work with simple polynomials in $n$, but when the time complexity involves other types of functions, like $\log n$, you may find it hard to identify the “highest order term”. The following table lists some commonly encountered functions in ascending order of rate of growth. Any function in the table is Big O of any function that appears below it.

### Order of growth

| # | Function | Name |
|---|----------|------|
| 1 | any constant | Constant |
| 2 | $\log n$ | Logarithmic |
| 3 | $\log^2 n$ | Log-squared |
| 4 | $\sqrt{n}$ | Root-n |
| 5 | $n$ | Linear |
| 6 | $n \log n$ | Linearithmic |
| 7 | $n^2$ | Quadratic |
| 8 | $n^3$ | Cubic |
| 9 | $n^4$ | Quartic |
| 10 | $2^n$ | Exponential |
| 11 | $e^n$ | Exponential |
| 12 | $n!$ | n-Factorial |
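As a quick sanity check of the ordering (the class name, method, and sample value of $n$ are mine, just for illustration), we can compare a few of these functions at a large enough $n$:

```java
class GrowthOrder {
    // true when log2(n) < n < n*log2(n) < n^2 -- the ordering from the table
    static boolean ordered(int n) {
        double log2n = Math.log(n) / Math.log(2);
        return log2n < n
                && n < n * log2n
                && n * log2n < (double) n * n;
    }

    public static void main(String[] args) {
        System.out.println(ordered(1024)); // prints true
    }
}
```

Note that the ordering only has to hold for *large enough* $n$; for tiny inputs a “slower-growing” function can temporarily be larger.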

The following graph visually shows some of the functions from the above table.

## Quick math revision and hints that are useful in complexity analysis

Here is a list of handy formulas which can be helpful when calculating the Time Complexity of an algorithm:

• $\left(\sum_{i=1}^n c \right) = c+c+c+ \cdots +c = c n$
• $\left(\sum_{i=1}^n i \right)= 1+2+3 + \cdots +n = \frac{n(n+1)}{2}$
• $\left(\sum_{i=1}^n i^2 \right) = 1 + 4 + 9 + \cdots + n^2 = \frac{n(n+1)(2n+1)}{6}$
• $\left(\sum_{i=0}^n r^i\right) = r^0 + r^1 + r^2 + \cdots + r^n = \frac{r^{n+1}-1}{r-1} \quad (r \neq 1)$
• ${2^{2n}}={(2^2)^n}={4^n}$
• $\log(a^b) = b \cdot \log(a)$
• $\log_b(a) = c \leftrightarrow b^c = a$
• $\log_a(n) = \frac{\log_b(n)}{\log_b(a)}$
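The closed forms for the first three summations are easy to check against brute-force loops (the class and method names below are my own harness, not part of any library):

```java
class SumFormulas {
    // brute-force versions of the summations, to compare with the closed forms
    static long sumConst(int n, long c) { long s = 0; for (int i = 1; i <= n; i++) s += c; return s; }
    static long sumI(int n)  { long s = 0; for (int i = 1; i <= n; i++) s += i; return s; }
    static long sumI2(int n) { long s = 0; for (int i = 1; i <= n; i++) s += (long) i * i; return s; }

    public static void main(String[] args) {
        int n = 100;
        System.out.println(sumConst(n, 7) == 7L * n);                         // c * n
        System.out.println(sumI(n)  == (long) n * (n + 1) / 2);               // n(n+1)/2
        System.out.println(sumI2(n) == (long) n * (n + 1) * (2 * n + 1) / 6); // n(n+1)(2n+1)/6
    }
}
```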

## General Tips

1. Every time a list or array gets iterated over $c \times \text{length}$ times (for some constant $c$), it is most likely running in $O(n)$ time.
2. When you see a problem where the number of elements in the problem space gets halved each time, it will most probably run in $O(\log n)$ time.
3. Whenever you have one loop nested inside another over the same input, the problem is most likely in quadratic time, $O(n^2)$.
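Tip 2 can be seen directly by counting halvings (a minimal sketch; the class and method names are mine):

```java
class HalvingDemo {
    // counts how many times n can be halved before reaching 1
    static int halvings(int n) {
        int count = 0;
        while (n > 1) {
            n /= 2;  // the problem space shrinks by half each step
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(halvings(1024)); // prints 10, i.e. log2(1024)
    }
}
```

This is exactly why binary search over a sorted array of $n$ elements takes $O(\log n)$ comparisons.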

## List of Important Complexities

### 1. Simple for-loop with an increment of size 1

```java
for (int x = 0; x < n; x++) {
    // statement(s) that take constant time
}
```

#### Running time Complexity Analysis

The loop performs one initialization, $n+1$ comparisons, and $n$ increments, plus $c$ constant-time steps per iteration:
$T(n) = (2n+2) + cn = (2+c)n + 2$.
Dropping the leading constants $\Rightarrow n + 2$.
Dropping lower-order terms $\Rightarrow O(n)$

### 2. For-loop with increments of size k

```java
for (int x = 0; x < n; x += k) {
    // statement(s) that take constant time
}
```


#### Running time Complexity Analysis

$\frac{(2+c)}{k} n = O(n)$, since dividing by the constant $k$ only changes the constant factor.
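A counting version of this loop (the harness is my own) confirms it runs $\lceil n/k \rceil$ times:

```java
class StepLoop {
    // counts iterations of: for (x = 0; x < n; x += k)
    static int iterations(int n, int k) {
        int count = 0;
        for (int x = 0; x < n; x += k) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(iterations(10, 3)); // prints 4 = ceil(10/3)
    }
}
```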

### 3. Simple nested For-loop

```java
for (int i = 0; i < n; i++) {
    for (int j = 0; j < m; j++) {
        // statement(s) that take(s) constant time
    }
}
```


#### Running time Complexity Analysis

$nm(2+c)+2+4n = O(nm)$

### 4. Nested For-loop with dependent variables

```java
for (int i = 0; i < n; i++) {
    for (int j = 0; j < i; j++) {
        // statement(s) that take(s) constant time
    }
}
```


#### Running time Complexity Analysis

The inner loop runs $0 + 1 + 2 + \cdots + (n-1) = \frac{n(n-1)}{2}$ times in total $\Rightarrow O(n^2)$
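Counting the inner-loop executions directly (the counting harness is mine) gives exactly $\frac{n(n-1)}{2}$:

```java
class TriangularCount {
    // counts how many times the body of the dependent inner loop executes
    static long count(int n) {
        long c = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < i; j++) {
                c++;
            }
        }
        return c;
    }

    public static void main(String[] args) {
        System.out.println(count(100)); // prints 4950 = 100 * 99 / 2
    }
}
```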

### 5. Nested For-loop With Index Modification

```java
for (int i = 0; i < n; i++) {
    i *= 2;
    for (int j = 0; j < i; j++) {
        // statement(s) that take(s) constant time
    }
}
```

#### Running time Complexity Analysis

Because of the `i *= 2` inside it, $i$ roughly doubles on every pass, so the outer loop runs only $O(\log n)$ times, and the total inner work is a geometric series bounded by a constant multiple of $n \Rightarrow O(n)$

### 6. Loops with log(n) time complexity

```java
int i = 1;    // any constant >= 1 (these values are just an example)
int n = 1024; // the input size
int k = 2;    // any constant > 1

while (i < n) {
    i *= k;
    // statement(s) that take constant time
}
```

#### Running time Complexity Analysis

The loop runs $\log_k(n)$ times $\Rightarrow O(\log_k n) = O(\log n)$, since the base of the logarithm only contributes a constant factor.
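Again, this is easy to verify by counting (the harness below is my own sketch):

```java
class LogLoop {
    // counts iterations of: while (i < n) i *= k;
    static int iterations(int i, int n, int k) {
        int count = 0;
        while (i < n) {
            i *= k;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(iterations(1, 1000, 2)); // prints 10, since 2^10 = 1024 >= 1000
    }
}
```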

## Now let’s see some examples in Java to make sure we totally understand it

### 1. Big O of a Nested Loop with Addition

```java
class NestedLoop {
    public static void main(String[] args) {
        int n = 10;        // 1 step --> Note: n can be anything. This is just an example
        int sum = 0;       // 1 step
        double pie = 3.14; // 1 step

        for (int var = 0; var < n; var = var + 3) {    // n/3 steps
            System.out.println("Pie: " + pie);         // n/3 steps
            for (int j = 0; j < n; j = j + 2) {        // (n/3 * n/2) steps
                sum++;                                 // (n/3 * n/2) steps
                System.out.println("Sum = " + sum);    // (n/3 * n/2) steps
            }
        }
    }
}
```

#### Running time Complexity Analysis

$\frac{n}{3} \cdot \frac{n}{2} = \frac{n^2}{6} \Rightarrow O(n^2)$

### 2. Big O of a Nested Loop with Subtraction

```java
class NestedLoop {
    public static void main(String[] args) {
        int n = 10;        // O(1)
        int sum = 0;       // O(1)
        double pie = 3.14; // O(1)

        for (int var = n; var >= 1; var = var - 3) { // O(n/3)
            System.out.println("Pie: " + pie);       // O(n/3)

            for (int j = n; j >= 0; j = j - 1) { // O((n/3) * (n+1))
                sum++;                           // O((n/3) * (n+1))
            }
        } //end of outer for loop
        System.out.println("Sum: " + sum); // O(1)
    } //end of main
} //end of class
```

#### Running time Complexity Analysis

$\frac{n}{3} \cdot (n+1) \Rightarrow O(n^2)$

### 3. Big O of Nested Loop with Multiplication

```java
class NestedLoop {
    public static void main(String[] args) {
        int n = 10;        // O(1)
        int sum = 0;       // O(1)
        double pie = 3.14; // O(1)
        int var = 1;       // O(1)

        while (var < n) {                      // O(log2 n) iterations
            System.out.println("Pie: " + pie); // O(log2 n)

            for (int j = 0; j < var; j++) { // 1 + 2 + 4 + ... ~ 2n steps in total
                sum++;                      // across all outer iterations
            }
            var *= 2; // O(log2 n)
        } //end of while loop

        System.out.println("Sum: " + sum); // O(1)
    } //end of main
} //end of class
```

#### Running time Complexity Analysis

$\log_2(n) + 2n \Rightarrow O(n)$, because the inner loop’s total work $1 + 2 + 4 + \cdots$ is a geometric series bounded by $2n$.
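The geometric series is easy to verify by counting the inner-loop work (the counting harness is my own, mirroring the example above):

```java
class DoublingLoop {
    // total inner-loop executions when var doubles from 1 up to n
    static int totalWork(int n) {
        int sum = 0;
        for (int var = 1; var < n; var *= 2) {
            for (int j = 0; j < var; j++) {
                sum++;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(totalWork(10)); // prints 15 = 1 + 2 + 4 + 8, which is < 2n
    }
}
```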

### 4. Nested Loop with Multiplication (Basic)

```java
class NestedLoop {
    public static void main(String[] args) {
        int n = 10;        // O(1)
        int sum = 0;       // O(1)
        double pie = 3.14; // O(1)
        int var = 1;       // O(1)

        while (var < n) {                      // O(log3 n)
            System.out.println("Pie: " + pie); // O(log3 n)

            for (int j = 1; j < n; j = j + 2) { // O((log3 n) * (n/2))
                sum++;                          // O((log3 n) * (n/2))
            }
            var *= 3; // O(log3 n)
        } //end of while loop
        System.out.println("Sum: " + sum); // O(1)
    } //end of main
} //end of class
```

#### Running time Complexity Analysis

$\log_3(n) + n\log_3(n) \Rightarrow n\log_3(n) = n \cdot \frac{\log_2(n)}{\log_2(3)} \approx \frac{n\log_2(n)}{1.585} \Rightarrow O(n\log_2 n)$

### 5. Nested Loop with Multiplication (Intermediate)

```java
class NestedLoop {
    public static void main(String[] args) {
        int n = 10;
        int sum = 0;
        int j = 1;
        double pie = 3.14;

        for (int var = 1; var < n; var += 3) { // O(n/3)
            System.out.println("Pie: " + pie); // O(n/3)
            j = 1;                             // O(n/3)
            while (j < n) { // O((n/3) * (log3 n))
                sum += 1;   // O((n/3) * (log3 n))
                j *= 3;     // O((n/3) * (log3 n))
            }
        }
        System.out.println("Sum: " + sum); // O(1)
    }
}
```

#### Running time Complexity Analysis

$O(n \log_2 n)$

### 6. Nested Loop with Multiplication (Advanced)

```java
class NestedLoop {
    public static void main(String[] args) {
        int n = 10;        // O(1)
        int sum = 0;       // O(1)
        double pie = 3.14; // O(1)

        for (int var = 0; var < n; var++) {    // O(n)
            int j = 1;                         // O(n)
            System.out.println("Pie: " + pie); // O(n)
            while (j < var) { // O(n * log2 var)
                sum += 1;     // O(n * log2 var)
                j *= 2;       // O(n * log2 var)
            }
        } //end of for loop
        System.out.println("Sum: " + sum); // O(1)
    } //end of main
} //end of class
```

#### Running time Complexity Analysis

$O(n \log_2 n)$

### 7. Nested Loop with Multiplication (Pro)

```java
class NestedLoop {
    public static void main(String[] args) {
        int n = 10;        // O(1)
        int sum = 0;       // O(1)
        int j = 1;         // O(1) -- note: j is declared outside the loop and never reset
        double pie = 3.14; // O(1)

        for (int var = 0; var < n; var++) {    // O(n)
            System.out.println("Pie: " + pie); // O(n)
            while (j < var) { // runs at most log2(n) times in total, because j is never reset
                sum += 1;
                j *= 2;
            }
        } //end of for loop
        System.out.println("Sum: " + sum); // O(1)
    } //end of main
} //end of class
```

#### Running time Complexity Analysis

$n + \log_2(n) \Rightarrow O(n)$, since $j$ is never reset, so the `while` body runs at most $\log_2(n)$ times across the whole program.
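We can confirm that the inner loop does only logarithmic total work (the counting harness below is my own, mirroring the example above):

```java
class SharedCounter {
    // mirrors example 7: j is shared across outer iterations and never reset,
    // so the while body runs O(log n) times in total, not per outer iteration
    static int innerRuns(int n) {
        int j = 1, runs = 0;
        for (int var = 0; var < n; var++) {
            while (j < var) {
                runs++;
                j *= 2;
            }
        }
        return runs;
    }

    public static void main(String[] args) {
        System.out.println(innerRuns(10)); // prints 4 -- roughly log2(10), not 10 * log2(10)
    }
}
```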
