*Analysis of Algorithms* is concerned with the complexity of an algorithm. The complexity (also called the cost) of an algorithm is the amount of a resource, such as time or memory, that the algorithm uses. It is commonly expressed as a function $f(n)$ of the input size $n$; an example of a complexity is $cn^2$, where $c$ is some constant. Instead of measuring complexity in seconds (time) or in megabytes (memory), we abstract it into a function whose value depends only on the size of the input. The main advantage of doing this is that the complexity becomes independent of the computer being used and the programming language in which the algorithm is implemented. If an algorithm runs in time $cn^2$ on a computer with a slower CPU, it runs in $bn^2$ on a computer with a faster CPU; in both cases the **order** of the complexity is the same.
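One way to see why the order is machine-independent is to count basic operations instead of measuring seconds: the count depends only on $n$. A minimal Python sketch (counting bubble-sort comparisons is my illustrative choice, not something fixed by the text):

```python
def count_comparisons(n):
    """Number of comparisons a full bubble sort performs on n items.

    We count operations rather than time seconds, so the answer is the
    same on any machine and in any language.
    """
    comparisons = 0
    for i in range(n):
        for j in range(n - 1 - i):
            comparisons += 1
    return comparisons

# n(n-1)/2 comparisons: the leading term is n^2/2, so the order is n^2.
print(count_comparisons(10))   # 45 == 10*9/2
```

Whatever constant a particular machine contributes, the growth of the count with $n$ stays quadratic.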

Usually, it is not worth determining the exact cost of running an algorithm. Most of the time we are interested in a bound on the exact complexity, which can be a lower bound, a tight (exact) bound, or an upper bound. To express these bounds, we use the notations $O, \Theta, \Omega, o$ and $\omega$. These are called *asymptotic* notations.

$O$ gives an upper bound on the exact complexity. In other words, $O$ gives the maximum time the algorithm takes to run on an input of size $n$. If the complexity of an algorithm is $O(n^2)$ then the algorithm takes at most on the order of $n^2$ time to run and does not go beyond that, i.e. it can take $1, n, n\log n, n^2 + n + 12, 2n^2$, etc. time but cannot take $n^{2.0000001}, n^3$, etc. time.
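The statement above corresponds to the formal definition: $f(n) = O(g(n))$ if there exist constants $c > 0$ and $n_0$ such that $f(n) \le c\,g(n)$ for all $n \ge n_0$. A small Python spot-check of this condition (the helper name `is_bounded_above` and the chosen constants are mine; a finite check illustrates the definition but is not a proof):

```python
def is_bounded_above(f, g, c, n0, n_max=10_000):
    """Spot-check f(n) <= c*g(n) for every n in [n0, n_max)."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max))

f = lambda n: n**2 + n + 12
g = lambda n: n**2

# With c = 2 and n0 = 4: n^2 + n + 12 <= 2n^2 whenever n^2 >= n + 12.
print(is_bounded_above(f, g, c=2, n0=4))   # True
```

By contrast, $n^3$ fails the same check against $n^2$ for any fixed $c$ once $n$ is large enough, which is why $n^3$ is not $O(n^2)$.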

$\Theta$ gives the exact (tight) bound. If the complexity of an algorithm is $\Theta(n^2)$ then the algorithm takes time proportional to $n^2$ to run, i.e. it can take $2n^2, n^2 + n + 12, 0.01n^2$, etc. time but cannot take $n^{2.0000001}, n\log n, n^{1.99999}$, etc. time.
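Formally, $f(n) = \Theta(g(n))$ means $f$ is sandwiched: $c_1\,g(n) \le f(n) \le c_2\,g(n)$ for all $n \ge n_0$. A sketch of that two-sided check in Python (helper name and constants are my own choices):

```python
def is_tightly_bounded(f, g, c1, c2, n0, n_max=10_000):
    """Spot-check c1*g(n) <= f(n) <= c2*g(n) for every n in [n0, n_max)."""
    return all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, n_max))

f = lambda n: 2 * n**2 + 5 * n
g = lambda n: n**2

# 2n^2 <= 2n^2 + 5n always, and 2n^2 + 5n <= 3n^2 once n >= 5.
print(is_tightly_bounded(f, g, c1=2, c2=3, n0=5))   # True
```

No pair of constants makes the same sandwich hold for $n^3$ against $n^2$, so $n^3$ is not $\Theta(n^2)$.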

$\Omega$ gives the lower bound. If the complexity of an algorithm is $\Omega(n^2)$ then the algorithm takes at least on the order of $n^2$ time to run, i.e. it can take $2n^2, n^2 + n + 12, n^{2.0001}, n^3$, etc. time but cannot take $n, n\log n, n^{1.99999}$, etc. time.
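This mirrors the $O$ definition with the inequality flipped: $f(n) = \Omega(g(n))$ if $f(n) \ge c\,g(n)$ for some $c > 0$ and all $n \ge n_0$. A matching spot-check (again, the helper and constants are my own):

```python
def is_bounded_below(f, g, c, n0, n_max=10_000):
    """Spot-check f(n) >= c*g(n) for every n in [n0, n_max)."""
    return all(f(n) >= c * g(n) for n in range(n0, n_max))

# n^3 grows at least as fast as n^2 from n = 1 onward.
print(is_bounded_below(lambda n: n**3, lambda n: n**2, c=1, n0=1))   # True
```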

$o$ gives a **strict** upper bound. If the complexity of an algorithm is $o(n^2)$ then the algorithm takes asymptotically less than $n^2$ time to run, i.e. it can take $1, n, n\log n, n^{1.9999999}$, etc. time but cannot take $2n^2, n^2 + n, n^{2.000001}$, etc. time.

$\omega$ gives a **strict** lower bound. If the complexity of an algorithm is $\omega(n^2)$ then the algorithm takes asymptotically more than $n^2$ time to run, i.e. it can take $n^{2.00001}, n^3, n^2\log n, n^{5}$, etc. time but cannot take $2n^2, n^2 + n, n^{1.999999}$, etc. time.
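The two strict bounds are often easiest to check with limits: $f(n) = o(g(n))$ when $f(n)/g(n) \to 0$, and $f(n) = \omega(g(n))$ when $f(n)/g(n) \to \infty$. A quick numerical illustration (the example functions are my choices):

```python
import math

g = lambda n: n**2

# f in o(n^2): the ratio f(n)/g(n) shrinks toward 0, e.g. f(n) = n log n.
# f in omega(n^2): the ratio grows without bound, e.g. f(n) = n^2 log n.
for n in (10, 1_000, 100_000):
    small = (n * math.log2(n)) / g(n)       # heads toward 0
    large = (n**2 * math.log2(n)) / g(n)    # heads toward infinity
    print(f"n={n:>6}  (n log n)/n^2 = {small:.4f}   (n^2 log n)/n^2 = {large:.1f}")
```

Note that $2n^2 / n^2 = 2$ for every $n$: the ratio neither vanishes nor diverges, which is exactly why $2n^2$ belongs to neither $o(n^2)$ nor $\omega(n^2)$.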

The figure below shows these bounds for $n^2$.

Big-O analysis can seem confusing and difficult at first; it takes time and practice to master. In this tutorial series on *Analysis of Algorithms*, I try to explain the concept from scratch. Please go through the tutorials one by one and try to understand the content.