
Running Time, Growth of Functions and Asymptotic Notations

Introduction

Writing a computer program that handles a small set of data is entirely different from writing a program that takes a large amount of input data. A program written to handle a large amount of input data MUST BE algorithmically efficient in order to produce the result in reasonable time and space. In this article, I discuss some of the basics: what the running time of a program is, how we represent it, and other essentials needed for the analysis of algorithms. Please bear with me; the article is fairly long. I promise you will learn quite a few concepts here that will help you cement a solid foundation in the field of design and analysis of algorithms.

Running Time

Suppose you developed a program that finds the shortest distance between two major cities of your country. You showed the program to your friend, and they asked you, “What is the running time of your program?”. You answered promptly and proudly, “Only 3 seconds”. It sounds practical to state the running time in seconds or minutes, but is a time unit really sufficient? Did this statement fully answer the question? The answer is NO. Measuring running time like this raises many other questions, such as:

  • What’s the speed of the processor of the machine the program is running on?
  • What is the size of the RAM?
  • What is the programming language?
  • How experienced and skillful is the programmer?
  • and much more

In order to fully answer your friend’s question, you would have to say something like “My program runs in 3 seconds on an Intel Core i7 8-core 4.7 GHz processor with 16 GB of memory and is written in C++14”. Who would answer this way? Of course, no one. Running time expressed in time units depends on the computer being used, the programming language, the skill of the programmer, and so on. Therefore, expressing running time in seconds or minutes makes little sense in computer programming.

You are now convinced that “seconds” is not a good way to measure running time. The question now is: how should we represent the running time so that it is not affected by the speed of the computer, the programming language, or the skill of the programmer? In other words, how should we represent the running time so that we can abstract all those dependencies away? The answer is simple: “input size”. To solve all of these dependency problems, we represent the running time in terms of the input size. If the input size is $n$ (which is always positive), then the running time is some function $f$ of $n$, i.e.
$$\text{Running Time} = f(n)$$
The functional value of $f(n)$ gives the number of operations required to process the input with size $n$. So the running time would be the number of operations (instructions) required to carry out the given task.
The function $f(n)$ is monotonically non-decreasing. That means if the input size increases, the running time also increases or remains constant. Some examples of running times are $n^2 + 2n$, $n^3$, $3n$, $2^n$, $\log n$, etc. With this knowledge, if anyone asks you about the running time of your program, you would say “the running time of my program is $n^2$ (or $2n$, $n\log n$, etc.)” instead of “my program takes 3 seconds to run”.
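As a tiny illustration (a sketch in C; the function name sum_array is made up for this example), the loop below executes its body once per element, so its operation count grows roughly as $f(n) = n$:

// Sketch: sum the n elements of an array.
// The loop body runs n times, so the number of operations grows linearly with n.
int sum_array(int a[], int n) {
    int total = 0;
    int i;
    for (i = 0; i < n; i++) {
        total += a[i];   // one addition per element
    }
    return total;
}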

The running time is also called the time complexity.

What exactly is the input size?

Input size informally means the number of instances in the input. For example, if we talk about sorting, the size means the number of items to be sorted. If we talk about graph algorithms, the size means the number of nodes or edges in the graph.

Growth of Functions

In the previous section, I said that running times are expressed in terms of the input size ($n$). Three possible running times are $n$, $n^2$ and $n^3$. Among these three, which one is better? In other words, which function grows most slowly as the input size increases? To find this out, we need to analyze the growth of the functions, i.e. we want to find out how quickly the running time goes up as the input grows.

The easiest way of comparing different running times is to plot them and look at the nature of the graphs. The following figure shows the graphs of $n$, $n^2$ and $n^3$ (the x-axis represents the size of the input and the y-axis represents the number of operations required, i.e. the running time).

Looking at the figure above, we can clearly see that the function $n^3$ grows faster than $n$ and $n^2$. Therefore, running time $n$ is better than running times $n^2$ and $n^3$. One thing to note here is that the input size is very small. I deliberately used a small input size only to illustrate the concept. In computer science, especially in the analysis of algorithms, we do the analysis for very large input sizes.

Another way of checking whether a function $f(n)$ grows faster or slower than another function $g(n)$ is to divide $f(n)$ by $g(n)$ and take the limit as $n \to \infty$:
$$\lim_{n \to \infty}\frac{f(n)}{g(n)}$$

If the limit is $0$, $f(n)$ grows slower than $g(n)$. If the limit is $\infty$, $f(n)$ grows faster than $g(n)$. If the limit is a finite nonzero constant, both functions grow at the same rate.
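For example, comparing $n^2$ with $n^3$ using this rule gives
$$\lim_{n \to \infty}\frac{n^2}{n^3} = \lim_{n \to \infty}\frac{1}{n} = 0$$
so $n^2$ grows slower than $n^3$, which agrees with the graphs above.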

The table below shows common running times in algorithm analysis. The entries in the table are ordered from slowest-growing to fastest-growing (best to worst).

Running Time    Examples
Constant        1, 2, 100, 300, …
Logarithmic     $\log n$, $5\log n$, …
Linear          $n$, $n + 3$, $2n + 3$, …
$n\log n$       $n\log n$, $2n\log n + n$, …
Polynomial      Quadratic, cubic, or higher order
Exponential     $2^n$, $3^n$, $2^n + n^4$, …
Factorial       $n!$, $n! + n$, …
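To make one row of the table concrete, here is a sketch of binary search in C (assuming the array a is sorted in ascending order): every iteration halves the remaining range, so the loop runs about $\log n$ times, giving a logarithmic running time.

// Sketch: binary search on a sorted array of n elements.
// Each iteration halves the search range, so the loop runs about log2(n) times.
int binary_search(int a[], int n, int item) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (a[mid] == item)
            return mid;        // found the item, return its index
        else if (a[mid] < item)
            low = mid + 1;     // discard the lower half
        else
            high = mid - 1;    // discard the upper half
    }
    return -1;                 // item is not in the array
}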

The figure below shows the graphical representations of these functions (running times).

Graph showing the growth of common running times

Asymptotic Notations

Earlier I said that running times are expressed as $n^2 + 3n + 2$ or $3n$, etc. These are called the exact running time or exact complexity of an algorithm. We are rarely interested in the exact complexity of an algorithm; rather, we want to find an approximation in terms of an upper, lower, or tight bound. The $O$ (Big-O) notation gives an upper bound on the exact complexity, $\Theta$ (Big-Theta) gives a tight bound, and $\Omega$ (Big-Omega) gives a lower bound. There are two other notations, $o$ (Little-o) and $\omega$ (Little-omega), which are slight variations of $O$ and $\Omega$. All these notations are described in detail below.

$\Theta$ - notation

This notation is called Big-Theta notation. Formally, $f(n)$ is $\Theta(g(n))$ if there exist positive constants $c_1$, $c_2$, and $n_0$ such that
$$0 \le c_1g(n) \le f(n) \le c_2g(n) \text{ for all $n \ge n_0$}$$
Example: Let $g(n) = n^2$ and $f(n) = 5n^2 + 3n$. We want to prove $f(n) = \Theta(g(n))$. That means we need three constants $c_1$, $c_2$ and $n_0$ such that, for all $n \ge n_0$,
$$c_1n^2 \le 5n^2 + 3n \le c_2n^2$$
Dividing through by $n^2$ gives
$$c_1 \le 5 + 3/n \le c_2$$
If we choose $n_0 = 1$, then for all $n \ge 1$ the expression in the middle cannot be smaller than 5 (its limiting value) nor greater than 8 (its value at $n = 1$). That means any $c_1 \le 5$ and any $c_2 \ge 8$ will work; choose $c_1 = 4$ and $c_2 = 9$. The expression now becomes
$$4n^2 \le 5n^2 + 3n \le 9n^2 \text{ for all $n \ge 1$}$$
This proves $5n^2 + 3n$ is $\Theta(n^2)$. The graphs of $4n^2$, $5n^2 + 3n$ and $9n^2$ are shown below.

The figure clearly shows that $5n^2 + 3n$ is sandwiched between $4n^2$ and $9n^2$. That is why we say Big-Theta gives an asymptotically tight bound.
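If you want to convince yourself numerically (this is only an illustration, not a proof), a small C program such as the following can check the sandwich inequality over a range of $n$:

#include <stdio.h>

// Illustration only: check 4n^2 <= 5n^2 + 3n <= 9n^2 for n = 1..1000.
int main(void) {
    long n;
    for (n = 1; n <= 1000; n++) {
        long lower = 4 * n * n;
        long f     = 5 * n * n + 3 * n;
        long upper = 9 * n * n;
        if (lower > f || f > upper) {
            printf("bound fails at n = %ld\n", n);
            return 1;
        }
    }
    printf("4n^2 <= 5n^2 + 3n <= 9n^2 holds for n = 1..1000\n");
    return 0;
}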

More examples

  • $2n + 2 = \Theta(n)$
  • $4n^4 + 5n^2 + 10 = \Theta(n^4)$
  • $(5/25)n = \Theta(n)$
  • $2^n + n^{100} = \Theta(2^n)$
  • $n \ne \Theta(n^2)$
  • $\log n \ne \Theta(n)$
  • $7n^3 + 2 \ne \Theta(n^2)$

Coding Example: The following code for matrix addition runs in $\Theta(n^2)$.

// add two matrices
for (i = 0; i < rows; i++) {
    for (j = 0; j < columns; j++) {
        matrix2[i][j] = matrix1[i][j] + matrix2[i][j];
    }
}

// display the result
for (i = 0; i < rows; i++) {
    for (j = 0; j < columns; j++) {
        printf("%d ", matrix2[i][j]);
    }
    printf("\n");
}

Assume both matrices are square matrices of size $n \times n$ (i.e. rows = columns = $n$). We need two nested for loops, each running $n$ times. Also assume each addition takes only constant time $1$. The total time taken by the two nested loops is therefore $1 \times n \times n = n^2$. Similarly, displaying the result takes another $n^2$ time. The total time is
$$n^2 + n^2 = 2n^2$$
We can easily show $2n^2 = \Theta(n^2)$ using the technique discussed above.

O - notation

This notation is also called Big-O notation. Formally, $f(n)$ is $O(g(n))$ if there exist positive constants $c$ and $n_0$ such that
$$f(n) \le cg(n) \text{ for all $n \ge n_0$}$$

Big-O gives the asymptotic upper bound of a function. $f(n) = O(g(n))$ means $g(n)$ defines an upper bound: $f(n)$ has to be less than or equal to $cg(n)$ for some constant $c$ and all sufficiently large $n$.
Example: Let $g(n) = n^3$ and $f(n) = 50n^3 + 10n$. We want to prove $f(n)= O(g(n))$. To prove this, we need two constants $c$ and $n_0$ such that the following relation holds for all $n \ge n_0$
$$50n^3 + 10n \le cn^3$$
Dividing through by $n^3$ gives
$$50 + \frac{10}{n^2} \le c$$
If we choose $n_0 = 1$, then the maximum value the left-hand side can take for $n \ge 1$ is 60. Choose $c = 61$ and we are done. We have found two constants, $c = 61$ and $n_0 = 1$. Therefore we can write $50n^3 + 10n = O(n^3)$. The graphs of the functions $50n^3 + 10n$ and $61n^3$ are shown in the figure below.

The graph above clearly shows that the function $50n^3 + 10n$ is bounded from above by the function $61n^3$ for all values of $n \ge 1$.

More Examples

  • $\frac{1}{2}n^2 + 2n + 1 = O(n^2)$
  • $n = O(n\log n)$
  • $100000n = O(n^{1.00001})$
  • $\log n = O(n)$
  • $n = O(n^4)$
  • $n^4 = O(2^n)$
  • $n^2 \ne O(n)$
  • $3n + 4 \ne O(\log n)$

Code Example: The matrix addition code above also runs in $O(n^2)$ (please try to prove it yourself). The following code example runs in $O(n)$.

int search(int a[], int n, int item) {
    int i;
    for (i = 0; i < n; i++) {
        if (a[i] == item) {
            return i;
        }
    }
    return -1;
}

The function search returns the index of the item if the item is in the array and -1 otherwise. The running time varies depending on where in the array the item is located. If it is located in the very first position, the running time is 1 (best case), and if it is located in the last position, the running time is $n$ (worst case). But even in the worst case, the running time cannot exceed $n$. So we can say that the worst-case running time of this algorithm is $O(n)$. The running time of the above function could also be written as $O(n^2)$, since anything that is $O(n)$ is also $O(n^2)$, but we never write it this way. Once we know the running time cannot grow beyond $n$, we write $O(n)$.
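A minimal usage sketch for the function above (the array contents are made up for illustration, and search is assumed to be defined in the same file):

#include <stdio.h>

int main(void) {
    int a[] = {7, 3, 9, 1, 5};
    int n = 5;
    printf("%d\n", search(a, n, 5));   // prints 4: found in the last position (worst case for a hit)
    printf("%d\n", search(a, n, 8));   // prints -1: not found, the loop inspects all n elements
    return 0;
}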

$\Omega$ - notation

This notation is called Big-Omega notation. Formally, $f(n)$ is $\Omega(g(n))$ if there exist positive constants $c$ and $n_0$ such that
$$f(n) \ge cg(n) \text{ for all $n \ge n_0$}$$

Big-$\Omega$ gives the asymptotic lower bound of a function. $f(n) = \Omega(g(n))$ means $g(n)$ defines a lower bound: $f(n)$ has to be greater than or equal to $cg(n)$ for some constant $c$ and all sufficiently large $n$.

Example: Let $g(n) = n^2$ and $f(n) = 10n^2 + 14n + 10$. We want to prove $f(n)= \Omega(g(n))$. To prove this, we need two constants $c$ and $n_0$ such that the following relation holds for all $n \ge n_0$
$$10n^2 + 14n + 10 \ge cn^2$$
Dividing through by $n^2$ gives
$$10 + \frac{14}{n} + \frac{10}{n^2} \ge c$$
If we choose $n_0 = 1$, then the left-hand side is always greater than 10 for all $n \ge 1$. Choose $c = 9$ and we are done. We have found two constants, $c = 9$ and $n_0 = 1$. Therefore we can write $10n^2 + 14n + 10 = \Omega(n^2)$. The graphs of the functions $10n^2 + 14n + 10$ and $9n^2$ are shown in the figure below.

The graph above clearly shows that the function $10n^2 + 14n + 10$ is bounded from below by the function $9n^2$ for all values of $n \ge 1$.

More Examples:

  • $n^{2.001} = \Omega(n^2)$
  • $5n^2 + 5 = \Omega(n^2)$
  • $n\log n = \Omega(n)$
  • $2^n = \Omega(n^{100})$
  • $n^2 \ne \Omega(n^3)$

Coding Example: Take any comparison-based sorting algorithm. The worst-case running time of all such algorithms is $\Omega(n\log n)$.
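For instance, insertion sort, sketched below, is a comparison-based sort; its worst-case running time is $\Theta(n^2)$, which is consistent with (i.e. no better than) the $\Omega(n\log n)$ lower bound, while sorts such as merge sort and heapsort actually achieve $\Theta(n\log n)$.

// Sketch: insertion sort, a comparison-based sorting algorithm.
// In the worst case (reverse-sorted input) it performs about n^2/2 comparisons.
void insertion_sort(int a[], int n) {
    int i, j, key;
    for (i = 1; i < n; i++) {
        key = a[i];
        j = i - 1;
        while (j >= 0 && a[j] > key) {   // each element comparison counts toward the bound
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}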

o - notation

This notation is called Little-o notation. We use o-notation to denote an upper bound that is not asymptotically tight. Formally, $f(n)$ is $o(g(n))$ if for every constant $c > 0$ there exists a constant $n_0 > 0$ such that
$$f(n) < cg(n) \text{ for all $n \ge n_0$}$$

The definitions of O-notation and o-notation are similar. The main difference is that in $f(n) = O(g(n))$, the bound $f(n) \le cg(n)$ holds for some constant $c > 0$, but in $f(n) = o(g(n))$, the bound $f(n) < cg(n)$ holds for all constants $c > 0$.

Examples:

  • $2n = o(n^2)$
  • $2n^2 + 5n \ne o(n^2)$

Alternatively, $f(n)$ is $o(g(n))$ if
$$\lim_{n \to \infty}\frac{f(n)}{g(n)} = 0$$
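For example, with $f(n) = 2n$ and $g(n) = n^2$,
$$\lim_{n \to \infty}\frac{2n}{n^2} = \lim_{n \to \infty}\frac{2}{n} = 0$$
which confirms $2n = o(n^2)$, as listed above.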

$\omega$-notation

This notation is called Little-omega notation. We use $\omega$-notation to denote a lower bound that is not asymptotically tight. Formally, $f(n)$ is $\omega(g(n))$ if for every constant $c > 0$ there exists a constant $n_0 > 0$ such that
$$f(n) > cg(n) \text{ for all $n \ge n_0$}$$

Examples:

  • $n^2/2 = \omega(n)$
  • $n^3 + 2n^2 = \omega(n^2)$
  • $n\log n = \omega(n)$
  • $n^2/2 \ne \omega(n^2)$

Alternatively, $f(n)$ is $\omega(g(n))$ if
$$\lim_{n \to \infty}\frac{f(n)}{g(n)} = \infty$$
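For example, with $f(n) = n^2/2$ and $g(n) = n$,
$$\lim_{n \to \infty}\frac{n^2/2}{n} = \lim_{n \to \infty}\frac{n}{2} = \infty$$
which confirms $n^2/2 = \omega(n)$.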

Last but not least

  • All the analysis we do of algorithms holds only for large inputs; it can be misleading when applied to small inputs.
  • When your program has only a small number of input instances, do not worry about the complexity. Use the algorithm that is easier to code.
  • Most people use $O$-notation instead of $\Theta$-notation even when $\Theta$ would be more appropriate. This is not wrong, because any running time that is $\Theta(g(n))$ is also $O(g(n))$.
