Introduction
In computer science, when a function (or method or subroutine) calls itself, we call it recursion. Most programming languages support recursion, and it's one of the fundamental concepts you need to master while learning data structures and algorithms. Recursion is the key to the divide-and-conquer paradigm, where we divide a bigger problem into smaller pieces, solve the smaller pieces individually, and combine the results. Recursion is used heavily in graphs and trees, and in almost all data structures that have a parent-child relationship.
Why is recursion so useful?
Many problems we encounter in computer science are of recursive nature. Consider an example of finding a factorial of a number $n$. The formula for calculating the factorial of $n$ is given below.
$$n! = n \times (n - 1) \times (n - 2) \times \dots \times 1$$
If you observe the above equation carefully, you will notice that everything after the first $n$ is the factorial of $n - 1$. So the equation can be written as
$$n! = n \times (n - 1)!$$
The equation above says that to find the value of $n!$, we first need to find the value of $(n - 1)!$. Once we have the value of $(n - 1)!$, we simply multiply it by $n$ and we are done! Now, how do we calculate the value of $(n - 1)!$? We repeat the same process, i.e.
$$(n - 1)! = (n - 1) \times (n - 2)!$$
We go on like this. Now the important question is: how far do we go, or when do we stop? We stop when we can find the factorial trivially. We know the factorial of 0 is 1, so we keep the process going until we reach 0. This is called the base case (or stopping criterion) and is mandatory for every recursive program. If we do not have a base case, the process goes on forever.
So, to find the value of $n!$, we find the value of $0!$ trivially. Then we find the value of $1!$ by multiplying the result of $0!$ by 1. We find the value of $2!$ by multiplying the value of $1!$ by 2, and so on. Basically, we divide the bigger problem (finding the factorial of $n$) into smaller problems that can be solved trivially (i.e. $0! = 1$) and solve the bigger problem by gluing together the results of the smaller problems.
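Concretely, for $n = 4$ this bottom-up gluing looks like the following expansion:

$$4! = 4 \times 3! = 4 \times (3 \times 2!) = 4 \times (3 \times (2 \times 1!)) = 4 \times (3 \times (2 \times (1 \times 0!))) = 24$$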
Using the very same idea, we can solve many problems in computer science. Some of them are: quick sort, merge sort, FFT, closest pair of points, convex hull and many more.
How does it work?
We learned what recursion is and why it is useful. Now let us learn how it works, i.e. how it is implemented in programming languages. Knowing how it works helps you better understand recursive programs, write them, and, most importantly, think of a recursive solution to a problem. Before understanding how recursion works, we first need to understand how function calls work.
Activation Record (Frame) and Call Stack
When a function is called, a data structure called an Activation Record (also called a Frame) is created and pushed onto the call stack. The simplified structure of an activation record is shown in the figure below.
It contains three major parts: (i) Local variables: all the local variables declared inside the body of the function; (ii) Pointer to the caller: the location in memory from where the function was called; and (iii) Function parameters: the formal parameters of the function. When a function calls another function, the activation record of the callee sits on top of the activation record of the caller. When a function returns, its activation record is popped from the stack, its local variables and parameters are destroyed, and control goes back to the caller.
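Conceptually, an activation record can be pictured as a struct like the one below. This is a simplification of my own for illustration: real stack frames are laid out by the compiler and the platform's calling convention, not declared as C structs.

```c
/* A conceptual sketch of an activation record. Field names and sizes
   are illustrative only; the actual frame layout is decided by the
   compiler and the calling convention. */
struct activation_record {
    int   local_variables[4];  /* locals declared in the function body */
    void *caller_location;     /* where to resume in the caller's code */
    int   parameters[4];       /* the function's formal parameters     */
};
```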
Consider a simple C program with two functions, `main` and `sum`.
```c
#include <stdio.h>

int sum(int x, int y) {
    int z = x + y;
    return z;
}

int main() {
    int a = 5, b = 10;       /* example values, chosen for illustration */
    int result;
    result = sum(a, b);
    printf("%d\n", result);  /* prints 15 */
    return 0;
}
```
When the above program runs, an activation record for the `main` function is created and pushed onto the stack, with `a`, `b` and `result` in its local variable section. When `main` calls the function `sum`, an activation record for `sum` is created and placed on top of the stack. The call stack looks as follows
The `sum` function calculates the result and returns. As soon as `sum` returns, its activation record is popped from the stack and all of its variables are destroyed. The `main` function resumes execution and assigns the returned value to the `result` variable. Finally, `main` also returns and the stack becomes empty.
Back to recursion
We now know how a function call from one function to another is handled. In the case of recursion, the exact same thing happens. The only difference is that the caller and the callee are the same function. Let us try to understand how the factorial program described above works. The C code for the factorial function is given below (I have changed the base condition to 2 to keep the call stack figure small).
```c
#include <stdio.h>

int fact(int n) {
    if (n == 2)              /* base condition (changed from 0 to 2) */
        return 2;
    return n * fact(n - 1);
}

int main() {
    printf("%d\n", fact(5)); /* prints 120 */
    return 0;
}
```
When you run the program, the `main` function calls the `fact` function with `n = 5`. The `fact` function calls itself, each time reducing the value of $n$ by 1, until the base condition is satisfied. The call stack is shown in the figure below.
When control reaches the base condition, the function returns the value 2 and its record is popped from the stack. The returned value is multiplied by 3, that result is returned, and so on. Finally, all the records for the `fact` function are popped off the stack and we get the final result in the `main` function.
Drawbacks of recursion
Although recursion is intuitive and feels more natural for many problems, it has several drawbacks. Some of them are:
- It is often slower than the iterative approach. For each recursive call, an activation record is created, pushed onto the stack, and popped when the function returns. This adds extra overhead to the program.
- It uses more stack space than the iterative approach, and deep recursion can overflow the stack.
- It can be hard to understand, especially when a function makes more than one recursive call.
- For some programs, it takes really long to run. For example, the naive recursive solution for Fibonacci numbers is far slower than its iterative counterpart, because it recomputes the same subproblems over and over.
So, try to avoid recursion as much as possible when you are writing simple programs.
When should we use recursion?
- When you are dealing with graphs or solving a problem like sorting, the recursive solution captures the idea more intuitively.
- Recursive solutions usually require less code, and the code looks more elegant and clean.
- It helps to calculate the complexity of many algorithms (at least theoretically) by solving recurrence relations.
- We can use mathematical induction to prove the correctness of a recursive program.
Please note that every recursive program can be replaced by an iterative solution and vice versa.