
# Recursion

Have you ever seen a set of Russian dolls? At first, you see just one figurine, usually made of painted wood.
You can remove the top half of the first doll, and what do you see inside? Another, slightly smaller, Russian doll!
You can remove that doll and separate its top and bottom halves. And you see yet another, even smaller, doll.
And once more.
And you can keep going. Eventually you find the teeniest Russian doll. It is just one piece, and so it does not open.
We started with one big Russian doll, and we saw smaller and smaller Russian dolls, until we saw one that was so small that it could not contain another.
What do Russian dolls have to do with algorithms? Just as one Russian doll has within it a smaller Russian doll, which has an even smaller Russian doll within it, all the way down to a tiny Russian doll that is too small to contain another, we'll see how to design an algorithm to solve a problem by solving a smaller instance of the same problem, unless the problem is so small that we can just solve it directly. We call this technique recursion.
Recursion has many, many applications. In this module, we'll see how to use recursion to compute the factorial function, to determine whether a word is a palindrome, to compute powers of a number, to draw a type of fractal, and to solve the ancient Towers of Hanoi problem. Later modules will use recursion to solve other problems, including sorting.
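As a small preview of the technique, here is the Russian-doll idea in code: a sketch of a recursive factorial function (the function name and JavaScript syntax are our own choices for illustration).

```javascript
// Recursive factorial: solve n! by solving the smaller instance (n - 1)!,
// until the problem is so small (n === 0) that we can solve it directly.
function factorial(n) {
    if (n === 0) {                    // the "teeniest doll": solved directly
        return 1;
    }
    return n * factorial(n - 1);      // a smaller instance of the same problem
}

console.log(factorial(5)); // → 120
```

Each call opens one "doll" and hands the rest of the problem to a smaller call, until the smallest case is reached and the answers multiply back up.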

This content is a collaboration of Dartmouth Computer Science professors Thomas Cormen and Devin Balkcom, plus the Khan Academy computing curriculum team. The content is licensed CC-BY-NC-SA.

## Want to join the conversation?

• I was wondering when you should use recursion instead of loops. Which is more efficient? Thanks! •   Given the same algorithm, the iterative version will generally be faster than the recursive version.

The big reason is that function calls (used in recursion) are expensive operations. A function call requires recording the current state of its variables and making a copy of them in stack memory (which is a limited resource), so that they can be restored when the function returns. (Some languages use tail call optimization to avoid this cost in the special case of recursion known as tail recursion.)

However, for many problems, recursion is a much more intuitive approach. As a result, developers will often start with a recursive version and then convert it to an iterative version only if they need to get every last bit of performance out of the program.
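To sketch the kind of conversion described above (function names are our own), here is the same computation, the sum 1 + 2 + … + n, written recursively and then rewritten as a loop:

```javascript
// Recursive version: intuitive, but each call pushes a new stack frame.
function sumToRecursive(n) {
    if (n === 0) {
        return 0;                      // base case: empty sum
    }
    return n + sumToRecursive(n - 1);  // smaller instance of the same problem
}

// Iterative version: the same algorithm rewritten with a loop,
// so the call stack does not grow with n.
function sumToIterative(n) {
    var total = 0;
    for (var i = 1; i <= n; i++) {
        total += i;
    }
    return total;
}

console.log(sumToRecursive(100)); // → 5050
console.log(sumToIterative(100)); // → 5050
```

Both compute the same result; the iterative version trades a little readability for constant stack usage.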
• Is recursion one of the techniques of divide and conquer algorithm? •  Recursion is a common technique used in divide and conquer algorithms.

The most common example of this is merge sort, which recursively divides an array down to single elements; those pieces are then "conquered" by merging them back together in the proper order as the recursive calls return.
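A minimal sketch of that divide-and-conquer structure (our own implementation, not from the article's later modules):

```javascript
// Merge sort: recursively split until single elements, then merge
// the sorted halves back together.
function mergeSort(array) {
    if (array.length <= 1) {           // base case: already sorted
        return array;
    }
    var mid = Math.floor(array.length / 2);
    var left = mergeSort(array.slice(0, mid));   // divide
    var right = mergeSort(array.slice(mid));
    return merge(left, right);                   // conquer
}

// Merge two sorted arrays into one sorted array.
function merge(left, right) {
    var result = [];
    var i = 0, j = 0;
    while (i < left.length && j < right.length) {
        if (left[i] <= right[j]) {
            result.push(left[i++]);
        } else {
            result.push(right[j++]);
        }
    }
    // One of the two slices below is empty; append the leftovers.
    return result.concat(left.slice(i)).concat(right.slice(j));
}

console.log(mergeSort([5, 2, 4, 6, 1, 3])); // → [1, 2, 3, 4, 5, 6]
```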
• Isn't it the same as binary search? • Recursion is a separate idea from a type of search like binary search. Binary searches can be performed using iteration or using recursion, and there are many different implementations of each algorithm. A recursive implementation and an iterative implementation do exactly the same job, but the way they do the job is different. Recursion involves a function that calls itself. Iteration uses a counter to track through a structure or to complete an operation n times.
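To make that distinction concrete, here is binary search written both ways over a sorted array (names and parameter defaults are our own choices); each returns the index of the target, or -1 if it is absent:

```javascript
// Recursive implementation: the function calls itself on a smaller range.
function binarySearchRecursive(array, target, low, high) {
    if (low === undefined) { low = 0; }
    if (high === undefined) { high = array.length - 1; }
    if (low > high) {
        return -1;                             // empty range: not found
    }
    var mid = Math.floor((low + high) / 2);
    if (array[mid] === target) {
        return mid;
    } else if (array[mid] < target) {
        return binarySearchRecursive(array, target, mid + 1, high);
    } else {
        return binarySearchRecursive(array, target, low, mid - 1);
    }
}

// Iterative implementation: a loop narrows the same range using
// a pair of index variables instead of recursive calls.
function binarySearchIterative(array, target) {
    var low = 0, high = array.length - 1;
    while (low <= high) {
        var mid = Math.floor((low + high) / 2);
        if (array[mid] === target) {
            return mid;
        } else if (array[mid] < target) {
            low = mid + 1;
        } else {
            high = mid - 1;
        }
    }
    return -1;
}
```

Same algorithm, same answers; only the mechanism for shrinking the search range differs.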
• Is recursion just like a box in a box in a box in another box kind of thing? • How is recursion used in programming?
• I will agree that recursion is cool and easy to understand, but given that it makes for less efficient algorithms due to all that stack pushing and popping during function calls, it seems to me it is better to use loops. Are there any reasonable exceptions to that? • A thesis in the mid '60s proved that any recursive function can be recast as loops and vice versa. However, the loop variants often require the same memory as the recursive variants. So you can spend your memory on the stack or in an array... Your choice.

A recursive algorithm and its loop-based equivalent have the same asymptotic order of growth, so efficiency is rarely a deciding concern. That said, most optimizing compilers do "tail recursion" elimination as a matter of course.

Things like parsing and searching would be next to impossible to code without recursion.
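As an illustration of the tail recursion mentioned above (a sketch in our own names; note that most JavaScript engines do not actually perform tail call elimination, so treat this as a demonstration of the pattern rather than a guaranteed optimization):

```javascript
// Tail-recursive factorial: the recursive call is the very last operation,
// with the running result carried in an accumulator. A compiler that
// eliminates tail calls can turn this into a loop with no stack growth.
function factorialTail(n, acc) {
    if (acc === undefined) { acc = 1; }   // accumulator starts at 1
    if (n === 0) {
        return acc;
    }
    return factorialTail(n - 1, n * acc); // nothing left to do after the call
}

console.log(factorialTail(5)); // → 120
```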
• Hi can somebody help me with Dynamic Programming especially the part where you have come up with a recurrence / formula to solve the problem for ex in 0-1 knapsack problem or rod cutting problem etc.

ps: I know I shouldn't be asking this question here but I cannot find a specific topic related to dynamic programming • For the knapsack problem try this:

Let O be set of possible objects. An object o(i) has value v(i), and weight w(i). o(0) [o(zero)] is the null object with no value and no weight and is an element of every subset of O. Let O[-i] be the set of possible objects with object o(i) removed. The max weight of the knapsack is W. The weight of the knapsack at each stage is w and remaining weight is reduced by the weight of the selected object: w-w(i)
Setup:
w = W
The Bellman equation for the optimal value has the following form:
``c(O, w) = max {v(i) + c(O[-i], w - w(i))}    (∀ o(i) ∈ O with w(i) <= w)``

The recursion terminates and returns a value of zero when the remaining object set is empty, or when w is less than the weight w(i) of every remaining object.
For each object you get its value and create the subproblem excluding that object and with the available max weight reduced by the excluded object's weight.
The value returned from a given level is the max value of an item plus the value from the subproblem (which is the max value of the subproblem).
The 0-1 aspect is that an item is either selected at some level or that the null item from that level is selected and the subproblem without that item returns the max value.

The actual code that implements this would have an array of selected items that is augmented with each recursion's return (or some other "bookkeeping" technique).
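A direct transcription of that recurrence (exponential-time, without the memoization that dynamic programming would add; the names and the `{value, weight}` representation are our own choices):

```javascript
// 0-1 knapsack by plain recursion: objects is an array of {value, weight},
// w is the remaining capacity. Returns the maximum achievable value.
function knapsack(objects, w) {
    var best = 0;  // taking nothing (the "null object") is always allowed
    for (var i = 0; i < objects.length; i++) {
        var o = objects[i];
        if (o.weight <= w) {
            // Take object i, then solve the subproblem with that object
            // removed and the capacity reduced by its weight: O[-i], w - w(i).
            var rest = objects.slice(0, i).concat(objects.slice(i + 1));
            best = Math.max(best, o.value + knapsack(rest, w - o.weight));
        }
    }
    return best;
}

var items = [
    { value: 60,  weight: 10 },
    { value: 100, weight: 20 },
    { value: 120, weight: 30 }
];
console.log(knapsack(items, 50)); // → 220 (take the 100 and 120 items)
```

Memoizing on the remaining set and capacity is what turns this recurrence into the usual dynamic-programming solution.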
• What is the significance of the Russian dolls? • So if I was tasked to write a recursive function in Python, for example mult(L,K),
where both L and K are ints, would I have to start at the base case of the function (the tiniest doll) and work my way to the recursive call? • Let's assume that K >= 0.
You would identify a base case: a problem you can solve trivially.

Base case:
mult(L, 0) = 0
so we would use:
```
if (K === 0) {
    return 0;
}
```

You would also identify how you could solve your problem by using the result of a smaller problem (by smaller we mean that the subproblems converge towards the base case).
For K > 0 we have:
mult(L, K) = mult(L, K - 1) + L
so for K > 0 we would use:
```
return mult(L, K - 1) + L;
```

Note that the above is only to illustrate recursion. It would be a very inefficient way to multiply two numbers together.
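Assembled into one runnable function (assuming K >= 0, as stated above; the pieces are exactly the base case and recursive case already shown):

```javascript
// Recursive multiplication by repeated addition (assumes K >= 0).
function mult(L, K) {
    if (K === 0) {              // base case: anything times 0 is 0
        return 0;
    }
    return mult(L, K - 1) + L;  // recursive case: L*K = L*(K-1) + L
}

console.log(mult(7, 4)); // → 28
```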

Hope this makes sense 