We can define the factorial of a number as follows:

$x! = x \cdot (x - 1) \cdot (x - 2) \cdots 2 \cdot 1$

If we want to write a Java method that computes the factorial of a number, we can do it with a loop like so:

```
public static int fact(int x) {
    int product = 1;
    while (x > 1) {
        product = product * x;
        x--;
    }
    return product;
}
```

Alternatively we could define the factorial of a number recursively like so:

$x! = x \cdot (x - 1)!$

What "recursion" really means is that we define something in terms of itself. Here we say that the factorial of a number x is equal to x times the factorial of x - 1. If we use this definition for 5!, we would have:

$5!= 5 \cdot 4!$

Then replacing 4!, we get: $5! = 5 \cdot 4 \cdot 3!$

If we keep going, we'll get: $5! = 5 \cdot 4 \cdot 3 \cdot 2!$

Next, we have: $5! = 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1!$

If we keep expanding like this, however, we will never be able to stop: $5! = 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 \cdot 0 \cdot -1 \cdot -2 \cdots$

Worse, once the factor of 0 appears, the whole product collapses to zero, which is certainly not what we want.

Recursion like this can turn into a circular definition that is not helpful. For recursion to work and make sense, we have to specify when the recursion stops. In the factorial definition, we can say that the factorial of 0 is 1, and otherwise it's the recursive definition:

\[ x! = \begin{cases} 1, & \text{when } x = 0 \\ x \cdot (x - 1)!, & \text{otherwise} \\ \end{cases} \]

Now when we expand a factorial, we will stop at x = 0, and just use the value 1 instead of expanding on forever. Mathematicians frequently use recursive definitions like this.
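For example, expanding 5! with this definition now terminates at the base case:

$5! = 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 \cdot 0! = 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 \cdot 1 = 120$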

We can use recursive definitions like this to write code as well. To do this, we just have a method that calls itself. Following is a factorial method based on the recursive definition:

```
public static int fact(int x) {
    if (x == 0) {
        return 1;
    } else {
        return x * fact(x - 1);
    }
}
```

Notice that the method calls itself — that's what makes it recursive.

If we call fact(4), then we will start to execute the fact method with 4 as a parameter. In doing so, we call fact(3) which then begins to execute. Then fact(2) gets called which calls fact(1) which calls fact(0).

fact(0) stops the recursion by just returning a value of 1 directly. Then the program goes back to where it left off in fact(1) and multiplies 1 * 1 and returns 1. Then we are back where we left off in fact(2), which multiplies 2 by the result of fact(1), or 1, giving 2.

fact(2) then returns that 2 up to fact(3), which multiplies 3 by the 2 to get 6. The 6 is then returned up to fact(4), which multiplies 4 by the result of fact(3), or 6, giving 24. Then fact(4) returns to where it was called with the final answer of 24.
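We can see the end of this trace by calling the recursive method from a small main method (the class name FactDemo is just for this example):

```java
public class FactDemo {
    public static int fact(int x) {
        if (x == 0) {
            return 1;
        } else {
            return x * fact(x - 1);
        }
    }

    public static void main(String[] args) {
        System.out.println(fact(4)); // prints 24
    }
}
```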

This program can be found here: Fact.java

If the factorial method forgets to check for the 0 base case, and always calls itself like so:

```
public static int fact(int x) {
    return x * fact(x - 1);
}
```

Then we run into something called infinite recursion. Infinite recursion is like an infinite loop, except that it will eventually crash the program:

```
Exception in thread "main" java.lang.StackOverflowError
	at Fact.fact(Fact.java:9)
	at Fact.fact(Fact.java:9)
	at Fact.fact(Fact.java:9)
	at Fact.fact(Fact.java:9)
	... (about 1000 identical lines removed)
	at Fact.fact(Fact.java:9)
	at Fact.fact(Fact.java:9)
	at Fact.fact(Fact.java:9)
```

The stack trace consists of 1024 calls to this method before the virtual machine killed the program. Recall that whenever a method is called, a stack frame is pushed onto the stack. The stack is only so big, so if we keep pushing stack frames over and over, without popping any, the stack will overflow.

Whenever writing a recursive program, it's important to make sure that there is always a "base case" that returns something directly. The recursive case should then move us closer to the base case so that the recursion will eventually stop. Following are some other examples of recursive methods.

It's actually the case that *any* programming problem can be solved
with recursion. There are no cases where you actually need to use loops;
everything can be done with recursion instead. There are programming
languages (such as Haskell) that have no loop statements at all.

In Java, it's commonplace to use loops most of the time, but there are some cases when recursion is the more natural option. We'll look today at using recursion to solve some problems we'd normally use loops for — just to get the hang of using recursion.

For instance, we can use recursion to sum an array of numbers with code like this:

```
public static int sum(int[] numbers, int size) {
    if (size == 0) {
        return 0;
    }
    int last = numbers[size - 1];
    int rest = sum(numbers, size - 1);
    return last + rest;
}
```

This works by first including a base case. If the array is size 0, the sum must be 0 (since there are no numbers at all). Remember that we always have to have the base case!

Next we "peel off" the last number of the array into the last variable.
Then we recursively add up all of the numbers *except* the last one
into the variable called rest.

Then we simply add the two together. The sum of an array is equal to the last element plus the sum of all the rest of the elements.
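Putting this together, a short usage example might look like this (the class name SumDemo and the sample numbers are made up for this example):

```java
public class SumDemo {
    public static int sum(int[] numbers, int size) {
        if (size == 0) {
            return 0;
        }
        int last = numbers[size - 1];
        int rest = sum(numbers, size - 1);
        return last + rest;
    }

    public static void main(String[] args) {
        int[] numbers = {3, 1, 4, 1, 5};
        // pass the full size to sum the whole array
        System.out.println(sum(numbers, numbers.length)); // prints 14
    }
}
```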

This is available in Sum.java.

A palindrome is a word that reads the same backwards as forwards. For example, "step on no pets" or "never odd or even" are palindromes.

Let's write a method to test if a string is a palindrome or not. We can base it on the following recursive idea:

"A phrase is a palindrome if the first and last letters are the same, and if the rest of the phrase is also a palindrome."

Here is this implemented with Java:

```
public static boolean isPalindrome(String str, int first, int last) {
    if (last <= first)
        return true;
    if (str.charAt(first) != str.charAt(last))
        return false;
    return isPalindrome(str, first + 1, last - 1);
}

public static boolean isPalindrome(String str) {
    return isPalindrome(str, 0, str.length() - 1);
}
```

This is actually two methods. The first one does the work, but it needs to know some extra information, so it has extra parameters to keep track of where it is in the string. The second one is just a "helper method" that supplies the default values to let us call the method on just a String.

The recursive method uses "first" and "last" to keep track of where we are checking. first starts at 0, and moves up. last starts at the end of the String and moves down. If they meet in the middle, then we have gone through the whole String, so it must be a palindrome. That's the base case here.

Next, we check if the characters on either end don't match. If they don't, then it must not be a palindrome.

Finally, we recurse. We do that by calling ourselves again and moving our first and last markers.
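For instance, we can exercise both methods like this (the class name PalindromeDemo and the test words are just for this example):

```java
public class PalindromeDemo {
    public static boolean isPalindrome(String str, int first, int last) {
        if (last <= first)
            return true;
        if (str.charAt(first) != str.charAt(last))
            return false;
        return isPalindrome(str, first + 1, last - 1);
    }

    public static boolean isPalindrome(String str) {
        return isPalindrome(str, 0, str.length() - 1);
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome("racecar")); // prints true
        System.out.println(isPalindrome("hello"));   // prints false
    }
}
```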

This example is available as Palindrome.java.

There is a myth that there is a temple with three large towers. On the first tower are 64 golden rings, each one smaller than the one under it. Monks at the temple are trying to move all of the rings from the first tower to the third tower.

However, they must obey the rule that no ring can be placed on a ring that is smaller than it. To move the rings, they must use the second tower as a place holder while obeying the rule.

The myth states that when the monks are finished, and have moved all 64 rings to the third tower, the world will end.

We want to write a program that will figure out how to move all of the rings over, one by one.

Thinking of this recursively, to move any disc from the left tower to the right tower, we would first have to move all of the discs above it to the middle tower, then move the disc we want to the right, and then move all of the discs from the middle tower back on top of it on the right.

This is recursive because in defining how to move a disc, we say that we have to move other discs. This idea can be used to come up with a recursive algorithm like so:

```
To move a disc from left to right, using middle:
    First move all discs above us to the middle, using right.
    Then move this disc from the left to the right.
    Lastly move all discs from the middle to the right, using left.
```
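A minimal Java sketch of this algorithm might look like the following (the class and method names here are invented for illustration; Towers.java is the actual program):

```java
public class Hanoi {
    // move 'count' discs from one tower to another, printing each move;
    // returns the total number of moves made
    public static int move(int count, String from, String to, String via) {
        if (count == 0) {
            return 0; // no discs to move: the base case
        }
        int moves = 0;
        // first move the discs above us out of the way, onto the spare tower
        moves += move(count - 1, from, via, to);
        // then move this disc directly
        System.out.println("move a disc from " + from + " to " + to);
        moves += 1;
        // lastly move the other discs back on top of it
        moves += move(count - 1, via, to, from);
        return moves;
    }

    public static void main(String[] args) {
        // 3 rings take 2^3 - 1 = 7 moves
        System.out.println(move(3, "left", "right", "middle") + " moves total");
    }
}
```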

This program demonstrates this technique to solve the Towers of Hanoi: Towers.java.

This method calls itself *twice* each time through. This means the
total number of steps needed *doubles* for each additional ring that we
add. The number of moves is given in the following table:

| Rings | Steps |
|-------|-------|
| 1 | 1 |
| 2 | 3 |
| 3 | 7 |
| 4 | 15 |
| 5 | 31 |
| 6 | 63 |
| 7 | 127 |
| 8 | 255 |
| 9 | 511 |
| 10 | 1023 |
| ... | ... |
| 64 | 18446744073709551615 |

The number of steps is defined by the following formula:

$steps(rings) = 2^{rings} - 1$

This means that this algorithm is $O(2^n)$. This is called an exponential algorithm, and is completely impractical for large input sizes.

For a small number of discs, this doesn't take too terribly long. On
my computer, extrapolating from how long smaller numbers of discs take, it
would take roughly 17 *million years* for the program to complete if we try
all 64. Given that the monks are probably a lot slower than a computer, the
world won't be ending any time soon :).

The Fibonacci numbers are a sequence of numbers starting with {1, 1}. Each subsequent number is then the sum of the two previous. We can find Fibonacci numbers with a loop, but we can also use recursion.

A recursive definition of the Fibonacci sequence is as follows:

$fib(1) = 1$

$fib(2) = 1$

$fib(x) = fib(x - 1) + fib(x - 2)$

We can use this definition to get the following method:

```
public static int fib(int number) {
    if (number < 3) {
        return 1;
    } else {
        return fib(number - 1) + fib(number - 2);
    }
}
```

This method is also $O(2^n)$. As you can see, every call to the method expands into two more, so the number of calls explodes exponentially.
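One way to see this explosion is to count the calls ourselves (the FibCalls class and its calls counter are made up for this experiment):

```java
public class FibCalls {
    static long calls = 0;

    public static int fib(int number) {
        calls++; // count every time the method runs
        if (number < 3) {
            return 1;
        } else {
            return fib(number - 1) + fib(number - 2);
        }
    }

    public static void main(String[] args) {
        System.out.println(fib(20));  // prints 6765
        System.out.println(calls);    // prints 13529, even for just fib(20)
    }
}
```

In general this version makes $2 \cdot fib(n) - 1$ calls to compute $fib(n)$, so the call count grows as fast as the Fibonacci numbers themselves.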

This is a case where the most natural recursive solution is not very
efficient. Just because this particular algorithm is $O(2^n)$ doesn't mean
that this problem is hard to solve. Remember that Big-O complexities are for
*algorithms*, not *problems*. Some problems don't have efficient
solutions, but this one happens to. There is a $O(n)$ solution, which we can
write with a loop like this:

```
public static int fib(int number) {
    int a = 1;
    int b = 1;
    int n = 1;
    for (int i = 2; i < number; i++) {
        n = a + b;
        a = b;
        b = n;
    }
    return n;
}
```

This Fibonacci method runs *far* faster. This algorithm is
$O(n)$, which you can see from the fact that there is one loop which
runs up to the number given.
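As a quick sanity check (the class name FibLoop is just for this example), we can call this looping version directly:

```java
public class FibLoop {
    public static int fib(int number) {
        int a = 1;
        int b = 1;
        int n = 1;
        for (int i = 2; i < number; i++) {
            n = a + b;
            a = b;
            b = n;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // prints 55
        System.out.println(fib(46)); // prints 1836311903, the largest Fibonacci number that fits in an int
    }
}
```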

In this case, the slow solution is easiest implemented with recursion and the fast solution is easiest implemented with iteration. But we can write fast recursive solutions as well. We could also write the slow algorithm with a loop (but why would we want to?).

There are two fast recursive solutions to this problem. The simplest
is to employ a technique called *memoization* which is basically
to cache solutions to the problem in an array. When we call the recursive
method, it first checks if the array holds the answer. If so, it returns it
directly. If not, then it computes it from scratch (saving the result in the
array for later). This reduces the repetitious work the original recursive
solution does.

The code, found in Fib_Memoized.java, can be seen here:

```
// the array of saved results; zero means "not computed yet"
// (declared big enough here for the inputs we'll try)
private static long[] results = new long[100];

// recursive fibonacci sequence w/ memoization
public static long recursive_fib(int number) {
    // if we've already calculated this, use the saved result
    if (results[number] != 0) {
        return results[number];
    }
    // the base case of 0 or 1
    if (number < 2) {
        results[number] = 1;
        return 1;
    } else {
        // figure it out, then save it in the array
        long answer = recursive_fib(number - 1) + recursive_fib(number - 2);
        results[number] = answer;
        return answer;
    }
}
```

This program runs far faster because it only computes each unique Fibonacci number one time. Memoization is nice because you can use it as an optimization to an existing solution without needing to rewrite the algorithm.

The other way to write a fast recursive solution is to just copy the looping solution but use recursion in place of the loop. Fib_Fast.java does just this:

```
// recursive loop function: each call is one trip through the loop body
public static long recursive_fib(int number, long a, long b, long n, int i) {
    if (i < number) {
        // n = a + b; a = b; b = n; i++ from the loop body
        return recursive_fib(number, b, a + b, a + b, i + 1);
    } else {
        return n;
    }
}

// helper function with default values
public static long recursive_fib(int number) {
    return recursive_fib(number, 1, 1, 1, 2);
}
```

Notice this is basically the same algorithm as the one with the for loop. It simply replaces a recursive call with the idea of going back to the top of the loop. The parameters are used to keep track of the loop variables. This is also $O(n)$.

Because method calls cause a push onto the stack, and method returns cause a pop from the stack, we can rewrite any algorithm that uses an explicit stack to use recursion instead. For example, the depth first search algorithm was given as follows:

- Set current to the start position.
- While current is not the end position:
  - Mark current as visited.
  - If we haven't gone left, push left.
  - If we haven't gone right, push right.
  - If we haven't gone up, push up.
  - If we haven't gone down, push down.
  - If the stack is empty, there is no path!
  - Set current to pop().
- We made it!

We can write this using recursion by replacing the pushes with recursive calls. DFS_Recursive.java demonstrates a recursive depth first search.
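As a sketch of that idea, here is a recursive depth first search over a small hard-coded grid (the maze layout, class, and method names are invented for this example; DFS_Recursive.java is the actual program):

```java
public class Maze {
    // recursive depth first search on a grid: '#' is a wall, '.' is open;
    // returns true if (row, col) can reach (endRow, endCol)
    public static boolean dfs(char[][] maze, int row, int col, int endRow, int endCol) {
        if (row == endRow && col == endCol) {
            return true; // we made it!
        }
        if (maze[row][col] != '.') {
            return false; // a wall, or a square we already visited
        }
        maze[row][col] = 'V'; // mark current as visited
        // the four pushes from the stack version become four recursive calls
        return dfs(maze, row, col - 1, endRow, endCol)
            || dfs(maze, row, col + 1, endRow, endCol)
            || dfs(maze, row - 1, col, endRow, endCol)
            || dfs(maze, row + 1, col, endRow, endCol);
    }

    public static void main(String[] args) {
        // a tiny maze, walled in on all sides so the recursion stays in bounds
        char[][] maze = {
            "#####".toCharArray(),
            "#...#".toCharArray(),
            "#.#.#".toCharArray(),
            "#...#".toCharArray(),
            "#####".toCharArray(),
        };
        System.out.println(dfs(maze, 1, 1, 3, 3)); // prints true
    }
}
```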

To summarize, the keys to writing a recursive program are:

- Break big problems into smaller ones.
- Make sure there is a base case.
- Make sure the general case gets us closer to the base case.

Copyright © 2019 Ian Finlayson | Licensed under a Creative Commons Attribution 4.0 International License.