*"number (N) of ways ... using N kinds"*: these two `N`s are clearly not the same, so let's say `K` kinds of coins.

We have many coins, but each coin is either 1, 5, 10, 25 or 50 cents, 5 kinds of coins in total. We need to buy something for a dollar, i.e. 100 cents. Assume an unlimited supply of each kind of coin. How many ways are there for us to reach the total sum of 100?

We either use some coins (one or more) of 50 cents, or we don't. If not, we still have to get to 100 with only 4 kinds of coins. But if we do, then after using *one* 50-cent coin the total sum becomes 100 - 50 = 50 cents, and we may still use all 5 kinds of coins to reach the new, smaller total sum:

```
ways{ 100, 5 } = ways{ 100, 5 - 1 }     ; never use any 50-cent coins
               + ways{ 100 - 50, 5 }    ; OR: may use 50-cent coins, so use one
```

Or in general,

```
ways( sum, k ) = ways( sum, k - 1 ) + ways( sum - first_denomination(k), k )
```

That's all there is to it. See? Generalization comes naturally with abstraction (replacing concrete values with symbols and making them parameters in a function definition).

Then we need to take care of the base cases. If `sum = 0`, the result is 1: there's one way to reach a total sum of 0 (and it is: take no coins).

If `k = 0`, this means we are not allowed to use *any* kind of coins; in other words, there's no way for us to reach a sum, *any* sum, without using at least *some* coins (unless the sum is 0, but we've already handled that case above). So the result must be 0.

Same if `sum < 0`, of course: impossible, i.e. 0 ways to sum up to it, using any coins with any positive denomination.
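The recurrence and its base cases translate almost verbatim into code. Here is a sketch in Python (the tuple of denominations and the 1-based kind index `k` follow the `first_denomination(k)` convention of the formula above):

```python
def ways(total, k, denominations=(1, 5, 10, 25, 50)):
    """Count the ways to reach `total` using the first `k` kinds of coins."""
    if total == 0:            # one way to reach 0: take no coins
        return 1
    if k == 0 or total < 0:   # no coin kinds allowed, or overshot the sum
        return 0
    # either never use the k-th kind of coin, or use (at least) one of it:
    return (ways(total, k - 1, denominations)
            + ways(total - denominations[k - 1], k, denominations))
```

Calling `ways(100, 5)` counts the ways to make change for a dollar from the 5 kinds of coins.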

Another way to look at this is from the other end of the time arrow, if you will.

Imagine someone has already done all that for you and has put in front of you all these piles of bills, each pile summing up to the target sum. Without loss of generality, let each pile be sorted so that bigger bills are on top.

Divide all the piles into two groups: one with the biggest-denomination bill on top of each pile, and the other without it. If the total number of piles is `ways( denomsList, targetSum)`, then clearly the number of piles in the *second group* is `ways( rest(denomsList), targetSum)`.

Then, we can take the top bill off each pile in the *first group*, and the number of piles in it clearly won't be changed by that. Having removed the top bill from each pile, we see that they all sum up to `targetSum - first(denomsList)`, hence they number `ways( denomsList, targetSum - first(denomsList))` in total.
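Read this way, the same definition can be phrased directly over the list of denominations. A sketch in Python, where `first` and `rest` are simply the head and tail of the list, as in the paragraphs above:

```python
def ways(denoms, total):
    """Count the piles summing to `total`, built from denominations in `denoms`."""
    if total == 0:
        return 1                      # exactly one such pile: the empty one
    if not denoms or total < 0:
        return 0                      # no such piles exist
    first, rest = denoms[0], denoms[1:]
    #      second group: no `first`    first group: with the top bill removed
    return ways(rest, total) + ways(denoms, total - first)
```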

The point of (structural) recursion is thinking in the small -- *not* trying to picture the whole sequence of operations at once, but rather standing still and trying to understand your *current* situation. It is a mental tool for approaching your problem; it is about solving it in the easiest, most natural way, making *as small a step* as possible.

Calling (a copy of) yourself is a technicality. The main thing is the leap of faith, that you are *allowed* to call yourself: assuming you have already written down your definition, *just use it* where appropriate. And that's how it gets written down. You just describe what you have, how it's made of *smaller* parts (some of them *similar* to the full thing), and how the *results* for *those* parts can be combined back with the rest to get the full solution.

*edit* (from comments): The key to solving a problem recursively is to recognize that it can be broken down into a collection of smaller sub-problems to each of which *that same general solving procedure that we are seeking* applies, and the total solution is then found in some *simple* way from those sub-problems' solutions (which are found by that same general procedure as if it were available to us already). Each of thus created sub-problems being "smaller" guarantees the base case(s) will eventually be reached.

In other words, try to find the structure in the problem so that it has substructure(s) similar to the whole (like fractals; or e.g. a list's suffix is also a list; etc.); then, *recursion* is: *assuming* we already have the solution; *taking* the problem instance *apart* (according to the way in which we've structured our problem); *transforming* the *"smaller"* substructure(s) by the solution; and then *combining* it all *back* in some *simple* way (according to the way in which we structured our problem). The trick is to recognize the *existing*, inherent structure in your problem so that the solution comes naturally.

Or, in *Prolog* (of all the programming languages :) ) :

```prolog
recursion( In, Out) :- is_base_case( In), base_relation( In, Out).
recursion( In, Out) :- not_base_case( In),
    constituents( In, SelfSimilarParts, LeftOvers),          % (* forth >>> *)
    maplist( recursion, SelfSimilarParts, InterimResults),
    constituents( Out, InterimResults, LeftOvers).           % (* and back <<< *)
```
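The same schema can be sketched as a higher-order function in Python. The helper names here (`is_base`, `base_value`, `split`, `join`) are hypothetical, chosen to mirror the Prolog predicates; `split` takes the problem apart into self-similar parts plus leftovers, and `join` puts the interim results back together:

```python
def recursion(x, is_base, base_value, split, join):
    """Generic structural recursion: take apart, recurse on the
    self-similar parts, combine the results back with the leftovers."""
    if is_base(x):
        return base_value(x)
    parts, leftovers = split(x)                      # forth >>>
    results = [recursion(p, is_base, base_value, split, join)
               for p in parts]                       # maplist( recursion, ...)
    return join(results, leftovers)                  # and back <<<

# e.g. the factorial: the self-similar part is n - 1, the leftover is n itself
fact5 = recursion(5,
                  is_base=lambda n: n == 0,
                  base_value=lambda n: 1,
                  split=lambda n: ([n - 1], n),
                  join=lambda results, n: n * results[0])
```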

Which is to say, in pseudocode,

```
(In <--> Out)  are related by recursion  when
    either
        In is indivisible, and Out its counterpart
    or
        In  = Sub_1 <+> Sub_2 <+> ... <+> Sub_N  <++> Shell
               ------ r e c u r s i o n ------
        Out = Res_1 {+} Res_2 {+} ... {+} Res_N  {++} Shell
    where
        (Sub_i <--> Res_i) ,  for each  i = 1, ..., N
```

The combination operation `+` for `In` and for `Out` might be different, because they can be different types of values.
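For instance, when summing a list, the input is combined by *consing* an element onto a sublist, while the output is combined by *numeric addition*: a list and a number are different types of values. A small Python illustration:

```python
def list_sum(xs):
    if not xs:                    # indivisible In, and its counterpart Out
        return 0
    head, sub = xs[0], xs[1:]     # In  = head <+> sub     (list construction)
    return head + list_sum(sub)   # Out = head {+} result  (numeric addition)
```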