The problem is the addition. `rand()` returns an `int` value in the range 0...`RAND_MAX`. So, if you add two of them, you will get up to `RAND_MAX * 2`. If that exceeds `INT_MAX`, the result of the addition overflows the valid range an `int` can hold. Overflow of signed values is undefined behaviour and may lead to your keyboard talking to you in foreign tongues.
As there is no gain here in adding two random results, the simple idea is to just not do it. Alternatively, you can cast each result to `unsigned int` before the addition, if that type can hold the sum. Or use a larger type. Note that `long` is not necessarily wider than `int`; the same applies to `long long` if `int` is at least 64 bits!
Conclusion: just avoid the addition. It does not provide more "randomness". If you need more bits, you can instead concatenate the values: `sum = a + b * (RAND_MAX + 1)`, but that also likely requires a data type larger than `int`.
As your stated reason is to avoid a zero result: that cannot be avoided by adding the results of two `rand()` calls, as both can be zero. Instead, you can just increment the result. If `RAND_MAX == INT_MAX`, this cannot be done in `int`, but `(unsigned int)rand() + 1` will do very, very likely. Likely (not definitively), because it requires `UINT_MAX > INT_MAX`, which is true on all implementations I'm aware of (which covers quite some embedded architectures, DSPs and all desktop, mobile and server platforms of the past 30 years).
Although already sprinkled in comments here, please note that adding two random values does not yield a uniform distribution, but a triangular distribution, like rolling two dice: to get `12` (two dice) both dice have to show `6`, while for `11` there are already two possible variants: `6 + 5` or `5 + 6`, etc. So the addition is also bad from this aspect.
Also note that the results `rand()` generates are not independent of each other, as they are produced by a pseudorandom number generator. Note also that the standard does not specify the quality or uniformity of the distribution of the calculated values.