It would be safest for most rand() functions to omit both zero and one, unless the user is really sure they want otherwise. If we were generating real numbers, we'd never see precisely zero or one; the fact that we do is an artifact of limited precision. These boundary cases cause problems in common computations like u*log(u) or (1-u)*log(1-u).
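A minimal sketch of why those endpoints bite (the `entropy_term` helper is just illustrative):

```python
import math

def entropy_term(u):
    # u*log(u) tends to 0 as u -> 0, but evaluating it at exactly
    # u = 0 fails outright, because log(0) is undefined.
    return u * math.log(u)

print(entropy_term(0.5))   # well-behaved in the interior
try:
    entropy_term(0.0)      # math.log(0.0) raises ValueError
except ValueError as e:
    print("u = 0 fails:", e)
```

So an rng that can return exactly 0.0 or 1.0 turns a rare draw into a crash (or a NaN, depending on the platform) deep inside otherwise correct code.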
Interesting suggestion, but I'm not sure I agree with you.
You could use that argument about any of the random numbers that your rng returns. "Hey, 0.023 is infinitesimally unlikely to occur, so let's exclude that as well".
Generally when I use a real-valued rng, the numbers generated are meant to be 'representative' of what I should get from the distribution. And when I get e.g. 0.023, it kind of means "0.023 and/or numbers near 0.023". If I excluded 0.023 from the possible results of the rng, then I would have a 'hole' in my distribution 'in the region of' 0.023.
Maybe what you want is to go from the possible rnds:
0.000, 0.001, 0.002, 0.003, ... , 0.999
to:
0.0005, 0.0015, 0.0025, ... , 0.9995
whereas with your suggestion, you are under-representing the boundary numbers near 0 and/or near 1. Generally not a problem, I guess, but what you are doing is, imo, 'wrong in theory', even though your high-precision floats will probably cover it up OK in almost all cases.
I agree with you about 0 and 1 causing problems when they are fed into other functions - e.g. generating Gaussian rngs via InverseCumulativeGaussian(0.0) - a bug I wasted some time hunting down in my company's rng library. My view was that the developer who wrote the library did not understand the maths of what he was doing, rather than that the 0-to-1 random number generator was at fault.
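For illustration, the stdlib `statistics.NormalDist` behaves the same way an InverseCumulativeGaussian typically does (this is a stand-in, not the library from the anecdote): the inverse CDF is only defined on the open interval, so a rng draw of exactly 0.0 blows up:

```python
from statistics import NormalDist, StatisticsError

norm = NormalDist()          # standard normal, mean 0, sigma 1
print(norm.inv_cdf(0.5))     # interior values are fine (median -> 0.0)

try:
    norm.inv_cdf(0.0)        # what a [0,1]-inclusive rng can feed it
except StatisticsError as e:
    print("endpoint fails:", e)
```

Mathematically the inverse CDF goes to -infinity as p -> 0, so there is no sensible finite value to return; the only question is whether the library raises, returns inf, or silently produces garbage.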
It's pretty common in practice to want [0,1) (that is, 0 <= x < 1) from your random number generator. Never generating 0 would be a problem when the range is small and discrete - generating random letters for example.
If you're generating integers, or selecting from a discrete set with uniform probabilities, there's no reason to involve floating point at any point in the process.
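For example, picking a random letter needs nothing but an integer draw - the float detour adds rounding questions for no benefit:

```python
import random
import string

rng = random.Random(0)  # seeded only for reproducibility

# Uniform choice from a discrete set, integers end to end - no floats:
letter = string.ascii_lowercase[rng.randrange(26)]
# random.choice does the same integer indexing internally:
also = rng.choice(string.ascii_lowercase)
print(letter, also)
```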
Generating random numbers in (0,1) is mathematically different from generating random numbers, even from the same type of distribution, in [0,1), (0,1], or [0,1]. Equating all of these, as the article points out, rarely causes problems in practice, but it is nonetheless a conceptual error.
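Concretely, the four intervals need different constructions. A sketch of deriving the others from a [0,1) generator like Python's random() (the `open_unit` helper is illustrative):

```python
import random

rng = random.Random(1)

u = rng.random()         # [0, 1): exactly 0 is possible, exactly 1 is not
v = 1.0 - rng.random()   # (0, 1]: exactly 1 is possible, exactly 0 is not

# For the open interval (0, 1), one common approach is rejection:
# resample on the (astronomically rare) exact-zero draw.
def open_unit():
    while True:
        x = rng.random()
        if x != 0.0:
            return x

w = open_unit()
print(u, v, w)
```

The point is that each variant is a distinct construction; treating them as interchangeable is exactly the conceptual error described above, even if the endpoint probabilities are tiny.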
If you'll pardon me pulling out my soapbox for a moment, this kind of bug would be less common if the "programmers don't need math" attitude were not such a successful meme in our culture. The notion of "between zero and one" may seem straightforward and obvious, but it is actually ambiguous. Perhaps, given the number of distinct floating point values that can be represented in the interval (0,1), it would hardly seem to matter which interpretation of "between zero and one" is used. But in mathematics you will never see such terminology used without a qualifying statement that clarifies* the meaning.
*E.g. "between 0 and 1, inclusive", "on the closed interval bounded by 0 and 1", "[0,1]", etc.