"But this is nonsense! Those 1024 bits you added before aren't depleted just because you pulled 1024 from /dev/random!"
An observer of the produced random numbers can potentially deduce the next numbers from the first 1024 random numbers. This is the reason why /dev/random requires more randomness added -- to prevent the guessing of the next number.
No, they can't. That's not how crypto DRBGs work. If you could do that, you'd have demonstrated a flaw in the entire /dev/random apparatus, not a reason not to use urandom.
Think of a CSPRNG almost exactly the way you would a stream cipher --- that's more or less all a CSPRNG is. Imagine you'd intercepted the ciphertext of a stream cipher and that you knew the first 1024 plaintext bytes, because of a file header or because it contained a message you sent, or something like that. Could you XOR out the known plaintext, recover the first 1024 bytes of keystream, and use it to predict the next 1024 bytes of keystream? If so, you'd have demolished the whole stream cipher. Proceed immediately to your nearest crypto conference; you'll be famous.
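To make the analogy concrete, here's a toy sketch of that thought experiment. The "cipher" is SHA-256 run in counter mode, which is purely an illustrative stand-in (not a vetted stream cipher); the point is only that recovering a keystream prefix tells you nothing about the rest without inverting the underlying function:

```python
import hashlib

def keystream(key: bytes, nbytes: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. Illustrative only."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

key = b"secret key unknown to the attacker"
plaintext = b"A" * 1024 + b"the part the attacker wants to predict"
ks = keystream(key, len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))

# The attacker knows the first 1024 plaintext bytes (a file header,
# say), so XOR-ing out the known plaintext recovers the first 1024
# keystream bytes trivially:
known_ks = bytes(c ^ p for c, p in zip(ciphertext[:1024], b"A" * 1024))
assert known_ks == ks[:1024]
# ...but those 1024 bytes say nothing about ks[1024:] unless you can
# invert SHA-256 to find `key`. Predicting the rest of the keystream
# from a known prefix IS breaking the cipher.
```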
Modern CSPRNGs, and Linux's, work on the same principle. They use the same mechanisms as a stream cipher (you can even turn a DRBG into a stream cipher). The only real difference is that you select keys for a stream cipher, and you use a feed of entropy as the key/rekey for a CSPRNG.
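A minimal sketch of that equivalence, using a toy hash-based DRBG (loosely inspired by, but not implementing, the Hash_DRBG construction): key it with a shared secret instead of an entropy feed, XOR its output with the plaintext, and you have a stream cipher.

```python
import hashlib

class ToyDRBG:
    """Toy hash-based DRBG; illustrative, not a standardized design."""
    def __init__(self, entropy: bytes):
        self.state = hashlib.sha256(entropy).digest()

    def generate(self, nbytes: int) -> bytes:
        out = b""
        while len(out) < nbytes:
            out += hashlib.sha256(b"out" + self.state).digest()
            # Ratchet the state forward so past output can't be replayed.
            self.state = hashlib.sha256(b"next" + self.state).digest()
        return out[:nbytes]

    def reseed(self, entropy: bytes):
        # In a CSPRNG, fresh entropy rekeys the generator, playing the
        # role the key plays in a stream cipher.
        self.state = hashlib.sha256(self.state + entropy).digest()

def encrypt(key: bytes, msg: bytes) -> bytes:
    """The DRBG used as a stream cipher: its output is the keystream."""
    return bytes(m ^ k for m, k in zip(msg, ToyDRBG(key).generate(len(msg))))

msg = b"attack at dawn"
ct = encrypt(b"shared secret", msg)
assert encrypt(b"shared secret", ct) == msg  # XOR is its own inverse
```

The only structural difference between the two uses is where the key comes from: you pick it yourself for the cipher, while the kernel's entropy feed supplies and periodically replaces it for the CSPRNG.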
It's facts like this that make the Linux man page so maddening, with its weird reference to attacks "not in the unclassified literature".
In theory yes. In practice you're talking about a massive cryptographic break, to the extent that it's not worth worrying about if you're going to be using those random numbers for anything that involves real-world cryptography. If you can't trust your CSPRNG to be secure when your attacker gets ahold of 1024 bits of output then you can't trust anything you're going to do with the output of your random number generator anyway.
> An observer of the produced random numbers can potentially deduce the next numbers from the first 1024 random numbers.
By definition, a cryptographically secure pseudorandom number generator cannot be predicted like that by a computationally bounded attacker.
Thus, if any attacker could deduce the next number from /dev/random by observing the preceding output, the algorithm it uses would be fundamentally broken, and nothing could salvage its security in that case.