It doesn't even matter whether the LLM was exposed to the target's code during training. A clean-room rewrite can be done by having one LLM produce a highly detailed analysis of the target (reverse engineering it if it's only available in binary form), then providing that analysis to a second LLM as the basis for a fresh implementation.
It doesn't strictly have to be two LLMs, but with today's automatic LLM memory features, it could be argued that a single LLM doing both the analysis and the reimplementation isn't "clean". And the entire point of the clean-room process is to preempt that argument.
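The two-stage separation can be sketched roughly as follows. This is a hypothetical pipeline, not any particular vendor's API: `generate` is a placeholder for whatever LLM call you'd actually use, and the model names are made up. The point it illustrates is structural: the implementer's prompt is built only from the analyst's specification, never from the original source.

```python
# Sketch of a two-model clean-room pipeline (assumptions: `generate`
# is a stand-in for a real LLM API call; model names are hypothetical).

def generate(model: str, prompt: str) -> str:
    # Placeholder: substitute a real LLM API call here.
    return f"[{model} output for prompt of {len(prompt)} chars]"

def clean_room_rewrite(target_source: str) -> str:
    # Stage 1: the "analyst" model sees the target and produces a
    # functional specification -- behavior only, no code verbatim.
    spec = generate(
        "analyst-model",
        "Describe the observable behavior, interfaces, and data formats "
        "of this program as a specification. Do not quote code.\n\n"
        + target_source,
    )
    # Stage 2: the "implementer" model sees only the specification,
    # never the original source, mirroring the information wall between
    # the two teams in a traditional clean-room reimplementation.
    return generate(
        "implementer-model",
        "Implement a program that satisfies this specification:\n\n" + spec,
    )
```

Whether this separation is preserved end to end (no shared memory, no shared context window between the two calls) is exactly what the single-LLM variant puts in doubt.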