I think this works, not because LLMs have a "hallucination" dial they can turn down, but because it serves as a cue for the model to be extra-careful with its output.
Sort of like how offering to pay the LLM $5 improves its output: the model takes your prompt seriously, but not literally.