Hacker News

Information about cut-off date is very much part of its fine-tuning.


Apparently it is also part of its system prompt, since otherwise it wouldn't know what the cutoff date is - feeding it fresher information alone doesn't tell it; it has to be told the date explicitly somewhere.


It's possible the date is hallucinated. There is no reason a combination of system prompt and regular prompt cannot generate a hallucinated cut-off date that doesn't match the actual one.

LLMs are statistical models that simply generate probable sequences of tokens based on a context (very much like sampling from Markov chains), so there is no a priori reason to believe the cut-off date is accurate.
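A toy sketch of what "generating probable token sequences" means, using a bigram Markov model over a made-up corpus (the corpus, names, and scale are all hypothetical; a real LLM conditions on far more context, but the sampling principle is the same):

```python
import random
from collections import defaultdict

# Toy bigram "language model": count next-token frequencies in a tiny
# corpus, then sample continuations in proportion to those counts.
corpus = "the model generates the next token given the prior context".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(token, rng):
    """Sample a continuation of `token` proportionally to bigram counts."""
    followers = counts.get(token)
    if not followers:
        return None  # no observed continuation
    tokens = list(followers)
    weights = [followers[t] for t in tokens]
    return rng.choices(tokens, weights=weights)[0]

rng = random.Random(0)
seq = ["the"]
for _ in range(4):
    nxt = sample_next(seq[-1], rng)
    if nxt is None:
        break
    seq.append(nxt)
print(" ".join(seq))
```

The point is that every token, including any claimed "cut-off date", comes out of this kind of probability-weighted sampling, not a metadata lookup.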

More generally, all output from the model that seems to be model metadata should be assumed to be a hallucination.


When it can be repeated dozens of times consistently, that is strong reason to believe it is part of the system prompt. Baseless hallucinations would be different every time.


If the model didn’t change, why would the hallucinations change?


Temperature.


I believe the model did change.



