
The obsequious way LLMs amp people up into thinking they're onto something incredible is throwing gas on this dumpster fire.

"What a great idea! This will revolutionize LinkedIn commenting. Let's implement it together."



Anthropic tried to fix this, I think, since its model is the only one that will push back, but the result is even funnier.

Ask a question and it will say yes; ask "are you sure?" and it will reverse direction full throttle; ask "are you sure?" again and it'll go back to the initial answer, saying "yeah, I confused myself there". You can keep this up until context-window exhaustion and it will never stop.

On the other side of this, Gemini will stand by whatever it generated the first time, no matter how much you push back and no matter how stupid the idea is.


Oh, for sure. When I present something to the LLM, it always tells me how great it is; only when I make it "question" the idea does it admit it was overestimating this or that. Eh. Quite annoying.



You have to remember that LLMs don't have any persistent capacity to hold a "judgement". You ask for something, and it provides an attempt at a completion: no fact checking, no reasoning, just a plausible-looking output, tuned to hopefully get you to repeat the interaction.

Half the reason the dominant UX is a "Chat" is that it's the only way to provide a facsimile of memory or persistence across requests: append the last few turns, press go. Over time you can develop an eye for the model's tics and attractor topics.

Remember that they bill by token use, and suddenly, the entire UX/architecture starts making sense.
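The "append the last few turns, press go" loop above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual API: `complete()` stands in for whatever completion endpoint you call, and the point is that the model receives the whole transcript on every request and keeps nothing between calls.

```python
# Minimal sketch of chat-as-appended-turns. complete() is a hypothetical
# stand-in for an LLM completion call; a real backend would bill for every
# token in `messages`, including all prior turns resent on each request.

def complete(messages):
    # Placeholder: echo the latest user turn instead of calling a model.
    return f"(reply to: {messages[-1]['content']})"

def chat_turn(history, user_text, max_turns=6):
    history.append({"role": "user", "content": user_text})
    # Truncate to the last few turns -- the only "memory" the model gets.
    context = history[-max_turns:]
    reply = complete(context)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
chat_turn(history, "Is this a good idea?")
chat_turn(history, "Are you sure?")
```

Since prior turns are resent on every call, token cost grows with conversation length, which is why the billing model and the chat UX fit together so neatly.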



