Hacker News

u: what do you think to this idea?

gpt: yeah you need to do it now

u: actually I think it's a bad idea

gpt: yes, you're right and here's why

u: no, actually it's genius.

gpt: you're absolutely right - it's genius and here's why



Stop asking such open ended questions. What does "what do you think" even mean?

You need to give it some direction. Is this safe? Does it follow best practices? Is it memory-efficient? Etc.

You have to put in a little fucking effort man, it's not magic.


I believe OP's point is that the LLM rarely pushes back: it will go along with whatever you tell it.


I've faced this a lot while working with LLMs to develop an architecturally sound spec, especially when I'm learning a new language or framework and don't yet know its best practices. Are there any methods you're aware of to make LLMs less subservient and more confident in their approach?


Right, and my point is you have to ask objective, pointed questions. Even "what do you think is best in this case, a database or keep it all in memory?" is underspecified. Best for whom? To what end? Easiest to maintain? Most resilient? Etc.
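One way to apply this in practice is to never send the vague question at all, but to mechanically expand it into a criteria-based prompt before it reaches the model. A minimal sketch (all names here are illustrative, not from any library) that turns a "which is better?" question into a pointed prompt naming explicit criteria and demanding a tradeoff:

```python
# Illustrative helper: instead of asking an LLM "what do you think?",
# build a pointed prompt that names concrete criteria and forces a
# one-winner-per-criterion verdict, so reflexive agreement has
# nothing to latch onto. CRITERIA and pointed_prompt are hypothetical
# names for this sketch.

CRITERIA = ["maintainability", "resilience", "memory footprint"]

def pointed_prompt(option_a: str, option_b: str, criteria=CRITERIA) -> str:
    lines = [
        f"Compare {option_a} vs {option_b}. Do not hedge toward agreement.",
        "For each criterion, pick exactly one winner and give one concrete reason:",
    ]
    lines += [f"- {c}" for c in criteria]
    lines.append(
        "Finish with a single recommendation and the strongest argument against it."
    )
    return "\n".join(lines)

print(pointed_prompt("a database", "keeping it all in memory"))
```

The resulting string goes into the model as the user message in place of the open-ended question; the point is that each criterion constrains the answer, so "you're absolutely right" is no longer a valid completion.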



