Hacker News

I'll worry about this when users can reliably write useful specification documents. Garbage In, Garbage Out and all that.


I don't know if the intention of the article was to stoke worry, but this is actually something to embrace as an opportunity! Users being able to make their own software for their own needs is a worthy and beautiful goal. It's as if people could only eat at restaurants and never cook for themselves.

And of course with iteration and feedback loops people can definitely learn how to specify what they want in a fairly precise way.


However, the easier programming gets, the less people who don't know it need to do it, because it's more likely somebody has already solved their problem.


If the thing you want it to do isn’t too unusual, LLMs can guide you into doing something vaguely sensible.

This is the one area of AI I’m actually pretty positive about. Computing largely passed people by, and computers have not lived up to their potential. I could see this approach making computers more useful for a large number of people.

Especially semi-technical people whose specialty isn't programming.


> Especially semi-technical people whose specialty isn't programming.

That would be the worst kind. Imagine fixing the bugs of those people's LLM-hallucinated code that runs but has wrong logic all over the place.

I've reduced my use of LLMs for coding these days. I only ask them to generate templates or write some repetitive code and sample data.


Cool how we're far enough into the adoption cycle that 'I reduced my use of LLMs' is a mainstream thing to hear among sophisticated techies.


Even with GPT-4, the most useful use cases are generating stuff that is too repetitive and daunting to type, refactoring code that interns wrote, and making it write documentation.

What I love most is drafting project specs from users' requirements and generating user stories.

Or letting it write LeetCode-like problems that are too boring to do manually.

The actual layer of coding is still best done by experienced engineers' biological brains.


Some of us haven’t even incorporated LLMs into our development workflows yet. :)


Some of us looked at it and decided the results were so bad it was a net loss to even try.


Yep. Even the few times I use it via the ChatGPT interface, I spend as much time fixing the bad assumptions it made as I save.


Some of us looked at it and decided the results were so bad it was a net loss to even try, but it was just so cool...


I do not envy a non-programmer stuck with buggy code generated by an LLM.



