I'm unsure how someone could use LLMs regularly and not encounter significant mistakes. I use them less than some devs and still run into basic errors often enough that I rarely bother with them for niche or complicated problems, even though they're genuinely helpful in other cases. Just in the past few days I've had Claude trip all over itself on multiple basic tasks.
One case was asking how to do a straightforward thing with a popular open source JavaScript library, right in the sweet spot of what models should excel at. Claude's whole approach was completely broken because it relied on a hallucinated library parameter that didn't exist and didn't have an equivalent. It invented a keyword that doesn't appear in the entire open source library repo, to control functionality the library doesn't have.