Hacker News | new | past | comments | ask | show | jobs | submit | anonymous_sorry's comments | login

But the LLM contributions would likely be ruled public domain, so AGPL may not be enforceable on these.


There was a recent ruling that LLM output is inherently public domain (presumably unless it infringes some existing copyright). In which case it's not possible to use them to "reimplement everything we can as copyleft".


It's more complicated: the ruling was that an AI can't be an author, and the work in question is de facto public domain because, in the context of the "dev" claim, it was fully built by AI and therefore has no author.

But AI-assisted code does have an author, and claiming code is merely AI-assisted, even if it was fully AI-built, is trivial (if you don't make it public that you didn't do anything).

Also, some countries have laws that treat AI like a tool, in the sense that whoever used it is the author by default, AFAIK.


You could reimplement it as public domain on your machine, and then edit it by hand and copyleft your own edits.


Eggs in the UK are safe to eat raw (and I presume the EU as well [but please verify before doing so!]).


Finland too. Fundamentally, it puts food safety over profits. Eggs being salmonella-free is based on regular testing and culling infected flocks. It is a process that needs constant work.


Have you ever had a chatbot solve your problem? I don't think this has ever happened to me.

As a reasonably technical user capable of using search, the only way this could really happen is if there was no web/app interface for something I wanted to do, but there was a chatbot/AI interface for it.

Perhaps companies will decide to go chatbot-first for these things, and perhaps customers will prefer that. But I doubt it to be honest - do people really want to use a fuzzy-logic CLI instead of a graphical interface? If not, why won't companies just get AI to implement the functionality in their other UIs?


Actually, I have: Amazon has an excellent one. I had a few exchanges with it and it initiated a refund for me, much quicker than a normal customer service call.

Outside of customer service, I'm working on a website that has a huge amount of complexity to it, and would require a much larger interface than normal people would have patience for. So instead, those complex facets are exposed to an LLM as tools it can call, as appropriate based on a discussion with the user, and it can discuss the options with the user to help solve the UI discoverability problem.

I don't know yet if it's a good idea, but it does potentially solve one of the big issues with complex products: they can provide a simple interface to extreme complexity, without overwhelming the user with an incredibly complex UI and demanding that they spend the time to learn it. Normally, designers handle this by just dumbing down every consumer-facing product, and I'd love to see how users respond to this other setup.
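A minimal sketch of the pattern described above, with invented names (the commenter does not share their actual implementation): each complex facet of the site is registered as a plain function with a small schema the LLM can discover, and a dispatcher routes the model's tool calls to the right function.

```python
# Hypothetical sketch: exposing complex site facets as LLM-callable tools
# instead of building a sprawling graphical UI for them. All names and
# data here are illustrative, not from the comment's actual site.

TOOLS = {}  # tool name -> {"fn": callable, "description": str, "parameters": dict}

def tool(name, description, parameters):
    """Decorator that registers a function as a discoverable tool."""
    def wrap(fn):
        TOOLS[name] = {"fn": fn, "description": description,
                       "parameters": parameters}
        return fn
    return wrap

# One illustrative facet: a filtered search that would otherwise need
# many checkboxes and sliders in a graphical interface.
LISTINGS = [
    {"title": "Basic plan", "price": 5, "category": "hosting"},
    {"title": "Pro plan", "price": 25, "category": "hosting"},
    {"title": "Consulting hour", "price": 120, "category": "services"},
]

@tool("filter_listings",
      "Return listings at or under a maximum price, optionally in one category.",
      {"max_price": "number", "category": "string (optional)"})
def filter_listings(max_price, category=None):
    return [x for x in LISTINGS
            if x["price"] <= max_price
            and (category is None or x["category"] == category)]

def dispatch(name, arguments):
    """Route a tool call emitted by the model to the registered function."""
    return TOOLS[name]["fn"](**arguments)

# The chat loop would hand the TOOLS descriptions to the model and call
# dispatch() whenever the model decides to invoke one during a discussion.
print(dispatch("filter_listings", {"max_price": 30, "category": "hosting"}))
```

The user never sees the schemas; they just describe what they want in chat, and the model picks tools and arguments, which is how this sidesteps the UI-discoverability problem.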


I'm happy that LLMs are encouraging people to add discoverable APIs to their products. Do you think you can make the endpoints public, so they can be used for automation without the LLM in the way?

If you need an LLM spin to convince management, maybe you can say something about "bring your own agent" and "openclaw", or something else along those lines?


Yep, I’m developing the direct agent-access API in parallel as a first-class option. It seems like the human UI isn’t going to be so necessary going forward, though a little curation/thought on how to use it is still helpful, rather than an agent having to come up with all the ideas itself. I’ve already spun off one of the datasets I’ve pulled as an independent x402 API, and I plan to do more of those.


What I mean is that I want to be able to build my own UIs and CLIs against open, published APIs. I don't care about the agent, it's an annoyance. The main use of it is convincing people who want to keep the API proprietary that they should instead open it.


I did think about this use-case as I was typing my first message.

I can see it working for complex products, for functionality I only want to use once in a blue moon. If it's something I'm doing regularly, I'd rather the LLM just tell me which submenu to find it in, or what command to type.


Yeah, true. It might be a good idea to have the full UI and just have the agent slowly “drive” it for the user, so they can follow along and learn, for when they want to move faster than dealing with a chatbot. Though I think speech-to-text improves chatbot use speed significantly.


Amazon's robot did replace the package that vanished. I don't believe it ever understood that I had a delivery photograph showing two packages but found only one on my porch. But I doubt a human would have cared either--cheap item, nobody's going to worry about how it happened. (Although I would like to know--wind is remotely possible, but the front porch has an eddy that brings stuff in rather than taking it away.)


Your docs are probably read many more times than they are written. It might be cheaper and quicker to produce them at 90% quality, but surely the important metric is how much time it saves or costs your readers?


"I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault"


In the same way it's better that adults are the recipients of the harms of smoking, drinking or gambling. It's still not desirable, but societies have settled upon thresholds for when people have some capacity to take responsibility for their choices.

Not saying those thresholds are always right, or that they should definitely apply in this case, but it surely isn't an alien or non-obvious concept.


Others have suggested "bullshit". A bullshitter does not care (and may not know) whether what they say is truth or fiction. A bullshitter's goal is just to be listened to and seem convincing.


The awareness of the bullshitter is used to differentiate between 'hard' and 'soft' bullshit. https://eprints.gla.ac.uk/327588/1/327588.pdf


It's very impressive indeed.

Linux's goal is only code compatibility, which makes complete sense given its libre/open-source origins. If the culture is one where you expect to have access to the source code of the software you depend on, why should the OS developers make the compromises needed to ensure you can still run a binary compiled decades ago?


There's an interesting asymmetry in language in this area.

Jobs are "created" by a company or an industry.

But they never seem to be "destroyed", instead they are "lost".

If the company starts hiring again, they're "creating" new jobs, not "finding" the ones they were careless enough to lose.


Doublespeak? Trying to speak in such a legalese way that, although things are technically true, the words are still crafted to influence others...

So in the case of our capitalist system, doublespeak exists because companies can make money, or avoid losing money, by using it: telling investors that jobs were "destroyed" would give them a negative association with Amazon itself, which would reduce the stock price.

Everything is done for the stock price. Everything. The world is so addicted to shareholder returns that we change our language because of it.

