
I ask myself this all the time; I have ideas now and then that I need to start writing down. It's just sad: we have so much potential as a society, but all the money goes blindly to things like AI and bitcoin. While I love some aspects of AI, and hope to someday be like the Jetsons and have a robot in my home that helps with things and frees up me and my wife to do other things with our family, I also don't trust something that feeds the most intimate events from my home to a server somewhere.

That's a fair point, and part of why I'm asking here. I'm not sure what the perfect answer is, but it should definitely not harm genuine inventors. Maybe some level of scrutiny, such as: did you buy the patent from another entity that never built it, and is there any evidence you're building it? I can't see how anyone who buys a patent and then only sues for violations could be a good-faith patent owner in any way.

It's an old concept that "the system [of property]" has a hard time fighting:

https://www.investopedia.com/terms/a/absentee-owner.asp

Think about what allows the toxic management company to thrive, e.g., by becoming the principal (part of it is an identity problem I pointed to in a sibling comment).

The garden hose example is a good-faith scenario, but maybe it still feels icky to you because they aren't the "genuine inventors" :)

How are they going to prove that? Put up a diff tree of their improvements?


Is this for like military scenarios or like, ChatGPT designed a drug that seemed to work, but people died by the millions 5 years later? Because they should 100% be liable for the latter. The former, good luck trying to prosecute an AI company for something the military does. To an extent, the military would probably want their AI models to be behind their private network, completely firewalled from any public network. SIPRNet iirc. If they lock it down behind a highly classified network, good luck figuring out how they're using AI.

> Because they should 100% be liable for the latter.

Why? I don't see that a drug designed by ChatGPT should result in any more or less liability than a drug designed by a human?

I think if a human designs a drug and tests it and it all seems fine and the government approves it and then it later turns out to kill loads of people but nobody thought it would... that's just bad luck! You shouldn't face serious liability for that.


If we start from the position of the marketing hype and even Sam Altman's statements, these tools will "solve all of physics". To me it's laughable, but that's also what's driven their outsized valuations. Using the output to drive product decisions and development, it's not hard to imagine a scenario where a resulting product isn't fully vetted because of the constant corporate pressure to "move faster" and the unrealistic hype of "solve all of physics". This is similar to Tesla's situation of selling "Full Self-Driving" but it actually isn't in the way most people would understand that term and so they lost in court on how they market their autonomous driving features.

> that's just bad luck

Can't agree with this. No, not at all. That can't be true... That's not "just bad luck". I believe this is actually a serious case of negligence and oversight - regardless of where exactly it occurred, whether on the part of the drug’s manufacturer, the government agency responsible for oversight, or somewhere else. It just doesn’t work that way. Any drug undergoes very thorough and rigorous testing before widespread use (which is implied by "millions of deaths"). Maybe I’m just dumb. And yeah, this isn’t my field. But damn it, I physically can’t imagine how, with proper, responsible testing, such a dangerous "drug" could successfully pass all stages of testing and inspection. With such a high mortality rate (I'll reinforce - millions of deaths cannot be "unseen edge cases"), it simply shouldn’t be possible with a proper approach to testing. Please, correct me if I’m wrong.

> I don't see that a drug designed by ChatGPT should result in any more or less liability than a drug designed by a human?

It’s simple. In this case, ChatGPT acts as a tool in the drug manufacturing process. And this tool can be faulty by design in some cases.

Suppose that, during the production of a hypothetical drug at a factory, a malfunction occurs in one of the production machines (please excuse the somewhat imprecise terminology), caused by a design flaw (i.e., the manufacturer is to blame for the failure; it's not a matter of improper operation). If, because of this malfunction, the drugs are produced incorrectly and lead to deaths, then at least part of the responsibility must fall on the machine manufacturer. Of course, responsibility also lies with those who used it for production, because they should have thoroughly tested it before releasing something so critically important, but, damn it, responsibility in this case also lies with the manufacturer who made such a serious design error.

The same goes for ChatGPT. It’s clear that the user also bears responsibility, but if this “machine” is by design capable of generating a recipe for a deadly poison disguised as a “medicine” - and the recipe is so convincing that it passes government inspections - then its creators must also bear responsibility.

EDIT: I've just remembered... I'm not sure how relevant this is, but I've just remembered the Therac-25 incidents, where some patients received overdoses of radiation due to software faults. Who was to blame: the users (operators) or the manufacturer (AECL)? I'm unsure how applicable it is to the hypothetical ChatGPT case, though, because you physically cannot "program" guardrails the same way you can in a deterministic program.


> I physically can’t imagine how, with proper, responsible testing, such a dangerous "drug" could successfully pass all stages of testing and inspection.

It might cause minor changes that we don't yet know how to notice, and which only cause symptoms in 20 years' time, for example. You can't test drugs indefinitely, at some point you need to say the test is over and it looks good. What if the downsides occur past the end of the test horizon?

> ChatGPT acts as a tool in the drug manufacturing process. And this tool can be faulty by design in some cases.

ChatGPT is not intended to be a drug manufacturing tool though? If you use any other random piece of software in the course of designing drugs, that doesn't make it the software developer's fault if it has a bug that you didn't notice that results in you making faulty drugs. And that's if it's even a bug! ChatGPT can give bad advice without even having any bugs. That's just how it works.

In the Therac-25 case the machine is designed and marketed as a medical treatment device. If OpenAI were running around claiming "ChatGPT can reliably design drugs, you don't even need to test it, just administer what it comes up with" then sure they should be liable. But that would be an insane thing to claim.

I think where there may be some confusion is if ChatGPT claims that a drug design is safe and effective. Is that a de facto statement from OpenAI that they should be held to? I don't think so. That's just how ChatGPT works. If we can't have a ChatGPT that is able to make statements that don't bind OpenAI, then I don't think we can have ChatGPT at all.


>But that would be an insane thing to claim.

The trick is to make people behave like that without actually claiming it. AI companies seem to have aced it.


> It might cause minor changes that we don't yet know how to notice, and which only cause symptoms in 20 years' time, for example.

In that case, even if it leads to many deaths, it would be difficult, if not practically impossible, to hold anyone accountable. But such a turn of events is also practically impossible to predict, don't you think? I apologize for not clarifying this point in my original comment: I wasn't referring to delayed effects, but to what becomes evident almost immediately (say, "within a year and a half at most") after the drug is used. I'm sorry, I just didn't phrase my thought correctly.

> ChatGPT is not intended to be a drug manufacturing tool though?

That’s certainly the case right now. However, LLMs like GPT, Claude, Gemini, and others weren’t created for waging war, were they? Then why did Anthropic recently have - let’s just say... "some issues in its relationship" with the DOD, if they were not involved in this, if Claude was not meant to be used in war? Why was the ban on using Gemini to develop weapons removed from its terms of service?

You’re right that LLMs weren’t created for such purposes, and to be honest, I believe that - at least for now - it’s simply unethical to use them for that. These aren’t the kinds of decisions and actions that should be outsourced to a machine that bears no responsibility - moral or legal.

> ChatGPT can give bad advice without even having any bugs. That's just how it works.

To continue my thought, this is precisely why I believe it is unethical to give LLMs any tasks whatsoever that involve human lives. There are simply no safety guarantees - not just "some", but none at all - aside from unreliable safety fine-tuning and prompting tricks. For now, that’s all we can count on.

> If OpenAI were running around claiming "ChatGPT can reliably design drugs, you don't even need to test it, just administer what it comes up with" then sure they should be liable. But that would be an insane thing to claim.

They don't claim it yet. And, as one person (qsera) mentioned below your comment:

> The trick is to make people behave like that without actually claiming it. AI companies seems to have aced it.

They probably won't claim exactly that "ChatGPT can reliably design drugs", just because of the possible consequences. But I'm almost certain there will be something similar in meaning, though legally vague - so that, from a purely legal standpoint, there won't be any grounds for complaint. What's more, they are already making some attempts - albeit relatively small ones so far - in the healthcare sector; for example, "ChatGPT Health"[1]. I don't think they will stop there. That's a business after all.

> if ChatGPT claims that a drug design is safe and effective

I have already said before that OpenAI would not be the only one who should be held responsible in this case. The (hypothetical) user should also bear some responsibility, and in the scenario you described, the primary responsibility should indeed lie with them. That said, I may be wrong, but it's possible to fine-tune the model so that it at least warns of the consequences or refuses to claim that "this works 100%". This already exists: models refuse, for example, to provide drug recipes or instructions for assembling something explosive (specifically something explosive, not explosives; out of curiosity during testing, I recently asked Gemma 4 how to build a hydrogen engine, and the model refused to describe the process because, as it said, hydrogen is highly flammable and the engine itself is explosive), pornography, and things along those lines. Yes, I admit it's far from perfect, but at least it works somehow. By the way, if I'm not mistaken, many models even include disclaimers with medical advice, like "it's best to consult a doctor".

In short, what I’m getting at is that the issue lies in how convincing LLMs can be at times. If the model honestly warns of the dangers, if it says "this doesn’t work" or "this requires thorough testing", and so on, but the user just goes ahead and does it anyway, well, that’s like hitting yourself on the finger with a hammer and then suing the hammer manufacturer. It’s a different story when the model states with complete confidence that "this will definitely work, and there will be no side effects" and the user believes it; there should be some effort put into preventing such cases. But otherwise, yes, I think you’re right about the scenario you described.

And to conclude - I don’t think that when it comes to drug development, we’re talking about ordinary people or individual users. In the context of the parent post, it is implied (though I may have misunderstood) that ChatGPT would be used by entire organizations, such as pharmaceutical companies - just as LLMs in a military context are used not by individuals, but by the DOD and similar organizations. I think this shifts the level of responsibility somewhat. Because when OpenAI enters into a contract for the use of its product, ChatGPT, in the process of drug development and manufacturing, it’s kind of implied that ChatGPT is ready for such use.

[1] https://openai.com/index/introducing-chatgpt-health/

EDIT: I'm sorry that my reply is so long; I'm trying to express all of my thoughts in one comment, which is probably not a good thing to do. I would write something like a blog post about this, but there's a lot written on the topic already. I have also used a translator in some parts, because English is not my native language.


> it simply shouldn’t be possible with a proper approach to testing.

It just has to be delayed. Like many years after application. Or trigger on very specific and rare circumstances. Not likely in a trial, but near certain at a population scale.

Or both...

On top of that, if I remember correctly, this liability waiver also exists for vaccines.


> It just has to be delayed. Like many years after application.

That's one thing. In this case, I don't really know if it's possible to test for delayed effects like that. I'm not even sure you can identify them with 100% certainty, i.e., prove that the effects come from this particular drug and not from another one.

> Or trigger on very specific and rare circumstances. Not likely in a trial, but near certain at a population scale.

And this is a different thing. "Specific and rare circumstances" will not lead to millions of deaths (I apologize if I’m being too nitpicky about this particular phrasing, but I want to speak specifically in the context of "millions of deaths"). "Specific and rare circumstances" occur even with fully effective and "proper" medications; these are called "contraindications". But such rare cases, as I’ve already said, will not lead to mass deaths, precisely because they are rare. I apologize again for focusing on the "millions", but please don’t confuse the scale of the problem.


> Because they should 100% be liable for the latter.

I completely agree with you here. I only want to add that in this case, the users (the one(s) who used ChatGPT to design the drug, whichever entity(ies) that is) should also be held liable for their actions.


Shouldn’t the pharmaceutical company be held liable for insufficiently understanding the drug before releasing it? I don’t think I understand blaming a tool used in the process of designing it and not those who chose to release it.

Pharmaceuticals are heavily regulated, the "we vibecoded a therapeutic and released it without testing" hypothetical has no basis in reality

> Is this for like military scenarios

Probably not. Weapons manufacturers are already well shielded from liability.


Why shouldn’t they be liable for military scenarios? Oh right, we don’t value our “enemies” lives, including their civilians.

Since when have arms merchants been liable for military scenarios? Lockheed doesn't get sued for building the planes that bomb orphanages. Maybe the world would be a better place if they did, but obviously it's not in the interests of a government to have their own contractors getting sued out of existence for something that government is doing.

At that point I would rather sign up for CloudFlare's captcha service. I already use them for some of my websites.

Maybe it should.

> then I can ask the LLM if I'm still implementing the algorithms as they're described in the paper.

Unit testing would save on tokens... unit testing is perfect for validating refactors, or for rewriting a project from one language to another: build the unit tests first.
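To sketch what I mean: pin the current behavior down in tests before refactoring or porting, then rerun them against the new code. A minimal example (the `normalize` function and its tests here are purely illustrative, not from any particular project):

```python
import unittest

# Hypothetical function under refactor; illustrative only.
def normalize(values):
    """Scale a list of numbers so they sum to 1.0."""
    total = sum(values)
    if total == 0:
        return [0.0 for _ in values]
    return [v / total for v in values]

class NormalizeTests(unittest.TestCase):
    # These tests capture the behavior we care about; when porting
    # to another language, translate the tests first, then the code.
    def test_sums_to_one(self):
        self.assertAlmostEqual(sum(normalize([1, 2, 3])), 1.0)

    def test_preserves_ratios(self):
        out = normalize([1, 3])
        self.assertAlmostEqual(out[1], 3 * out[0])

    def test_all_zeros(self):
        self.assertEqual(normalize([0, 0]), [0.0, 0.0])

if __name__ == "__main__":
    # exit=False so the script keeps running after the test report
    unittest.main(exit=False, verbosity=2)
```

If the suite passes before and after the rewrite, you've validated the refactor without burning tokens asking an LLM to re-check the paper.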


Sure, but Elon Musk had known engineering roles at various companies, and built a space company that nobody, not even he himself, thought would succeed into the most viable and affordable way to get things into space.

Idk if I had to be stranded on an island with either Elon or Sam, I think I'd rather be stuck with Elon.


Honestly I’d just start swimming.

> Microsoft wanted me to confirm my age, that I was a "real person" along with identity. So Microsoft somehow reached out to the police department, based on my address information in my Microsoft accounts, with a check of some kind. I had to go to the local police department to verify who I was and my age. The police department told me it was odd. They are just following up on Microsoft complaint. This happened a few or so years ago. Microsoft confirmed my identity then. However, the Microsoft account profile photo issue still exists today.

You what now???


> The police department told me it was odd. They are just following up on Microsoft complaint.

Since when does your local police department respond to a "Microsoft complaint?"


They don't, and Microsoft doesn't contact local police. This post is dubious.

If it's CSAM related (which is implied via PhotoDNA involvement), Microsoft does not contact local police. They contact NCMEC (or the appropriate equivalent), who then coordinates the law enforcement response.

If it isn't CSAM, Microsoft does not contact local police to aid with support, because coordinating over a billion accounts across tens of thousands of police departments around the world would be ridiculous. And police forces would obviously not tolerate acting as Microsoft support personnel.

There has to be a substantial amount of missing context, or this story is (partially? fully?) fabricated, or the user is mistaken/wasn't talking to Microsoft.


That's what I'm saying! That is WILD.

PD: Hello, police department.
MS: Hello officer, this is Microsoft. We're calling to report a user trying to access their system unlawfully...

That sounds like straight up scammer behavior. "Yes, this is Microsoft calling. We need to confirm your info with the local authorities."

> That sounds like straight up scammer behavior.

Microsoft reached out to the police department, then the person went to the local police department to verify who they were. I don't see how this could be a scam.


I think there are a few scammer red flags in this - it stood out to me that they said "support watched me setup 3 different accounts" - not saying MS support couldn't do this, but remoting into the machine and watching a victim enter form details is a very scammer-y thing for sure

Then again: how does the local police department verify they are indeed talking to Microsoft?

It’s been done before: https://krebsonsecurity.com/2022/03/hackers-gaining-power-of...


Home Depot™ Presents the Police!®

Subway™ Eat Fresh and Freeze, Scumbag!®

If you buy one that has a VUDU code and go on moviesanywhere.com, you can link your VUDU account, your Apple iTunes account, your Google Movies account, and whoever else, and the movie unlocks on all those other streaming services. So if you buy a Blu-ray movie, you can stream it on your favorite streaming service provider thanks to MoviesAnywhere (run by the movie industry; the one rare good thing they did).

I buy movies only when it's one I really want and there's either an iTunes code or a VUDU code.


It's also worth looking into if your local library offers Kanopy services.

Big ups on that! Not to mention your local library's collection of DVDs. Or, their inter-library loan system for the ultra weird and rare.

One note on Kanopy - they use a ticket system (10-15 tickets per library customer). So if you have a couple people in your household, all of your library card numbers contribute tickets to the login. And, if you have two library systems like we do here (KCLS and SPL) you can double dip on all the cards again. No hack required - Kanopy actually has a very nice way of failing over to other cards as your quota is used up.

And if that's not enough, try Scarecrow Video out of Seattle. They are the masters of physical film media right now. It's fun to try to stump them. And they provide a mail-order system similar to the old red envelopes of Netflix.

eBay has DVD collections go up for sale all the time. Fun to buy the "box of movies" for $100 and see what you get.

Another big haul for me is from local thrift stores - usually 50 cents to 2 bucks a disc.


https://www.hoopladigital.com/

Hoopla is good for this too, from what I hear. I haven't tried it yet; I haven't taken the time to get a new library card since I moved and forgot to renew. There's a new library opening up near me, so I'm waiting for that before opening a new card.


It's good to note that moviesanywhere.com, Kanopy, and VUDU (now Fandango at Home) sell your data and use it for market research (in addition to other things, there's no telling what it will be used for after it's sold). That said, for those of us in California "Kanopy does not sell your information. Kanopy does not share your information with third parties for money or other valuable consideration."

And if the studio supports MoviesAnywhere, then when you buy a digital copy on any of those platforms, it shows up in your library on the other platforms.

I do the same. Minor correction: it's no longer Vudu, it's now called Fandango at Home. You also have to watch out for expiry dates on the codes. US only.

Paramount and Lionsgate are the only studios which don't participate IIRC.


Look again: they don't charge that fee until after "1M requests per month", whatever that means. Oh, that's if you bring your own provider keys.

https://openrouter.ai/docs/guides/overview/auth/byok

