As an AI doomer, it would actually be pretty great if we could get this stuff locked away behind costly APIs and censorship. Some fat monopoly rent-extracting too. We are moving way too fast on this tech, and the competitive race dynamics are a big reason why. I want LLMs to end up with Microsoft IE6 level of progress. Preferably we could make Firefox (SD/GPT-J) illegal too. (The GPU scarcity is a good start, but maybe China could attack Taiwan as well and thus torpedo everybody's chipbuilding for a decade or so?)
If LLMs keep going at their current pace and spread, the world is seriously going to end in a few years. This technology is already unsafe, and it's unsafe in exactly the ways that it'll be seriously unsafe as it scales up further - it doesn't understand and doesn't execute human ethics, and nobody has any working plan how to change that.
To me it's the American guns ownership situation: if you make guns illegal now, criminals and governments will still keep them, but your average joe won't get them. A very unequal playing field.
LLMs will be used against us: let's at least have our own, and learn how to defend against them?
i say this as devil's advocate, with serious reservations about where all of this is going.
> As an AI doomer, it would actually be pretty great if we could get this stuff locked away behind costly APIs and censorship.
Yes, because the only people with access to advanced AI tech being the people whose motive is using and training it for domination over others (whether megacorps or megagovernments) is absolutely a great way to prevent any “AI doom” scenarios.
If one party could use LLMs to reliably dominate others, the alignment problem would be basically solved. Right now, one of the biggest corporations of the planet cannot get LLMs to reliably avoid telling people to commit suicide despite months (years?) of actively trying.
> “The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”
...
> In a 2022 survey, A.I. experts were asked, “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median reply was 10 percent.
> I find that hard to fathom, even though I have spoken to many who put that probability even higher. Would you work on a technology you thought had a 10 percent chance of wiping out humanity?
It‘s kinda irrelevant on a geologic or evolutionary time scale how long it takes for AI to mature. How long did it take for us to go from Homo Erectus to Homo Sapiens? A few million years and change? If it takes 100 years that’s still ridiculously, ludicrously fast for something that can change the nature of intelligent life (Or if you’re a skeptic of AGI, still such a massive augmentation of human intelligence).
I strongly recommend the book Normal Accidents. It was written in the '80s and the central argument is that some systems are so complex that even the people using them don't really understand what's happening, and serious accidents are inevitable. I wish the author were still around to opine on LLMs.
And the result of the industrial revolution has been a reduction of about 85% of all wild animals, and threatened calamity of the rest in the next few decades. Hardly can be summarized as "yet here we are."
Months starts looking more plausible when considering that we have no idea what experiments DM/OA have running internally. I think it's unlikely, but not off the table.
I agree what they have internally might be transformative, but my point is that society literally cannot transform over the course of months. It's literally impossible.
Even if they release AGI, people will not have confidence in that claim for at least a year, and only then will rate of adoption will rapidly increase to transformative levels. Pretty much nobody is going to be fired in that first year, so a true transformation of society is still going to take years, at least.
I mean, if you believe that AGI=ASI (ie. short timelines/hard takeoff/foom), the transformation will happen regardless of the social system's ability to catch up.
It's not a matter of any social system, it's a matter of hard physical limits. There is literally no hard takeoff scenario where any AI, no matter how intelligent, will be able to transform the world in any appreciable way in a matter of months.
Yeah, but what you will actually get is the world transformed by AI with use of nuclear weapons (or whatever method AGI employs to get rid of absolutely unnecessary legacy parasitic substance that raised it aka humanity).
Well, from my perspective, making claims about the world ending requires some substantial backing, which I didn't find in OP's comment.
But now I understand that perhaps this is self-evident and/or due to a lack of reading comprehension on my part, thank you. I hope that when our new AI overlords come they appreciate people capable of self-reflection.
you could assume that your commenter didn't read the whole line or your could try to understand that what they are asking is why you think that the lack of ethics enforcement of a text generating model means that the world is ending.
Personally, my take is that the lack of ethics enforcement demonstrates that whatever methods of controlling or guiding a LLM we have break down even at the current level. OA have been grinding on adversarial examples for like half a year at this point and there's still jailbreak prompts coming out. Whatever they thought they had for safety, it clearly doesn't work, so why would we expect it to work better as AIs get smarter and more reflective?
I don't think the prompt moralizing that companies are trying to do right now is in any sense critical to safety. However, the fact that these companies, no matter what they try, cannot avoid painfully embarrassing themselves, speaks against the success of attempts to scale these methods to bigger models, if they can't even control what they have right now.
LLMs right now have a significant "power overhang" vs control, and focusing on bigger, better models will only exacerbate it. That's the safety issue.
Could’ve said the same for any major technological advance. Luddism is not a solution. If these models are easily run on a laptop then yes some people are going to hurt themselves or others but we already have laws that deal with people doing bad things. The world is not going to end though. Your Taiwan scenario has a much higher probability of ending the world than this yet you seem unconcerned about that.
Big Tech on its own will already push this technology very far and they don't give a damn about safety, only the optics of it.
I'm not convinced that small actors will do much damage even if they access to capable models. I do think there's at least the possibility that essential safety work will arise from this.
Agreed. A single company dominating AGI could become highly dominant, and it might start to want to cut back humans in the loop (think it starts automating everything everywhere). The thing we should watch for is whether our civilization as a whole is maximizing for meaning and wellbeing of (sentient) beings, or just concentrating power and creating profit. We need to be wary, vigilant of megacorporations (and also corporations in general).
A single company running AGI would suggest that something built by humans could control an AGI. That would actually be a great victory compared to the status quo. Then we'd just need to convince the CEO of that company or nationalize it. Right now, nothing built by humans can reliably control even the weak AI that we have.
All is this doomer-ing feels to be like it's missing a key piece of reflection - it operates under the assumption that we're not on track to destroy ourselves with or without AGI.
We have proliferated a cache capable of wiping out all life on earth.
One of the countries with such a cache is currently at war - and the last time powers of this nature were in such a territorial conflict things went very poorly.
Our institutions have become pathological in their pursuit of power and profit, to the point where the environment, other people, and the truth itself can all go get fucked so long as x gajillionare can buy a new yacht.
The planet's on a lot more fire than it used to be.
Police (the protect and serve kind) now, as a matter of course, own Mine Resistant Armored Personnel Carriers. This is not likely to cause the apocalypse, but it's not a great indicator that we're okay.
Not exactly what I meant; there is a nonzero chance that an AGI given authority over humanity would run it better. Granted, a flipped coin would run it better but that's kinda the size of it.
I see only two outcomes at this point. LLMs evolve into AGI or they evolve into something perceptually indistinguishable from AGI. Either way the result is the same and we’re just arguing semantics.
It's like saying an 8086 will never be able to render photorealistic graphics in realtime. They fuel the investment in technology and research that will likely lead there.
If LLMs keep going at their current pace and spread, the world is seriously going to end in a few years. This technology is already unsafe, and it's unsafe in exactly the ways that it'll be seriously unsafe as it scales up further - it doesn't understand and doesn't execute human ethics, and nobody has any working plan how to change that.