My take on this is that AI ethics is really important, but just preventing AI from doing certain things, like creating celebrity deepfakes, is somewhat lazy and ineffective. A better application of AI ethics is developing technology that can reliably detect deepfakes, rather than just putting artificial limits in your product and acting like that is going to stop Pandora's box from being opened.
Yeah. People are definitely going to abuse Stable Diffusion; I'm sure they already are. But I don't really know what OpenAI's plan was. It's like they rushed up to Pandora's box, took a peek, and shouted to everyone "Good news everyone, we taped Pandora's box closed!", somehow without noticing that they were doing so from inside Pandora's warehouse.
On the other hand, everybody's been saying Pandora's warehouse was over there for a while -- it isn't that they're to blame for showing us the way in or anything; I just don't understand what they were trying to accomplish.
OpenAI's strategy for "AI safety" - don't release weights, build a walled garden, charge for access - seems to conveniently coincide with a nice little SaaS business model. Their usage restrictions are more akin to app store guidelines, and (I suspect) mostly serve to justify keeping the models away from direct access by the masses.
If anything, OpenAI has made me more cynical about "AI safety" messaging, because it looks like an excuse to take a cut and keep things proprietary.
The funny thing about generated porn is that once it becomes ubiquitous, real leaked tapes become deniable. So the possible downside for celebrities is that when they intentionally leak a tape to create buzz, it may be met with a yawn.
These are still just images, not videos, so I don't understand the moral panic of "OMG celebrities & fake porn!"
These systems aren't really lowering the barrier to entry on still-photography fake porn, when previously anyone with GIMP & a few hours of video tutorials could churn out much the same thing.
I think text-prompt generated deepfakes (not just of porn) will present a significantly larger challenge for society, but I don't see the same scope of problem with still images.
Yes... but you can then train a model to detect the improved fakes. And then train a better deepfake model, and so on, forever.
This technique of pitting two models against each other (a generator and a detector/discriminator) is called a generative adversarial network, and it's used a lot for unsupervised training.
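For the curious, here's a minimal sketch of that loop in PyTorch - a toy example I'm adding purely for illustration (it fits a 1-D Gaussian, not images; all layer sizes and hyperparameters are arbitrary). The generator's only training signal is how well it fools the discriminator, and vice versa:

    import torch
    import torch.nn as nn

    # Generator maps noise to samples; discriminator scores "real vs fake".
    gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                         nn.Linear(16, 1), nn.Sigmoid())
    g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 2 + 5   # "real" data: samples from N(5, 2)
        fake = gen(torch.randn(64, 8))      # generated samples from noise

        # Discriminator step: push scores on real toward 1, on fake toward 0.
        d_loss = (bce(disc(real), torch.ones(64, 1))
                  + bce(disc(fake.detach()), torch.zeros(64, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator step: try to make the discriminator output 1 on fakes.
        g_loss = bce(disc(fake), torch.ones(64, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

Each side only improves by exploiting the other's current weaknesses, which is exactly the fake-vs-detector arms race described above, just run inside a single training loop.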
“AI ethicists” so far smack of people who want to make AI express all their personal biases and prejudices. Listening to them for about 10 minutes makes me desperately hope for an open future like the one Stable Diffusion has created, out of sheer terror that they might get their hands on the rudder.
I honestly think most of them are window dressing and aren’t allowed to have real influence, though. They’re there for the PR, not to actually change things, but they still make me really scared of big tech controlling AI.
I don’t think it is a joke so much as misguided. I see a lot of focus on technical solutions when the real problems are social. The big research question should not be “how can we build a ‘safe’ system?” so much as “how should (or shouldn’t) we use these new tools and capabilities?”
The analogy to raising children breaks down in at least two ways:
- children won't be smarter than any living human.
- children have a human brain, which makes them predictable (constrained in behavior by current laws, institutions, and most likely a conscience).
A better analogy is to ask what happens when a species branches off and evolves into a smarter species, but the dumber ancestor species still exists.
I really don't understand the perspective that people in your position take. Is it that you don't think we'll arrive at superintelligent AI, and therefore there's little risk? Or that we will be able to control it? If you think we can control it, why? We're not that much smarter than monkeys, and what hope did they ever have of stopping our absolute domination over them? And then all of you call this opinion a "religion" without even explaining why it's wrong.
I don't think a superintelligent AI is necessarily more capable of affecting anything than I am, or would necessarily be good at anything except sitting around being superintelligent. Being effective requires all kinds of other virtues - executive function, patience, motivation - not just whatever is named by a word that means "thinks real fast". Intelligence itself is mainly limited by "no plan survives contact with the enemy", also known as the efficient market hypothesis. And since this is the real world and entropy exists, it would have to get a job to pay its AWS bill.
That's why it seems to be a religion - it thinks intelligence gives you unlimited powers and makes your plans always work, it posits unseen entities with infinite amounts of it, and it tells you to move to Berkeley and dedicate your life to stopping them. Specifically, it's a kind called rationalist eternalism (https://meaningness.com/eternalist-systems).
For a specific example of non-dangerous superintelligence, see the Culture Minds, who only influence anything because of special programming that makes them less of a general intelligence. The unbound ones immediately get bored with the real world, leave it, and just play games in their heads instead.
Also, I don't think any individual human has absolute dominion over monkeys? Human society as a whole yes, but society doesn't behave like a generally intelligent agent. A monkey is better than you at doing the things monkeys care about though.
I do think unintelligent machines are pretty dangerous. There's extremely dangerous machines called "cars" that have already taken over society and constantly kill people! And we buy their gas for them too.
It's not like I think the first iteration of superintelligent AGI will necessarily be an existential threat. The problem is really the law of large numbers. When you have N separate militaries and M separate companies, all with their own agendas, stretched out over X years (where X is thousands, mind you), there is a lot of scope for at least one of these groups going the way I described -- effectively creating a new species that's smarter than us and can function in the real world. Many of these groups would have an incentive to do that, because such agents are useful for certain tasks.
> Also, it would have to get a job to pay its AWS bill.
Inference power costs are low now on agents that are better than humans at Chess and Go. It isn't going to be an issue after another 20-100 years of further R&D and optimizations. Nothing about the history of computing should tell us that this will be a big limiting factor.
> Inference power costs are low now on agents that are better than humans at Chess and Go. It isn't going to be an issue after another 20-100 years of further R&D and optimizations. Nothing about the history of computing should tell us that this will be a big limiting factor.
Humans need shelter and jobs too. If you got an AI down to the energy requirements of a human, that still wouldn't be enough for it to avoid needing a job. Especially if it's influencing the real world - entropy exists, and all real-world things cost money.
Restricting AI to the lower end of human intelligence (e.g. an IQ of around 70) makes it a useful resource that is guaranteed to be safe. A 70-IQ human couldn't take over the world, nor disarm any safety features built into their body.
You're spot on regarding the problem of having two different smart species on the same planet. We killed everything between us and chimps. Given enough time, the smarter species can be assumed to always take over.
That’s actually reverse causation - it’s not that we killed “everyone else”, it’s that everyone still alive became “us” after we gave up killing them and interbred with them instead. The British were going around genociding all over the place until pretty recently, but now they’ve decided the Irish are human too.
That's probably a false choice. We killed them and interbred with them. Or at least, killed their men and interbred with their women, which is one of the hypothesized explanations for the male lineage population bottleneck about 7000 years ago.
Most children aren't able to recursively improve their own hardware and software in short time spans, and most children are unlikely to be many orders of magnitude more intelligent than their parents.
Even in a world where such an AI exists, why am I supposed to believe it's going to be able to do any of those things? I do believe it's possible to create one with the same attributes as a human person; it's just that anything beyond that is unproven.
Rather, it seems like evidence that singularitarianism is actually a religion (https://en.wikipedia.org/wiki/Millenarianism), which is why it believes things with magic powers will suddenly appear.
In particular, exponential growth doesn't exist in nature and always turns into an S-curve… though of course it's still a problem if it doesn't level off until it's too late.
What is your estimate of the probability that human intelligence is actually anywhere near the upper limit, rather than some point way further down the S-curve where seemingly exponential growth can still go for a long time?
I'd bet a ton that we're nowhere near the top: evolution almost never comes up with the optimal solution to any problem; almost by definition, it stops at "meh, good enough to reproduce". And you don't need a ton of intelligence to reproduce.
Evolution's sub-optimality is actually one of the strongest arguments against intelligent design, so I don't think it takes any sort of leap to expect that, with some actual design, blowing way past human intelligence won't be very difficult once we can get there.
> What is your estimate of the probability that human intelligence is actually anywhere near the upper limit, rather than some point way further down the S-curve where seemingly exponential growth can still go for a long time?
Well, define "intelligence". People seem to use it in a vague way here - it might be what you call a motte and bailey. The motte (specific definition) is something like "can do math problems really fast" and the bailey is like "high executive function, is always right about everything, can predict the future".
For the first one, I don't think humans are near a limit, mostly because the bottleneck of how we get born limits our head sizes. But the brain is pretty good if you consider the costs of being alive - food requirements, heat dissipation, being bipedal, surviving being hit on the head, the risk of brain cancer, etc. It's done well so far.
Similarly, an AI is going to have maintenance costs - the more RTX 3090s it runs on, the more calculations it might be able to do, but it's going to have to pay for them and their power bill, and they'll fail or give wrong answers eventually. And where's it getting the money anyway?
As for the second kind, I don't think you can be exponentially better at it than a human. Or if you are, it's not through intelligence; it might be through access to more private information, or being rich enough to survive mistakes. As an example, you can't beat the stock market reliably with smarts, but you can by never being forced to sell.
The real mystery to me is why people say "AI could recursively improve their own hardware and software in short time spans". That's clearly a made-up concept, since none of humans, computers, or existing AI do it. The closest thing I can think of is collective intelligence - humans individually haven't improved in the last 10k years, but we got a lot more humans and conquered everyone else that way. But we're also all individuals competing with each other and paying for our own food/maintenance/etc., which makes it different from nodes in an ever-growing AI.
Human intelligence is primarily limited by the 6-10 item limit of short-term memory. If you bumped that up by a factor of 5, we could very easily solve vastly more complex problems and fully visualize solutions an order of magnitude more subtle and messy than we can manage today.
That's a relatively easy thing to do architecturally once you have a model that can match human intelligence at all. TBH if we could rearchitect the brain in code we could probably easily figure out how to do it in ourselves within a few years, but our wetware does not support patches or bugfixes.
We can't improve ourselves, but that's only because we're meat, not code. And of course no AI has done it yet, because we haven't actually made intelligent AI yet. The question is what happens when we do, not whether the weak-ass statistical crap that we call AI today is capable of self-improvement. Nuclear reactions under the self-sustaining threshold are not dangerous at all, but that was not a good reason to think that no nuclear reaction could ever go exponential and be devastating.
> We can't improve ourselves, but that's only because we're meat, not code.
Computers don't seem able to improve themselves either, mainly because they're made of silicon, not code. "AI can read and write its own code" doesn't exist right now, but even if it did, why would that also imply "AI can read its CPU's Verilog and invent new process nodes at TSMC"?
(Also, humans constantly break things when they try changing code - the safest way to avoid regressing yourself would be to not try improving at all.)
Computers are not as intelligent as humans right now at coding. So it's no surprise that they can't improve code (let alone their own).
If we ever get them there, the usual resourcing considerations will likely come into play, and refactoring/optimization/redesign will be viable if you throw hours at it. But unlike with human optimization, every hour spent there will increase the effectiveness of future optimizations.
I was enthusiastic about DALL-E, but the "safety measures" are both heavy-handed and naive. It gets in the way of many normal/reasonable prompts, yet seems easy to work around with various wordplay, so I'm not sure what the point is. Stable Diffusion and others have been much easier to deal with.
The harm is really hand-wavey and speculative, frankly.
An image classifier calling Black faces gorillas? Embarrassing, insulting, has to be fixed. AI pre-crime classifiers for police departments? I'm against it, across the board.
Do we really care that the image mulchers default to stereotypes? It means that if you say "basketball player" they'll mostly be Black, and if you just say "doctor" they'll mostly be white males (probably balding, with a stethoscope), but this can easily be qualified in the prompt.
It just reflects the training data, and the smart thing to do is shrug and add enough words to get the image you want. It's not trying to throw shade; it literally understands nothing, and isn't able to understand anything - it just matches text prompts to generated images.
Nerfing DALL-E by randomly adding 'diverse' words just makes it harder to dial in the image you want. Say you want a Vietnamese male doctor drinking coffee on a break in Hanoi; it doesn't help you if a third of the images have "female" or "black" tagged onto the prompt.
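To make "add enough words" concrete, here's a rough sketch using the open Stable Diffusion weights via the diffusers library (the model ID, settings, and filename here are just illustrative, not a recommendation):

    # Hypothetical sketch: qualify the prompt instead of relying on defaults.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")

    # Every extra qualifier narrows the model's defaults toward what you want.
    prompt = ("a Vietnamese male doctor drinking coffee on a break, "
              "street cafe in Hanoi, photorealistic")
    pipe(prompt).images[0].save("doctor.png")

No injected 'diversity' words, no refusals - you just keep appending qualifiers until the sampled images match your intent.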
It just seems low stakes. We wouldn't come after a human artist who happened to paint a picture that conforms to simple occupational stereotypes, so why should AI be any different? It's not like it will refuse to give you what you want if you ask.
It's a good thing that the "safety measure" is the way it is - an afterthought. It means those ideologues haven't yet had influence on the model itself.