Crypto never solved so many problems in such an immediately visible way.
AI has immediate and fully practical uses; it's completely different.
Stable Diffusion with ControlNet and other art AIs are game changers for art creation.
AI artwork is already winning awards.
Generative AIs are evolving so rapidly. ElevenLabs' voice AI is absolutely amazing; we are planning to use it instead of hiring voice actors for internal presentations going forward.
AI-generated Seinfeld was watched by millions, and I thought it was pretty damn good.
The flexibility and immediate usefulness of neural-net AIs is just astounding, and to think we are still at the beginning of the paradigm shift.
Agreed, the two are virtual opposites: no one is making money from AI (yet), but everyone is using it, while in crypto, a bunch of people were making money from it and no one was actually using it.
At my last company, we saved our business customers US$2-3mn/year using AI products. And we made good money selling those products.
Those weren't products sold from techies to techies (the way blockchain made some people money), either. They were end-user facing: they solved problems for the end users, too.
And I know a lot of people who are making money solving real problems for people outside of tech.
Equating AI and blockchain is a common HN commentariat fallacy.
Siemens are selling AI for large sums from their medical division. I use it all day every day to denoise then upscale medical images and it’s fantastic.
The entirety of the acquired data is in the image you see.
You can turn the processing off and look at the images without it if you are worried.
You also don’t diagnose off one image. You use several in each plane, and more than one image weighting. The software is unaware of the other planes and images when processing.
You don’t need long playing with it to be very confident that it’s utterly transformed MR imaging.
An example: decent resolution cervical spine Sag T2 imaging used to take a few minutes (2 or 2:30?), it’s now 40 ish seconds. It’s made scans faster, and better looking.
Correct, AI has obviously been making money for a long time. But it wasn't described as a bubble until recent developments fired up a rapid rise in startups/investment.
Agreed. AI has a TON of real-world use cases, right now. I just wrote a letter to an insurance company asking for a settlement for a car crash with the help of ChatGPT. My friend couldn't afford a lawyer and didn't know how much to ask for. ChatGPT suggested some amounts for minor whiplash and whipped out a letter (pun intended) to send off for negotiation. This is distributing equity to people who couldn't afford it otherwise.
This criticism, like many others, attacks the mechanics of how LLMs think, apparently dismissing the models for not doing so using the same process, faculties, and background life experience as a human. It does not contain any compelling arguments to refute the notion that LLMs think. We may not have consumer-level AGI just yet, but suggesting an LLM is just a dumb anything (stochastic parrot or otherwise) is a rather extraordinary claim to make about something that has basically patterned the whole internet.
We’ve been here before. Our sense of place in the universe was first upset by the heliocentric model (we’re not as special as we thought), then the theory of general relativity (not as correct as we thought), then quantum mechanics (not even living in a deterministic universe, really). With all these fantastic discoveries behind us, now seems like the right time to learn that we’re not as smart as we thought either.
The stochastic parrot claim seems to crash people’s brains because they assume this is all about words. The fact that these models very clearly learn not just patterns of words, but patterns of ideas, and ideas about ideas, seems a very profound result. Personally I do remain unimpressed by how the current models ‘think’ on top of this knowledge, but I’d have to work very hard to be as cynical as to imply this was all a scam, or somehow not progress (whatever you think of the destination).
Just as the new AI image generators are not merely replicating existing pictures but generating wonderful new ones, language models are not just regurgitating an average of what they've seen; they are capable of generating novel ideas.
They're certainly capable of remixing things they've seen, and adding in randomness will add novelty. Whether that counts as "creativity" is something people can debate : - )
I think that the reason image ones have caught on better in some ways is because they don't need to be accurate. We're not asking them to understand anything, just produce images based on text prompts (which is amazing stuff all by itself).
Remixing objects and designs consistently like "van gogh" or "super mario" really implies some kind of internal model or "understanding" of the world.
Image generation didn't catch on because of a lack of accuracy, but because of how GOOD the results are. It's made artwork immediately accessible to the masses without the huge learning curve. That's where these AIs are going to shine very, very quickly.
Oh yes - you show them X and they make a model of X.
Show them enough pictures labelled "Van Gogh" and they get an idea of what "Van Gogh" looks like. They do an awesome job of that.
The problem with the text ones is that people think that showing them words means they make a model of the thing the words are describing, rather than of how those words go together.
> The problem with the text ones is that people think that showing them words means they make a model of the thing the words are describing, rather than of how those words go together.
I believe the crucial insight is that the most efficient way to learn how the words go together is to build at least some approximate model of what the words are describing. And our optimization algorithms and model architectures are good enough to find these solutions.
I guess you are on a different level of abstraction than I am.
It's clear the art AIs have a model of Van Gogh's style, and apply it to create unique new forms of art. The neural model weights aren't storing compressed images of Van Gogh, but relationships and mathematical models of a concept.
Hum, as someone who grew up in the times of the linguistic turn, I'm somewhat missing the groundbreaking revolution here. The entire idea of "text" was based on this.
We're certainly not as smart, or special, as we think, and I often wonder if other species like dolphins or whales -- or crows! -- see the universe as made especially for them.
But comparing the current AI situation, and its parroting LLMs, to the Copernican revolution is so over the top that it's absurd.
Is it really so over the top, though? We have machines that have basically internalized the Internet and can easily pass a Turing test, and yet we still have people trivializing these marvels as mere “parrots”. I believe this is partly a defence mechanism (we humans are inherently hubristic), and if a stubborn attachment to humanity’s “specialness” is the cause, then that certainly mirrors the psychology of Copernicus’s detractors.
In a way, though, the "parrot" moniker is apt. The (long-aspirational) Turing test was originally called the "imitation game", and what's better at imitation than a parrot? Apparently, it's ChatGPT: I never did see a parrot write code.
I don't see a contradiction. Humans can both be intelligent and overestimate that intelligence (by thinking it's special and unobtainable) at the same time.
I've read that there is evidence that some ocean mammals have stronger emotional intelligence and deeper emotional experiences than humans do, and more complicated social relationships.
The experience of losing a mate might very well be a deeper kind of pain for them than humans are capable of feeling.
“It’s just auto complete” is probably one of the worst takes out there on AI.
It’s also clear the author doesn’t actually know how LLMs work and is parroting information, e.g. “…it tries to get you to finish your sentence with the statistically median thing that everyone would type next, on average.” is just not correct, and, frankly, suggests the author hasn’t even observed what autocomplete does.
I completely respect the view that there’s more to being human than pattern matching text, but I also am open to the possibility consciousness may not actually be that much more than stochastic parrots.
- It’s reductive. The model may be architecturally relatively simple, at least insofar as humans are involved, but the data that was used to train it is anything but. Code is data, data is code. The model is more than its architecture. It’s all the weights and biases within.
- It’s qualitatively wrong. The temperature settings change it from a predictive system (auto complete) to a generative one (digital assistant).
- It’s quantitatively ignorant. How do you eat an elephant? One bite at a time. For that matter, how do you speak if not one word at a time? It happens that the model generates one token at a time as well, but the million-dollar question remains: what chain of reasoning did it have to follow to come up with that token? Just because it’s only outputting a single token doesn’t mean it’s not “thinking” about the structure of the sentence, paragraph, or the entire document, several tokens ahead, to say the least. Indeed, having been trained on such a large internet corpus, it has the capacity to model entire personalities and all the quirks and neuroses that go on behind the scenes.
So, yes, it is a bad take. That being said, it’s still a useful one because it encourages people to think about the models as tools, which is exactly correct given how they’re structured. So I still describe them as such.
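To make the temperature point above concrete, here is a minimal sketch of temperature-scaled sampling. The logits and vocabulary are invented toy values, not any real model's output or API; the mechanism, though, is the standard one: divide the raw scores by a temperature before applying softmax, so low temperatures collapse toward the single most likely token (autocomplete-like) and high temperatures spread probability across alternatives (generative).

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample a token index from raw model scores (logits).

    temperature < 1 sharpens the distribution toward the most likely
    token; temperature > 1 flattens it, producing more varied output.
    """
    # Scale logits by temperature, then softmax into probabilities.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index according to those probabilities.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy vocabulary scores: token 0 is strongly preferred by the "model".
logits = [4.0, 1.0, 0.5]
cold_pick = sample_next_token(logits, temperature=0.1)  # almost always 0
hot_pick = sample_next_token(logits, temperature=50.0)  # close to uniform
```

At temperature 0 (taking the limit, i.e. always picking the argmax) the system really is deterministic autocomplete; everything interesting about "generative" behavior lives in that one knob.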
> “thinking” about the structure of the sentence, paragraph, or the entire document - several tokens ahead, to say the least
You put thinking in quotes, so what exactly do you mean by thinking? Where does it do it? When?
AI as autocomplete is a perfect summarization of both what it does and how it does it. Anything more, including models "thinking", is strategic PR aimed at leaving the public with their mouths gaping while these companies take all the intellectual property ever published online and monetize it without paying the authors.
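As a concrete reference point for this comparison: classical autocomplete really is little more than a frequency table over observed next words. A toy bigram sketch (the corpus and function names here are invented for illustration, not taken from any actual product):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the corpus."""
    following = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def autocomplete(model, word):
    """Suggest the single most frequent next word: classic autocomplete."""
    if word not in model:
        return None  # never seen this word; no suggestion
    return model[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ran")
suggestion = autocomplete(model, "the")  # "cat" follows "the" most often
```

Whether an LLM is "just" a vastly scaled-up version of this lookup, or something qualitatively different, is exactly the disagreement running through this thread.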
"It's just a parrot", parroted repeatedly to suggest a lack of deeper understanding, while revealing the same. So meta.
The general reaction of people to all this has been one of the most interesting things about it imo. Earlier versions were generally just fun and sometimes amazing but suddenly so many are going out of their ways to play it down. Of course there are valid criticisms and worthy discussions but some of it feels like more than that.
I think one side of it is that for the first time many people are genuinely comparing bots to humans, which by itself is kind of mind blowing.
Another side seems to be more about "controlling" something new and scary. Maybe that thing is tangible, like the tools themselves, or maybe it's just the idea that we're not that special.
With all the arguments how ChatGPT is "just autocomplete", I wonder if these people ever used it. I know it is technically autocomplete, but the end results are so much more than that.
Indeed, I was quite sceptical of all of this until I actually tried Copilot. Sure you can characterise it as “just autocomplete”, but that autocomplete has saved me a bunch of typing and thinking.
I'm not saying it's wrong, but thinking is usually the most important part of programming and the part relevant to long-term success.
Typing is pretty easy and not the bottleneck of software development. That's why readable variable names are better than abbreviations.
Judicious use of abstractions helps with that as well.
I think where things like copilot might shine is being a JIT educator of coding practices but they should be part of your thinking process and not replace it.
The risk with over-relying on crutches is that they substitute for the knowledge and intentionality of development.
There's a balance, and one that needs to be sought.
I'm not sceptical about the power, but I am about the claims of people who want to cash in or simply say "it's awesome" and allow no discussion. That means you get projects trying to use GPT models as some sort of authority that doesn't need to justify why it's better than some alternative. If you argue against it you're "not seeing the opportunity".
It's not a new battle, fighting hype and separating the actual capabilities of a tech from the lies or misconceptions. It's hard to get people to see short- vs. long-term.
Curious, have you tried Copilot? The type of thinking and typing it saves me from are not trivial. It turns writing the remainder of a method into just pressing tab. Or it completely writes test cases based on the case name. Or just suggests test cases entirely.
The other day I let it autocomplete methods in a public interface and it was literally ideating feature ideas for me.
Sure there’s overblown hype but there is also real value here.
If you're talking about Microsoft's ML-powered suggestion/autocomplete tool, I haven't.
I'll try it for the sake of this conversation but I think it still doesn't change my point about being intentional and thinking.
I'd like to see examples of those tests. After many years of software development and seeing how many people produce buggy services that create a lot of maintenance burden or simply solve the wrong problem, I firmly believe writing is a minuscule part of the day and it's mostly about thinking enough and communicating well with the human interfaces of your project.
Writing is easy. You can get fast with touch typing or autocomplete features (including the copilot one) but that's not the most important thing.
Knowing why you do something and being able to analyze that decision after a period of time and evaluate how "right" it was, is not something that can be done for you.
For quick brainstorming it can be great, but we should be careful with the authority we assign it when we can't really explain its thought process.
If you think about it, autocomplete based on heuristics already embodies quite some intelligence. At least a lot more intelligence than just the word output of a parrot. The fact that neural nets grew out of the same task does not mean they could not be many times more intelligent. I'm pretty convinced that the next level, within 10 years from now, will be indistinguishable from human intelligence in a lot of areas. Think about diagnosis of illnesses, predictions of weather and climate, logistics, teaching, driving cars, traffic regulation, prediction of natural disasters, migration... We people tend to raise the bar for intelligence after each breakthrough.
If you haven't spent more than an hour investigating ChatGPT or Bing Chat, then you really should. They are astonishing. I would say that in certain ways they are smarter than me. In a generation or two, with a motivational layer on top, they will be smarter than me. Do not scoff until you have honest-to-god sat down and interrogated these things.
I personally wouldn’t say they’re smarter than you. They’re definitely faster. But I believe you, internet stranger, could spend time and resources and produce the same result ChatGPT has produced for you. It would probably take you a lot longer though.
> could spend time and resources and produce the same result ChatGPT has produced for you. It would probably take you a lot longer though.
Actually no. I have a limited lifespan of which I've already burned half and have to spend at least another 1/3rd of the remaining lifespan sleeping. Not even counting making money to feed and shelter myself.
GPT is already a better translator than I'll ever be in far more languages than I'll ever know. To get better than GPT at any one of these things would require a massive amount of dedicated time. It is already superhuman in this sense, with capabilities well past all but the most unique translators.
This is the story of John Henry again. Yea, humans are still exceptional and can beat the machine at things, especially if they've had any training in doing so, but we're not sticking with the same steam-powered hammer, we're spending billions on building newer, bigger, faster ones. More and more of humanity will fall behind the machine every day.
You should let an AI bot have sex with your significant other when the tech is ripe enough, to avoid burning even more of that precious lifespan on things that a machine can do better than you ;)
Tongue in cheek of course, but if you think like that, nothing is "worth" doing - why learn languages, why run, why climb a mountain...
John Henry wasn't hammering those spikes for fun mate. Many of us (idealistically) work on tech with the goal of freeing us to spend time on things like your examples.
I don’t think we disagree here. I’m arguing that something like CGPT is not smarter, it just has more time available or rather can do in a short amount of time what would require us years or maybe an entire life.
I don't think the argument is that "it's just autocomplete" by itself, but rather the implications of such. The current product is absolutely useful for all sorts of little things, but I think we're all looking to the future - whether consciously or not. The idea is not that ChatGPT, in its current state, will change the world - but the exciting possibility that ChatGPT shows we're on the verge of artificial general intelligence. OpenAI themselves are happy to play into this with regularly repeated claims of AGI coming within 10 years.
So the question is, is it coming? And I think this is where "it's just autocomplete" comes into play. Can you get from a really sophisticated autocomplete system to AGI? Look to the past of humanity. Our intelligence drove the pinnacle of human knowledge from 'bash stone, poke with pointy part' to putting a man on the moon. Now imagine we were able to seed a ChatGPT-style program with all of the expressed knowledge of humanity from that former time. Where would it lead us? Your answer here is going to be driven largely by whether you think what we're seeing today is "just autocomplete."
From where I’m sitting, ChatGPT in its current nascent state is absolutely changing the world. It’s not only showing what’s possible but also making a powerful and highly disruptive tech available to the masses to hack and extend.
Of course, we won’t stop here, but as one of the seeds from which AGI will spring, I truly believe it will be seen as a historical innovation in the same league as early search engines, crypto (yes, unironically), or even the internet itself.
> From where I’m sitting, ChatGPT in its current nascent state is absolutely changing the world.
From what I've seen, ChatGPT is changing the world... it's an absolute goldmine for spammers, content mills, and the bullshitter industries. We've already seen at least one magazine cease unsolicited submissions because ChatGPT output overwhelmed their editors. We've had a contributor on an open-source project get incredibly belligerent when told not to use ChatGPT to respond to bugs (because its advice wasn't helpful).
We’ve also had thousands of people use ChatGPT to tune their writing to be more polite, and to fix other issues they struggle with.
Pointing to two very real anecdotes as evidence of systemic uselessness of a technology is not super convincing. It’s the rhetorical equivalent of saying cars will never amount to anything because Ms. Agnes Thompson was run over by one in Cleveland, or that the internet is pointless because someone sent a hoax UUCP message.
Let's imagine that ChatGPT stayed frozen in its current state of functionality, but was otherwise completely available for use. What would you see being realistically different in the average person's life ten years from now?
That’s a moot question because it will continue to evolve, and while its successor will inevitably dwarf its capabilities, it will be remembered as an early influence and historically significant innovation.
If it was frozen in current state, however, it would still be a springboard for innovation because businesses will find new ways to integrate it with their products and compose its functionality with other code and itself. It’s impossible to imagine the kinds of influence these products will have because progress will be following the exponential part of an S curve for some time to come.
Finally, even if in this artificially constrained world of our imagination we don’t allow for third party products that use it as a platform, it will completely upend the way business is done, on a 10 year timescale, just as search engines have before it.
1. Lots more communication because it lowers the barrier to composing simple, effective messages. “Write a polite complaint letter to my gym asking them to play music other than Prince” changes the ROI on casual notes.
2. Much, much easier for non-native speakers to ensure writing is grammatically and tonally correct.
3. Huge reduction in make-work exercises where the idea is to just soak up an hour of someone’s time to validate they have some understanding of a topic.
He can't have done in any serious way. All these negative opinions just show you how much people make stuff up out of thin air, rather than bothering to spend time with the actual real world. It's all emotion driven and it drives me wild with rage that people are so unprincipled.
I've made this type of argument and have used it quite a bit (though I wouldn't say "just"). Autocomplete is a very useful tool. ChatGPT even more so. It is actually fantastic. The point I was making when explaining it this way was that, in its current state, this technology isn't going to do anything amazing on its own, but will help more people do more amazing things. It still needs to be carefully directed. For people outside the technology world, this distinction is important. I don't think it will hold up forever though.
It’s kind of amazing how wrong the author is here. Any comparison to crypto bubbles fails when one digs anywhere below the surface level. Crypto was a solution in search of a problem. Machine learning is a collection of techniques specifically designed to solve problems. That’s why it was used long before your grandma knew what ChatGPT was, and ML will continue to be used even if OpenAI shuts down ChatGPT tomorrow.
I will say that crypto is a big reason why AI is blowing up, though. It primed people to believing in tech-backed get rich quick schemes. That’s why I avoid all ex-crypto “entrepreneurs” turned AI aficionados who couldn’t backpropagate their way through a paper bag.
LLMs may be stochastic parrots, but the critics of LLMs are starting to look like deterministic parrots. Do they know any other phrases besides “autocomplete on steroids” and “stochastic parrot”?
(The LLMs do: I asked ChatGPT for some sarcastic and dismissive phrases for language models and it gave me back “mindless mimics”, “algorithmic babblers”, “robotic regurgitators”, “synthetic chatterboxes”, and my personal favorite: “soulless scribble-bots”.)
I think this is in large part because they are reacting against the breathless hype that LLMs are either already sentient themselves, and thus we need to start giving them rights and/or being terrified that they're coming for us, or they're the last step before that happens (with basically the same conclusions, just time-shifted slightly).
Sure, companies might be able to have ChatGPT and its brethren spit out marketing copy and similar types of writing that are, to a large extent, about chasing the lowest common denominator. But even if we grant for the sake of argument that it's possible to train an LLM that can create, say, a brand-new "Shakespeare" play without so much input and tweaking that it would take a writer or a liberal arts major to do it anyway, that day is a long, long way off.
No; the only people I've heard talking about how all the knowledge jobs will be automated away by ChatGPT (or, more commonly, just all the "unproductive" humanities-type jobs—because, of course, the humanities are something we can totally hand over to LLMs with no negative consequences) are more tech-oriented people with vastly inflated ideas of ChatGPT's abilities and the tech sector's primacy in the world. The humanities people I've talked to about the subject tend to fall either into the "this is a bunch of BS hype" camp or the "I've watched too many sci-fi movies and think Skynet is launching tomorrow" camp.
Eh, this is just because people are in general terrible at nuance.
We don't have AGI yet, so no Skynet isn't here, but that doesn't mean we shouldn't watch out and make sure it doesn't show up and that we have a regulatory framework to ensure Skynet doesn't happen.
And, no liberal arts majors are not going to be replaced, but that doesn't mean they are coming out unscathed either. Things like GPT are going to force further specialization as we offload the generalized work off to tools like this. Now, humanity has been doing this for a long time so this is nothing new, but specialization is risky, even a small change in technological capability and the thing you trained your entire life for has been replaced by a small shell script and you're left trying to figure out how to pay rent next month.
It used to be someone would confidently say “computers can’t do this” and they could be right for 10 or 20 years before the state of the art caught up and proved them wrong.
Now it’s like, “it can’t write a credible and coherent article”. Well, GPT-3 does fine on paragraphs, and whole articles are literally the very next incremental step from that, so GPT-4 will probably be able to do that. And when does that come out? Like next week?
Are you sure it can't write articles? I had it write an outline for an article, then an outline for each of those sections, then the content of those sections and it wasn't too bad.
Based on what's presented here: https://acoup.blog/2023/02/17/collections-on-chatgpt/, it's quite good at producing the form of an article or an essay, but the actual thesis made is incoherent. If what you want is a content mill, it's great. A depressing amount of web articles already look like this with human authors. To be clear though, the generation of the format of communication, content being optional, has always been the goal and domain of chatbots. It's no big revolution.
The problem I have with claims like “LLMS aren’t thinking— they are just parrots”, is we don’t even know what human thinking really is! So many people want to assume that human complexity makes us special, yet there is not any proof of that right now.
This article claims “AI is not intelligent”— and I’ll counter with, “what is intelligence?” And further, say that we somehow prove LLMs aren’t “thinking” as humans do, but they still give (eventually) a near perfect illusion that they do — what does it matter that it’s not “really thinking?”
I feel like a crazy AI cultist at times when discussing this, but my main (admittedly petty) point is that such strong confidence about the similarity or dissimilarity of human thinking to LLMs is unfounded.
It’s like we are comparing the insides of two black boxes and trying to make absolute claims on them.
Ah, I mean, if you're hell-bent on not being impressed by ChatGPT, you don't have to be. Everyone's entitled to their own opinions too. It's already useful for those in certain roles; the only question is what the medium-term situation with ChatGPT is gonna be. It's free, with a paid option for now. Is it going to get shut down? Is it going to go pay-only? Or go away and only be available as BingGPT? If you're already using this for work (eg gptforwork.com) or play (https://gamesplayedbadly.com/2023/02/14/create-your-own-dd-a...), those are your real questions. Pundits and critics have a vested interest in predicting the future the way their readers want, but your time machine is as good as mine - it only goes 1 second per second and we'll get to the future at the same time.
If your role doesn't involve things ChatGPT would be useful for (eg you're a blue-collar carpenter), it doesn't seem very useful, but neither do computers or the Internet, really. They still revolutionized the world though. So do you want to be a buggy whip manufacturer, or a computer (the job, mostly employing women, prior to the advent of the digital computer and auto-calculating spreadsheets, who performed the math for spreadsheets at accounting firms)? Or do you want to at least be aware of incoming trends?
Crypto and web3 still have yet to show a clearly defined use case to anyone outside that industry. Meanwhile, anybody with a phone number can make an OpenAI account and try out ChatGPT. Some, like our carpenter, will walk away thinking it's neat but ultimately useless. Others simply won't be impressed, for whatever reason. Some will see immediate uses for it in their life and won't be able to live without it again. Don't expect them to speak up about it either; they're too busy using it to write emails and make plans to be bothered to convince the haters.
> If your role doesn't involve things ChatGPT would be useful for (eg you're a blue collar carpenter), it doesn't seem very useful, but neither do computers or the Internet, really.
Are you freaking serious? Carpenters use computers all the time. Ever heard of CAD? Calculators? Talking to customers via email? Ordering materials, researching... I'm shocked, actually, at how naive people are.
> Crypto and web3 still has yet to have a clearly defined use case by anyone outside that industry.
What exactly is the use case for ChatGPT? I mean it can do a bunch of different things, but to what degree really depends on a great deal of factors, so I don't really get your point.
I actually think maybe this will be a problem for ChatGPT as a product in the future. It doesn't really do anything especially well, and it's not clear when you should trust it to be correct. Maybe it will get to 99.9% accuracy soon; until then, it will be interesting to see what actually happens when the novelty wears off.
I do remember walking home from my friend's house after using a VR headset about 6 years ago and thinking, well that's it, I'm going into the matrix. It's been 6 years and I've never had the need to use one again. Maybe when designing our house I would've liked to put one on for 10 minutes to walk through the plans.
ChatGPT has had a similar effect for me. I used it, it was fun, but I didn't really have that much daily use for it. Now it's just a tool, like many in my toolbag, that I pull out if I can think of a good use for it; it sometimes yields good results, then I move on.
Edit: Please, if you downvote, I'd like to hear why; don't hate on people for having a difference of opinion.
My girlfriend uses it to write her Instagram captions. My nephews use it to do homework.
Anywhere you need text written, it can generate something. Like you said, it might not be correct, but you can read what it tells you, you don't need to copy paste it verbatim.
Rather than asking it a question, feed it some bullet points and watch it convert it into paragraphs. Read the paragraphs and remove anything extra it added. It probably saved you 10-20 minutes and lots of frustration depending on how much you hate writing.
If you can't see the use for yourself, that's fine. But I really don't believe that you can't see how it would be useful to other people.
I find it hilarious to compare with crypto, where the majority of projects are purely about speculation.
> Anywhere you need text written, it can generate something.
> Rather than asking it a question, feed it some bullet points and watch it convert it into paragraphs.
Part of me is hopeful that this will have the effect of people recognizing that maybe
a) The text didn't need to be written after all, or
b) The bullet points you started out with were enough after all.
There is too much content being produced out of a notion of "something has to be here" for layout reasons, or because it's simply what somebody expects.
Probably, ubiquitous use of generative text models will worsen the signal-to-noise ratio we see in media today.
Fair point, but - uno reverse card - maybe it will make tedious exposition so effortless and superfluous that nobody wants to do it anymore.
It might do to overlong text what border-radius did to rounded corners, and discourse will trend back to wit and concision over ceremony and formality. Businesses will value drafts - unpolished bullet points, like you describe - over finished pieces, because they’ll feed everything into standardized models fine-tuned by Brand Prompt Engineers to generate copy in the organization’s voice.
Yeah, I 100% agree. I'm hopeful that the end game will be that people will realize that having AI write garbage to have AI then summarize it is pointless.
But for now, it's not acceptable to communicate with half sentences and bullet points, so AI is useful.
Not just bullet points, but tone and professionalism too. ChatGPT will translate "fuck you, pay me" to "Please provide payment as soon as possible."
Now, in the future, after the Zoomers take over, it's entirely possible that "fuck you, pay me" will become acceptable language between professionals and be seen as a reasonable way to request remittance. But until then, we'll still need to rewrite things for different audiences in whatever way best accomplishes our goals.
Funny though, I actually know people who are still using crypto for various things. So your kind-of-snarky remark about how I might not find ChatGPT useful also applies to people who use crypto too.
Like I said, I do use ChatGPT when I see fit, mostly just when I want to see what it can do, or how it might do something differently. Of course it can be useful in some ways, but I still don't understand what the actual use of it is. ChatGPT comes with a disclaimer on the box which alerts you to the fact that it's basically an experiment and that by using it, you're part of the experiment.
Should my lawyer use it when giving me legal advice even though we don't know whether it's giving correct information? Should I use it instead of a doctor? Should your doctor use it in your next visit?
I mean, there's a chance that even OpenAI never expected it to blow up the way it did.
People have become a little too defensive over a product just because it talks.
> It doesn't really do anything especially well and it's not clear when you should trust it to be correct.
You just described most humans. Imagine someone gave you access to a free digital workforce that you can only interact with through chat; it’s text in, text out, they only do what you say, and although you’re conversing with middle-tier experts they don’t have internet access or even a paper or pen handy so you have to take any facts or hard numbers they reference with a grain of salt. What is the use case for that?
What I’m getting at here is that ChatGPT doesn’t have a single clear use case. OpenAI is positioning themselves to be vendors of foundational AGI and it will be up to the market to create specialized fine-tuned models with higher levels of agency and built-in accountability.
I am an independent contractor, but my comment was in no way intended as a slight to anybody I work with or have in the past. I have had the pleasure of working with some exceptional people.
We may however have different standards for what constitutes doing something “especially well”. I’m talking top percentile performance. That is by definition rare, but that seems to be the standard we’re holding the LLMs to.
> When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Moreover, let's assume that since I've made it to this site I'm not a complete idiot (well, I mean, I can be, sometimes), and that it's entirely possible for me to make my claim while also being well aware of the fact that carpenters use email and the Internet these days. I did some renovations to my home a few months back and, surprise surprise, we used email, along with texting, for communicating. (We even used *gasp* pictures in these emails.) So maybe I'm trying to make a deeper point about the intersection of carpentry and computers that you maybe missed?
Right it's just a tool, one which doesn't get much daily use by you. Which is totally fine. But say your job was to hammer in nails all day long, would you ask about the use case for screwdrivers? At your hammering job there are no screws, so screwdrivers must be totally useless, right?
> Ah I mean if you're hell bent on not being impressed by ChatGPT you don't have to be.
This is such a disingenuous strawman. You've just responded to a long argument with a lazy faux dismissal based on personality and animosity.
The criticisms are well laid out and sourced. It's about the hype and lies around it, not the tech itself. I appreciate the tech and dislike the lies and misunderstandings.
If you just dismiss me as "a hater" you're part of the lie.
> Pundits and critics have a vested interest in predicting the future the way their readers want, but your time machine is as good as mine - it only goes 1 second per second and we'll get to the future at the same time.
Except that's a lie. And when the insights of people who stop and think, instead of just buying into claims unquestioningly, are proven right years down the line, we hear the old "oh, hindsight is 20/20". That's not accurate. We can reflect on things when they happen. No need to wait.
> If your role doesn't involve things ChatGPT would be useful for (eg you're a blue collar carpenter), it doesn't seem very useful, but neither do computers or the Internet, really.
False dichotomy. If you think ChatGPT is solving a problem by generating text: are you qualified to judge that text for accuracy? Could you have produced that text yourself? In what way does ChatGPT beat you: typing speed? Thinking speed? What are the risks of overlooking things by virtue of allegedly being handed an answer?
Wizards can be useful but also dangerous. The point is to reflect about it and not dismiss anyone who doesn't just hype it up.
> Or do you want to at least be aware of incoming trends.
Your whole argument seems aimed at some Luddite strawman, when in reality it's people who love tech, love new things, and appreciate ChatGPT and other forms of ML, but who see the BS flying around and want to stay intentional and have conversations grounded in reality.
> Don't expect them to speak up about it either, they're too busy using it to write emails and make plans to be bothered to convince the haters.
Another empty dismissal. People are using it, so any related criticism is "haters". Implicitly, your whole comment wants to derail and dismiss the conversation instead of engaging with it.
Reminder: when there's hype, everyone and their mother wants to "cash in on the hype". It doesn't matter if they need to sell you a bridge. Going into specific and reasoned arguments is more productive, especially in a place like HN, which should foster discussion instead of squashing it like the parent poster is doing.
That's a pretty good takedown of my comment, I appreciate the effort that went into that, and, well, you make some fair points, but I went back and reread the post, and stand by my response. To me, Cory Doctorow isn't making any substantive arguments in the linked article that I feel are worth digging into.
He denigrates ChatGPT as glorified autocomplete, but in the end, autocomplete is... actually pretty useful?
I do appreciate him bringing in Ted Chiang, I love that quote - "it’s easier to imagine the end of the world than to imagine the end of capitalism".
I haven't been exposed to Pluralistic before, so maybe that's just his style there, but I think the difference between the hype around crypto and the hype around ChatGPT is that people in the world of finance are able to intuit where crypto fits, but people outside that world can't really appreciate it. With ChatGPT, anyone who uses English can create their own account, play with it, and see what the computer is doing. Sometimes hype is actually justified.
Computers and the internet looked like bubbles until they didn’t.
Not every craze is the next big thing, but for LLMs it’s still too soon to say anything with certainty. The people comparing LLMs to autocomplete have no more legitimacy than people who think LLMs are the first coming of AGI.
I found it humorous that the author couldn't help but anthropomorphise ChatGPT by calling it a "fully automated supremely confident liar". Calling it a liar implies intent to deceive. Strange to ascribe intent to an autocomplete.
> Computers and the internet looked like bubbles until they didn’t.
The funny thing being, it isn't that clear how the (personal) computer revolution worked out: the IBM PC was envisioned as a smart terminal that would rely on mainframe integration for anything serious in business. That model clearly failed with the advent of the i386, as tasks were increasingly done locally, and exclusively so. Clearly failed, that is, except that computers are now smart terminals connected to mainframes (AKA "the cloud")…
Similarly, the Internet revolution would connect everyone with equal standing and equal reach, decentralized, as opposed to the ideas of information utilities that had been proposed previously, ideas that clearly failed. Except that now this idea has clearly failed too, as the Internet has become a centralized information utility run by a few companies for the majority of its use…
No. I've already used ChatGPT for a few practical tasks. I've never been able to do that with crypto. The most I got out of crypto was buying some, watching it go up/down in value and then selling.
The anti-hype around GPT is quite a thing. It got out of the gate at a sprint, determined to get ahead of the hype.
So sure... if you want to point to hype statements that are silly, you'll find them. If you want to find monopolies investing in GPT to preserve their monopolies, you'll find them. If you want analogies to stuff that was empty hype in the past, you'll find those analogies.
I'm old enough to remember the hype, and the anti-hype, around the early web... the information superhighway. The anti-hype had all the same arguments there.
It really doesn't matter what we think about ChatGPT's answers to philosophical questions, or its ability to write poetry. Those are just novelties and parlour games.
What matters is that autocomplete is useful, and that means it's going to be used. We'll use it for writing emails. We'll use it to code. We'll use it to summarize, tabulate... It'll bring video game characters to life. Some of these uses will be significant. Others will be profitable. Others will be harmful.
What autocomplete isn't, is a dud. The thing works.
there's been steady progress on language models for the past 6-7 years, getting somewhat faster in the last 2 years, in part due to fundamental advances, in part due to (mainly) OpenAI being a little more product focused.
it's true that certain types of scammers and influencers have started jumping on LLMs en masse now that shilling cryptocurrencies isn't so lucrative.
i think that if it appears from your perspective like there is a brand-new LLM hype bubble that just appeared this year to substitute for the previous cryptocurrency hype bubble, you're in a bad part of the information ecosystem.
We're still at the beginning of large language models. This is very new technology. Right now, they can act as question-answering and interaction systems that do a decent job most of the time and are totally wrong some of the time. Improvement is needed, and does not seem to be impossible. Making up bogus facts has to stop if these things are going to front-end search engines.
The concern right now is widespread deployment of really crappy systems. ChatGPT can probably outperform the average call center staffer now. That's going to be a problem.
While big companies getting even bigger may not sit well, equating business development with the actual Ponzi schemes of crypto, which will result in convictions (FTX members are already admitting guilt), seems like a stretch. There doesn't seem to be any reason to believe users aren't in a position to pick AI products whose experiences work for them and abandon ones that don't - a far cry from financial products that require significant literacy and profit the most by ensuring users never achieve it.
But, without being fully self-aware about it, he is starting to go down the same road that doomed Chomsky: being so taken with a compelling frame of critique that it first becomes the default lens, and then insidiously the only real lens, he views things through.
One of his tenets is that technology is hype. This is often true, but not all technology is hype; and the fact that hype is regularly an outgrowth of other equally intelligent observers finding reason to be excited is something he is far too breezily dismissive of.
I welcome a critical and sardonic voice, and the regular deflation of the over-inflated,
but he is starting to be predictably derisive too quickly and too broadly, and to assume bad faith (or naivety) too widely.
He's a smart monkey, but there are other smart monkeys, and AI specifically is one area where he's beginning to genuinely miss the scale of the disequilibrium coming.
This is a shame because many of his most passionately-held convictions and campaigns are going to be significantly impacted by AI.
He has a window and a pulpit to shine a bright light on the intersections where real change is accelerating or otherwise changing the landscape of some of the things he is most concerned about.
I agree with Sam Altman. The main point is not that we are close to creating a human robot. Rather, we are starting to understand that we are biological robots ourselves, just so sophisticated that it doesn't look like that from the top view. That we are called humans will not save us from this realisation. Each atom in our body just follows the laws of physics.
I mean, SURE, you could (still can) actually use crypto for things like purchases, contracts, data tracking, and what have you - but in the crypto bubble people were solely buying and holding to sell at a profit. Zero actual use, all speculation.
At least with the current ML, we daily see new tools and other cool stuff.
So I guess just ignore all the people getting real value out of ChatGPT etc.? How can you compare that to crypto at all with a straight face? Yes, you need to be an expert to validate the output. Welcome to being an expert using any tool.
I've been using Bitcoin almost daily for over a decade; it's a great tool that has passed the test of time, and it will continue to be useful unless governments around the world actually go out of their way to ban it.
As for ChatGPT, I think it's really useful as well, but there's no way to compare the two.
Trustless, programmable money. It's useful. The problem centers generally around how useful "trustless" really is. Developed nations usually already have a trusted currency and underdeveloped nations have so many extant corruption issues that trustless money isn't necessarily spendable. The goldilocks zone so far for crypto has been remittances to developing countries, a small market indeed. Crypto's narrow application range is what makes it, so far at least, more hype than bite.
Sure ChatGPT and other LLMs can do all sorts of stuff. It can give you inspiration, help you search through a corpus that hasn't been scraped properly, or just fill in the blanks for laborious writing that was generally a waste of your time. It's lots of fun to play around with. But is that worth enough to offset the R&D, training, compute, and profit costs associated? Are people willing to pay for LLMs at a scale that makes sense? For companies that really need this kind of writing done, is it that much cheaper than hiring an English literature major intern? Can the sorts of "hallucinations" an LLM outputs be tolerated in actual applications?
The two fields suffer from a similar problem. Sure, they're breakthroughs, but are they enough to derive broad utility? Regardless, I disagree with the premise of Doctorow's post. I think the problem is subtle and not deserving of the silly snark his post contains as it tries to pick apart banal definitions.
Bitcoin is mainly useful for money laundering... or... er... disagreeing with the government. In other words, removing the middleman. Also, depending on inflation levels, it can be a good storage of value.
All in all, I think it's the most useful technology of the last decade. If you live in a bubble you won't realize that, but it is what it is.
As soon as you move to a third-world country with heavy taxing and corruption (see: Venezuela), you see that Bitcoin is more useful for the average Joe than ChatGPT.
I predict this comment will be heavily downvoted with zero explanation, because HN is always predictable when it comes to the trigger words: "crypto bad" and "AI good".
Crypto can actually help people transact trustlessly, let communities govern themselves without a king etc.
Whereas AI swarms can generate bullshit at scale and turn every forum into a dark forest, with 99% bot-generated content deployed to silence or distract groups of people whose viewpoint is to be buried.
One has the potential to quickly destroy all the systems we rely on, including public discourse, voting, and so on. I would say that is far worse than a few silly smart contracts.
> AI has all the hallmarks of a classic pump-and-dump, starting with terminology. AI isn't "artificial" and it's not "intelligent." "Machine learning" doesn't learn. On this week's Trashfuture podcast, they made an excellent (and profane and hilarious) case that ChatGPT is best understood as a sophisticated form of autocomplete – not our new robot overlord.
For what it's worth: a lot of "office work" is repetitive bullshit that could be done just as well with decent automation or assisting "artificial intelligence" (e.g. lawyers writing letters, ...).
Just before LLMs and Stable Diffusion I would have agreed with you, but these two tools are real game changers. I use them daily, and they aren't even part of my day-to-day business.
If I'm honest, I'm more terrified of crypto going mainstream than AI going rogue. The crypto utopia is a world where everything can be fractionalized and financialized and monetized.
We already live in a world that's extremely hyper-financialized. Crypto can add a layer on top and financialize it to a degree that could completely pervert human motivations.
I don't totally agree with the article, but it was an interesting read. It's funny reading quotes like "I gave it a paper to peer review and it didn't help at all". It not doing exactly what you want it to do + you not wanting to believe it is revolutionary = it's all hype.
Personally, I use ChatGPT everyday. I use it in ways that weren't possible through mere google searches. It does some of the thinking and synthesis of different ideas and patterns for me. I work faster as a result. I work in languages and frameworks I didn't know before in a matter of minutes instead of days.
Is there AI hype? Yes. Am I starting to see patterns in the responses, and lots of errors, which makes me realise this is a bit more limited than I thought? Yes. Are there lots of gimmicky use cases? Absolutely.
Is it still crazy useful, and game changing for lots of industries in fundamental ways? Yes!
The author seems to be unaware of how many people's jobs are nothing but a sophisticated form of autocomplete. How many people's lives are nothing but a sophisticated form of autocomplete.
Also, what terrible writing, full of rage about nothing.
I've long been a fan of Cory Doctorow. He's a witty, eloquent writer with lots of interesting, insightful things to say about technology, and I agree with him on his critiques of capitalism and corporate hype.
However, he's dead wrong if he thinks AI is all hype and no substance.
Yes, AI is being exploited and hyped up by corporations. Yes, most workers are going to get screwed, as usual.
But there is so much potential in AI... even if it's not "truly intelligent" (yet). At the very least it's a tool that can boost creativity. Playing around with Midjourney made a believer out of me. I've been a life long artist, and midjourney just blew me away. It's nothing short of amazing, and just about as close to magic as anything I've ever seen a computer do.
AI is a completely transformative technology. People like Doctorow can dig their heels in and scream against the hurricane, but it's utterly futile. The world will adopt this technology anyway, and it will transform the world (for the better or for the worse). It already has, and the transformation will only accelerate.
Agreed. I think the (now obviously justified) cynicism over blockchain hype has led a lot of left-leaning people to see AI as the next bubble.
It doesn't help that major non-technical consulting firms have been producing nonsense for years about the coming "fourth industrial revolution" (AI/ML, IoT, VR, self-driving, etc.). These people are usually wrong.
The difference is that ChatGPT, midjourney, copilot etc. are just legitimately useful. Being able to transcribe, translate, summarise and process audio and text data is useful. Being able to generate a regex or SQL query is useful. There are obviously questions about how much closer this gets us to AGI, but the tech has inarguable utility in a way that cryptocurrencies never did.
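To make the "generate a regex" point concrete, here is a small sketch of that workflow. The pattern below is the sort of thing a model might hand back when asked to "match ISO-8601 dates (YYYY-MM-DD)" — it's an assumed, illustrative output, not a real model response — and the point is that you still review and test it yourself before trusting it:

```python
import re

# Hypothetical LLM output for "match ISO-8601 dates (YYYY-MM-DD)".
# Review is still needed: this pattern checks digit ranges but happily
# accepts calendar-impossible dates like 2023-02-31, which is exactly
# the "validate the output" caveat discussed in the thread.
iso_date = re.compile(r"\b\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])\b")

samples = ["Meeting on 2023-02-15.", "Bad date 2023-13-40.", "No date here."]
matches = [bool(iso_date.search(s)) for s in samples]
# matches == [True, False, False]
```

The regex saves a few minutes of fiddling with character classes, but the quick sanity check against known-good and known-bad inputs is where the human stays in the loop.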
Well, what does "AI" mean here? "AI" as in neural networks in the general sense has been in use for decades now. OCR, aka text recognition, has used neural networks in consumer products since the '90s; in the broader field of image processing they have been in use for years by now, to the point that every modern smartphone has a dedicated co-processor for "AI" computations for exactly this purpose. So "AI" is not just hype; it is already deeply ingrained in some aspects of our lives.
Ever thought about the stock market? There are big computing centers with "AI" automatically scooping up the next potential trend based on all kinds of information to make trading decisions.
I think it was CNN that did some articles with ChatGPT, but the kicker, once they opened up about it, was a small line below them: they had been using something similar for much longer already, and they even said it is used more widely (i.e. not just at CNN).
So, "AI" is already there, going to stay, and will continue to be used in increasingly more places.
Is there hype around "AI"? Definitively, especially by those who have no clue about anything. Is it as unfounded as crypto? No, because crypto was garbage from the start.
Do you really think that swarms of bots churning out bullshit posing as humans, clickbait, rage posts, arguing people down in forums, and generally drowning out genuine conversation, is a net positive for the world? That's even more of a tech bro attitude than blockchain, sorry.
Of course it matters. We're talking in order to persuade each other, and reading in order to know what other people think. An astroturfing bot simply isn't important to you. You cannot successfully persuade it, and if you do hypnotize it into momentary compliance, that compliance is worth nothing. It goes forward as if your conversation never happened, and its opinion is worth nothing to you. It is, at most, a weapon against your brain. The only winning move is not to play. And if you can't trust that you're talking to people, then any site will cease to be interesting. You will feel it and you will close the browser.
Oh, you won't be talking. As soon as you pipe up, you'll be shouted down by implacable bot swarms until you learn to not even bother. Give it 3-5 years.
At first, maybe, but then human to human conversation will drift to gated communities with strong verification, and on the wider net it will be like the blogging revolution all over again as people exchange pre-seeded forums for ideas more so than the ideas themselves, and compose them with other constellations of thought using the ecosystem of AI tools built around same.
Cory Doctorow has a vested interest in the written word, as if paid by the word, being super valuable though. His value as an author is diminished if I can type into ChatGPT a prompt like "write a story about living the Disneyland's Magic Kingdom about this guy Dan who's lost all his Whuffie, which is a form for internet karma (points). Pretend you are Cory Doctorow who hates ChatGPT and LLMs, but is wrong.", and get a story back. Writing used to be hard. Who knows how much money "Down and Out in the Magic Kingdom" has made Mr. Doctorow? Meanwhile, here's what ChatGPT spit out for me on the basis of the above prompt. It's no "Down and Out in the Magic Kingdom", but that's got to have him running scared, which is what's making him refer to LLMs as "autocomplete". I mean, if my job were so obviously on the way out, I'd do everything I could, including only referring to the threat by derogatory names, to willfully avoid seeing it when it's right in my face from everybody talking about it.
---
Once upon a time, there was a man named Dan who loved nothing more than spending his days at Disneyland's Magic Kingdom. He had always been an avid fan of the park, and he had made countless memories there over the years.
But one day, everything changed for Dan. He had been so caught up in his online life that he had neglected his real-life relationships and experiences, and as a result, he had lost all of his Whuffie, the online karma points that he had accrued.
Dan was devastated. He had always prided himself on being well-liked and respected in online communities, and he couldn't believe that he had let himself fall so far. He felt like a failure, and he didn't know how to pick himself up and move forward.
But then, one day, Dan decided to return to Disneyland's Magic Kingdom. He figured that he could at least find some solace and happiness there, even if he had lost everything else.
As soon as he stepped into the park, Dan felt a wave of nostalgia wash over him. He remembered all the good times he had spent there, and he felt a glimmer of hope that he could regain some of what he had lost.
As he wandered through the park, Dan began to notice all the little things that he had taken for granted before. The smell of popcorn, the sound of laughter, the sight of children's faces lighting up with wonder and joy. He realized that there was so much more to life than just online popularity, and that he had been missing out on so much by focusing solely on that.
Dan spent the whole day at Disneyland's Magic Kingdom, riding rides, watching shows, and just taking in the magic of the place. And when he left that evening, he felt a sense of peace and contentment that he hadn't felt in a long time.
Over time, Dan began to build himself back up again. He reconnected with old friends, pursued new hobbies, and started to live his life in the real world instead of just online. And while he never regained all of his Whuffie, he didn't really care anymore. He had found something much more important: happiness and fulfillment in the real world. And he knew that he had Disneyland's Magic Kingdom to thank for showing him the way.
It's awful compared to a professional author, absolutely, but it's still an actual story without big grammatical issues written by an AI from a short prompt. To me that's pretty magical, and the question is how much and how fast it will improve.
I do think people overestimate how fast it will improve (the last 20% requiring 80% of the effort, etc.), though, and that we won't see ChatGPT replacing real authors anytime soon.
I second this, but aside from the rather ephemeral article itself, the implication of the title rings true. This will end up in a very bad bubble burst, and possibly world wars again, unless there's some way to mitigate the monstrous overproduction of goods and services it generates. It's time to stop breaking things down to the smallest part and see the whole picture again for once. We're simply repeating the 20th century, and we know where it went last time...