The AI Overviews outrage is only the beginning. It has been 11 days since they made it public in the US, and it had been in experimentation for over a year before that.
Already, a lot of users will associate Google AI Overviews with nonsense: eat rocks, cook spaghetti with gasoline. Google also showed how easily these AI Overviews can be manipulated, since they use RAG.
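To make the RAG point concrete, here is roughly the shape of the pipeline as a bare-bones sketch (illustrative Python; search() and llm() are made-up stand-ins, not anything Google actually exposes). Whatever the retrieval step returns, a spoofed or shitposted top hit included, goes straight into the prompt, and the model fluently summarizes it:

    def ai_overview(query, search, llm, k=5):
        # Take the top-k search hits at face value -- nothing here asks
        # whether a highly ranked page is a joke, satire, or spam.
        snippets = [hit["text"] for hit in search(query)[:k]]
        prompt = (
            "Answer the question using only these sources:\n\n"
            + "\n---\n".join(snippets)
            + "\n\nQuestion: " + query
        )
        # The model then summarizes whatever it was handed.
        return llm(prompt)

Manipulating the overview is just a matter of getting your text into those snippets.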
I think the Slopception is only going to get worse. Slop in. Slop out.
And also, Google Research just happened to be sitting on AGREE[0]:
> a learning-based framework that enables LLMs to provide accurate citations in their responses, making them more reliable and increasing user trust
Which was published yesterday.
I tried to submit it[1] but it didn’t get much love.
> Already now, a lot of users will associate Google AI Overviews with nonsense
A lot of tech-minded folks who actively follow this stuff on Twitter et al make this association, but the average person was already defaulting to reading the first (likely incorrect) hit on Google and moving on with their day. I'm not sure they'll notice a dip in quality until it directly contradicts something they know about confidently.
Scam callers are going to have a field day with audio for sure. Maybe I am not thinking about it right, but a lot of this does feel surreal and I just can't see the light at the end of this tunnel.
No regulation, rapid development, no protective measures even for the very basic attack vectors.
Google AI was so much better when they used it to do useful things, like creating the best StarCraft player in the world. Imagine if strategy game companies could license or extend DeepMind's game AI.
We are in the rough-and-tumble time of new tech. As an analogy, trains didn't always have a uniform distance between their rails (called the gauge). It strikes me that we are in a similar time of invention. If you have ever read old patents (my experience being older pop books like "Strange Stories, Amazing Facts" and random blog posts) you'll see the truly bizarre.
But the trains with differently sized tracks didn’t fall off the tracks because (and I imagine there is some obscure exception to this, but for the very most part) someone had made sure that the systems were safe. The distinction here is that Google is rolling out half-baked rubbish that is palpably unreliable. No sane person is going to eat rocks or put glue on their pizza, but the race is on to find some genuinely dangerous answer from this system.
It would be regrettable if the result of rolling out this type of product too soon has the effect of eroding trust in AI-based search, and delaying people from engaging with it when it actually is ready.
> The distinction here is that Google is rolling out half-baked rubbish that is palpably unreliable.
Nowhere in that sentence did you mention AI, though. If you take AI out of the picture, isn’t it equally problematic if they had some other mechanism, say some kind of analytical/stochastic mechanism, that selected data retrieved from actual results, and cited those?
Because that’s what they’re doing. Google isn’t using an AI. They’re using a stochastic textual generator driven by results of the search. Sometimes it spits out weird and incorrect answers, especially when high-ranking sites suddenly spoof malicious results - that is a general problem with any attempt to pull data out of the results automatically, including the lesser variants of this approach they’ve deployed for decades.
The problem is (a) google putting out a poor product, similar to the pre-google era of search engines being really bad, and (b) high-ranking sites engaging in malicious and disruptive behavior, for which they probably should be de-ranked or blacklisted.
But you cannot have this one both ways: it can’t be an “AI” when you want to be a maximalist and “just a stochastic textual algorithm” when you want to minimize and downplay. By the latter terms Google isn’t even using AI here, just a stochastic algorithm driven by the search results. It’s just a bad one. So you’re arguing against an approach that doesn’t exist anyway.
Again: why specifically is Google's answer feature problematic, given that it’s been rolled out for 15+ years now? The new algorithm behind it just kinda sucks, but the feature isn’t new. Citing and summarizing results has been a thing for a long time, and naturally it’s not always correct or accurate.
This isn’t the first time people have been upset over it either - newspapers got mad about this summarization so they got a law passed in Canada which outlawed it.
The operation of the trains probably ran with precision, despite the need to switch between tracks and trains of a different gauge, and despite the occasional accident.
But, like Google's garbage, the user experience for shipping or riding was abysmal.
I agree "Companies need to move really fast, even if that includes skipping a few steps along the way. The user experience will just have to catch up." is insane. There's a difference between a bad "user experience" and "harmful with no value" (the latter being unacceptable). The mentality that we must cut as close to the latter as possible at the altar of market share is why we're in this state and why it's going to get much, much worse before anything gets better.
Matt Stoller’s theory is that Google at this point is like Boeing was maybe a decade ago: an incredible product company that has already been taken over by finance types, and is on the way to an irreversible crash.
Google has a good decade ahead of it. They own the major browser and mobile OS. The default is Google.
Most of the world is primed to Google and Bing isn’t any better. Bing is doing the same shenanigans.
A new player could beat them at their game, but Google has a huge ML team and millions of TPUs. They could either buy them or quickly copy them.
Google did this to defend against OpenAI and Bing.
The big question is whether their bottom line will suffer. Google like every other big corp is beholden to their ever growing market cap. A few negative quarters and they face the fire.
I was born in the same year that Google Search launched. Half-decent search has been here my whole life. When someone asks me "how did you learn X", the answer usually starts with "I googled it".
I've never looked something up in an encyclopedia. I've never visited a library in pursuit of a research goal. There's no "going back to how things were before" for me - there was no "before"!
Life will go on, but the demise of search engines is quite a terrifying prospect for me. Generative AI is a search engine killer but not in any positive sense.
As if those services are the only options. No they are not.
The world isn't one dimensional nor binary. Everything is a multidimensional spectrum.
I don't use Google search and I don't have to. There are reasonable alternatives like DuckDuckGo and Brave Search. They aren't as good as Google was in the past, but there is also Wikipedia, Reddit, GitHub, Phind, StackExchange, Discord, IRC, forums, newsgroups, Claude, ChatGPT, Llama, and what not.
Sure, it would be awesome to get all the answers from a single input field, but honestly, even though Google was better in the past, this has never been the case for any sufficiently complex question.
> I've never looked something up in an encyclopedia
Does that include Wikipedia? I mean, the problem with encyclopedias is that they were small (despite physically taking up entire bookcases). I didn't use them much either, even back in prehistoric times when the internet wasn't invented. They tended to disappoint by giving a crappy, superficial overview of something that was only vaguely similar to the thing you wanted to know about.
But I use wikipedia instead of doing an internet search a lot of the time. Or sometimes archive.org, for things probably in books. Cut out the middleman, go directly to where you know the information is.
For me, having an encyclopedia at home was a huge help with my studies and even finding out what I wanted to do/be later in life. This was 40+ years ago and I also had or had access to a lot of other books but never found them disappointing at all.
Huh. Maybe you studied one of the things encyclopedias tended to concentrate on? They seemed to like particular sorts of facts. Geography. Astronomy. Chemistry, to some extent. That's how I remember it, anyway. If your yen for knowledge was in tune with what the writers thought mattered, you were in luck. Even then, there were space constraints.
I don't remember their titles (they're at my parents' house), but we have several. Some generic with a few plate drawings. Some illustrated. Some focused on one subject. It was not that you learned something in particular, more that you knew stuff existed.
I now like the fact that they, as books, are frozen in time. And that their knowledge is well packaged and consistent. Unless you're a researcher, you don't need the latest. And as a child, the clear writing and the great illustrations were key to internalizing the knowledge.
Maybe not exactly what you mean with frozen in time but they used to (or maybe still do) publish a new tome every now and then with updated information. I think they are still very useful for children, for example.
I remember them being a big help with History for example. I believe I read most of the encyclopedias I had at home (a couple of them totalling maybe 20 heavy books). Now, when I want to learn about something, say acoustics, I go to the internet, back then I would go to the encyclopedia as a starting point and later to the library for more specific information. That's how it used to be.
I'd like to recommend that you start using libraries. You'll find that they often contain vastly more information than you'll find through web search, and you're already paying for them.
> There's no "going back to how things were before" for me - there was no "before"! (...) Life will go on, but the demise of search engines is quite a terrifying prospect for me.
Speaking as someone who was born long before the public Internet: you can never go back to how things were before. Similarly, you can never really stay with the way things are now. The world changes. Even if you stick to the ways of your lifetime, or go back to the ways prior to your lifetime, the ways have changed simply because the context has changed.
I'm not saying that we should jump on the AI bandwagon. I'm just saying that we need to recognize the world is in constant flux.
There's a big number of things you can search for, but it's a finite number. There are certainly people interested in curating resources for their specific niche in exchange for curated resources in other niches, across the entire internet. Perhaps through a social web of trust. We need a new search engine resistant to sybil attacks. One that takes the social aspect of searching—connecting you with the relevant experts—into account.
It's a challenging problem. The whole of Web 2.0 is built on lowering scaling costs as far as possible. The trust problem is itself very big - if someone makes the modern equivalent of the Yahoo directory, how do they establish trust that they're only incentivized by site quality? If it's a graph of trust, how does one handle the subtle changes in baseline expectations and norms that qualify as "trustworthy", when moving from one part of that graph to a distant part? If such a system goes massive, how does one prevent the member nodes from being overwhelmed by requests, or even from offers to contribute by people more interested in Internet karma than giving quality contributions?
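For what it's worth, the sybil-resistance piece is roughly what a personalized PageRank-style walk over the endorsement graph gives you: trust mass only flows outward from curators you already trust, so a cluster of fake accounts vouching for each other scores near zero. A toy sketch in Python (all names and parameters hypothetical):

    def trust_scores(endorses, seeds, damping=0.85, iters=50):
        # endorses: curator -> iterable of curators/sites they vouch for
        # seeds: the handful of curators *you* personally trust
        nodes = set(endorses) | {m for outs in endorses.values() for m in outs} | set(seeds)
        score = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
        for _ in range(iters):
            nxt = {n: ((1.0 - damping) / len(seeds) if n in seeds else 0.0) for n in nodes}
            for n in nodes:
                outs = endorses.get(n, ())
                if outs:
                    for m in outs:
                        nxt[m] += damping * score[n] / len(outs)
                else:
                    # Nodes that endorse no one hand their mass back to your seeds.
                    for s in seeds:
                        nxt[s] += damping * score[n] / len(seeds)
            score = nxt
        return score  # clusters unreachable from your seeds stay near zero

    # e.g. trust_scores({"alice": {"bob", "goodblog.example"},
    #                    "sybil1": {"sybil2"}, "sybil2": {"sybil1"}}, seeds={"alice"})

It doesn't answer the harder questions above (who seeds whom, how norms drift across the graph, how nodes avoid being swamped), but it shows why the social aspect is the anti-sybil lever.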
This is so interesting. I have learned more from a year's use of ChatGPT than from two decades of googling. Yeah, AI has drawbacks. But AI has never displayed an ad to me and only once displayed a cookie banner, and that's worth quite a bit to me.
Aside from the apples-to-oranges claim that a chatbot teaches you more than a search engine, it's extremely unlikely that AI will remain ad-free. Ads seem to be at a local minimum of the information economy and I don't see how chatbots would be immune to it.
I have gotten great use from various AI tools but don't be naive. The "ads" are invisibly embedded in the training data and get expressed in the subtleties of the model's output. When you control the model and there's no transparency around what's in it, you can do whatever you want with the training data. This is only the beginning.
Google search quality has been decreasing not because of a lack of AI, but because of AI itself. LLM spam and a decade of SEO maxxing have made PageRank completely obsolete. No one has a personal site anymore; backlinks don't mean anything now.
Maybe we need a new paradigm for search ranking. Voting, experts, introducing organic feedback back into the loop?
Gotta imagine that Conde Nast, Hearst etc are not going to take these changes lying down and the last ten years of SEOmaxxing have given them literally all the money, power etc in the web scene. They will be able to clone any in-depth "expert" site with their firepower and then come in with backlinks, socials etc and we're back to the status quo ante.
With the March updates I thought there was a chance that Google could pull it off, but now I am leaning towards it's all doomed. Personally I wish I never got into building websites when I have to compete with ugly wordpresses from SEO companies in India taking my #1 spot for English content.
I wish I could get angry about the forbes articles in my niches but at least that's written ostensibly by a professional-- when you start getting beaten by center-aligned text from South Asia you know there was -never- a chance for quality to win.
Provocative proposition: what about a search engine that bans everything with ads and affiliate links?
Wikipedia won over competitors because it has an anti-business-model. Only the passionate write for it, which paradoxically ensures quality. My favorite content on the internet is always non-profit, sincere stuff. Like this comment thread :)
That's a scary prospect for creators that live off the internet, but if the ecosystem is already doomed it may be the last option.
Yeah, okay, this sounds great until the point where I have a company with people that must eat, APIs that aren't free, etc., and I look around and see Wikipedia doing more than fine financially, with ads, having morphed itself into three distinct properties in my niche (wikipedia, fandom, subdomain.fandom.com/inmyniche.html) that all take up precious Google slots with almost the exact same content, and now you are asking for additional handouts to be given to this ex-pornographer Jimmy Wales.
Wikipedia won over competitors because Jimmy Wales honed his marketing tactics from years of being a pornographer. Not because of any inherent strength of the wikipedia platform. In fact those first articles on wikipedia were all horrible and far worse than the other online encyclopedias if you remember. Wikipedia's success was mostly due to good timing, good governance and great SEO.
And btw it is hysterical some think HN is non-profit. Sure, perhaps technically. But this is exactly why I think we are doomed. If even the self-proclaimed guardians of the internet (ala the IT Crowd) cannot grasp the simple concept that HN is as astroturfed (if not more) than reddit there is simply no hope for the internet as a whole to understand these concepts. So long and thanks for all the phishing attempts.
Google shot themselves in the foot, starting with discontinuing Google Reader, the biggest blog ecosystem of the time. It could have evolved into a moat against the social media walled gardens.
Google thinks they're untouchable. That's why they serve up so many ads that most adblockers can't keep up. That's why they've done very little to combat bad actors who manipulate google search results. That's why they think they can toss half-baked AI out there.
AI could be something that helps prevent google bombing. AI could be something that lets them serve up one or two ads per page and get a similar number of click-throughs.
Google's lunch is spread out on a picnic blanket, just waiting for someone to come along and eat it.
Any competent ad blocker keeps up fine. Give uBlock Origin a try, for example, rather than the ad blockers that take cash to let shit slip through and you won't see ad blockers failing to keep up with Google or YouTube.
Google is in a tough position, but the track they’re on isn’t helping. People are already publishing massive amounts of meaningless AI generated content which is making results poor. So they short-circuit the madness by introducing their own poor result content, it’s bad but at least they control it.
The big question is: can it get better fast enough to save search? I think the era of search is ending, and it doesn’t look like LLMs are a good replacement.
I agree that trust is eroding with a certain user demographic. As time moves on, they are competing to win trust with the next generations. Does anyone remember going to a library? Used to be the place to get info. Same thing.
The customer isn't even the board, it's the stockholders. This has always been true though. I was at Netscape when the end came and for the last couple years of their life, before Mozilla spun out as independent, they would have quarterly layoffs to bridge the gap between what they told analysts they were targeting for the quarter and the numbers they actually produced.
The users and even the employees took a back seat to making Wall St. happy.
It was ever so, and so only the naive put their faith in mega corporations. Google was extremely useful for about 10 years from around 2002 through about 2012, but even then they were conspiring with Apple to suppress wages for top web browser and other programmers, and they were abusing their overwhelming search monopoly to gain illegal advantage in email, web browsers, and office productivity apps.
Fuck them. They were always evil, but the trade off was better back then when their evil gave us useful stuff and not AI slop. For over a decade that trade off hasn't been worth it.
Google's market share was suffering because they have been using their dominance to force everyone into the "game the search engine" racket that Google itself was selling, an ecosystem that exists only to cheat the Google search engine.
Once enough of us realized that Google search results are usually just paid ads, what's the point anymore?
It's insane, and "AI" isn't going to fix any of it.
It feels like at some point their goal reversed. From “be the tool that shows the user what they’re looking for as accurately and efficiently as possible” to “be the tool that has them clicking between Google search results and maybe relevant pages showing Google ads”
Google is fighting for its life. I'm not saying that excuses things, but it puts them in context. IIRC, 60% of Google's profit is ads on search. ChatGPT-like results remove much of the need for "search". I find myself more and more trying to find an answer on Google, not finding it, asking ChatGPT, and getting an answer. And I'm NOT suggesting Google search sucks. It might, but that's not my point.
I recently needed to find some details of hardware implementations for a feature. I used Google to find specs for this hardware. The specs were hard to read or didn't explain things as I needed. So I asked ChatGPT and it gave me what I needed. It first regurgitated info from the specs, but I was able to ask it to explain pieces in more detail and it was great! At what point do I just stop using Google search to find an answer?
Of course AI answers have their issues. ATM I wouldn't search for reviews of some new video game or movie on ChatGPT. I also wouldn't search for product reviews. Even though it will happily spew out recommendations, they'll be old at best. Maybe that will get fixed, but it will have all the same issues as search and more. People will develop SEO-type techniques to try to get the AI to surface their product, just like they try to get to the top of search results. I guess we'll need a new name: ARO (AI Results Optimization). When searching for products, at least I get the illusion of lots of varied opinions. With ChatGPT I just get the bot's one opinion.
Anyway, the point is, Google has to do something. Too late and everyone will switch. Which pushes them into too soon and hence the issues we're seeing.
I hope either the information you needed was just personal curiosity or that you verified with sources what ChatGPT was reporting. The hallucination problem is still terrible with no real solutions from OpenAI apart from "we're working on it."
IMHO, while fully cosigning that Google search has gotten worse and the AI shit is only going to make it even worse, using ChatGPT as a replacement feels crazy to me. Like yeah, Google gets it wrong sometimes, sure, but ChatGPT just makes shit up. Is a dumbass better or worse than a confident liar? Guess that's up to the user to decide.
The false information problem on the internet is terrible as well. And if I were to guess based on usage, GPT generates an order of magnitude fewer errors than a random Reddit or StackOverflow or Quora answer.
Reading posts like this I'm reminded of the teachers who told me not to cite wikipedia because it's not reliable who 20 years later are spouting whatever right wing nonsense is the flavour of the month on facebook.
Close enough is good enough. It doesn't need to be perfect, it needs to just not be catastrophically wrong.
But many ChatGPT hallucinations are catastrophically wrong. In my own experience, for example, when querying about advanced scientific subjects in physics and medicine, for starters.
I really want to give this a pass, but the worst thing is that you can't really see where it turns from correct to incorrect, the way you usually pick up on a human teacher or peer when you start to get a feeling they don't really know what they're talking about. ChatGPT will gladly keep hallucinating references and reasoning that sound superficially plausible until you look them up, because that's what it's trained to output.
If they could just find a way to cut it off when it starts to become too unsure, it would be a big improvement.
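A crude sketch of what "cut it off when unsure" could look like, assuming the provider exposed per-token probabilities (the llm_with_logprobs() helper and the threshold below are made up; a low average token probability is only a rough proxy for factual uncertainty, so this is far from a real fix):

    def answer_or_abstain(prompt, llm_with_logprobs, min_avg_logprob=-1.0):
        # Hypothetical helper: returns the generated text plus one
        # log-probability per generated token.
        text, token_logprobs = llm_with_logprobs(prompt)
        avg = sum(token_logprobs) / max(len(token_logprobs), 1)
        # If the model was, on average, unsure of its own tokens,
        # refuse rather than emit a plausible-sounding fabrication.
        if avg < min_avg_logprob:
            return "I'm not confident enough to answer that."
        return text

The hard part is that models can be confidently wrong, so token probabilities alone won't catch the worst hallucinations.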
I love that in our current moment, if you will, Luddism has been reduced to a binary, with zero nuance whatsoever. Either you are FOR every introduction of technology, every automation, every convenience, every new product regardless of its demonstrated, documented deleterious effects, or you're an extremist boomer who refuses to learn email. There's no in-between at all.
I recall a time when a popular joke amongst tech people was that a tech enthusiast was someone who had every new smart home accessory and every new gadget and used them all, and a tech engineer was someone who had nothing more advanced in his house than a laser printer and he kept a gun in the same room in case it ever made a noise he didn't recognize.
I guess we're just more enthusiastic than we used to be. I like my smart switches, but I don't like the notion of all human knowledge being only accessible to me through the filter of a word generator. If that makes me a Luddite, then Luddite I am.
> I recently needed to find some details of hardware implementations for a feature. I used Google to find specs for this hardware. The specs were hard to read or didn't explain things as I needed. So I asked ChatGPT and it gave me what I needed. It first regurgitated info from the specs, but I was able to ask it to explain pieces in more detail and it was great! At what point do I just stop using Google search to find an answer?
But was it accurate? ChatGPT just provided you with the statistically most likely specs. If the actual doc for this hardware is lacking, this is 100% hallucination.
My experience is that ChatGPT is simply bullshitting you (sometimes accurately), Google is drowning the info in 20 links (where they try to make you buy "Hardware you're searching for"), and you have to go to DuckDuckGo to find the thing you're actually searching for.
I wonder how much of that pattern (Google won't give me shit, better ask GPT) is Google's own fault. Their search engine has been degraded so much that, if their LLM results are based on its search, they won't match the specificity of GPT's results. They seem to have pushed people into ChatGPT's arms. If they had a consistent search service, I think fewer people would flock to GPT, which would give them more room to develop their own LLMs properly instead of just rushing crap like Bard or Gemini's inconsistent stuff. The gamble to rush things means you run the risk of shooting yourself in the foot and letting the other runners catch up.
Honestly I see a ton more ads on YouTube than search as I use Kagi but I can't avoid YouTube. I'm sure they have other future revenue streams for ad placement.
I wonder what Gruber will be saying when his beloved Apple inevitably steps on the same rake. Somehow I think his human slop will be markedly more exculpatory
> The trust Google has built with users over the last 25 years is the most valuable asset the company owns.
That gave me a laugh. I don't trust Google. Their entire purpose for many years now has been simply to enable SEO trash in such a way that users see as many ads as possible while providing a minimum amount of useful information so that users are not totally frustrated.
If Google could legally sell your organs, they would. They are that nefarious, and deserve all the hate they get. AI is just the next step. Frankly, if Google went bankrupt today, I think civilization would benefit immensely.
I’ve been using the AI Overviews in Google Search for months now and overall find them more useful than not.
I’ve developed an instinct for when to ignore them. Typically it is when “edge information” is involved: facts / data that are only very sparsely published in time / web space, perhaps by only one or a few web authors, perhaps only very recently.
As an AI developer for over a decade, if we’re calling this “slop”, I fear what these authors would call human error. I’m also doubtful that the people damning this project have ever innovated or developed anything significant from scratch themselves.
From my perspective, the issue isn't with AI but with UI - Google is promoting these as "answers", pushing them to the top of their results and giving no indication that the results might ever be inaccurate.
Google has been pushing to "answer questions" over returning web sources since at least 2010 (when their public messaging changed), so that is nothing new. But there is a seismic, human difference between aggregating results from trusted web sources and reformatting comments from Reddit posts.
AI is great for targeted tasks and even for general web usage, with caveats. I would love to see something like an accuracy/trust score associated with the results, or at a minimum a beta flag with a "hide results like this" option.
These are all basic UI procedures any reasonable person would make, but Google has been riding high on hubris for a while now (zero human support, just see the Google Cloud issue on the frontpage...) and this won't change until the stock starts getting slammed, which seems imminent.
If a human offered to confidently answer every question you asked without regard to whether it is true or correct, you wouldn't say they're merely mistaken when they tell you falsehoods. You would call them a bullshitter or a charlatan.
Calling it slop at least acknowledges that the AI isn't trying to lie to you. It's not merely making an error, either. It just doesn't care either way.
I don't think you have to have innovated or developed anything significant to take issue with a major consumer product losing some utility - I don't know if you used Twitter ten years ago, but it was pretty sad to see the site slowly downgrade year by year, every new feature making it less functional and more addictive, fueling toxicity, becoming a black hole of individuals' attention. It makes you wonder what would have happened if people had put up more of a fight whenever some new stupid feature was rolled out, or a useful feature taken away to make the whole thing just a little more like a slot machine. (Hmmm... maybe Elon Musk should buy Google, so people will finally start complaining en masse.)
As a defender of the new LLM-thingies, do you think they're doing a reasonable job of promoting AI-output literacy? I think it's their job to do so when they are the ones generating the content, whereas general media-literacy was not really their problem when Google was just a directory for the web.
[0]: https://research.google/blog/effective-large-language-model-...
[1]: https://news.ycombinator.com/item?id=40469518