Hacker News

Two thoughts here:

First, remember when we had LLMs run optimisation passes last year? AlphaEvolve doing square packing and optimising ML kernels? The "anti" crowd was like "well, of course it can automatically optimise some code, that's easy", and things like "wake me up when it does hard tasks". Now, suddenly, when they do hard tasks, we're back at "haha, but it's unoptimised and slow, laaame".

Second, if you could take 100 juniors, 100 mid-level devs and 100 senior devs and lock them in a room for 2 weeks, how many working solutions would you get that could boot Linux on 2 different arches and almost boot on the third? And could you have the same devs then do it in Zig?

The thing that keeps coming up is that the "anti" crowd is fighting their own demons, and have kinda lost the plot along the way. Every "debate" is about promises, CEOs, billions, and so on. Meanwhile, at every step of the way these things become better and better. And incredibly useful in the right hands. I find it's best to just ignore the identity folks and keep on being amazed at the progress. The haters will just find the next goalpost and the next fight with invisible entities. To paraphrase: those who can, do; those who can't, find things to nitpick.



You're heavily implying that because it can do this task, it can do any task at this difficulty or lower. Wrong. This thing isn't a human at the level of writing a compiler, and shouldn't be compared to one.

Codex frustratingly failed at refactoring my tests for me the other day, despite me trying many, many prompts of increasing specificity. A task a junior could've done.

Am I saying "haha, it couldn't do a junior-level task, so therefore anything harder is out of reach"? No, of course not. Again, it's not a human. The comparison is irrelevant.

Calculators are superhuman at arithmetic. Not much else, though. I predict this will be superhuman at some tasks (it already is) and we'll be better at others.


Adopt this half-baked, half-broken, insanely expensive, planet-destroying, IP-infringing tech; you have no choice.

Burn everything, because if you don’t, you will get left behind and, maybe, just maybe, in 2 years when it’s good enough, maybe… after hoovering up all the money, IP and domain expertise for free, and you’ve burnt all your money & sanity prompting and cajoling it into a semi-working solution for a problem you didn’t really have in the first place, it will dump you at the back of the unemployment line. All hail the AI! Crazy times.

In the meantime please enjoy targeted scams, ever increasing energy prices, AI content farms, hardware shortages, and endless, endless slop.

When humans architect anything - ideas, buildings, software or ice cream sundaes - we make so many little decisions that affect the overall outcome, we don’t even know or think about it! Too many sprinkles and too much sauce and it will be too sweet and hard to eat. We make those decisions based on both experience and imagination. Watch a small child making one to see the perfect human intersection of these two things at play. The LLM totally lacks the imagination part, except in the worst possible ways. Its experience includes all sorts of random internet garbage that can sound highly convincing even to domain experts. Now its training set is being further expanded with endless mountains of more highly impressive-sounding garbage.

It was obvious to me with the first image-gen models how incredibly impressive it was to see an image gradually forming from the computer based on nothing but my brief text input, but also how painfully limited the technology would always be. After days and days of early obsessive image generation, I was no better as an artist than when I began! Everything also kind of looked the same?

As incredible as it was, it was nothing more than a massively complicated, highly advanced parlour trick. A futuristic, highly powerful pattern generator. Nothing has changed my mind at all. All that’s happened is we’ve seen the worst tricksters, shysters and con artists jump on a very dangerous bandwagon to hell and try and whip us less compliant souls onboard.

Lots of things follow patterns; the joy in life, for me, is discovering the patterns, exploring them and developing new, unique and interesting patterns.

I’ve yet to encounter a bandwagon worth joining anyway; maybe this will be the one that leaves me behind and I’ll be forced to retire on cartoon gorilla NFTs and tulip farming?


> The thing that keeps coming up is that the "anti" crowd is fighting their own demons, and have kinda lost the plot along the way

perfect example


First off, AlphaEvolve isn't an LLM. No more than a human is a kidney.

Second, it depends. If you told them to pretrain for writing a C compiler for however long it takes, I could see a smaller team doing it in a week or two. Keep in mind LLMs pretrain on all OSS, including GCC.

> Meanwhile, at every step of the way these things become better and better.

Will they? Or do they just ingest more data and compute?[1] Again, time will tell. But to me this seems more like speed-running into an Idiocracy scenario than a revolution.[2]

I think this will turn out to be another driverless-car situation, where the last 1% needs 99% of the time. And while it might happen eventually, it's going to take an extremely long time.

[1] Because we don't have many more computing jumps left, nor will future data be as clean as it is now.

[2] Why Idiocracy?

Because they are polluting their own corpus of data. And by replacing thinking with computers, there will be no one to really stop them.

We'll equalize human and computer knowledge by making humans less knowledgeable rather than more.

So you end up in an Idiocracy-like scenario where a doctor can't diagnose you, nor can the machine, because it was dumbed down by each successive generation until it resembles a child's toy.


> First off, AlphaEvolve isn't an LLM.

It's a system based on LLMs.

> AlphaEvolve, an evolutionary coding agent powered by large language models for general-purpose algorithm discovery and optimization. AlphaEvolve pairs the creative problem-solving capabilities of our Gemini models with automated evaluators that verify answers, and uses an evolutionary framework to improve upon the most promising ideas.

> AlphaEvolve leverages an ensemble of state-of-the-art large language models: our fastest and most efficient model, Gemini Flash, maximizes the breadth of ideas explored, while our most powerful model, Gemini Pro, provides critical depth with insightful suggestions. Together, these models propose computer programs that implement algorithmic solutions as code.
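The loop that quote describes — LLMs propose candidate programs, an automated evaluator scores them, and an evolutionary step keeps the most promising ones to seed the next round — can be sketched as a toy. This is a made-up illustration, not AlphaEvolve's actual code: random mutation of a coefficient list stands in for the LLM proposer, and `propose_variant`, `evaluate` and the target values are all invented for the example.

```python
import random

def propose_variant(parent):
    """Stand-in for an LLM proposing a tweaked version of a program."""
    child = list(parent)
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])  # small random edit
    return child

def evaluate(candidate, target=(3, 1, 4)):
    """Automated evaluator: higher is better (negative distance to target)."""
    return -sum(abs(c - t) for c, t in zip(candidate, target))

def evolve(generations=200, population=8, seed=0):
    random.seed(seed)
    best = [0, 0, 0]  # initial "program"
    for _ in range(generations):
        # propose a pool of variants, keep the parent so fitness never regresses
        pool = [best] + [propose_variant(best) for _ in range(population)]
        best = max(pool, key=evaluate)  # keep the most promising idea
    return best
```

The selection step (`max(pool, key=evaluate)`) is the evolutionary part; in the real system the proposers are Gemini models and the evaluator is an automated verifier, which is why people argue over whether to call the whole thing "an LLM".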


> It's a system based on LLMs.

What you said is:

    > First, remember when we had LLMs run optimisation passes last year? AlphaEvolve doing square packing
If I start a sentence:

    First, remember the fish intelligence competition last year? Rossie jumped through a hula hoop.
I, and other readers (probably), would think Rossie is a fish. Not a dog. Even if you can technically group dogs as a sort of fish descendant.


I have no idea what you're arguing. AlphaEvolve is similar to Claude Code: they are using LLMs in a harness. No idea what you mean with fish, kidneys and so on. Can you please stick to the technical stuff? Otherwise it's just noise.


I'm arguing your writing is unclear and confusing. You can't continue a sentence and pretend the new sentence isn't related to the previous one.

AlphaEvolve is made from LLMs, but they're not the only part. If anything, it needs a genetic-algo component; LLMs generally don't evolve.

Also, why are you focusing on AlphaEvolve? I made two other points you haven't addressed.


You have no idea what AlphaEvolve is, yet you try to correct me. This isn't productive; I'm out.


First off, let's say I'm wrong about AlphaEvolve. Fine. I made two more points; address them as well. That's just normal manners in a conversation.

Second, I question your idea of what AlphaEvolve is. You seem to think it's an LLM or LLM-adjacent, when it's more like an evolutionary algo picking the better seeds from among the LLMs' outputs. That's not an LLM; if anything, it has some ability to correct itself.


It’s more like a concept car vs a production-line model. The capabilities it has were fine-tuned for a specific scenario and are not yet available to the general public.



