Hacker News

> A bit of a stretch

Is that true?

Cf. what we're discussing.

He's actively encouraging using LLMs to solve his benchmark, ARC-AGI.

8 hours ago, from Chollet, re: TFA

"The best solution to fight combinatorial explosion is to leverage intuition over the structure of program space, provided by a deep learning model. For instance, you can use a LLM to sample a program..."

Source: https://x.com/fchollet/status/1802801425514410275

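To make the quoted idea concrete: the tweet describes using a generative model's "intuition" to propose candidate programs, then checking them against the task. Below is a minimal, hypothetical sketch of that loop. `sample_candidates` is a stand-in for an LLM sampler (nothing here is Chollet's actual code); the verifier keeps only programs consistent with the demonstration pairs.

```python
# Hypothetical sketch of LLM-guided program synthesis: a sampler proposes
# candidate programs, and a verifier filters them against example pairs.
from typing import Callable, List, Tuple

Program = Callable[[int], int]
Example = Tuple[int, int]  # (input, expected output)

def sample_candidates() -> List[Program]:
    # An LLM would sample programs from its learned prior over program
    # space; this stub enumerates a tiny hand-written toy space instead.
    return [
        lambda x: x + 1,
        lambda x: x * 2,
        lambda x: x * x,
    ]

def consistent(program: Program, examples: List[Example]) -> bool:
    # A candidate survives only if it reproduces every demonstration pair.
    return all(program(i) == o for i, o in examples)

def synthesize(examples: List[Example]) -> List[Program]:
    return [p for p in sample_candidates() if consistent(p, examples)]

if __name__ == "__main__":
    hits = synthesize([(1, 2), (2, 4), (3, 6)])
    print(len(hits))  # only x * 2 fits all three pairs
```

The point of the design is that the sampler prunes the combinatorial program space down to a handful of plausible candidates, so exhaustive verification becomes cheap.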


The stretch was in reference to comparing Chollet to Einstein. Chollet clearly understands LLMs (and transformers and deep learning), he simply doesn't believe they are sufficient for AGI.


I don't know what you mean; it's a straightforward analogy. But yes, that's right, except for the part where he's heralding this news by telling people that LLMs are an underexplored solution space for his AGI benchmark, which he made to disprove that LLMs are AGI.

I don't mean to offend, but to be really straightforward: he's the one saying it's possible they might be AGI now. I'm as flummoxed as you, but I think it's hiding the ball to file it under "he doesn't mean what he's saying, because he doesn't believe LLMs can ever be AGI." The only steelman for that is playing at: AGI-on-my-benchmark, which I say is for AGI, is not the AGI I mean.


You're reading a whole lot into a tweet. In his interview with Dwarkesh Patel he says, about 20 different times, that scaling LLMs (as they are currently conceived) won't lead to AGI.


You keep changing topics, so I don't get it either. I can attest it's not a fringe view that the situation is interesting; I've seen it discussed several times today by unrelated people.


He's said it pretty clearly: an LLM could be part of the solution in combination with program synthesis, but an LLM alone won't achieve AGI.



