Are we sure about that? I mean, we have seen that LLMs are able to generalize to some degree. So I don't see a reason why you couldn't put an agent in a loop with a profiler and have it try to optimize the code. Will it come up with entirely novel ideas? Unlikely. Could it potentially combine existing ideas in interesting, novel ways that would lead to CCC outperforming GCC? I think so. Will it get stuck along the way? Almost certainly.
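To make the "agent in a loop with a profiler" idea concrete, here's a purely illustrative sketch. The candidate list stands in for LLM-proposed rewrites (a real setup would call a model instead of picking from a hardcoded list), and the profiler is just wall-clock timing; all function names here are made up for the example.

```python
import time

# Toy "optimization target": sum of squares 0^2 + ... + (n-1)^2.
def sum_squares_naive(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

# Stand-ins for rewrites an LLM agent might propose.
def sum_squares_builtin(n):
    return sum(i * i for i in range(n))

def sum_squares_closed_form(n):
    m = n - 1
    return m * (m + 1) * (2 * m + 1) // 6  # O(1) closed form

CANDIDATES = [sum_squares_builtin, sum_squares_closed_form]

def profile(fn, n=10_000, repeats=5):
    # Minimal profiler: best-of-N wall-clock time.
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(n)
        best = min(best, time.perf_counter() - t0)
    return best

def optimize_loop(reference, candidates, n=10_000):
    # The agent loop: try each proposed rewrite, reject incorrect
    # ones, keep whichever correct version profiles fastest.
    expected = reference(n)
    best_fn, best_t = reference, profile(reference, n)
    for cand in candidates:
        if cand(n) != expected:  # correctness gate before speed
            continue
        t = profile(cand, n)
        if t < best_t:
            best_fn, best_t = cand, t
    return best_fn

best = optimize_loop(sum_squares_naive, CANDIDATES)
print(best.__name__)
```

Note the correctness check before the timing comparison: without it, the loop would happily "optimize" into a faster wrong answer, which is exactly the failure mode people worry about.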
Would you want it to? The further out the goalposts are, the more progress we are making, and that's good, no? Trying to turn this into a religious debate between believers and non-believers is silly. Neither side can predict the future, and, even if they could, winning the debate isn't worth anything!
What is interesting is what we can do with LLMs today and what we would like them to be able to do tomorrow, so we can keep developing them in a good direction. Whether or not you (or I) believe they can do that thing tomorrow is thoroughly uninteresting.
The goalpost is not moving. The issue is that AI generates code that kinda looks OK but usually has deep issues, especially the more complex the code is. And that isn't really improving.
Yeah but the speed of progress can never catch the speed of a moving goalpost!