This is not speculative; it's happened plenty already. People put mitigations in place, patch libraries and move on. The difference is that agents will find new zero days you've never heard of for stuff on your system people haven't scrutinized adequately. There will be zero advance notice, and unlike human attackers who need to lie low until they can plan an exit, it'll be able to exploit you heavily right away.
Do not take the security impact of agents lightly!
I feel like my bona fides on this topic are pretty solid (without getting into my background on container vs. VM vs. runtime isolation) and: "the agents will find new zero days" also seems "big if true". I point `claude` at a shell inside a container and tell it "go find a zero day that breaks me out of this container", and you think I'm going to succeed at that?
I had assumed you were saying something more like "any attacker that prompt-injects you probably has a container escape in their back pocket they'll just stage through the prompt injection vector", but you apparently meant something way further out.
I know at least one person who supplements their income finding bounties with Claude Code.
Right now you can prompt inject an obfuscated payload that can trick Claude into trying to root a system under the premise that you're trying to identify an attack vector on a test system to understand how you were compromised. It's not good enough to do much, but with the right prompts, better models and if you could smuggle extra code in, you could get quite far.
Lots of people find zero days with Claude Code. That is not the same thing as Claude Code autonomously finding zero days without direction, which was what you implied. This seems like a pretty simple thing to go empirically verify for yourself. Just boot up Claude and tell it to break out of a container shell. I'll wait here for your zero day! :)