Hacker News | kjok's comments

And this is exactly why we see noise on HN/Reddit when a supply-chain cyberattack breaks out, but no breach is ever reported. Enterprises are protected by internal mirroring.

Curious to know why coding agents aren't detecting such risks before importing dependencies.

I'm assuming you're talking about agents like Claude Code and opencode, which rely on large language models.

The reason they don't detect these risks is primarily because these risks are emergent, and happen overnight (literally in the case of axios - compromised at night). Axios has a good reputation. It is by definition impossible for a pre-trained LLM to keep up with time-sensitive changes.


I mean that agents can scan the code to find anything "suspicious". After all, security vendors that claim to "detect" malware in packages are relying on LLMs for detection.

An LLM is not a suitable substitute for purpose-built SAST software in my opinion. In my experience, they are great at looking at logs, error messages, sifting through test output, and that sort of thing. But I don't think they're going to be too reliable at detecting malware via static analysis. They just aren't built for that.

> I actually just published a paper...

This gives me an impression that the paper has already been published and is available publicly for us to read.


Sorry about that, the conference was on Feb 2, and it's supposed to be out any day/week now. I don't have a date.

There is a blog-style writeup here: https://fwsgonzo.medium.com/an-update-on-tinykvm-7a38518e57e...

Not as rigorous as the paper, but the gist is there.


Thanks! I'll keep an eye out for the paper.

Maybe humans can focus on cybersecurity and fraud? That’s not going away with AI


Block based on cookies (i.e., set a cookie on the browser and check on the server whether it exists).
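A minimal sketch of the idea, in Python (the function name and cookie name are illustrative, not from any particular project): serve a challenge to requests that lack a previously issued cookie, and let requests that present it through.

```python
# Hypothetical cookie gate: first visit gets a challenge page whose response
# carries `Set-Cookie: seen_before=1`; the retry with the cookie is allowed.

def check_request(headers: dict) -> str:
    """Return 'allow' or 'challenge' based on a previously issued cookie."""
    cookie_header = headers.get("Cookie", "")
    cookies = dict(
        pair.split("=", 1) for pair in cookie_header.split("; ") if "=" in pair
    )
    if cookies.get("seen_before") == "1":
        return "allow"       # cookie present: likely a returning browser
    return "challenge"       # no cookie: serve a page that sets one

print(check_request({}))                                    # challenge
print(check_request({"Cookie": "seen_before=1; other=x"}))  # allow
```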


This project implements a variety of similar JS-less checks, such as image loading:

https://github.com/WeebDataHoarder/go-away


That helps, but the big bot farms use clients that support cookies; we need to add more defenses on top of them.


Genuine question: why is everyone rolling out their own sandbox wrappers around VMs/Docker for agents?


I know, right? The day I initially thought about posting this, there was another one called `yolo-box`. (That attempt--my very first post--got me instantly shadow-banned due to being on a VPN, which led to an unexpected conversation with @dang, which led to some improvements, which led to it being a week later.)

I think it's the convergence of two things. First, the agents themselves make it easier to get exactly what you want; and second, the OEM solutions to these things really, really aren't good enough. CC Cloud and Codex are sort of like this, except they're opaque and locked down, and they work for you or they don't.

It reminds me a fair bit of 3D printer modding, but with higher stakes.


Because of findings like this

https://www.anthropic.com/research/small-samples-poison

(To save a click: a small number of samples can poison LLMs of any size.)

The way I think of it is, coding agents are power tools. They can be incredibly useful, but they can also wreak a lot of havoc. Anthropic (et al.) are marketing them to beginners, and inevitably someone is going to lose their fingers.


I understand the need, but I don't understand why a VM or Docker is not enough. Why are people creating custom wrappers around VMs/containers?


Docker isn't virtualization; it's not that hard to break out to the underlying system if you really want to. But as for VMs--they are enough! They're also a lot of boilerplate to set up, manage, and interact with. yolo-cage is that boilerplate.


Because people want to run agents in yolo mode without worrying that it's going to delete the whole computer.

And once you put the agent in a VM/container it's much easier to run 10 of them in parallel without mutual interference.


On that note, yolo-cage is pretty heavyweight. There are much lighter tools if your main concern is "don't nuke my laptop." yolo-box was trending on HN last week: https://news.ycombinator.com/item?id=46592344


My experience is that neither has a good UX for what I usually try to do with coding agents. The main problem I see is setup/teardown of the boxes and managing tools inside them.


It all feels like temporary workflow fixes until The Agent Companies just ship their opinionated good enough way to do it.


It probably is. Some of this stuff will hang around because power users want control. Some of it will evolve into more sophisticated solutions that get turned into products and become easier to acquihire than to build in house. A lot of it will become obsolete when the OEMs crib the concept. But IMO all of those are acceptable outcomes if what you really want is the thing itself.


They've already suggested using Dev Containers. https://code.claude.com/docs/en/devcontainer


Even if they can't today, models will get better at solving captchas in the near future. IMHO, the real concern, however, is cheap captcha-solving services.


The solvers are a problem but they give themselves away when they incorrectly fake devices or run out of context. I run a bot detection SaaS and we've had some success blocking them. Their advertised solve times are also wildly inaccurate. They take ages to return a successful token, if at all. The number of companies providing bot mitigation is also growing rapidly, making it difficult for the solvers to stay on top of reverse engineering etc.


> when they incorrectly fake devices

And how often does this happen? Do you have any proof? Most YC companies building browser agents have built-in captcha solvers.


That's a good question. I haven't checked the stats to see how often it happens, but I will make a note to return with some info. We're dealing with the entire internet, not just YC companies, and many scrapers/solvers will send a User-Agent that doesn't quite match the JS capabilities you would expect of that browser version. Some solving companies let you supply your own User-Agent, which causes inconsistencies, because they don't change their stack to match the User-Agent you supply. Under the hood they're running whatever version of headless Chrome they're currently pinned to.
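One illustrative heuristic for the kind of inconsistency described above (this is a sketch, not any vendor's actual detection logic): compare the Chrome major version claimed in the User-Agent string with the version the browser reports in its `Sec-CH-UA` client hints. A solver that accepts an arbitrary caller-supplied UA while running a pinned headless Chrome produces exactly this mismatch.

```python
import re

def ua_chrome_major(user_agent: str):
    """Extract the Chrome major version claimed in a User-Agent string."""
    m = re.search(r"Chrome/(\d+)", user_agent)
    return int(m.group(1)) if m else None

def hints_chrome_major(sec_ch_ua: str):
    """Extract the Chrome major version reported via Sec-CH-UA client hints."""
    m = re.search(r'"(?:Google )?Chrome";v="(\d+)"', sec_ch_ua)
    return int(m.group(1)) if m else None

def is_inconsistent(user_agent: str, sec_ch_ua: str) -> bool:
    """Flag requests whose UA string and client hints disagree on version."""
    ua_v, hint_v = ua_chrome_major(user_agent), hints_chrome_major(sec_ch_ua)
    return ua_v is not None and hint_v is not None and ua_v != hint_v

# Caller supplies a Chrome 131 UA, but the pinned headless browser still
# reports Chrome 119 in its client hints:
print(is_inconsistent(
    "Mozilla/5.0 ... Chrome/131.0.0.0 Safari/537.36",
    '"Chromium";v="119", "Google Chrome";v="119"',
))  # True
```

Real fingerprinting stacks check many more signals (JS feature probes, TLS fingerprints, etc.); this just shows the version-mismatch idea in isolation.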


What is so fundamentally different for AI agents?


Nothing, other than "AI agents" being the current popular thing; like all programs, they change absolutely nothing.


The fact that the first thing people are going to do is punch holes in the sandbox with MCP servers?


Ah ha.


> automated scanners seem to do a good job already of finding malicious packages.

That's not true. This latest incident was detected by an individual researcher, just like many similar attacks in the past. Time and again, it's been people who flagged these issues and later reported them to security startups, not automated tools. Don't fall for the PR spin.

If automated scanning were truly effective, we'd see deployments across all major package registries. The reality is, these systems still miss what vigilant humans catch.


> This latest incident was detected by an individual researcher

So that still seems fine? Presumably researchers are focusing on latest releases, and so their work would not be impacted by other people using this new pnpm option.


> If automated scanning were truly effective, we'd see deployments across all major package registries.

No we wouldn't. Most package registries are run by either bigcorps at a loss or by community maintainers (with bigcorps again sponsoring the infrastructure).

And many of them barely go beyond the "CRUD" of package publishing due to lack of resources. The economic incentives of building up supply chain security tools into the package registries themselves are just not there.


You're right that registries are under-resourced. But, if automated malware scanning actually worked, we'd already see big tech partnering with package registries to run continuous, ecosystem-wide scanning and detection pipelines. However, that isn't happening. Instead, we see piecemeal efforts from Google with assurance artifacts (SLSA provenance, SBOMs, verifiable builds), Microsoft sponsoring OSS maintainers, Facebook donating to package registries. Google's initiatives stop short of claiming they can automatically detect malware.

This distinction matters. Malware detection is, in the general case, an undecidable problem (think the halting problem and Rice's theorem). No amount of static or dynamic scanning can guarantee catching malicious logic in arbitrary code. At best, scanners detect known signatures, patterns, or anomalies. They can't prove the absence of malicious behavior.

So the reality is: if Google's assurance artifacts stop short of claiming automated malware detection is feasible, it's a stretch for anyone else to suggest registries could achieve it "if they just had more resources." The problem space itself is the blocker, not just lack of infra or resources.


> But, if automated malware scanning actually worked, we'd already see big tech partnering with package registries to run continuous, ecosystem-wide scanning and detection pipelines.

I think this sort of thought process is misguided.

We do see continuous, ecosystem-wide scanning and detection pipelines. For example, GitHub supports Dependabot, which runs supply-chain checks.

https://github.com/dependabot

What you don't see is magical rabbits being pulled out of top hats. The industry has decades of experience with anti-malware tools in contexts where the malware is never explicitly granted deployment or execution permissions, and yet it deploys and runs anyway. What do you expect if you make code intentionally installable and deployable, and capable of sending HTTP requests to send and receive any kind of data?

Contrary to what you're implying, this is not a simple problem with straightforward solutions. The security model has relied heavily on gatekeepers, on both the producer and consumer sides. However, the latest batch of popular supply-chain attacks circumvented the only failsafe in place. Beyond that point, you just have a module that runs unspecified code, like any other module.


The latest incident was detected first by an individual researcher (I haven't verified this myself, but I'll trust you here) -- or maybe they were just the fastest reporter in the west. Even simple heuristics, like flagging the sudden addition of high-entropy code, would have caught the most recent attacks, and obviously there are much better methods too.

