OMG yes. The most egregious in movie/tvshow trailer reviews.
Who tf cares about a random quote included in the trailer?
Here's a subtitle for a He-Man movie trailer from the other day: "Skeletor took my family and he destroyed our world."
I mean, anything would have been better than that; "Another attempt at a live-action movie based on an '80s action figure" or even "In theatres on X.Y." would be Pulitzer material in comparison.
I am still amazed that people so easily accepted installing these agents on private machines.
We've been securing our systems in all ways possible for decades and then one day just said: oh hello unpredictable, unreliable, Turing-complete software that can exfiltrate and corrupt data in infinite unknown ways -- here's the keys, go wild.
People were also dismissing concerns about build tooling automatically pulling in an entire swarm of dependencies, and now here we are in the middle of a repetitive string of high-profile developer supply chain compromises. Short-term thinking seems to dominate even in groups of people who are objectively smarter and better educated than average.
And nothing big has happened despite all the risks and problems that came up with it. People keep chasing speed and convenience, because most things don’t even last long enough to ever see a problem.
I've yet to be saved by an airbag or seatbelt. Is that justification to stop using them? How near a miss must we have (and how many) before you would feel that certain practices surrounding dependencies are inadvisable?
A number of these supply chain compromises had incredibly high stakes and were seemingly only noticed before paying off by lucky coincidence.
The fun part is, there have been a lot of non-misses! Like, a lot! A ton of data has been exfiltrated, plenty of attacks have landed, and so on. In the end... it just didn't matter.
Your analogy isn't really apt either. My argument is closer to "given that nothing of worth has been harmed in the past decade-plus, should we require airbags and seatbelts for everything?" Obviously in some extreme mission-critical systems you should be much smarter. But in 99% of cases it doesn't matter.
> I've yet to be saved by an airbag or seatbelt. Is that justification to stop using them?
By now, getting a car without airbags would probably be more costly, if it's even possible, and the seatbelt takes two seconds every time you're in a car, which is not nothing but is still very little. In comparison, analyzing all the dependencies of a software project, vetting them individually, or having fewer of them can require days of effort at a huge cost.
We all want as much security as possible until there's an actual cost to be paid, it's a tradeoff like everything else.
The funniest part is that it always gets traded off, every time. Talking about tradeoffs, you'd think sometimes you'd keep it and sometimes you'd let it go, but no: it's "cut it" every goddamn time.
My intent was to cast a very wide net there that covers more or less all expert knowledge workers. Zingers aside, software developers as a group are well above the societal mean in many respects.
It's hard to think long term when your salary depends on short term thinking. I keep seeing horrifying comments from all sorts of people saying they'd be fired if they stopped using AI to bang out ridiculous amounts of code at lightning speed.
> We've been securing our systems in all ways possible for decades and then one day just said: oh hello unpredictable, unreliable, Turing-complete software that can exfiltrate and corrupt data in infinite unknown ways -- here's the keys, go wild.
These are generally (but not always) 2 different sets of people.
I am too. It is genuinely really stupid to run these things with access to your system, sandbox or no sandbox. But the glaring security and reliability issues get ignored because people can't help but chase the short term gains.
FOMO is a hell of a thing. Sad though given it would have taken maybe a couple of hours to figure out how to use a sandbox. People can't even wait that long.
Erm, no, that's not a sandbox, it's an annoyance that just makes you click "yes" before you thoughtlessly extend the boundaries.
A real sandbox doesn't even give the software inside an option to extend it. You build the sandbox knowing exactly what you need because you understand what you're doing, being a software developer and all.
I've never been annoyed by the tool asking for approval. I'm more annoyed by the fact that there is an option that gives permanent approval right next to the button I need to click over and over again. This landmine means I constantly have to be vigilant to not press the wrong button.
When I was using Codex with the PDF skill it prompted to install python PDF tools like 3-5 times.
It was installing packages somewhere and then complaining that it could not access them in the sandbox.
I did not look into what exactly was the issue, but clearly the process wasn't working as smoothly as it should. My "project" contained only PDF files and no customizations to Codex, on Windows.
And still a lot of people will give broad permissions to Docker containers, use host networking, not use rootless containers, etc. The principle of least privilege is very, very rarely applied in my experience.
It's never just about security; it's security vs. convenience. Security features often end up reducing security if they're inconvenient. If you ask users to pick obscure passwords, they'll reuse the same one everywhere. If your agent prompts users every time it changes a file, they'll find a way to disable the guardrail altogether.
Not in unknown ways, but as part of its regular operation (with cloud inference)!
I think the actual data flow here is really hard to grasp for many users: Sandboxing helps with limiting the blast radius of the agent itself, but the agent itself is, from a data privacy perspective, best visualized as living inside the cloud and remote-operating your computer/sandbox, not as an entity that can be "jailed" and as such "prevented from running off with your data".
The inference provider gets the data the instant the agent looks at it to consider its next steps, even if the next step is to do nothing with it because it contains highly sensitive information.
Agree with the sentiment! But "securing ... in all ways possible"? I know many people who would choose "password" as their password in 2026. The better of the bunch will use their date of birth, and maybe add their name for a flourish.
Seems most relevant in a hobbyist context where you have personal stuff on your machine unrelated to your projects. Employee endpoints in a corporate environment should already be limited to what’s necessary for job duties. There’s nothing on my remote development VMs that I wouldn’t want to share with Claude.
My testing/working with agents has been limited to a semi-isolated VM with no permissions apart from internet access. I have a git remote with it as the remote (ssh://machine/home/me/repo) so that I don't have to allow it to have any keys either.
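The plumbing for that setup is just plain git; here's a minimal sketch, using a local bare repo as a stand-in for the VM-side `ssh://machine/home/me/repo` (all paths and names are illustrative, not the commenter's actual setup):

```shell
# Sketch: treat the agent's VM repo as an ordinary git remote, so the
# agent never needs any of my SSH keys. A local bare repo stands in
# for ssh://machine/home/me/repo here.
workdir=$(mktemp -d)
git init -q --bare "$workdir/vm-repo.git"      # the VM-side repo
git init -q "$workdir/laptop"
cd "$workdir/laptop"
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "init"
git branch -M main
git remote add vm "$workdir/vm-repo.git"       # really: ssh://machine/home/me/repo
git push -q vm main                            # hand work to the agent's VM
git ls-remote vm | grep -q refs/heads/main && echo pushed
```

The nice property is that the trust boundary is the repo itself: the VM can only see what you push to it, and you review whatever you fetch back.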
Trusting AI agents with your whole private machine is the 2020s equivalent of people pouring all their information about themselves into social networks in 2010s.
Only a matter of time before this type of access becomes productized.
Eh, depending on how you're running agents, I'd be more worried about installing packages from AUR or other package ecosystems.
We've seen an increase in hijacked packages installing malware. Folks generally expect well known software to be safe to install. I trust that the claude code harness is safe and I'm reviewing all of the non-trivial commands it's running. So I think my claude usage is actually safer than my AUR installs.
Granted, if you're bypassing permissions and running dangerously, then... yea, you are basically just giving a keyboard to an idiot savant with the tendency to hallucinate.
> Companies claiming 100% of their product's code is now written by AI consistently put out the worst garbage you can imagine. Not pointing fingers, but memory leaks in the gigabytes, UI glitches, broken-ass features, crashes.
Spotify's CEO recently bragged about the app's code being written almost entirely by AI. Just saying.
If you suffer from any kind of anxiety, and you drink caffeine, you should seriously consider quitting. Even if you only drink as little as one coffee per day. There's a very high chance that caffeine is the source of a large part of it.
I've been drinking coffee for 20 years and had always assumed that I was just an anxious, paranoid person. Quitting made me realize that I really wasn't.
Quitting/reducing has also cured my itchy skin problem.
I also highly recommend the subreddit r/decaf as a great source of information.
> They contacted Facebook, which at the time dominated the social media landscape, asking for help scouring uploaded family photos - to see if Lucy was in any of them. But Facebook, despite having facial recognition technology, said it "did not have the tools" to help.
Willing to bet my life savings that they are able to do exactly this when the goal is to create shadow profiles or maximize some metric.
> The BBC asked Facebook why it couldn't use its facial recognition technology to assist the hunt for Lucy. It responded: "To protect user privacy, it's important that we follow the appropriate legal process, but we work to support law enforcement as much as we can."
You don't need to imply they didn't read that part, because it doesn't really affect the point of the comment: that Facebook doesn't actually care about privacy. Even if they're not sharing things willy-nilly, they're still aggressively tracking everyone they can.
The two views aren't necessarily in conflict. I don't appreciate Facebook's use of facial recognition technology, but they built it. I'm extremely disappointed they proceeded to use this technology to influence elections while fighting against making the data available to law enforcement. I understand this may not have been intentional on their part, but the result is the same, and I was not at all surprised by it.
There's another option "I will but only if ..." which is what Facebook rightfully went with. Come back with a warrant is _always_ the correct answer when dealing with LE.
A fourth option is "I can and I will, but only after certain prerequisites are met - go away and meet them first", which looks to me what they were saying.
> From that list of 40 or 50 people, it was easy to find and trawl their social media. And that is when they found a photo of Lucy on Facebook with an adult who looked as though she was close to the girl - possibly a relative.
It sounds like Facebook was a huge boost to the investigation despite that.
Facebook did nothing to assist in narrowing a search area.
What Facebook actually did was host images .. so that after the team narrowed a list down to under 100 people they could look through profiles by hand.
It may as well have been searching Flickr, Instagram, Etsy, etc. profiles by hand.
Yes, and if Facebook didn't exist, presumably these images connecting the abuser to the victim wouldn't have been available anywhere for the investigators to find.
If Facebook didn’t exist, they would’ve found the photos on MySpace. Come on.
All Facebook likely did here, differently from any other social media platform, was gather Sandberg, Zuck, and a cadre of snotty, sniveling engineers in a conference room to debate whether this was good engagement for the platform.
Facial recognition is very powerful these days. My friend took a photo of his kid at the top of Twin Peaks in SF, with the city in the background. Unfortunately, due to the angle, you could barely see the eyes and a portion of the nose of the kid. Android was still able to tag the kid.
I feel like Facebook really dropped the ball here. It is obvious that Squire and colleagues are working for law enforcement. If FB was concerned about privacy, they could have asked them to get a judicial warrant to perform a broad search.
But they didn't. And Lucy continued to be abused for months after that.
I hope when Zuck is lying on his death bed, he gets to think about these choices that he has made.
Google Photos has the advantage of a limited search space. Any photo you take is overwhelmingly likely to be of one of the few faces already in the library. Not to say Facebook couldn't solve the problem, but Google can do facial recognition with such poor inputs because it's searching over ~40 faces rather than x billion faces.
> I feel like Facebook really dropped the ball here
This story was from more than a decade ago.
Facebook had facial recognition after that, but they deleted it all in response to public outcry. It’s sad to see HN now getting angry at Facebook for not doing facial recognition.
> I hope when Zuck is lying on his death bed, he gets to think about these choices that he has made.
Are we supposed to be angry at Zuckerberg now for making the privacy conscious decision to drop facial recognition? Or is everyone just determined to be angry regardless of what they do?
> I feel like Facebook really dropped the ball here
This case began being investigated in January 2014 [0], which means the abuse began (shudder) in 2012-13, if not earlier.
Facebook/Meta only began rolling out DeepFace [1] in June 2015 [2]
Heck, VGG-Face wasn't released until 2015 [3] and Image-Based Crowd Counting only began becoming solvable in 2015-16.
> Facial recognition is very powerful these days.
Yes. But it is 2026, not 2014.
> I hope when Zuck is lying on his death bed, he gets to think about these choices that he has made
I'm sure there are plenty of amoral choices he can think about, but not solving facial detection until 2015 is probably not one of them.
---
While it feels like mass digital surveillance, social media, and mass penetration of smartphones have been around forever, they only really began in earnest about 12 years ago. The past approximately 20 years (the iPhone was first released in June 2007, and Facebook only took off in early 2009 after smartphones and mobile internet became normalized) have seen one of the biggest leaps in technology in the past century. The only other comparable periods were probably 1917-1937 and 1945-1965.
Facebook rightly retired their facial recognition system in 2021 over concerns about user privacy. Facebook is a social media site, they are not the government or police.
When people on Hacker News talk about requiring cops to do traditional police work instead of wide-ranging trawls using technology, this is exactly what they meant. I hope you don't complain when the future you want becomes reality and the three-letter agencies come knocking down your door just because you happened to be in the same building as a crime in progress, and the machine learning algorithms determined your location via cellular logs and labelled you a criminal.
There’s a pretty big difference between surveillance logging your every move and scanning photos voluntarily uploaded to Facebook.
No, I don’t like Facebook using facial recognition technology, and no, I don’t like that someone else can upload photos of me without my consent (which, ironically, facial recognition could be leveraged to prevent wholesale), but these are other technical and social issues unrelated to the root issue. I also wish there were clear political and legal boundaries around surveillance usage for truly abhorrent behaviour versus your non-Caucasian neighbour maybe jaywalking triggering a visit from ICE.
Yes, it’s an abuse of power for these organisations to collect data these ways, but I’m not against their use to prevent literal ongoing child abuse, it’s one of the least worst uses of it.
The grim meathook future of ubiquitous surveillance is coming regardless. At the very least we could get some proper crime solving out of it along the way.
The EU AI Act activates this year. Facial recognition is on the restricted list. You don't want to give auditors ammunition before it goes live, as the top fine would cost FB around $4B, and it wouldn't be a one-time fine.
Even if only law enforcement can use it, having that feature is highly regulated.
[edit] I see this is from years ago. I should read the articles first. :)
I would hazard a guess that the facial recognition will limit the search scope to people associated (to some degree) with your friends account and some threshold of metrics gathered from the image. I doubt it is using a broad search.
With billions of accounts, the false positive rate of facial recognition when matching against every account would likely make the result difficult to use. Even limiting to a single country like UK the number could be extremely large.
Let's say there is a 0.5% false positive rate and some amount of false negatives. With 40 million users, that would be 200,000 false positives.
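The base-rate arithmetic is easy to check; a quick sketch (the user count and error rate are the hypothetical numbers from the comment above, purely illustrative):

```python
# Back-of-envelope base-rate check: even a matcher with a tiny false
# positive rate produces a flood of wrong hits over a large population.
def expected_false_positives(population: int, false_positive_rate: float) -> int:
    return round(population * false_positive_rate)

# 0.5% error rate over 40 million users:
print(expected_false_positives(40_000_000, 0.005))  # → 200000
```

This is the classic base-rate problem: when the true match is one person out of millions, even a highly accurate matcher returns mostly false positives.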
The only explanation for this comment is that you've never used reverse image search on Google or Yandex before it was nerfed, or you'd know it's super plausible to find direct hits without many false positives.
It seems to me that the BBC is including those passages at the beginning and end of their story as propaganda so the public begs (demands, even) for more surveillance, and the sale of private data to the government. I mean, think of the children, like Lucy! Seems to be having that effect in this thread, in any case.
It’s absolutely propaganda and a perfect example of how the public gets manipulated on a daily basis. Let’s break down the facts:
- Pushes for facial recognition
- Pushes for more state run surveillance
- Pushes for AI based surveillance
- Pushes for greater data collection, access & mining
- Legitimises it all under the classic “save the kids” meme and pushes emotionally hard for more.
The main issues I’ve seen discussed on HN in the last couple of months have been critical of the never-ending and increasing government surveillance, on both sides of the pond. This is their answer.
Simultaneously we’re hearing about how almost anybody and everybody beyond a certain level of power was well aware of industrial-scale sex trafficking and abuse, and either totally turned a blind eye or joined in.
The article might carry some weight if it weren’t from an authoritarian state-backed organisation that’s very well known for covering up for, and protecting, multiple famous high-level sexual criminals within its own organisation, spanning multiple decades, and that has never faced any real audit, investigation, or justice for its own crimes.
I keep an iPhone SE 1st gen as a secondary phone. It still has the last best keyboard iOS ever had. Almost zero mistakes, probably because there's no AI and other over-optimization BS. Every time I go back to my primary 13 I want to cry.
https://karlbode.com/ceo-said-a-thing-journalism/
https://news.ycombinator.com/item?id=47577735