Do you find the results are better when you use the same query on Google? Because I’ve also exclusively used DuckDuckGo for the past 5 or so years, and every now and then, when I get frustrated by the results, I try Google.
But only once did Google actually give me what I was looking for. Every other time the Google results were the same SEO garbage I was getting with DDG.
Maybe I should try switching to Google for a full month to see if my search quality generally improves.
What I found was that when I first started using DDG, using !g to take me to Google would get good results. But over time, it stopped working. I'm not totally sure if it's because Google's profile on me timed out and it's not getting enough searches or because Google search quality has gone down. Now, several years into using DDG as primary, when I can't find it at DDG, I expect I won't find it at Google either... but they do give me different bad results.
Yeah, same queries. Results are (mostly) much better with Google. It's honestly disappointing because I'd love to be able to ditch Google as a search engine but, whilst it's not without its flaws, it's still much better than the alternatives[0].
[0] I'm deliberately avoiding talking about LLMs here: I mean specifically in cases where what I want to do is execute a web search - often because I'm dissatisfied with, or suspicious of, something an LLM has told me.
I was one of the solvers. It took me about a week to figure out. This is what I wrote out in my submission with the answer:
> After looking at the final two layers I was somewhat quick to intuit that this was some sort of password check, but wasn’t entirely sure where to go from there. I tried to reverse it, but it was proving to be difficult, and the model was far too deep. I started evaluating the structure and saw the 64 repeated sections of 84 layers that each process 4 characters at a time. Eventually I saw the addition and XOR operations, and the constants that were loaded in every cycle, and the shift amounts that differed between these otherwise identical sections.
> I thought it was an elaborate CTF cryptography challenge, where the algorithm was purposely weak and I had to figure out how to exploit it. But I repeatedly was getting very stuck in my reverse-engineering efforts. After reconsidering the structure and the format of the ‘header' I decided to take another look at existing algorithms...
Basically it took a lot of trial and error, and a lot of clever ways to look at and find patterns in the layers. Now that Jane Street has posted this dissection and 'ended' this contest I might post my notebooks and do a fuller post on it.
The trickiest part, to me, is that about 5 of the days were spent trying to reverse-engineer the algorithm... but they did in fact use an irreversible hash function, so all that time was in vain. Basically my condensed 'solution' was to explore it enough to be able to explain it to ChatGPT, then confirm that it was the algorithm ChatGPT suggested (hashing known inputs and seeing if the outputs matched), and then running brute force on the hash function, which was ~1000x faster to compute than the model.
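The confirm-then-brute-force approach described above can be sketched roughly like this. Note this is a hypothetical illustration, not the solver's actual code: the puzzle's real hash function, input format, and target digest aren't given here, so SHA-256 stands in as a placeholder.

```python
import hashlib
import string
from itertools import product

def brute_force(target_hex: str, alphabet: str, max_len: int):
    """Try every candidate string up to max_len characters against the
    target digest; return the preimage if one is found, else None."""
    for length in range(1, max_len + 1):
        for chars in product(alphabet, repeat=length):
            candidate = "".join(chars).encode()
            # Direct hashing here is vastly cheaper than running the
            # full model forward pass for each candidate.
            if hashlib.sha256(candidate).hexdigest() == target_hex:
                return candidate.decode()
    return None

# Step 1 (confirmation): hash a known input and check the model's output
# matches — here we just fabricate a target for demonstration.
target = hashlib.sha256(b"cat").hexdigest()

# Step 2: exhaustive search over short lowercase candidates.
print(brute_force(target, string.ascii_lowercase, 3))
```

The search space grows exponentially with length, so this only works when the password is short or the alphabet is constrained, which is why confirming the exact algorithm first (so every candidate costs one cheap hash rather than one model evaluation) matters so much.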
> Antigravity did the vast amount of work, it feels unworthy
I think this is true for me as well. I have two types of projects that I’ve been working on. The first are small ones with a mix of code I wrote myself and AI-generated code. I have posted these, as I spent a lot of time guiding the AI and cleaning up its output, and I think the overall projects have value that others can learn from and build on.
But I also have some that are almost 100% vibe-coded. First, those would take a lot of time to clean up and write documentation for to make them publishable/useful.
But also, I do think they feel “unworthy”. Even though I think they can be helpful, and I was looking for open-source versions of those things. But how valuable can it really be if I was able to vibe-code it in a few prompts? The next person looking for it will probably do the same thing I did and vibe-code their own version after a few minutes.
On the Apple TV you get ‘ads’ for the apps you have in your top row, with different levels of interactivity. Some are just logos of that streaming service, some show recently watched. The Apple TV app has full-blown ads for Apple TV+ originals.
They won’t actually let you delete the Apple TV app, but if you move it out of the top row you will never see the ads.
My parents have an Amazon Fire TV, and when I go to their house and have to use it, it drives me insane. Large ad carousels at the top, banner ads as you scroll, full rows of sponsored apps. Full-screen ads for random Amazon products when you pause any show you are watching. Everything you watch on Amazon’s streaming service has minute-long unskippable ads. Sometimes when you turn it on, Alexa will just verbally read you ads.
This thread seems to have a lot of people that love the iPhone mini (me included - I still use my 12 mini).
But from all reports that you can find with a quick search it seems clear that it did not sell well by Apple standards.
I would love them to bring it back and I’m not sure what it is about the Hacker News crowd that makes this phone over-represented. Maybe the tech crowd also uses laptops more, so we think of phones as our “small device” and use other devices more as appropriate?
Yeah. The question I'm trying to answer is not "does it make sense for Apple to make a small phone?", but rather "does it make sense for anyone to make a small phone?" I'm using the 13 Mini's sales data as evidence, because it is the one and only small phone made in the past decade or so.
I understand why you'd reach for that data, not a ton of other alternatives... But I'm not convinced that an arbitrarily chosen brand could achieve those sales figures. Especially if it was a new or no-name brand that didn't have a proven track record with software updates and hardware build quality.
Maybe I'm just incredibly naive but I have this small hope that we'll see a return to smaller phones that are trifolds for when you need the real estate.
I tend to like smaller phones as well, but even comparing the Pixel 9 Pro vs Pixel 9 Pro XL used markets, it seems really hard to find non-XL versions. I would totally believe that the XL is a far more popular model, unfortunately for the rest of us.
It is interesting seeing the difference in model perception between “normal” people and the Hacker News crowd.
My perception is that a huge percentage of the mass market just like OpenAI because they were the first to market and still have the most name recognition. Even my coworker who works in DevOps says “Gemini sucks, Claude sucks” even though he has never once tried either of them and has never looked at a single benchmark comparison.
I’ve noticed a new genre of AI-hype posts that don’t attempt to build anything novel, just talk about how nice and easy building novel things has become with AI.
The obvious contradiction being that if it was really so easy their posts would actually be about the cool things they built instead of just saying what they “can” do.
I wouldn’t classify this article as one of those, since the author does actually create something, but LinkedIn is absolutely full of that genre of post right now.
> My theory is that Apple specifically wanted an effect that can’t be replicated in webviews
This makes a lot of sense to me. I was also under the impression that all these lighting effects are rather computationally expensive. This could encourage people to upgrade devices, and it makes the design hard to replicate on other brands’ less powerful hardware.
I typically get AppleCare on my phone and then get a new screen and battery right before the window is up. AppleCare is cheaper than the cost of those repairs plus I have the added peace of mind that if something bad does happen I have AppleCare. I don’t renew it as part of the monthly plan though.
I also don’t use a case or screen protector on my phone fwiw