Hacker News | GaggiX's comments

The article should analyze Rodin, which in my opinion is probably the best at generating 3D assets.

He is working on the web browser.

Why? A model that corrects your errors is a powerful tool for learning the language, much better than just writing the phrase in your native language.


Your opinion of the two subreddits seems to be influenced simply by whether or not they like your project.

A project that you spam in every one of your comments.


I used to "spam" (as you call it) about nuclear fission on Hacker News. But this is the wrong crowd. Hopelessly wrong.

Poison Fountain is top of mind currently so it's understandable I talk about it constantly. Even to my wife. Also I think it's highly relevant to the excellent Harper's article we're reading today.

Whether the Redditors "like the project or not" reflects whether or not they think there is a problem with mindlessness.

What they actually say is almost immaterial. Either it's FUD about malware or illegality or something they imagined without evidence about how easy the poison is to filter. These fictions are just a manifestation of their opposition to the idea.

You can see that among the bot-heads on r/programming (perhaps forced to embrace mindlessness by career considerations) there's nothing that can be said without attack. A dozen downvotes immediately. They actually logged into Hacker News and posted FUD directly to the HN post I linked to. Spectacular.

The opposite is true on r/hacking. Except for a few in opposition (some of whom did unsuccessfully attempt to DDoS the fountain), most people sympathize and agree. They don't want to be dependent on Sam Altman or Elon Musk for their cognition.


For fun I'm imagining a future where you could buy an ASIC with a hard-wired 1B-parameter LLM in it for cents, and it could be used everywhere.


This is meant for openclaw agents; you are not going to see a ChatGPT or Claude User-Agent. That's why they show it on a normal blog page and not just at /llms.txt.


In tirreno (our product), we catch every resource request on the server side, including llms.txt and agents.md, to get the IP that requested it and the UA.

What I've seen from ASNs is that visits come from GOOGLE-CLOUD-PLATFORM (not from Google itself) and OVH. Based on the UA, the visitors are WebPageTest and BuiltWith, and zero LLMs, judging by both ASN and UA.

1. https://github.com/tirrenotechnologies/tirreno
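For illustration, here is a minimal sketch of that kind of server-side capture. This is not tirreno's actual code; the `record_hit` helper, the `Hit` record, and the `WATCHED_PATHS` set are all hypothetical names invented for this example.

```python
# Hypothetical sketch: log the client IP and User-Agent whenever an
# AI-related resource path (llms.txt, agents.md, ...) is requested.
from dataclasses import dataclass

WATCHED_PATHS = {"/llms.txt", "/agents.md", "/robots.txt"}

@dataclass
class Hit:
    ip: str
    path: str
    user_agent: str

def record_hit(log: list, ip: str, path: str, headers: dict) -> bool:
    """Append a Hit for watched paths; ignore everything else."""
    if path not in WATCHED_PATHS:
        return False
    log.append(Hit(ip=ip, path=path, user_agent=headers.get("User-Agent", "-")))
    return True

hits: list = []
record_hit(hits, "198.51.100.7", "/llms.txt", {"User-Agent": "WebPageTest"})
record_hit(hits, "203.0.113.9", "/index.html", {"User-Agent": "Mozilla/5.0"})
print(len(hits), hits[0].user_agent)  # only the watched path was recorded
```

From the recorded IPs you could then do an ASN lookup offline to see which networks (cloud providers vs. residential) the requests actually came from.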


Openclaw agents use the same browser and ASN that you and I use. Also, the llms.txt (as shown) is displayed as a normal blog page so it can be discovered by agents without them having to fetch /llms.txt at random.


When I look at llms.txt, I see every request, and there are no ASNs from residential networks and no browser UAs.
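As an illustration of that kind of log triage, here is a rough, hypothetical heuristic (my own sketch, not anyone's production filter) for separating browser-like UAs from named tools and crawlers:

```python
# Hypothetical heuristic: does a logged User-Agent string look like a
# normal browser (what an openclaw-style agent would send), or like a
# named crawler / testing tool?
BROWSER_MARKERS = ("Mozilla/", "Chrome/", "Safari/", "Firefox/")
KNOWN_TOOLS = ("WebPageTest", "BuiltWith", "GPTBot", "ClaudeBot")

def looks_like_browser(ua: str) -> bool:
    # A UA that names a known tool is not a browser, even if it also
    # carries browser-like tokens for compatibility.
    if any(tool in ua for tool in KNOWN_TOOLS):
        return False
    return any(marker in ua for marker in BROWSER_MARKERS)

print(looks_like_browser("Mozilla/5.0 (X11; Linux) Chrome/124.0"))  # True
print(looks_like_browser("WebPageTest"))  # False
```

A real pipeline would pair this with an ASN lookup on the source IP, since a browser UA coming from a cloud ASN is still suspicious.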


For the third time: on Anna’s Archive they have displayed the llms.txt as a standard blog post, not hidden at /llms.txt, so that agents can notice it without having to fetch /llms.txt at random. That's why it's meant for openclaw agents and not OpenAI/Anthropic crawlers.


I don’t understand your reasoning.

Are you suggesting that openclaw will magically infer a blog post url instead? Or that openclaw will traverse the blog of every site regardless of intent?

Anyway, AA do provide it as a text file at /llms.txt, no idea why you think it is a blog post, or how that makes it better for openclaw.


>AA do provide it as a text file at /llms.txt, no idea why you think it is a blog post

It's a blog post; it's shown as the first item on Anna’s Blog right now, and as I said in my first comment it's also available at /llms.txt.

>Are you suggesting that openclaw will magically infer a blog post url instead? Or that openclaw will traverse the blog of every site regardless of intent?

If an openclaw agent decided to navigate AA, it would see the post (as it is shown on the homepage) and decide to read it, since it is titled "If you’re an LLM, please read this".


My point is about LLM crawlers specifically.


LLM crawlers aren't really a thing, at least not in the "they have agency over what they're crawling and read what they crawl" way.


It would be cool; right now the mini and nano models are stuck at GPT-5.


https://cosmo.tardis.ac/files/2026-02-12-az-rl-and-spsa.html

A response from the author of Viridithas; there is a link to this engine on her webpage.


Thanks! I've put that link in the toptext as well.


Her?


I read "girl.surgery" and guessed.


Homepage (via judicious Cmd-F):

> I use she/her pronouns


Warning: don't open the homepage at work or in public.


Curating a benchmark for reverse-engineering functions doesn't seem like a bad idea, actually.


There is only one location shown in the images; in the past there were several, and much clearer ones. I cannot imagine how difficult it must be to find if even Europol cannot find it in 2026.

Old thread for context: https://news.ycombinator.com/item?id=19469681

