Hacker News | boyter's comments

It's pretty easy to step over those limits.

Also, localhost (and presumably this) is good for validating your logic before you throw in roles, networking, and everything else that can be an issue on AWS.

Confirm it runs there first; 99% of the time the issue when you deploy is something in the AWS config, not your logic.
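For this local-first workflow, the endpoint switch can live in one place so the same code runs against localhost or real AWS. A minimal sketch (the env var names and the localhost:4566 port are assumptions of mine, not anything from the linked tool):

```python
import os

def endpoint_url():
    """Return a local emulator endpoint when AWS_LOCAL is set,
    otherwise None so the SDK talks to real AWS."""
    if os.environ.get("AWS_LOCAL"):
        # Hypothetical default; point this at whatever emulator you run.
        return os.environ.get("AWS_LOCAL_ENDPOINT", "http://localhost:4566")
    return None

# With boto3 you would then do something like:
#   boto3.client("s3", endpoint_url=endpoint_url())
# and the business logic never needs to know which environment it is in.
```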


>> "It's pretty easy to step over those limits."

Exactly, especially when people are starting out and don't have a clear understanding of the inner workings of the system for whatever reason. Jobs are getting harder to find nowadays, and if you make one mistake while learning, you either pay or the learning stops.


All of the above.

Stop reading the news. It makes you depressed or angry. Go hiking. Walk on the beach. Play with a dog or your children. Climb a tree.

Leave the slave slab phone at home, or delete every news and social app. Do not browse the web. Take a book and read.

It will be hard at first. Then it gets easier. Best thing I ever did.

Reminder. What passes for news today wouldn’t have registered for most people 100 years ago.


Seconding this. I only heard about the Iran bombing 23 hours after it happened. I was playing SimCity 2013 with my 7yo. I'm reading older books, playing older games, exercising, and keeping a loose eye on AI so I don't fall too far behind, but I always wait about two months before adopting anything new (like Claude Code, new tools, etc.). I know that's pretty superficial, but the goal is staying sane, right?


Other people did the same as you and are now stranded halfway around the world waiting for flights.

I get the gist of it, but I think being informed is useful. The best way is having balanced access to quality news. It does not need to be constant outrage.


If I knew I was traveling to a region with a potential conflict, I would do my due diligence and read the news. For the vast majority of people, reading news is a net negative.


You may be right. I just wanted to add that there are probably people stranded in Dubai right now because they booked a Berlin–Sydney flight with a stopover six months ago. Sometimes you simply can’t predict these things.

Yes, staying informed can help in some cases. But there are also many situations where it wouldn’t have made a difference. The amount of noise you have to sift through to “stay updated” is huge. At some point, it becomes a trade-off: consume a constant stream of news to maybe avoid a rare edge case — or tune it out and accept that very occasionally you might get unlucky.


> Reminder. What passes for news today wouldn’t have registered for most people 100 years ago.

This is a great point.


> Reminder. What passes for news today wouldn’t have registered for most people 100 years ago.

Up to a point. There are some things that should not be ignored, e.g., "Trump says he's not mulling a draft executive order to seize control over elections":

* https://www.pbs.org/newshour/politics/trump-says-hes-not-mul...

"Trump, seeking executive power over elections, is urged to declare emergency":

* https://archive.is/https://www.washingtonpost.com/politics/2...


> should not be ignored, e.g., "Trump says he's not mulling ...

The prior news that "sources close to people who say they hear from a guy who went to school with someone who said ... that trump said he was mulling thinking about drafting an executive order ..." should be ignored, the follow up should be ignored and the inevitable follow to the follow up should also be ignored.

These are attention seeking outbursts at best, clickbait, lies and propaganda at worst.


If you have a high frame rate to start with it’s pretty nice and feels smoother. But a low frame rate turned into a high one looks good but feels laggy.

So arguably you never need frame gen for a game, since it only really works when it’s already pretty nice.


FPS gets increased but latency does not improve, and that's what's important.
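Some rough arithmetic makes the point, under the simplifying assumption that frame generation inserts one interpolated frame per real frame and buffers about one real frame to do the interpolation:

```python
def framegen_stats(real_fps, inserted_per_real=1):
    """Toy model: displayed fps scales with inserted frames, but
    input is only sampled on real frames, and interpolation adds
    roughly one real frame of buffering delay."""
    real_frame_ms = 1000.0 / real_fps
    displayed_fps = real_fps * (1 + inserted_per_real)
    latency_ms = real_frame_ms * 2  # sample interval + interpolation buffer
    return displayed_fps, latency_ms

print(framegen_stats(30))   # 60 fps displayed, but ~67 ms of lag
print(framegen_stats(120))  # 240 fps displayed, ~17 ms: already smooth
```

So doubling 30fps to 60fps looks smoother on screen while feeling like 30fps (or worse) in the hands, which matches the "looks good but feels laggy" observation above.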


Gamers chased high FPS, that's what they got.


Chased the wrong thing. It’s the 1% lows that matter more generally.


You will never ever get decent 1% lows in most titles, the software stack is architecturally fucked in the popular engines and can’t do it. You would need a CPU that’s literally 100x faster than today’s top models for it to be able to compile shaders on-demand within a single frame without hitching. (Or maybe it’s more accurate to say that there’s a massive gulf between what the hardware/drivers need - compiled pipeline objects built/known ahead of time - versus what game engines are doing, building pipelines on the fly on demand, surfacing new permutations frame-by-frame)


Why not compile asynchronously ahead of time?


This requires knowing what to compile, which these engines don't really do, because the necessary data is pooped out by arbitrary game logic / scripts. That's why precompiling shaders in e.g. UE5 basically relies on running the engine through a pre-recorded gameplay loop and then making a list of the shaders/PSOs used; those are then pre-compiled. Any shader not used in that loop will cause stutter. A newer UE5 technique is to have heuristics which try to guess which PSOs might be needed ahead of time.
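That record-and-replay workflow can be sketched abstractly; here is a toy Python model of the idea (names and structure hypothetical, not the actual UE5 API):

```python
class PSOCache:
    """Toy model of the record/replay PSO precompilation workflow."""
    def __init__(self):
        self.compiled = {}    # pso_key -> compiled pipeline
        self.recorded = set()

    def request(self, pso_key, record=False):
        if record:
            # Recording pass: note every PSO the gameplay loop touches.
            self.recorded.add(pso_key)
        if pso_key not in self.compiled:
            # In a real engine this compile happens at draw time and
            # causes the hitch; here it is instant.
            self.compiled[pso_key] = "compiled:" + pso_key
        return self.compiled[pso_key]

    def precompile_recorded(self):
        # Shipping build: compile everything seen during recording at
        # load time, so those PSOs never hitch during gameplay.
        for key in self.recorded:
            self.request(key)

# Recording pass over a pre-recorded gameplay loop.
recording = PSOCache()
for key in ["opaque/rock", "skinned/npc"]:
    recording.request(key, record=True)

# Shipped build pre-warms only what the recording saw; a PSO never
# seen in the loop ("translucent/rare_vfx") would still compile on
# demand and cause a stutter.
shipped = PSOCache()
shipped.recorded = recording.recorded
shipped.precompile_recorded()
```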

There's this article from Unreal on the topic: https://www.unrealengine.com/en-US/tech-blog/game-engines-an...

If you read their proposed solutions, it's quite clear they only have patchy workarounds, and the inability to actually pre-compile the needed PSOs and avoid shader and traversal stutter is architectural. It should be noted that these engines are also stuttering on console, but it's not as noticeable since performance is generally much lower anyway.


When getting rid of actual performance bottlenecks is too hard or costs too much, just make something up.

XeSS is actually pretty great. I played Talos Principle 2, a UE5 game, on the Steam Deck at 800p 30fps thanks to XeSS.


Not familiar with that tool. What follows is my best guess based on what I am seeing.

Serena looks to be a precision tool. Since it uses LSP, it's able to replicate a lot of what an IDE would allow: an IDE for LLMs.

cs, by contrast, is more of a discovery tool. When you're trying to find where the work actually happens it can help you, and since there is no index involved you can get going instantly on any codebase while other tools are still indexing.

You could use cs to instantly find where the complexity lies, and then use Serena to modify it.


Seems to be a load issue, hopefully easily resolved

    Request URL https://lightwave.so/api/register/ephemeral
    Request Method POST 
    Status Code 429 Too Many Requests
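Client-side, a 429 is normally worth retrying with backoff. A generic sketch (the Retry-After handling assumes the server sends that header, which is not guaranteed here):

```python
import time
import urllib.request
import urllib.error

def backoff_waits(attempts, base=1.0):
    """Exponential backoff schedule: 1s, 2s, 4s, ..."""
    return [base * (2 ** i) for i in range(attempts)]

def post_with_backoff(url, data, attempts=5):
    """Retry a POST that hits 429 Too Many Requests, honouring
    Retry-After when the server sends it, else exponential backoff."""
    for wait in backoff_waits(attempts):
        try:
            req = urllib.request.Request(url, data=data, method="POST")
            return urllib.request.urlopen(req)
        except urllib.error.HTTPError as e:
            if e.code != 429:
                raise
            retry_after = e.headers.get("Retry-After")
            time.sleep(float(retry_after) if retry_after else wait)
    raise RuntimeError("still rate limited after retries")
```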


People who give away things like this tend to be good people. As such, when someone comes asking for help or new things, they are inclined to help.

Your response is where it should go when things get rude, but you don't want to start there.


I have projects online. You can use them, or not. Sometimes people file issues that I think are good and fix them.


I had never seen this before, although I am now trying out Lagrange and seeing what it can do.

Sorry for hijacking the thread on what is a great post, but is Gemini common these days? This is literally the first time I have ever seen it, and it seems fairly interesting, albeit deliberately limited.

Annoyingly, the learn-more link https://gemini.circumlunar.space/ is now dead, as possibly might be the protocol itself.


Many HN users can’t seem to get past some minor issues about the spec. Despite this, it’s a working protocol that has a critical mass of real world users, mostly bloggers (which is what the spec author mostly had in mind). One of my blogs on Gemini has a “like” button and it gets several clicks a week, indicating that people are indeed reading my content in Gemspace.


Really? Is it actually true that they cannot get past the spec?

Because I followed the Wikipedia article to the article on The Register. Whoever wrote the Wikipedia entry summarized El Reg's entire article, in one sentence, as Gemini having been criticized for excluding people. But that's only what the headline says, for clickbait. The actual article body paints a very different picture, including the rather subtle (by Register standards) rebuttal of that criticism, pointing out that WWW browsers and the Internet are not in an everything-is-a-nail-and-I-have-this-hammer situation.

So I wonder how true it really is that there's this wave of criticism, and how much of it is instead inferred from people reading headlines and nothing beyond them. Like the Wikipedia author did.

That said, having discovered the existence of Gemini today, I am now wondering how much of a doddle it would be to add a gemini UCSPI-TCP server (which would obviously have to sit behind something like Hoffmann's sslserver) to Bernstein publicfile. The publicfile way would of course have a separate off-line tool to turn index.gopher into index.gemini, once at content generation time instead of over and over at runtime.
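The offline conversion is indeed fairly mechanical. A rough sketch of the gophermap-to-gemtext step (handling only info lines and type 0/1 links, which is an assumption about how simple the maps are):

```python
def gophermap_to_gemtext(gophermap, host):
    """Convert simple gophermap lines to gemtext. Info lines ('i')
    become plain text; file ('0') and menu ('1') items become links.
    Other item types are skipped in this sketch."""
    out = []
    for line in gophermap.splitlines():
        if not line:
            out.append("")
            continue
        itemtype, fields = line[0], line[1:].split("\t")
        display = fields[0]
        if itemtype == "i":
            out.append(display)
        elif itemtype in ("0", "1") and len(fields) >= 2:
            selector = fields[1]
            out.append("=> gemini://%s%s %s" % (host, selector, display))
    return "\n".join(out)
```

Run once at content-generation time, in the publicfile spirit, rather than over and over per request.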

Hmmm.

Stop giving me distractions, people! I'm supposed to be getting the next djbwares running on NetBSD. (-:

P.S. I have had the GOPHER server that I added to publicfile running for some time. I use it to publish a not-very-secret list of packages and source archives. It gets some quite odd requests in its logs.


IIRC it can't even show images, which I see as a big issue. I'm usually a fan of alt protocols.


There are clients that can preload embedded images in 2025. Lagrange can handle this via config.


Clients do whatever they want. The protocol doesn't prevent it.


Nice, I have to figure out how to add that to my site.


The cert has expired, but the site is still there if you click through the warnings.

Edit: no it just redirects to https://geminiprotocol.net which seems to be the new official site


Perhaps adding some guide on how to hook this up to... well anything would be good :)

There is a lack of guidance for https://github.com/mark3labs/mcp-go/ (which this is using) as well, so while everything is there, it's hard to know how to make it do anything.


Ignoring the open-source vs free software discussions that are bound to come about from this: well said. Large companies exploiting developers, and abuse towards maintainers, is probably my biggest bugbear when it comes to this.

In fact I have a similar post https://boyter.org/posts/the-three-f-s-of-open-source/ which I redirect people towards if they become aggressive towards me when I am trying to help them. Thankfully I have only had to use it a handful of times.


Crawling, incidentally, I think is the biggest issue with making a new search engine these days. Websites flat out refuse to support any crawler [other] than Google, and Cloudflare and other protection services and CDNs flat out deny access to newcomers. It is not a level playing field.

I wrote the above some time ago. I think it's even more true today. It's practically impossible to crawl the way the bigger players do, and with the increased focus on legislation in this area it's going to lock out smaller teams even faster.

The old web is dead, really. There needs to be a move to more independent websites. Thankfully we are starting to see more of this, like the linked searchmysite discussed earlier today https://news.ycombinator.com/item?id=43467541


That's a good point. In search we have Google as a monopoly, and since a big percentage of sites only want to be crawled by them, it reinforces the monopoly. So a lot of people complain about bots not following robots.txt, but if you follow it to the letter it's impossible to make anything useful. Also, AFAIK robots.txt doesn't have any legal standing.
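For what it's worth, following robots.txt to the letter is the easy part; Python even ships a parser in the standard library. The hard part is the judgment calls around everything the file doesn't cover:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse rules directly rather than fetching over the network.
rp.parse("""
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow:
""".splitlines())

# The wildcard rule blocks everyone else from /private/,
# while the Googlebot entry allows it everything.
print(rp.can_fetch("MyCrawler", "https://example.com/private/x"))
print(rp.can_fetch("Googlebot", "https://example.com/private/x"))
```

Rules like this hypothetical one, where Googlebot is explicitly allowed everything and everyone else is restricted, are exactly the monopoly-reinforcing pattern described above.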

