Hacker News

It is, sometimes. But a cache hit in 50-150 ns beats the pants off a round trip to the backing database or microservice, which might take tens of milliseconds. That's roughly a 100,000-1,000,000x speedup.
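A quick sanity check on those orders of magnitude (the latency figures here are the rough assumptions from the comment, not measurements):

```python
# Rough latency ranges, taken from the comment above (illustrative assumptions).
cache_hit_ns = (50, 150)      # in-process cache hit, nanoseconds
db_round_trip_ms = (10, 100)  # DB/microservice round trip, "tens of milliseconds"

# Most pessimistic speedup: slowest cache hit vs. fastest round trip.
low = db_round_trip_ms[0] * 1_000_000 / cache_hit_ns[1]
# Most optimistic speedup: fastest cache hit vs. slowest round trip.
high = db_round_trip_ms[1] * 1_000_000 / cache_hit_ns[0]

print(f"speedup: ~{low:,.0f}x to ~{high:,.0f}x")  # → speedup: ~66,667x to ~2,000,000x
```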

It's the same reason you'd never find a modern CPU without multiple layers of caching.



That's not the kind of caching GP was referring to, at least not to my reading. I think you'd be hard pressed to find any application-level caching that could serve up a cache hit in the nanosecond range. At best you're looking at sub-millisecond range, and that's when the cache lives in memory on the same hardware. Application caching on any modern cloud provider is almost always going to be a network call, and best case you're looking at sub-10ms response time. And yes, while I definitely agree that it beats hundreds of ms by a wide margin, it can very easily bite you in the ass if you're not very careful about what gets cached. Caching is a performance hack, pure and simple.
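The "be careful about what gets cached" point usually comes down to staleness: every cached entry needs some expiry policy so callers eventually see fresh data. A minimal sketch of the in-process, same-hardware case being described, using a simple TTL (all names here are illustrative, not from any particular library):

```python
import time

class TTLCache:
    """Minimal in-process cache with per-entry expiry (illustrative sketch)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # stale: evict so callers re-fetch
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl_seconds=0.05)
cache.put("user:42", {"name": "alice"})
assert cache.get("user:42") == {"name": "alice"}  # fresh: served from memory
time.sleep(0.06)
assert cache.get("user:42") is None  # expired: caller falls through to the backing store
```

Even a toy like this shows where the bite happens: pick the TTL too long and you serve stale data, too short and you're back to paying the round trip on most requests.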


True, I was only citing the figure for memory access times on x86 (according to a quick google). There's certainly going to be more overhead depending on how the cache is implemented: whether it's in memory, swapped out, deliberately on disk, etc.

That said, I'd call it a fundamental architecture requirement (and one you can't really avoid anyway), given fundamental limitations imposed by the speed of light.



