I'd love to see a writeup of the issues and challenges you've faced with this persistence strategy as the site grew: more detail on what you decided to lazy load, why, how, and what impact it had.
Technically speaking, I find Hacker News's persistence strategy one of the most interesting things about the implementation.
I use a similar strategy for my blog, and just from playing with larger datasets I've certainly run into hard limits on what seems acceptable.
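For what it's worth, here's a minimal sketch of the kind of lazy-loading persistence I mean: everything is served from an in-memory cache, and an item is only read from disk the first time it's requested. This is purely illustrative; the `ItemStore` class, the one-JSON-file-per-item layout, and the write-through policy are my own assumptions, not how HN actually does it.

```python
import json
from pathlib import Path


class ItemStore:
    """In-memory cache lazily backed by per-item JSON files (hypothetical layout)."""

    def __init__(self, data_dir):
        self.data_dir = Path(data_dir)
        self.data_dir.mkdir(parents=True, exist_ok=True)
        self.cache = {}  # item_id -> deserialized item (or None if absent on disk)

    def get(self, item_id):
        # Serve from memory; on a miss, lazily load the item from disk.
        if item_id not in self.cache:
            path = self.data_dir / f"{item_id}.json"
            self.cache[item_id] = json.loads(path.read_text()) if path.exists() else None
        return self.cache[item_id]

    def put(self, item_id, item):
        # Write-through: update memory and persist to disk immediately,
        # so a restart can rebuild the cache lazily from disk.
        self.cache[item_id] = item
        (self.data_dir / f"{item_id}.json").write_text(json.dumps(item))
```

The hard limit I keep hitting with this pattern is simply that the working set has to fit in RAM once it's been touched, since nothing here ever evicts.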
I'd have thought that most of HN's traffic hits the front page and its associated comments. I could well be wrong, but given that, I'd have thought 2GB was plenty and wasn't the bottleneck.