
I think people are overindexing on this being about "bad hardware".

We have long known that single-bit errors in RAM are basically normal in modern computers. Google did research in 2009 to quantify the rate of error events in commodity DRAM: https://static.googleusercontent.com/media/research.google.c...

They found 25,000 to 70,000 errors per billion device hours per Mbit, with more than 8% of DIMMs affected by errors per year.
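To get a feel for those numbers, here's a back-of-the-envelope conversion of the quoted rate into errors per hour for a single machine. The 8 GiB memory size is an assumption for illustration, and the paper found errors heavily concentrated on a minority of DIMMs, so the fleet-wide average overstates what a typical healthy machine sees:

```python
# Convert "errors per billion device-hours per Mbit" into expected
# errors per hour for one machine, using the figures quoted above.

FIT_LOW, FIT_HIGH = 25_000, 70_000   # errors per 1e9 device-hours per Mbit
MBITS = 8 * 1024 * 8                 # 8 GiB expressed in Mbit (65,536)

def errors_per_hour(fit: float, mbits: int) -> float:
    """Expected errors/hour = rate * Mbit / 1e9 device-hours."""
    return fit * mbits / 1e9

low = errors_per_hour(FIT_LOW, MBITS)    # ~1.6 errors/hour
high = errors_per_hour(FIT_HIGH, MBITS)  # ~4.6 errors/hour
print(f"{low:.1f} to {high:.1f} expected errors per hour")
```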

At the time, they did not see an increase in this rate in "new" RAM technologies, which I think was DDR3 back then. I wonder if there has been any change since.

A few years ago, I changed from putting my computer to sleep every night, to shutting it down every night. I boot it fresh every day, and the improvements are dramatic. RAM errors will accumulate if you simply put your computer to sleep regularly.




I used to partake in all RAM discussions online. Here, reddit, every technical hardware forum and anywhere workstations were being talked about.

The sentiment was always that ECC is a waste and a scam. My goodness, the unhinged posts from people who thought it was a trick and couldn't fathom that without it you don't even know your bits are being flipped. "It's a rip-off," without even looking and seeing that often the price premium was just that of the extra chip.

I've been discussing it for 20 years, since the first Mac Pro, and people just did not want to hear that it had any use. Even after the Google study.

Consumers giving professionals advice. It was the same with workstation graphics cards.


There is DRAM which is mildly defective but got past QC.

There are power supplies that are mildly defective but got past QC.

There are server designs where the memory is exposed to EMI and voltage variations that push it ever so slightly out of spec.

Hardware isn't "good" or "bad", almost all chips produced probably have undetected mild defects.

There are a ton of causes for bitflips other than cosmic rays.

For instance, that specific Google paper you cited found a 3x increase in bitflips as datacenter temperature increased! How confident are you that the average Firefox user's computer is as temperature-controlled as a Google DC?

It also found significantly higher rates as RAM ages! There are a ton of physical properties that can cause this, especially when running 24/7 at high temperatures.


Every so often, when I'm doing refactoring work and my list of worries has shrunk enough that I can start thinking of new things to worry about, I worry about how, as we reduce the accidental complexity of code and pack the critical bytes of working memory tighter and tighter, we are leaning very hard on very few bytes and hoping none of them ever flips.

I wonder sometimes if we shouldn't be doing like NASA does and triple-storing values and comparing the calculations to see if they get the same results.
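The NASA-style approach described here is usually called triple modular redundancy (TMR): keep three copies and majority-vote on read. A minimal sketch, with an illustrative class name (not any real library); note that in pure Python this is only a demonstration of the idea, since a real bit flip could just as easily corrupt the interpreter itself:

```python
# Triple modular redundancy sketch: store three copies of a value
# and recover via majority vote, so a single flipped copy is outvoted.

class TripleStored:
    def __init__(self, value: int):
        self._copies = [value, value, value]

    def write(self, value: int) -> None:
        self._copies = [value, value, value]

    def read(self) -> int:
        a, b, c = self._copies
        # Any two agreeing copies outvote a single corrupted one.
        if a == b or a == c:
            return a
        if b == c:
            return b
        raise RuntimeError("all three copies disagree; unrecoverable")

v = TripleStored(42)
v._copies[1] ^= 1 << 7   # simulate a single bit flip in one copy
assert v.read() == 42    # majority vote still recovers the original
```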


Might be worth doing the kind of "manual ECC" you're describing for a small amount of high-importance data (e.g., the top few levels of a DB's B+ tree stored in memory), but I suspect the biggest win is just to use as little memory as possible, since the probability of being affected by memory corruption is roughly proportional to the amount you use.
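The cheapest version of that "manual ECC" is detection rather than correction: pair the high-importance data with a checksum and verify it on every read. A sketch using the stdlib `zlib.crc32` (the wrapper functions are made up for illustration):

```python
# Guard a high-importance in-memory value with a CRC so that silent
# corruption is at least detected, even if it can't be corrected.

import zlib

def protect(data: bytes) -> tuple[bytes, int]:
    """Return the data paired with its CRC32 checksum."""
    return data, zlib.crc32(data)

def checked_read(data: bytes, crc: int) -> bytes:
    """Verify the checksum before trusting the data."""
    if zlib.crc32(data) != crc:
        raise RuntimeError("checksum mismatch: possible memory corruption")
    return data

payload, crc = protect(b"root page of the B+ tree")
assert checked_read(payload, crc) == b"root page of the B+ tree"
```

Unlike TMR this costs only a few bytes of overhead per protected object, which fits the "use as little memory as possible" point: the less memory you guard, and the less memory you use overall, the smaller your exposure.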

It'd be interesting to see how your experience would differ if you put it to sleep at night after switching to ECC RAM.

Unfortunately, not that many consumer platforms make this possible or affordable.


Most computers running Firefox won't have ECC RAM.


