Hacker News

Those problems have been addressed, but I really shouldn't say anything more than that.


It's actually an interesting problem most people ignore... which, if you have enough swap, is perfectly reasonable; but if you are running without swap, these problems can become /much/ bigger. The funny thing is, memory overcommit both becomes a much larger risk /and/ confers more benefit when you have no swap.

Now, if you've dealt with this already and just keep your techniques secret, you know all this, but I'm going to yammer on anyhow, just 'cause I think it's an interesting subject; feel free to participate or not.

So in a modern virtual memory system there are lots of situations where more memory is allocated than used. If I have some massive 200 megabyte webapp running under mod_perl or mod_php or what have you, and I have apache fork 1024 processes (well, in that case I should probably be using threads... but I digress), even though it looks like I should be using 200 gigs of ram, I'm not. fork uses copy on write; it only copies the data that changes.

This is pretty cool... I get to use 200 gigs of ram, but I only actually have to buy 400 megabytes or something. The problem is that it's impossible for the virtual memory system to tell ahead of time how much of that copy-on-write memory will actually be copied. Without memory overcommit, the kernel keeps track of every fork and malloc and counts the maximum memory that could be needed, assuming no copy-on-write savings, and it will fail your fork or malloc when that commitment reaches the total amount of swap plus ram. If you are trying to run 1024 identical 200 megabyte processes on a box with 8 gigs of ram and no swap, this really sucks. So memory overcommit, especially when you don't have a lot of swap, is a nice thing to have.

But, you say, what if my 1024 identical 200MiB processes become non-identical? What if they start changing their memory, triggering those copies, and start using more than I have ram and swap in the box? On most linux systems, you'll get the oom-killer, which will randomly (well, not randomly, but it seems that way sometimes) kill a process. Sometimes it kills something unimportant... sometimes it kills the webapp the box was built to house... sometimes it kills some background system process you were depending on. You can tweak this to hell and back, but any way you slice it, the oom-killer is bad news. Another option is to tell the computer to just panic when it runs out of memory.

Now, if you turn memory overcommit off... well, your landings are much softer. Without memory overcommit, the only time the box runs out of ram is at allocation time: malloc returns an error, and (hopefully) the program asking for more ram handles it gracefully.

The real problem with turning memory overcommit off is that if you are trying to run 1024 identical 200MiB processes on a box with only 4 gigabytes of combined ram and swap, your 21st fork will fail, even though, thanks to the magic of copy on write, there is plenty of unused ram to go around.

Now, the advantage to having a lot of swap on a system without memory overcommit is that the virtual memory manager is pretty smart: while the kernel can /commit/ swap to back an allocation, as long as the magic of copy on write leaves it with free physical memory, it will use that physical memory. If it turns out the kernel was too aggressive about committing memory, well, it hits swap, and depending on how much of your ram is seldom used, your box slows down by quite a lot. Of course, if you end up swapping ram that is actually accessed very often, most people agree the box might as well have just crashed or frozen. The disagreement, I think, comes when there is ram allocated for some seldom-used library or the like: the virtual memory system swaps that out to disk and gives that bit of physical ram to something else that needs it.



