Is it normal for Node.js' RSS (Resident Set Size) to grow with each request, until reaching some cap?

I’ve noticed that RSS (Resident Set Size) of my node.js app is growing over time, and considering I’m having a “JS Object Allocation Failed – Out of Memory” error on my server, it seems a likely cause.

I set up the following very simple Node app:

var express = require('express');
var app = express();

// A minimal handler and listener so the app actually serves on port 8888:
app.get('/', function (req, res) { res.send('hello'); });
app.listen(8888);

By simply holding down the “refresh” hotkey at http://localhost:8888/ I can watch the RSS/heap/etc. grow until RSS gets well above 50 MB (before I get bored). If I wait a few minutes and come back, the RSS drops – presumably the GC has run.

I’m trying to figure out if this explains why my actual Node app is crashing. My production app quickly hits about 100 MB RSS, and when it crashes it is generally between 200 MB and 300 MB. As best as I can tell, this should not be too big (Node should be able to handle roughly 1.7 GB of heap, I believe), but nonetheless I’m concerned that the RSS size on my production server trends upwards (the falloffs represent crashes):
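One way to track this trend from inside the process is to sample `process.memoryUsage()` on an interval. This is a minimal sketch, not from the original post; the function name, interval, and output format are illustrative choices:

```javascript
// Sample process memory periodically so the RSS trend can be graphed
// alongside crashes. rss is what the OS has resident; heapUsed is what
// V8's heap is actually using.
function sampleMemory() {
  var m = process.memoryUsage();
  var toMB = function (n) { return Math.round(n / 1048576) + 'MB'; };
  console.log(new Date().toISOString(),
      'rss=' + toMB(m.rss), 'heapUsed=' + toMB(m.heapUsed));
  return m;
}

// unref() so the timer never keeps the process alive on its own.
setInterval(sampleMemory, 60000).unref();
```

Logging both numbers matters here: if `heapUsed` stays flat while `rss` climbs, the growth is likely unswept pages or off-heap memory rather than a JS-level leak.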

[Graph: production RSS trending upward over time, dropping at each crash]


Solution 1

This question is quite old and still has no answer, so I’ll throw in mine, which references a 2013–2014 blog post by Jay Conrod, who “worked on optimizing the V8 JavaScript engine for mobile phones”.

V8 tries to be efficient when collecting garbage, and for that it uses incremental marking and lazy sweeping.

Basically incremental marking is responsible for tracking whether your objects can be collected.

Incremental marking begins when the heap reaches a certain threshold size.

Lazy sweeping is responsible for collecting the objects marked as garbage during incremental marking and performing other time consuming tasks.

Once incremental marking is complete, lazy sweeping begins. All objects have been marked live or dead, and the heap knows exactly how much memory could be freed by sweeping. All this memory doesn’t necessarily have to be freed up right away, though, and delaying the sweeping won’t really hurt anything. So rather than sweeping all pages at the same time, the garbage collector sweeps pages on an as-needed basis until all pages have been swept. At that point, the garbage collection cycle is complete, and incremental marking is free to start again.

I think this explains why your server allocates so much memory until it reaches a certain cap.
For a better understanding I recommend reading Jay Conrod’s blog post “A tour of V8: Garbage Collection”.
