Node uses an event loop and a single-threaded approach, in contrast to the traditional threaded model used by other server-side stacks.
In the traditional threaded model, each client request is served by a new thread or process. In Node, a thread from its limited thread pool is assigned if the request involves I/O like DB access or some other blocking task, and on completion of the task the thread hands the result and control back to Node's event loop.
It is clear that for parallel execution of blocking tasks, Node uses threads with the help of libuv and the event loop.
My confusion is this: let's say every client request to my application involves some blocking task, like DB access or file-system access, that needs a separate thread apart from Node's single thread of execution. If Node's thread pool has m threads and concurrent requests reach m + 1, will Node still serve the additional requests, or must they wait for a thread from the pool to become free?
Comparing this scenario with the threaded model, it looks like Node's single-threaded model is only advantageous when some non-blocking requests are mixed in alongside simple computational requests.
It is possible to use a multi-core CPU to scale out with Node's "cluster" module, but here I am only considering a single-core CPU.
Correct me if anything is wrong with my understanding.
I agree with what @fibric said. I'll add a few more points:
Node.js works on an Event Loop, which is single-threaded. But as you said, libuv maintains an internal thread pool which handles the blocking operations. If Node's thread pool has m threads and concurrent requests reach m + 1, the extra request has to wait for one of the threads to become available. But as @fibric said, Node.js has many queues, and each additional blocking operation is simply added to one of them. A slow Node.js app rarely has anything to do with thread pooling.
So yes, I will say Node is non-blocking and evented in nature. But that doesn't mean it makes no use of threading at all. It does, but you as a developer have nothing to do with it. We just work with the Event Loop, which is single-threaded and non-blocking; it's Node's responsibility to delegate the blocking tasks to worker threads.
Both @fibric and @sandeep have explained it much better than I could, but one thing worth pointing out is that this statement isn't correct in the first place:

> "requests having some sort of blocking task like DB access or some sort of File System access that needs a separate thread from node's single thread of execution"
These operations don't really "go to a thread" that waits for them. From the libuv docs (http://docs.libuv.org/en/v1.x/design.html):

> The event loop follows the rather usual single threaded asynchronous I/O approach: all (network) I/O is performed on non-blocking sockets which are polled using the best mechanism available on the given platform
Network I/O is handled through readiness-notification mechanisms (epoll, kqueue, IOCP and the like) that don't require a dedicated thread waiting per operation.
If you read that doc, you'll see that the loop does block briefly while polling for I/O events, meaning blocking happens for only a small fraction of the complete wait time[*].
So yes, libuv (and by consequence node.js) is as non-blocking as it gets.
[*] At the file-system level, this depends heavily on the platform and the filesystem in use, but libuv will ALWAYS try to use the best mechanism available.
Denny Trebbin
It's correct that Node.js uses multiple threading strategies. The libuv thread pool size defaults to 4 threads (configurable via the UV_THREADPOOL_SIZE environment variable), regardless of the number of logical CPU cores.
But Node.js has many different queues. When the thread pool is exhausted, each additional I/O task is appended to a queue (FIFO).
A slow Node.js app rarely has anything to do with thread pool size. Most often our own code is synchronous, and that is what slows Node.js down.
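Here's a minimal demonstration of synchronous code being the real culprit: a timer due in 10 ms cannot fire until a blocking loop on the single JS thread finishes:

```javascript
// The timer is scheduled for 10 ms, but a synchronous busy loop
// monopolizes the one JS thread, so the callback fires only after
// the loop releases the event loop (~100 ms later).
const start = Date.now();
let firedAfter = 0;

setTimeout(() => {
  firedAfter = Date.now() - start;
  console.log(`timer fired after ${firedAfter} ms (scheduled for 10)`);
}, 10);

// Block the event loop for ~100 ms of pure computation.
const blockUntil = Date.now() + 100;
while (Date.now() < blockUntil) { /* nothing else can run here */ }
```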
You can find `setTimeout(fn, 0)` calls everywhere. They look like a trick to speed up synchronous code execution. That is not actually true, but the recognizable effect is a performance boost. All setTimeout with a zero delay does is move our slow code to the end of the message queue, allowing other queued callbacks to execute before our slow code blocks everything else.