
Yes, you too can be required by the compiler to implement cooperative multitasking by hand. In 2011.

Yes. It is an answer to the criticism made, and I acknowledge that. But it is not a very good answer to the objection. The better answer is "don't do that in Node.js", which is still not all that great (it's really easy to accidentally write something that blocks badly), but is better.



Well the other option for computationally expensive code is to use some sort of worker that runs a sufficiently fast language.

JavaScript on v8 is actually one of the fastest interpreted languages available, so unless you really need to drop down into C or similar, splitting across the event loop or using another node child process is not an unreasonable way to approach CPU heavy calculations.
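"Splitting across the event loop" means chunking the computation and yielding between chunks. A minimal sketch of the idea (function names and chunk size are illustrative; `setImmediate` is the modern Node API for yielding, where the idiom of this era was `process.nextTick`):

```javascript
// Sum a large array in chunks, yielding to the event loop between chunks
// so pending I/O callbacks can run while the computation is in flight.
function sumInChunks(arr, chunkSize, done) {
  let total = 0;
  let i = 0;
  function step() {
    const end = Math.min(i + chunkSize, arr.length);
    for (; i < end; i++) total += arr[i]; // do one bounded slice of work
    if (i < arr.length) {
      setImmediate(step); // yield before the next chunk
    } else {
      done(total);
    }
  }
  step();
}

sumInChunks([1, 2, 3, 4, 5], 2, (total) => {
  console.log(total); // 15
});
```

The trade is a bit of added latency on the computation itself in exchange for the server staying responsive between chunks.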


> Well the other option for computationally expensive code is to use some sort of worker that runs a sufficiently fast language.

Which only helps if you know your code is going to be slow. If you accidentally implemented an algorithm with quadratic complexity and didn't test with sufficiently large input, you might not realize what's going to happen until it hits production.

> JavaScript on v8 is actually one of the fastest interpreted languages available

1. Nobody is denying that.

2. The issue is with the behavior of evented systems in general, and Node in particular, when a request hits a CPU-bound code path: the whole server blocks, killing concurrency and effectively DoSing the instance.


I can't disagree with any of these points.

NodeJS is no magic bullet; those who treat it as such should be extremely wary. It is, however, rather nice to work with.


Cooperative multitasking will always have the blocking problem, whether implemented manually or by the language. The manual callback chaining is tedious, though.

Preemptive multitasking solves that problem, but if mixed with lots of shared mutable state, it reintroduces much worse problems of non-determinism and unreproducible bugs.

The golden path involves preemptive multitasking with little-to-no shared state. That way you get to have your cake (determinism) and eat it too (no blocking problems/starvation).


> The golden path involves preemptive multitasking with little-to-no shared state.

Erlang (and probably some other languages) seems to have taken this approach.


(go too)


I am not quite sure what it is you are trying to say. Node does (in this case) require a bit more of a hands-on approach than most other languages, but that is because of a subtle but important difference: Node uses cooperative concurrency, whereas threads are not cooperative. This is both a disadvantage (you are required to do more work and cannot take proper advantage of multiple CPUs) and a huge advantage (no locking is needed and you can share results between different execution points).

Of course the real issue is that you should choose a better implementation of the algorithm.


Let's not forget that process-per-connection has a really bad performance rep.

These newfangled non-blocking designs are seemingly much better for everyday tasks.

And saying that an async IO server is a bad choice for computational things is, well, a well-known tradeoff that 99% of apps don't have to worry about.

When was the last time someone complained that their webservers were CPU bound? It's always the DB that is the bottleneck...


Yes. It is an answer to the criticism made, and I acknowledge that.

It's an answer that says: "You're using the wrong tool for the job."

Which is particularly weird given that Fibonacci itself is probably the most overused algorithm-to-promote-a-paradigm example in computer science. Except it's for a different paradigm: recursion, not asynchronous IO.

Still, it's interesting in a recursive sort of way.


Fib is designed to be a piece of code that runs really slowly but doesn't require typing in many lines of code. This makes it a reasonable benchmark for things like "how fast can a function be called", and also a good example of "something that takes a long time".
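The usual naive definition, for reference: a couple of lines that take exponential time, which is exactly why it shows up in "Node blocks" demos.

```javascript
// Naive doubly-recursive Fibonacci: exponential time in a few lines of code.
// While a call like fib(40) computes, a single-threaded event loop can do
// nothing else.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

console.log(fib(10)); // 55
```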


"it's really easy to accidentally write something that blocks badly"

What is this nonsense? Isn't it time we knock this one on the head? Let me let you into a secret: If you write CPU intensive enough code in any framework you can eventually block all further requests. This whole debate was based on FUD and a complete misunderstanding of what node is all about. Read the previous posts and don't post generic, bullshit comments like "which is still not all that great".


"If you write CPU intensive enough code in any framework you can eventually block all further requests."

"Intensive enough" is a vague and fuzzy term which you can hide too much behind. So let me put it this way: Go grab Yaws, the Erlang web framework. Write a web page that goes into an infinite loop. Write some other web pages that work normally. Observe that visiting the infinite-loop web page once does not cause the rest of the web server to stop serving. Yaws may time it out eventually, too, not clear from a quick look at the docs. In fact it will only marginally decrease performance, even on single core machines.

Yes, if you bash on that page often enough you will eventually degrade service to an unacceptable level. But you will not bring down the whole server, or even that OS process, and it will take substantially more than one hit per process or one hit per core.

Now, go grab Node, and write a web page that goes into an infinite loop. You just brought that OS process down, from the user's point of view.

Node is qualitatively much easier to lock up an OS process with than Erlang. Or Haskell, or Go, or anything else with a modern task scheduler, which is an ever-increasing number of language platforms.


At some point the fair time given to all the various workers stuck in an infinite loop will starve legitimate low-work tasks.

CPU is finite. RAM is finite. The kernel has limits on things like sockets.

(Erlang is super-neat, and Go goes a long way in the same direction. But I rather imagine the original poster meant that people should use CGI)


But who deploys Node as a single instance? There's documentation all over the place which tells you how to load balance over several instances, and it's easy to do so. This is my point: the "Node is cancer" article was pure troll.


OK. Set up a load balanced infinite loop. Result: A load balanced infinite loop. This is not a win for Node. Load balancing across a number of hung processes buys you very little. (Not quite zero; you get a chance to detect the fact that it's hung and restart it, as long as these pathological requests aren't coming in fast enough. Hope the user who poked the bug doesn't hit refresh too many times!)

I still think you may not understand what modern schedulers end up doing here.


How does this differ from MaxClients in Apache, or process limits on CGI? I agree with you, but I don't see how this problem is specific to Node...


It isn't specific to Node. What's specific to Node is that there's a whole bunch of hype convincing people that Node is the epitome of multitasking, when in fact it's just yet another event-loop based system, subject to the same foibles. The same very well known foibles.

Node isn't a bad technology and I don't hate it. Well, I personally hate working in the event-loop paradigm (due to abundant experience) but that's no discredit to Node, which simply is what it is. The hype is toxic. The hype is basically full of flat-out lies. It teaches people that the state-of-the-art as of 1990 or so is the state of the art today. The hype claims Node is blazing a new path in the field of concurrency, when in fact it's traveling a 4-lane highway with fast food and hotels, while putting blindfolds on its partisans to hide them from the fact they're actually smack dab in the middle of civilization.



