Hacker News

> This is only true if you ignore response latency as a comparative metric entirely.

Response latency makes scripting languages unacceptable for some types of problems. But in my experience, very few of the latency problems I've come across are down to language choice rather than badly written database queries, lack of caching, etc., which would still be unacceptable regardless of language. Basically, the moment a database is involved is where you're most likely to see your low latency go out the window.

Of course there are situations where language choice definitely matters in terms of latency. Despite using mostly Ruby now, there are certainly places where I'd use C: I'd never try to replace Sphinx search with pure Ruby code, for example.

> I'm really curious what you were doing in Ruby that spent 90% of your runtime in the kernel. Having implemented similar middleware, I found that nearly all my runtime was spent just performing basic decoding/encoding of framed packet data, and as a kernel developer, I'm having a hard time imagining what an in-memory queue broker could have been doing to incur that overhead.

The moment you find yourself "decoding/encoding framed packet data", you have lost, if your goal is a really efficient message queue, unless said decoding/encoding consists of no more than checking a length field and possibly verifying a magic boundary value.
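To illustrate, here's roughly how cheap that kind of framing can be. This is a minimal sketch, assuming a hypothetical wire format (a 2-byte magic marker followed by a 4-byte big-endian payload length); it is not any real queue's protocol:

```ruby
# Hypothetical wire format: 2-byte magic marker + 4-byte big-endian length,
# then the raw payload. Decoding is just a boundary check and a length read.
MAGIC = "\xBE\xEF".b

def read_frame(io)
  header = io.read(6)                    # magic (2 bytes) + length (4 bytes)
  return nil if header.nil? || header.bytesize < 6
  magic, length = header.unpack("a2N")
  raise "bad frame boundary" unless magic == MAGIC
  io.read(length)                        # payload passed through untouched
end
```

Anything much beyond this, per-field parsing, varint decoding, text protocols, and you're burning CPU in userspace on every message.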

If the userspace part of your queueing system does more than wait on sockets to be available and read/write large-ish blocks, you'll be slow.
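The "do almost nothing in userspace" loop described above can be sketched like this. `handle` is a hypothetical callback that consumes each block, and the 64 KiB chunk size is an illustrative choice, not a recommendation:

```ruby
require 'socket'

# Wait for readable sockets, pull large-ish chunks, hand the raw bytes
# onward. The userspace work per wakeup is one select and one read.
CHUNK = 64 * 1024  # illustrative chunk size

def pump(sockets)
  loop do
    readable, = IO.select(sockets)
    readable.each do |sock|
      data = sock.read_nonblock(CHUNK, exception: false)
      next if data == :wait_readable
      return if data.nil?               # peer closed; stop pumping
      handle(data)                      # hypothetical callback, e.g. enqueue
    end
  end
end
```

A production broker would use epoll/kqueue via an event library rather than plain `IO.select`, but the shape is the same: block in the kernel, move bytes, repeat.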

Of course there are plenty of situations where you don't get to choose the protocol, and you're stuck doing things the slow way, and sometimes that may make a language like Ruby impractical.

But there are also plenty of situations where in-memory queues are not acceptable, and the moment you hit disk, the entire performance profile shifts so dramatically that you often suddenly have much more flexibility.



> Response latency makes scripting languages unacceptable for some types of problems. But in my experience, very few of the latency problems I've come across are down to language choice rather than badly written database queries, lack of caching, etc.

The 5 ms you save in your runtime -- for free, without any work on your part -- is 5 ms longer you can now afford to spend waiting on the database.

> But there are also plenty of situations where in-memory queues are not acceptable, and the moment you hit disk, the entire performance profile shifts so dramatically that you often suddenly have much more flexibility.

I can only disagree here. I view CPU cycles and memory as a bank account. If you waste them on an inefficient runtime, you can't spend them elsewhere, and there are always places where they can be spent.

Excluding huge sections of the problem domain -- such as "decoding/encoding framed packet data" -- is a cop-out. There's no valid reason for it, unless you're going to go so far as to claim that Ruby/Python et al. increase developer efficiency over other languages' baselines so dramatically that it outweighs the gross inefficiency of the languages.




