
It's too bad that this discussion is ending up mostly in language wars, because Alex's talk was about how to improve the situation, not how to prove that your favored technology is always the right tool for the job and everybody else is wrong.

Let's say that you're doing some NLP and you want to pre-process your strings in Ruby because it's easy and elegant. Idiomatically you might write something like "my_objs.map(&:to_s).map { |s| s.gsub(/\W+/, '') }.reject(&:blank?)" - perhaps broken up into multiple lines - but you're allocating three arrays and re-allocating every string. And in real life you're probably doing more pre-processing steps, which involve additional arrays and possibly additional string re-allocations.
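Spelled out as runnable code (with a made-up my_objs input, and core Ruby's empty? standing in for Rails' blank?), the three array allocations are one per chained step:

```ruby
my_objs = [:foo, "bar baz!", 42]

result = my_objs.map(&:to_s)                    # array 1, plus a string per element
                .map { |s| s.gsub(/\W+/, '') }  # array 2, plus another string per element
                .reject(&:empty?)               # array 3
# result == ["foo", "barbaz", "42"]
```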

You could switch to gsub! which does the non-word stripping in place, but it "Performs the substitutions of String#gsub in place, returning str, or nil if no substitutions were performed", so you're adding an extra conditional. And it's going to be harder to avoid re-constructing a bunch of arrays.
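That extra conditional looks something like this (strip_non_word! is a hypothetical helper name):

```ruby
# gsub! returns nil when nothing matched, so you can't blindly
# chain off its return value - hence the fallback to s itself.
def strip_non_word!(s)
  s.gsub!(/\W+/, '') || s
end

strip_non_word!("foo bar!")  # => "foobar" (mutated in place)
strip_non_word!("clean")     # => "clean" (gsub! returned nil)
```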

There's no intrinsic reason you couldn't have basic Ruby language APIs that let you chain operations just as naturally and obviously, but don't wastefully re-allocate. Think of how ActiveRecord relations defer the actual SQL execution until it's unavoidable.
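Ruby's lazy enumerators (Enumerator::Lazy, core since Ruby 2.0) already point in this direction: the same pipeline deferred until to_a forces it, with no intermediate arrays materialized between steps (each gsub still builds a new string, and my_objs here is a made-up input using empty? rather than Rails' blank?):

```ruby
my_objs = [:foo, "bar baz!", 42, "  "]

cleaned = my_objs.lazy
                 .map(&:to_s)
                 .map { |s| s.gsub(/\W+/, '') }
                 .reject(&:empty?)
                 .to_a
# cleaned == ["foo", "barbaz", "42"] - elements flow through
# the whole chain one at a time instead of array-by-array
```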

(Haskell-style laziness might solve the main issue above, but I've run into analogous unavoidable-allocation problems with Haskell. If I'm depth-first searching a board-game state tree, I shouldn't have to re-allocate the whole state representation at each step, but it's pretty hard to avoid while writing elegant Haskell, since modifying memory in place is a side effect.)

I don't know Go, but maybe instead of talking about why people who still use Python or Ruby are or are not stupid, we can talk about what Go gets right that other languages can adapt?


