That link includes a 20% launch discount, which expires tonight at midnight PDT.
As with previous versions, the new edition focuses on the core principles of web development, so there isn't much Rails 5–specific material in the update, but I am planning standalone Learn Enough tutorials on things like Action Cable and Rails API (http://learnenough.com/).
I know you've received a gazillion comments like this in the past, but I'm going to do it anyway: THANK YOU. Your work on that book is what got me my first web dev job. I've grown a ton since and have you to thank for it. Thank you so much.
I learned Rails through this book, which kickstarted my way into tech. It's an excellent resource that not only teaches you Rails, but also developing software as a professional.
Michael, I learned Rails from your tutorial and started implementing my SaaS app the day I hit chapter 14. That's how I went from corporate Java slave to location-independent entrepreneur. Your tutorial is that good; it's all I needed.
A lot of name dropping on Hacker News goes over my head because I've only been in the community a few years. I feel like Michael Hartl will be the name I drop 15 years from now whenever I'm discussing Rails and the origin of "modern" web development.
I love Dr. Hartl's pedagogy. It reminds me of the Feynman Lectures in terms of presentation of information. I worked through the book over the course of a few weeks, then quite comfortably wrote a Rails project on contract from scratch.
I feel bad for Sean Griffin. He spent over a year overhauling the internals of ActiveRecord to add this attributes API. His work dramatically improves coercion and type enforcement for ActiveRecord users. Seems weird for this to only get a non-descriptive bullet point in "other highlights."
I don't know if he has written about it beyond the docs and various commits. He has talked about it many times on his podcast (which I highly recommend). Here is an episode that talks about the attributes API as well as his Relation#or work: http://bikeshed.fm/8
Definitely also check out the episodes where he talks about Diesel.
I see a lot of pro-Erlang/Phoenix pushing in here, which is (as a polite reminder) an announcement about the Rails framework. Not to say that one shouldn't, just that I think it's deviating from the main topic at hand. Interestingly, I wanted to find out the real reason behind these pushes towards Erlang/Phoenix, and I realised the discussion is mostly about how you can save $20-50 by opting for a faster programming language.
Any framework can be tuned to do anything. Rails right now is the only truly comprehensive framework with tight integrations to Coffee, LESS, CSS, etc. As someone who is writing his own framework in Scala, I learned this the hard way after underestimating how much work is already done for you in Rails.
If you run a business, then all this small talk shouldn't matter as much as how profitable you are. In the end, if your business failed because of your choice of framework (which usually reflects your philosophy), then you need to fix your business model and not the framework.
As a polite reminder, I'd like to link to an old comment of mine I made at the time of Rails 4.2:
I personally dislike Rails quite a bit, but I still agree with you. Personally, I think the choice of language and framework for web sites should mainly come down to what you can easily hire developers for, and whether those developers will be happy working with it.
I've chosen PHP for that in the past despite detesting it, at a time when Rails did not have enough of an ecosystem. I might pick Rails now, even though I personally dislike it. I'd be wary of niche alternatives out of concern for whether or not I'd be able to hire replacements/additions.
Speed concerns would be extremely far down the list... I have worked on systems where speed for some APIs would necessitate something else, but it is very rarely the case for web apps.
I also do devops consulting, and at least for my customers it is extremely rare for the dynamic parts of the web frontends to make up sufficiently large portions of the overall hosting costs to be worth worrying about.
In the rare instances it is, it usually turns out that people have used a non-standard setup letting Rails serve up static content too, and/or have not made any attempt at doing even basic caching.
That doesn't mean you shouldn't consider something other than Rails. I personally rarely use Rails if I can avoid it. But most of the reasons for picking one over the other are excuses to obscure personal preference. Personal preference is fine; it's a perfectly good reason to want one over the other, as long as it meets your other needs.
But people should realise it rarely actually makes that much difference, so unless you have very specific use cases and benchmarks to back it up, I'll think it's a bullshit excuse if someone comes to me and says they need X because of speed (in the context of web apps).
> I see a lot of pro-Erlang/Phoenix pushing in here, which is (as a polite reminder) an announcement about the Rails framework. Not to say that one shouldn't, just that I think it's deviating from the main topic at hand.
You're basically right, but the migration wave from Rails to Phoenix has noticeably started, and it shows.
> Interestingly, I wanted to find out the real reason behind these pushes towards Erlang/Phoenix, and I realised the discussion is mostly about how you can save $20-50 by opting for a faster programming language.
Actually, the high-performance stuff and the Rails-y organization of Phoenix are just there to give you immediate benefits and a sense of familiarity. They attract people and give them a reason to try it out. Beginners praise it, but it's far from the actual reason to keep pushing Phoenix. The pitch goes something like:
"hey, it works like rails, you can do everything with it like rails does, so there is just a small learning curve and we have orders of magnitude better performance!"
> Any framework can be tuned to do anything.
Yes, this is true. You can also write OOP assembly or use vim as a bitmap editor like Paint. The point is: while you can theoretically do everything with almost anything, the results will differ a lot and the way to those results will vary a lot.
With Phoenix, there is no "magic" (people complain about this in Rails!); everything is simple, explicit, and straightforward. You also don't need many third-party tools like Redis, Sidekiq, or memcached. This greatly simplifies the application code, reducing brittle constructs and bindings.
Currently, Rails still has the edge when it comes to gems, many things are readily available. This helps when "getting started" for typical app scenarios.
But a few weeks in, maintenance becomes the most important factor in development speed. Here Phoenix wins by a large margin: fewer dependencies, easier code, less performance tuning, fewer bugs (thanks, arguably, to the compiler, dialyzer, etc.).
> Rails right now is the only truly comprehensive framework with tight integrations to Coffee, LESS, CSS, etc.
The asset pipeline of Rails can be quite a pain in the ass. Phoenix went a different way by integrating Brunch, probably the easiest of the Node.js task runners, which mostly doesn't even need custom configuration (npm install --save brunch-less, and your LESS files are integrated as you'd expect; same for CoffeeScript, etc.).
> If you run a business, then all this small talk shouldn't matter as much as how profitable you are. In the end, if your business failed because of your choice of framework (which usually reflects your philosophy), then you need to fix your business model and not the framework.
Development speed does matter. When you waste time optimizing things, fixing bugs with dependencies, or dealing with stuff like that, you aren't shipping features. Moving fast is crucial for startups.
And then there is scaling. Phoenix is known to handle large loads with ease, and probably most startups can go a long way before they even have to consider horizontal scaling. Not having to think about this is worth more than the money you'd throw at servers to scale up rails (which, as you said, isn't needed in that quantity in phoenix either).
And just a personal note on top of this: once functional programming/Elixir "clicks" for you, translating business requirements into working code is so much easier as functional data transformations with pipes than as forests of class structures. For many cases, it just makes more sense (and is easier to read, maintain, extend, and test).
---
Finally, the actual big win of Phoenix is BEAM/OTP. Performance aside, the thought model of OTP just makes so much sense and greatly simplifies building reliable, distributed systems. I'd argue that OTP is the true reason people are blown away: Elixir simplifies the syntax and reduces boilerplate, and Phoenix on top makes getting started a breeze and looks like Rails on the surface, but once you get into how OTP works, you're hooked.
There is a difference this time. Phoenix and Elixir were created specifically with an eye towards Ruby developers; Jose works at a Ruby shop, from what I understand. Go was a C++++, Java++, or Python++ kind of deal, and Node was a front-end-brought-to-the-backend kind of deal (or a "you only need to know one language" deal). Clojure is an interesting one, but I haven't gotten to play with it much, so I don't have much to say about it.
Clojure will never take off as a "Rails killer" because well, it's a Lisp. Sounds superficial but it's the truth. Also, from what I see it doesn't have an opinionated framework like Phoenix or Rails. That said, neither does Node.
Sure! But this time (tm) it's different. Phoenix is a direct drop-in replacement for Rails, with a bunch of strong benefits on top. Getting productive should take a Rails developer 1-2 weeks max, since one already knows how the framework works conceptually. The truly cool stuff happens when one learns how things work beneath the surface.
While you're around: Elixir/Erlang is itself a rather slow language when it comes to number-crunching performance. The canonical approach currently is to write NIFs in C for those parts, but if that native code crashes, the reliability promise of BEAM goes out the window. To me it would make much more sense to write native extensions in Rust. Have you ever considered including a plugin for "mix", Elixir's build tool, that lets one ship Rust source files alongside the Elixir application, which at compile time fetches a Rust compiler, compiles the code on the current platform, and generates a NIF? This would be so huge.
"Drop-in replacement is a term used in computer science and other fields. It refers to the ability to replace one hardware (or software) component with another one without any other code or configuration changes being required and resulting in no negative impacts."
That's not the case at all with Phoenix, nice as it may be.
Everyone always says that :) That said, your sibling comment makes good points here.
> While you're around:
I think this idea is _really_ cool, but I'm not equipped to build such a thing, since I haven't had enough experience with NIFs yet. I agree that it would be super cool.
Truth be told: there have been some good languages, but never good frameworks. After Rails, it's really hard to touch these half-baked, flawed "Rails copies". Phoenix is the first framework that not only replicates the good sides of Rails (other web frameworks fail even at that) but also fixes most of the actually important Rails issues.
Yes, the huge win is developer efficiency and enjoyment. These _can_ make or break a business, for good reasons. They're also the reasons for the initial movement to Ruby and Rails in the first place. Elixir and Phoenix take it a big step further.
Passenger author here. Phusion is excited about the Rails 5.0 release! Rails is one of the most productive web frameworks out there, and 5.0 just makes it even better.
We have released various Passenger updates to ensure that Passenger plays well with Rails 5, Action Cable, etc.
However, we found a bug in Action Cable yesterday which may cause issues with app servers such as Passenger and Puma. Unfortunately the fix didn't make it into 5.0. I recommend that anybody who uses Action Cable apply our patch locally for now: https://github.com/rails/rails/pull/25615
I know that lots of other languages / frameworks compete these days for the title of "most-cutting-edge", but I love working with Rails. There's a lot to be said for the "stability without stagnation" approach. I come from a design background, did not study computer science, and am usually working as a team of one. Rails lets me leverage very finite amounts of time and theoretical knowledge into working software that is elegant, testable, and comprehensible. It is an amazing piece of technology, and I'm happy to see it's still going strong!
I've been following along with Rails 5 for many months now and I've been tracking progress on updating GitLab in our issue tracker[1].
Feel free to look at the relevant merge requests and use them as guides to upgrade your own apps :) We're still on 4.2.6 since we have a few gems we're waiting on, but I hope to change that in a month or two!
Favorite features are probably the various performance improvements and underlying improvements to the framework, as well as quiet_assets being moved into sprockets-rails.
I also wanted to give a shoutout to BigBinary's Rails 5 series, since it's been great for finding out about new features[2].
We've moved all of our backend offerings from Rails to Elixir/Phoenix. Despite some questioning the value of anything below 100ms response times, there is a lot of data backing up the idea that Elixir/Phoenix can lead to a more maintainable and more economical solution. I spoke about this recently at RailsConf: https://www.youtube.com/watch?v=OxhTQdcieQE
Don't get me wrong, I think Rails is an amazing technology but it doesn't fit the use cases and demands our clients have today.
Every time I've explored alternatives to try to get those kinds of numbers, it seems the bottleneck is more in the database than the application server.
How granular are your measurements? I'm certainly not calling your experience into question. But I wanted to relay a cautionary tale of my own.
On one project my team was convinced that Redis was our bottleneck. Profiling was showing that Redis calls were where the most time was spent. I spent some time looking at the redis-rb source and it does a lot of block nesting (4 or 5 levels for every call, IIRC). So, I tried replacing the redis-rb client with something a bit lower level and our throughput doubled. It turned out that all that block processing cost as much as the network call did.
I suppose this isn't all that new: many performance issues are really a "death by 1,000 cuts" situation. Oftentimes your profiler can lead you astray.
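The block-overhead effect is easy to reproduce in pure Ruby. A toy sketch, where the wrapper layers are hypothetical stand-ins for the kind of nesting described, not redis-rb's actual code:

```ruby
require "benchmark"

# Four trivial wrapper layers, each just yielding to the next, standing in
# for nested block dispatch in a client library (hypothetical, not redis-rb).
def layer1; yield; end
def layer2; yield; end
def layer3; yield; end
def layer4; yield; end

def direct(x)
  x + 1
end

def nested(x)
  layer1 { layer2 { layer3 { layer4 { x + 1 } } } }
end

n = 500_000
direct_time = Benchmark.realtime { n.times { direct(1) } }
nested_time = Benchmark.realtime { n.times { nested(1) } }
puts format("direct: %.3fs  nested: %.3fs", direct_time, nested_time)
```

The results are identical, but the nested version pays for four extra method calls and block invocations per operation, which is exactly the kind of overhead that only shows up at scale.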
The performance problems I've had with redis-rb were largely due to my accidentally using it in an improper environment (an event machine based asynchronous service). Replacing it with em-hiredis (and refactoring to use callbacks) drastically improved throughput for us.
In situations where a blocking redis client like redis-rb doesn't interfere with other running code (synchronous web requests), I've found zero performance issues with redis-rb (latency caused by redis is measured in single digits, usually < 5ms).
The tricky part with performance is it's often contextual. I didn't mention it, but I did verify the results by modifying redis-rb to remove the layers of blocks as well. Of course, it could have been something else, like how a particular method handled a block it was provided.
My intention wasn't to rail against redis-rb. It's written in very nice, idiomatic Ruby. The faster solution was decidedly less Ruby-like. I was just suggesting that even when your profile looks to be telling you the DB is to blame, you should probably prod a bit more.
ActiveRecord is another case where I've hit performance issues. I had just swapped out most AR usage for Sequel, so my data on that is probably well out of date. But, for us, materializing rows as full DAOs was often a very costly process that could dominate the cost of the DB query.
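As a rough pure-Ruby illustration of that materialization cost, with a Struct standing in for a full model object and no real database involved:

```ruby
require "benchmark"

# "Row" stands in for a model class; real ActiveRecord objects are far
# heavier (attribute coercion, dirty tracking, etc.), so this understates
# the gap, but the shape of the cost is the same: one object per row.
Row = Struct.new(:id, :name, :created_at)

raw = Array.new(50_000) { |i| [i, "name#{i}", Time.now] }

objects_time = Benchmark.realtime { raw.map { |r| Row.new(*r) } }
arrays_time  = Benchmark.realtime { raw.map(&:itself) }

puts format("full objects: %.4fs  plain arrays: %.4fs", objects_time, arrays_time)
```

This is the same reasoning behind reaching for `pluck`-style raw-value queries when you only need a couple of columns.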
In this case we first moved to JRuby (after verifying a similar performance profile to MRI) and once on JRuby we used jedis [1]. I simplified the tale a bit -- apologies if I got your hopes up on MRI.
A lot of super scalable apps don't necessarily use a "database". They use a heterogeneous set of data stores that are most appropriate for their particular use case. Or maybe they shard, or run a distributed cluster, etc.
I get that the Erlang VM is really nice for some things (I made my first contribution to it in 2004!), but I'm curious about the above statement, as I haven't had the time to dig into Elixir/Phoenix yet.
I'm still fairly new with Elixir/Phoenix (as I think most of us still are), but so far I feel intuitively that it's more maintainable and I admit that I don't have the experience or numbers to back that up yet.
There are some things that you simply must use other tools for in Rails, because it can't maintain state like Erlang can. Once you start adding in these other things, you're no longer just a "Rails app" but you're really creating an entire system. Now you've got different things you potentially need to test, and you're adding in extra stuff to your system to monitor and measure the different parts of your system. You might do the same thing in Erlang/Elixir, but you also might not. You can potentially keep your entire "system" inside Erlang, either in one node or distributed. So right away simply having that option sounds like a huge improvement in maintainability. You've got everything running in Erlang, monitored and supervised in Erlang/Elixir, more easily testable using ExUnit or whatever.
And I think @bcardella talked about this in his RailsConf presentation (linked somewhere in the comments here), but the performance you get from Elixir/Phoenix is potentially a huge improvement in maintenance. How much time have people spent in New Relic or whatever, measuring and tuning their Rails apps, getting the caching and everything working just right? People are writing Phoenix apps with zero caching that perform better than Rails with caching. When people aren't having to debug and diagnose these kinds of performance problems, they get to work on actual features.
Again, some of this is just my intuition on it more than real experience to back it up. I've built a couple apps with Phoenix and I do enjoy it more. They're not big enough for the performance to really matter, but I do feel strongly that they're simpler because I'm keeping everything in Elixir code and not having to use, for example, Sidekiq in order to do anything async.
Exactly this. It's easy to do 'rails new' and push to Heroku, but when you've got redis, memcache, the worker process & scheduler etc etc you're really maintaining your own ecosystem. It's a lot of where time gets spent on production Rails apps.
In a Rails system you've got Rails running, maybe across X number of instances with Unicorn or whatever, sitting behind nginx. If you want to have a long-running request or websocket you've typically used Go or Node or Elixir. If you need to store state between requests you may be using Redis. For background jobs and general async things you've probably got something like Sidekiq.
Erlang already has an http server that Elixir apps are using now, it's called Cowboy. It's not that Elixir is creating their own just because. One of the reasons nginx is often used for Rails apps is to handle static assets, because Rails just isn't as well-equipped to handle those. Cowboy seems to be fine with handling static assets, so a lot of people just take nginx out of the equation for an Elixir app.
Typically Rails apps have used something else like Node or Go or Elixir for long-running requests or websockets. Maybe that will change with ActionCable, maybe not. Either way, Elixir/Phoenix happen to be quite good at this already. Phoenix's equivalent to ActionCable is called Phoenix Channels and it's able to handle 2 million+ websocket connections on a single server, which is pretty cool I think.
Rails apps use redis to manage state between requests. Elixir apps just don't need something like this. Elixir doesn't have something different, it just happens to be easy to store state between requests already.
Rails apps may use Sidekiq or something to deal with background jobs or async operations. This is another case where Elixir doesn't have anything special to occupy that role in the stack, it just happens to be good at that already.
Most of these properties are things that Elixir inherited from Erlang, which has been around for years. I wish I had started learning Erlang a long time ago but unfortunately it just wasn't really on my radar. But really the only thing on this list which is a relatively recent thing is Phoenix Channels, and I don't think that's really a NIH thing. At least, certainly not any more than ActionCable is.
I'd love to see an active record (npi) of those 2M active websockets. It's trivial these days to make 2M connections (it's just a RAM problem), but marshalling data between them is the non-trivial problem, as is actually having active data crossing those sockets.
Considering that nearly all of those features were in Erlang before the other services listed I don't think NIH applies. Elixir just exposes the Erlang tools.
I would love to see someone take a deep dive into why their Phoenix thing is "so much faster" than with Rails. I mean really look at the whole stack from the VM, to different pieces of the framework like views and DB interaction. Erlang definitely does concurrency well, but it is not that much faster than Ruby in terms of "raw speed". I'd be fascinated to see someone actually do the work and look at where Phoenix is eking out those gains.
Some details on why Phoenix is so fast despite Erlang being a "slow" language:
- macros: fancy syntax and "magic" can be resolved at compile-time. Less work to do on runtime.
- templates: they get handled at compile-time, too, resulting in functions with blobs of binaries. This matters a lot, since a specific template-binary exists only once throughout the application and gets re-used whenever needed. This way you get near instant templating, instead of the usual string processing on every request.
- dispatching: code like the routing that must run on every request is, as in Rails, a DSL, but it's actually a macro that gets compiled down to basic pattern matches, which are extremely fast.
- the concurrency model is extremely lightweight, handling more requests and/or doing more stuff per request is much more efficient hardware-wise. Indirect performance gain.
Just a few points that have a very noticeable impact; there are probably more. It boils down to the Erlang VM's concurrency and pattern-matching performance, plus Elixir's compile-time macros and the fact that strings are treated as immutable binaries.
All well and good, but Erlang/beam code is not that fast.
> templates
Ruby's templates are parsed into code that stays in memory, too, so it's not like they're re-parsed each and every time.
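To back that up: stdlib ERB really does compile a template to Ruby source once, and renders reuse that cached source rather than re-parsing the template text:

```ruby
require "erb"

# ERB parses the template a single time into generated Ruby source
# (available via template.src); each render evaluates that cached source
# instead of re-parsing the template string.
template = ERB.new("Hello, <%= name %>!")

name = "Rails"
puts template.result(binding)   # renders from the precompiled source
puts template.src               # the generated Ruby code ERB keeps around
```

(The compile-once point holds for Ruby; the Phoenix claim is about the *output* being a shared immutable binary, which is a separate argument.)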
> routing
Perhaps clever use of pattern matching helps here. But my point is: someone should dissect these things in a real-world-ish application to see what's actually true.
> concurrency
Yes, but let's be precise. Everyone knows Erlang's concurrency is way better than Ruby's. The claim was 'fast' though. As well as maintainable, which seems curious given that there are no really old Phoenix apps out there.
True, no doubt. You wanted a comparison of why Phoenix is faster than Rails, and simply doing things at compile time reduces work at runtime. In Ruby, all metaprogramming must be done at runtime, for example via method_missing trickery (been there, done that).
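For contrast, a minimal sketch of that runtime dispatch: a hypothetical find_by_* finder (not Rails' actual implementation) resolved via method_missing on every call rather than at compile time:

```ruby
# Hypothetical in-memory finder; Rails' real dynamic finders are more
# elaborate, but the mechanism is the same: dispatch decided at runtime,
# on every call, via method_missing.
class DynamicFinder
  RECORDS = [
    { name: "alice", role: "admin" },
    { name: "bob",   role: "user"  }
  ].freeze

  def method_missing(name, *args)
    if name.to_s.start_with?("find_by_")
      key = name.to_s.sub("find_by_", "").to_sym
      RECORDS.find { |r| r[key] == args.first }
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    name.to_s.start_with?("find_by_") || super
  end
end

finder = DynamicFinder.new
p finder.find_by_name("alice")  # string parsing and lookup happen per call
```

An Elixir macro would instead expand this kind of sugar into plain function clauses at compile time, so the per-call parsing disappears.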
> Ruby's templates are parsed into code that stays in memory, too, so it's not like they're re-parsed each and every time.
This is actually not the same. A single immutable binary blob (Elixir strings), shared throughout the whole application, can leverage hardware caching better. Jose could probably answer this better than I can.
> someone should dissect these things in a real-world-ish application to see what's actually true.
Well, at least I know that BEAM processes are far lighter than, e.g., goroutines. Also, there is a per-process GC, so there's no "stop the world": the units for individual collections are much smaller, and when a process finishes before its heap grows full, the heap can be discarded directly.
> maintainable, which seems curious given that there are no really old Phoenix apps out there
Fair enough.
Personally, I'd not wait until a software stack is a decade old before even considering it. I have worked on Elixir/Phoenix projects on and off for many months now with multiple colleagues, and it was and still is a pure joy. Aesthetically pleasing syntax helps (broken-window syndrome, I guess), plus the functional programming style in general, plus Phoenix's foundation in "plug" and its clear modularity. "Let it crash" with supervisors is also incredibly robust, and robustness itself leads to lower maintenance costs.
All good points. As I keep saying, I'm an Erlang fan. I actually used it at my last job, and have used it on and off since 2004-ish. It definitely has some advantages, but I'm just curious where it's actually beating Rails, and with what kinds of workloads and test methodology and so on.
Stuff like "Erlang processes are lighter" matters in some contexts, but not in a straight up speed contest. It matters a lot when you start trying to handle a bunch of concurrent connections, so maybe that's where we're getting some of the claims from.
I'd say you really should have a look at https://www.youtube.com/watch?v=OxhTQdcieQE . A really great presentation, including numbers and real-world backing data, plus conceptual overviews.
It sounds like you're undervaluing concurrency. Concurrency is huge for a web application. If you were doing, say, image manipulation or other DSP where you needed "raw speed", you'd want a language like C, Go, or Rust. But in a web application you're handling hundreds, thousands, even millions of requests per second; concurrency is crucial for that kind of throughput. A developer on my team ran some quick benchmarks and found that Phoenix gives an order of magnitude better performance than Rails. That's not negligible.
I was a bit surprised by its performance in the last TechEmpower benchmarks: roughly equivalent to Rails and other Ruby-based solutions (PHP, too).
It could be a case of not yet being optimized for the tests, but I was expecting much more impressive numbers out of the box (particularly after the full-court press on the boards and blogs).
The Phoenix tests had a ton of errors and there was no preview run so whoever submitted them wasn't able to fix them. This has happened with a bunch of different languages/frameworks in the past and until the errors in the implementation are sorted out the benchmarks are basically meaningless.
From what I've read in the comments so far, you won't see it thinking in terms of speed of Erlang vs Ruby. It sounds like the gains come from baked in behavior, so that "Rails + Redis" or "Rails + Redis + nginx" is equivalent to Phoenix alone. I'm still only reading about this, though. No Elixir experience here.
Erlang definitely does a lot out of the box without needing to rely on other systems and that's a great thing, but we're hearing claims about "fast" and "more maintainable" and I'd like to see more details.
I used Erlang at the last place I worked and like it a lot, but Rails is pretty good too in my book.
For me, "fast" means developer time. A typical first pass at a Rails view might be 300 ms vs. 300 microseconds in Phoenix. This extra headroom represents hours of optimization I don't need to do in order to stay under 200 ms per request.
The one thing that comes to mind is that since each process has its own isolated memory space, Erlang's GC is not a "stop the universe" kind of thing, the GC can run in parallel with other processes.
(edit: again, this is just intuition on my part, I haven't remotely done the deep dive you're talking about)
What I use Rails for these days is (1) as a backend to our SPAs, so ActiveRecord, Rails controllers, and a view to serve JS assets and CSS via the Asset Pipeline, and (2) to serve Active Admin.
I see there is Ecto, Brunch, and ExAdmin for Phoenix, to cover our primary bases I spoke of, in Elixir/Phoenix.
For those who have switched to Phoenix and have experience with these three: was it a smooth transition? What are the significant gaps, if any?
A Devise equivalent. There are a number of alternatives, but I've found them all a little raw: with many Rails projects, I know I can just add Devise, and be confident it'll Just Work with minimal work. Valim said he wasn't interested in building a Devise equivalent for Phoenix - I think the reasoning was that each context is different enough in its own way that a monolithic one-size-fits-all solution like Devise isn't preferable, which is fair enough. But authentication is a pain, and carefully wiring up slightly immature solutions is a little hairy.
I've found this only really applies to CRUD-like, Rails-like projects, so about half of the projects I've built, so YMMV: it's maybe just me making the jump from Rails and expecting things to be more similar than they are.
Ecto is good; it's not quite an ORM, so there were a few WTFs when I tried to do things the same way, but on balance I prefer the Ecto approach: a LINQ-like query language I like better than a [somewhat magic] ORM, the way it separates concerns is good, and (in common with most of Phoenix) the way it works is pretty transparent. Not quite mature yet, though Ecto 2 seems to cover most of the functionality I found missing in 1.x.
I work primarily front-end, and I always had issues with the asset pipeline. I've found Brunch to be fantastic. There are always going to be a few issues when dealing with npm, but other than that, I think they picked the absolute simplest JS-based task runner, and having direct access to the JS ecosystem is great. Brunch has been almost zero-config for me and has Just Worked with only a few `rm -rf node_modules`.
I think most folks are finding that distribution in an erlang based application is a lot simpler, because you don't need to pull in things like redis for pubsub, memcached for caching, etc etc. erlang solves a bunch of problems in a rather elegant way that are solved in the java/python/ruby world through gobs of duct tape.
Good point, but I'd see the actual codebase on a single node as the most important thing for most people. Even with Rails you can do a lot with one server.
I find working with functional languages and type systems (w/ dialyzer in this case) makes my projects more maintainable in general. Might just be a personal bias towards them though. Note: I haven't used Phoenix but I'm a fan of Erlang.
This is mostly an observation I've had of imperative programming vs functional programming. Going back and creating the state of the application in my head is easier when I don't have to find out what else the code is affecting.
I personally really love using Rails. It's been very productive for me, and over the past several years I've been able to make a living off of it.
I see a lot of comments here about Elixir/Phoenix. Is the performance gain really that big? I currently serve 2-3 million requests per day on Rails on around $200 worth of servers, with at least one database call per request. In defense of Rails, there are so many libraries already built that I can get an app up and running fairly quickly. I really think it's a matter of whether the tool fits the bill.
For Discourse, which we host, we generally spend HALF the time in database calls and say 20-30% of the time in ActiveRecord bullshit.
Let's say we erased all of the app cost using some magic; we would still only be saving 50%, so twice as fast. Now erase some ORM bullshit, say another 50% faster, so at a super-ambitious, totally unrealistic setting we could be 4x faster.
For applications that spend most of their time in the DB, I am not sure how switching languages would give you a 12x improvement.
For non-dynamic stuff, what's behind the rendering doesn't matter anyway, because you can cache before the request even hits the app.
I would say languages that can leverage concurrency and do pooling well will generally put less pressure on the database for the same load.
Rails in particular is very wasteful in terms of connection management, typically checking out a database connection for the duration of the whole request. If you have a machine with 4 cores, a typical Rails deployment would start 4 processes, each with 10 database connections. So everything you can cache at the connection level (like prepared statements) is now spread across 40 connections in 4 different processes. I believe servers that rely on copy-on-write wouldn't reuse the cache at all.
Using Erlang/Elixir/Scala, you will start a single instance that multiplexes requests across all cores and shares the same pool. This typically means you have smaller pools, especially if you are not checking out connections for the duration of the whole request, and you can leverage database caches more efficiently, effectively putting less pressure on the database.
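The difference between the two checkout strategies can be sketched in plain Ruby. This is a toy illustration only; the `Pool` class and its method names are invented for this sketch, not any real library's API:

```ruby
# Toy connection pool: connections live in a thread-safe queue
# (Queue is in Ruby core, so no extra requires are needed).
class Pool
  def initialize(size)
    @conns = Queue.new
    size.times { |i| @conns << "conn-#{i}" }
  end

  # Rails-style: one connection is held for the whole request,
  # even while the app is rendering templates and not querying.
  def with_connection_for_request
    conn = @conns.pop
    yield conn
  ensure
    @conns << conn if conn
  end

  # Per-query checkout: the connection goes back to the pool between
  # queries, so a small pool can serve many concurrent requests.
  def query(sql)
    conn = @conns.pop
    "#{conn}: #{sql}"
  ensure
    @conns << conn if conn
  end
end

pool = Pool.new(2)
result = pool.query("SELECT 1")
# result == "conn-0: SELECT 1"; the connection is already back in the pool
```

With per-query checkout, the same two connections can serve any number of interleaved requests, which is the effect the comment above describes for single-instance BEAM/JVM deployments.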
Not only that, proper connection pools can actually foresee when requests are not going to be served on time because the database is under high load, and avoid sending the load to the database in the first place. We are using such an implementation in our ad-serving Erlang system (https://github.com/fishcakez/sbroker), although there is a lot of interesting queueing theory regarding sojourn times. Those requests would time out anyway, but the pool can foresee that with high precision and avoid making the database even slower.
In other words, when you say "half of the time is spent in database", it is always worth asking how much of that time is a consequence of how you are interacting with the database in the first place.
Another aspect to consider is whether the language (or framework) you are using allows you to depend less on the database. For example, by processing jobs concurrently instead of storing them in Redis/DB, or by building distributed clusters instead of pushing broadcasts to third-party systems, etc.
Sure, but what is stopping you from choosing a more efficient architecture for your Rails app?
For example, we use PgBouncer transaction pooling, so our connection counts are low. Tons of caching is done in NGINX and in the first middleware in the stack, we are not afraid of using SQL where needed for performance reasons, and so on.
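For reference, transaction pooling in PgBouncer is a single setting. A minimal `pgbouncer.ini` sketch (the database name, host, and pool size here are illustrative, not from the comment above):

```ini
[databases]
app = host=127.0.0.1 port=5432 dbname=app_production

[pgbouncer]
listen_port = 6432

; Return the server connection to the pool as soon as the transaction
; finishes, instead of holding it for the whole client session.
pool_mode = transaction
default_pool_size = 10
```

The app then points its database connection at port 6432, and many app-level connections share a small number of real PostgreSQL connections.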
For an application that is taking data from a database and turning it into JSON or HTML, spending 50% of the time in the database is not really a crime of bad architecture. In fact, even when we built Stack Overflow in super-fast C# we often saw a similar breakdown.
I just really object to the sentiment of "abandon ship, Y is 12x faster."
These are good points, but they regard the system as a whole. Is that what's "12 times faster"? I don't like big claims without seeing some more detailed numbers and methodology.
To clarify, I was not the one doing the claiming. My only point is that there is much more than meets the eye, so 12 times faster is possible. I haven't deployed Rails in a long while, so I have no data from the performance perspective.
That saving of $150 is equivalent to one hour of pro dev work time. It's hardly worth converting unless Elixir/Phoenix is much more productive and maintainable.
> That saving of $150 is equivalent to one hour of pro dev work time.
Fair point, unless you have more customers; then it can start to add up quickly. And by customers I mean requests as well. Being able to handle large traffic spikes and continuous connections is a game changer, from what I found.
Moreover, a huge benefit is the BEAM VM. I use Erlang (but same VM) and I have seen major advantages in being able to trace, debug, and hot-patch a running system. Some parts of the cluster would be crashing and restarting for a while, and there would be no need to wake everyone up. It can be fixed in the morning, kind of deal.
An interesting observation to think about: the importance of fault tolerance grows as quickly as, or quicker than, concurrency. What I mean is, if you have a service that handles more concurrent requests, it becomes a lot more important to have solid fault tolerance, because a single failed request, maybe causing a segfault, can kill all of them.
That is a bit hard to understand unless you see it in practice. You can think of it this way: it doesn't matter if the system can handle 100K requests/second. If it is not fault tolerant, when it crashes it handles 0 requests/second. Depending on how crappy the uptime is, you can average those numbers and get something that's pretty low.
> Being able to handle large traffic spikes and continuous connections is a game changer, from what I found
This isn't specific to Elixir/Phoenix/Beam.
> Some parts of the cluster would be crashing and restarting for a while, and there would be no need to wake everyone up
Like any well-designed system that separates concerns. Again, this isn't language- or framework-specific.
This stuff is what frustrates me most about the vocal early adopting crowd. If it makes your team vastly more productive, that sounds like a great win for you. If you subjectively like it better, that's also perfectly valid. But citing resilience, efficiency, scalability, and handling traffic spikes seems thin -- these are all things that can easily be attained with just about any well-factored system on any number of languages/frameworks.
> these are all things that can easily be attained with just about any well-factored system on any number of languages/frameworks
Not sure what your point is. I shared my experience working with systems; you can share yours (if you have any).
Yes, you can write any of the stuff in assembly. Erlang, Java, Rust, all can be written in assembly. You can serve web pages in that too, and even create distributed system by twiddling bits directly in the network card's receive buffer.
> This stuff is what frustrates me most about the vocal early adopting crowd.
Lol, early adopters ;-) Erlang turned 30 this year. It is probably older than the median age of people reading this comment.
> Again, this isn't language- or framework-specific.
Again, point to where I said "nothing like this can be done in any other frameworks". Code reload -- can do it in Java. But you'll get paper cuts. Python -- can spawn an OS process to get fault isolation, but, can only spawn so many. Distribution -- spin another instance in AWS, talk gRPC with it, but now you are paying more money and have another library to maintain.
> But citing resilience, efficiency, scalability, and handling traffic spikes seems thin
Well would you prefer a whitepaper? It is an informal conversation. Do you ask for proof / benchmarks / whitepapers every time you talk to someone at a meetup? That must be fun.
But if you want a fun reference, pick up your smartphone and navigate to a web page. Does it work? Good. There is a 50% chance it works because Erlang works. There you go: scalability, resilience, and efficiency ;-) you can share that at the next meetup's beer hour.
Since this is a thread about RoR5, perhaps the "early adopters" comment was targeted at the proponents of Phoenix/Elixir who fail to demonstrate any of the shortcomings of their chosen framework rather than people who have been relying on the reliability of their resilient BEAM to efficiently make a living for the past 30 something years.
> Like any well-designed system that separates concerns. Again, this isn't language- or framework-specific.
While it is true you can implement those systems in pretty much any Turing-complete language, we expect some languages and frameworks to make it simpler to write certain kinds of systems. For example, doing scientific calculations in Python or Julia is much easier than in Ruby. It is not impossible to do in Ruby, but that's just how things are today. Since Erlang was designed for handling millions of connections in a fault-tolerant and scalable fashion, we expect it to shine in the areas previously listed.
> citing resilience, efficiency, scalability, and handling traffic spikes seems thin -- these are all things that can easily be attained with just about any well-factored system on any number of languages/frameworks.
Easily attainable? Err... no? Depending on which system you want to build, it is hard in all languages, and Erlang (or Akka or whatever) is going to make it measurably less hard.
If your reference point is classical web applications that depend on the database, you still have a single point of failure (even when using primary and replicas). You could use primary-primary replication but that is still a world of pain and definitely not easy. Maybe that's fine for the applications you are building but that does not even scratch the requirements of building distributed systems at scale (and your app server talking to the database is a distributed system).
But we don't even need to go that far to see the benefits of Erlang, Elixir, Clojure, Go, etc. The fact that those languages enable developers to use all of their machine's resources efficiently should be enough reason to move on. Rails developers complain about slow boot times and slow test suites while using only 25% of their machine resources (1 of 4 cores, for example). The languages mentioned above have abstractions that make developers more productive, and yet they refuse to adapt. We saw this happen with the adoption of garbage collectors, and it is only a matter of time before we take concurrency for granted as we do memory management.
If we are talking web apps, all these great BEAM features don't mean much if your database doesn't have them. Today, scaling and providing redundancy is the easy part for a web app in most languages; having fault tolerance in the database is much harder.
And not just elixir/Phoenix, but the whole ecosystem around it.
I am relatively new to rails and have been astonished at the number of high quality gems that I can drop in and accelerate development. If I need to build something, my first step is always to google for a gem and see if some kind soul has already solved my problem.
Drupal had some of the same rich libraries, but in my short time working with it, I felt less like a software engineer and more like a software mechanic, piecing together pre-built parts. For whatever reason, perhaps the emphasis on TDD, Rails doesn't make me feel like that (which makes work more enjoyable for me).
All this is to say, I don't have any idea what the Elixir/Phoenix ecosystem is like, but until it is sizable, Rails has an edge in my book.
Depends on who is building it, who is supporting it after it is built, etc. I have seen companies with "technology sprawl" where project after project was done in the shiny new thing. No thanks, it's an ops and maintenance nightmare.
Not to say you shouldn't choose this solution, just do it with an eye to the future.
Are you doing anything with persistent connections? ActionCable won't be able to hit the same scale as Phoenix Channels (2 million usable active connections). That's a pretty big deal.
Agreed. It's getting pretty annoying. Every Rails post that hits the front page now feels like it's getting hijacked/side-tracked by Elixir/Phoenix evangelists.
Looks like a solid and relatively straightforward upgrade from Rails 4.2. It's hard not to feel Rails has become a bit of a slow-moving behemoth, though, with this release four years after 4.0. I've still got a couple clients using 3.2 from 2012 and things aren't that different.
The smart money at this point says a significant portion of the Rails community could begin moving to Elixir/Phoenix over the coming years. The advantages at the language/VM level just look impossible for Ruby to overcome, along with a blub effect kicking in.
> I've still got a couple clients using 3.2 from 2012 and things aren't that different.
Which is exactly what makes Rails 4 and 5 a reasonable choice for slow-moving projects on a budget (i.e. no money to pay for massive library updates every N months).
I think the vocal early adopters are going to jump ship, if they haven't already. But I'm more likely than ever to start new projects on Rails.
Of course, if there's anything in particular that you are missing from Rails, then the lack of fundamental changes can be a showstopper.
Unless they're able to pull off 3x3 in the next year, I expect Ruby/Rails will start to lose a lot of mind-share.
*edit: For clarification, I use Ruby every day and I think it's awesome! However, I don't want to see it lose popularity the way Perl did. I'm concerned that if the language doesn't continue to innovate and improve, it will lose ground. Lots of new languages have similarly great ergonomics but get concurrency right, or are faster. I don't think improved concurrency is really on the table for Ruby 3, so I think 3x3 is going to be critical to the long-term future of the language.
But that's just my prediction, and it's likely wrong.
Not sure why you're being downvoted for having an opinion. I think Ruby and Rails became the Java of the late 2000s. There are going to be lots of jobs and work for it for many decades to come. But I feel that newer technologies are slowly supplanting it. Such is the way of all technology, though.
Unfortunately, I agree with you. I've moved to Elixir for all web work, and the Erlang VM just blows Ruby out of the water at REST API/web services stuff. I'm a huge Ruby/Rails fan, though, and have earned a living off the Ruby ecosystem. I just don't see how Ruby can overcome the VM disadvantage, to be honest, though I'd really like to see them try.
They don't have to. With a good architecture (caching, etc) Rails scales well in the overwhelming majority of cases. Performance is only one of many things to worry about when choosing a technology.
It is, sometimes. But a cache hit in 50-150ns beats the pants off of a round trip to the backing database or microservice, which might take tens of milliseconds. That's 100,000-1,000,000x faster.
It's the same reason you'd never find a modern CPU without multiple layers of caching.
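The claimed ratio is easy to sanity-check using the round figures from above (the two latency constants are the comment's own estimates, not measurements):

```ruby
cache_hit_ns    = 100         # ~50-150 ns for a local memory access
db_roundtrip_ns = 10_000_000  # ~10 ms for a database round trip

speedup = db_roundtrip_ns / cache_hit_ns
# speedup == 100_000; at ~100 ms per round trip the ratio reaches ~1_000_000
```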
That's not the kind of caching GP was referring to, at least not to my reading. I think you'd be hard pressed to find any application level caching that could serve up a cache in the nanosecond range. At best you're looking at sub millisecond range, and that's when the cache lives in memory on the same hardware. Application caching on any modern cloud provider is almost always going to be a network call and best case you're looking at sub-10ms response time. And yes, while I definitely agree that it beats hundreds of ms by a wide margin, it can very easily bite you in the ass if you're not very careful about what gets cached. Caching is a performance hack, pure and simple.
True, I was only citing the figure for memory access times on x86 (according to a quick google). There's certainly going to be more overhead depending on how the cache is implemented, whether it's in memory, or swapped out/deliberately on disk, etc.
That said, I'd call it a fundamental architecture requirement (and one you can't really avoid anyway), given fundamental limitations imposed by the speed of light.
I use JRuby, which is great and an amazing piece of engineering. However, with Rails, you'll only see about a 20% speed bump, and out of the box it uses too much memory to run on Heroku's free tier. You'll also miss out on the gems with C extensions (for now; I think that's about to change). Additionally, there seems to be a performance regression around includes and case statements. Your devops story can also be a bit trickier, depending on what you're experienced with. I like JRuby a lot, but it's a mixed bag for the moment.
JRuby is painful to develop with compared to MRI. Running a test suite is slower; everything is slower, even if you turn off JIT to start the JVM faster and do other tricks that I don't remember. That's why Java developers usually work in IDEs that keep compiling and doing hot reloads constantly. Maybe that could apply to JRuby. Does anybody here have experience with a Ruby IDE and JRuby?
That's not surprising, it has some real problems with certain code, the edge cases are bad sometimes. We had a 20% perf bump, but probably 4x the memory use. That said, it gets better every release, and the JVM tooling is quite nice.
The upgrade is anything but easy if an existing application uses protected attributes extensively (removed from Rails 5). Because protected attributes does a shitload of monkey patching it's hard to know exactly what side effects its removal might have, especially in any application that uses nested attributes for mass assignment/initialization of a model's relations.
I've been working on this kind of application, and there was no problem with the "shitload of monkey patching". Most of the work was done in the controller layer to filter the parameters; after you finish this, you can safely remove the protected_attributes gem. To help with this, we created a migration sanitizer for the protected_attributes gem that I plan to integrate into the gem itself very soon.
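For anyone mapping the old model-level whitelist onto the new approach: with strong parameters, the filtering moves into the controller. A minimal plain-Ruby sketch of the pattern (the real implementation is `ActionController::Parameters#permit`; the `permit` function below is a stand-in written for illustration):

```ruby
# Stand-in for ActionController::Parameters#permit: keep only
# whitelisted keys so mass assignment can't touch anything else.
def permit(params, *allowed)
  params.select { |key, _value| allowed.include?(key) }
end

raw  = { name: "Alice", email: "a@example.com", admin: true }
safe = permit(raw, :name, :email)
# safe == { name: "Alice", email: "a@example.com" } -- :admin is dropped,
# whereas attr_accessible used to enforce this on the model instead
```

The point of the migration is exactly this relocation: each controller declares what it accepts, instead of every model declaring what it protects.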
I wrote it in SQL when I needed it, after all I knew SQL before AR came around. An OR is very rare and probably that's why it landed only in version 5.
Another implementation is Post.where(id: [1, 2]), which is SELECT * FROM posts WHERE id IN (1, 2). I guess a db would compile it into exactly the same code (dbs use different terms, but that's what it is), but the performance could be different inside AR/Arel.
It's not an omission per se, it's actually a challenging thing to implement. I'm sure they'd have done it sooner if it wasn't so hard to get right. Think of how the precedence rules would work: especially since the queries are composable, tacking on another .or() can do funny things semantically.
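A toy sketch of why composition makes this tricky (the `Relation` class below is invented for illustration, not ActiveRecord): chained `.where` calls AND together, so `.or` has to wrap the entire accumulated predicate rather than just the last clause.

```ruby
# Invented Relation class: predicates are just SQL fragments.
class Relation
  attr_reader :predicate

  def initialize(predicate = nil)
    @predicate = predicate
  end

  # Chained wheres AND onto the accumulated predicate.
  def where(clause)
    Relation.new(predicate ? "(#{predicate} AND #{clause})" : clause)
  end

  # .or must parenthesize both sides, or precedence goes wrong.
  def or(other)
    Relation.new("(#{predicate} OR #{other.predicate})")
  end
end

q = Relation.new.where("id = 1").where("draft = 'f'").or(Relation.new.where("id = 2"))
q.predicate
# => "((id = 1 AND draft = 'f') OR id = 2)"
```

Without the wrapping, a naive implementation could emit `id = 1 AND draft = 'f' OR id = 2`, which SQL parses as `id = 1 AND (draft = 'f' OR id = 2)` — a different query.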
I think I chose my words poorly. I'm sure it was a huge undertaking, but the lack of .or() is very noticeable in AR if you're coming from other ORMs or straight SQL.
Yay, Jeremy Evans spreading some Sequel knowledge ^_^
Big fan of Sequel; it's super sleek. As a side note to Jeremy's comment, while (I guess) he used Sequel.& to demonstrate that OR and AND conditions can be combined freely, there are more succinct ways of expressing that particular query:
# Using a "virtual row" block:
Post.where(id: 1).or{(id =~ 2) & (name =~ 'Foo')}
# Or, if you don't fancy that kind of block magic, simply passing a Hash,
# the same way as with .where
Post.where(id: 1).or(id: 2, name: 'Foo')
Post.where(id: 1).or Post.joins(:author).where(author: { name: 'John' })
ArgumentError: Relation passed to #or must be structurally compatible. Incompatible values: [:joins, :references]
It appears that only the filter clauses are allowed to be different (similar errors occur if :select, :order, :group, or :limit differ), in which case it's no more powerful than Sequel, just more of a pain to use. You can easily implement ActiveRecord's behavior in Sequel if you want to combine filter clauses for arbitrary datasets.
To my mind the big news is Turbolinks, a simple technology for building SPAs on the following foundations:
- the robustness & ecosystem of the Rails server side (great testing stack & battle-tested backend)
- the lightweight approach of Rails/JavaScript, which is opting out of jQuery: meaning that SPAs won't have to include jQuery and the whole JS world (client-side speed will be greatly enhanced)
- the overall simplicity (no huge JavaScript stack piling up Angular, React, Redux, Flux, webpack, etc.)
Odd they didn't mention performance improvements in the blog post. From my understanding, there have been some massive gains (some of the commits date back to early '15!). One that has been killing me is that model schemas were not preloaded on app boot, which led to the first few dozen requests (depending on how many workers you have on Puma) performing 100+ SQL queries and adding over 1s to response times. Not only were they not preloaded, they were not cached across ActiveRecord::Base.connection instances.
Been using Elixir / Phoenix as well, and it's been a breath of fresh air. That said, I'm glad that rails has been looking at projects like Phoenix for inspiration as it continues to grow and adapt.
> I'm glad that rails has been looking at projects like Phoenix for inspiration
Other way around.
Phoenix was explicitly inspired by Rails and founded by Rails core members. Elixir was first created by a Rails core member. Phoenix is the performance-really-matters successor to Rails.
You're talking about the original inspiration for Phoenix, while the previous poster is talking about some of the new features in Rails 5. They've both influenced each other.
Concurrency aside, why is Elixir preferred over Ruby when it doesn't even have a native array implementation? No, lists and tuples are no substitute, nor are maps with numeric keys, as José has suggested. If you want an array in Elixir, your only option is Erlang's implementation, which ain't pretty: http://erlang.org/doc/man/array.html. When I raised this issue on the mailing list and on IRC, the response was invariably a defensive "I've never needed arrays", "We use tuples for most things in Elixir", or "Array performance characteristics are difficult to optimise in a functional language such as Elixir". I just find this disappointing.
Why do you want an array? You mean you want a memory area with adjacent cells laid out, to talk to C perhaps? Perhaps you want a list, a binary, a sorted set, a tuple, a map, and so on. You have to tell us a bit more about your use case.
I used Erlang for many years, and in the last one full time. Not once have I needed an "array". It seems you picked one odd feature no one uses and are upset about it. I think it is just a small misunderstanding; if you share a bit more about what you are trying to do, you'll get more help.
> I just find this disappointing
Did you explain your use case to them? If you just said "I need an array", I see their response as perfectly rational. And from what I've seen, that is probably the friendliest and most approachable community, especially for newcomers.
I would need an array for processing any large data set as an indexed collection. Why is that considered an "odd feature no-one uses" when it's a fundamental data type in most mainstream languages?
What if I want to push/pop it and also access it randomly by index? PHP gets a lot of stick for making arrays serve double purpose but the Elixir advocates of using maps for array purposes are basically arguing for the same thing, no?
Then you are not in functional-programming land anymore, and you may want to use an imperative language.
I think it was John Hughes who said, "The real question is not 'when should I use functional programming?' but 'when should I use imperative programming?' And the answer is 'when you need random access and complete control of your memory.'"
Does the absence of arrays look like a problem to you, while the general immutability of almost everything and the message-based approach look totally normal and easy to grasp?
I suspect that to use Elixir, you need to study a new way of doing things first. Then you'll see arrays in a different light.
I have studied Elixir and other functional languages such as Clojure, where vectors are one of the three basic collection types, so the "new way of doing things" isn't an issue for me. Elixir took a lot of inspiration from Clojure, so the two languages are not that different.
Expectedly, per Clojure documentation, "Vectors support access to items by index in log32N hops". This means that a Clojure vector is essentially a tree-based map; a "true" array (as in C or Java) would have O(1) time for access by index.
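That said, log32 N grows so slowly that in practice it is close enough to constant; a quick check of the hop counts:

```ruby
# Hops needed through a 32-ary trie (Clojure-style vector) by element count:
hops = [1_000, 1_000_000, 1_000_000_000].map do |n|
  (Math.log(n) / Math.log(32)).ceil
end
# hops == [2, 4, 6] -- even a billion elements need only ~6 hops
```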
I've just started learning Elixir out of interest, so I can't quite answer your question. However, I'd like to ask: what have you found to be difficult to do using the Elixir types such as tuples, lists, keyword lists, and maps that would otherwise be easy to do using arrays? I'm asking out of genuine curiosity, because I've specifically noticed the lack of arrays when learning Elixir and I'm quite used to using them in other languages.
Elixir has no native array syntax nor any array functions in the stdlib, so arrays are basically second-class citizens rather than completely absent. Ask not why I need them but why anyone might need them if Elixir is supposed to be a general-purpose language. Why does Clojure need them or Ruby or Python (lists)? If I want to process data as a large indexed array Elixir isn't going to help me, and some Elixir devs have conceded that it may not be suitable for large-scale data processing. We all know Elixir/Erlang excels at distributed concurrency, but the question here is whether it is deficient in other areas, making it too much of a niche language.
> Why does Clojure need them or Ruby or Python (lists)?
So you want a list, it seems:
> l = [1, 2, 3]
[1, 2, 3]
> [h | t] = l
[1, 2, 3]
> h
1
> t
[2, 3]
> If I want to process data as a large indexed array Elixir isn't going to help me
But neither will Python. If you want a large indexed array, you probably want NumPy.
See, now you are revealing a bit more about your use case; that's helpful. So you have a large amount of data and want to query and process it. In that case you might want to check out ETS. A nice benefit: you can have cool matching or query list comprehensions over it, and you can also access it concurrently from multiple processes.
> but the question here is whether it is deficient in other areas, making it too much of a niche language.
Of course it is deficient. C is deficient. Java and C++, they all are. Every language is deficient in some way. I haven't found a perfect one yet. Still looking... Maybe Rust, who knows; still struggling to learn it.
The original poster specifically mentioned lists in Python, i.e. implying they wanted something equivalent to that ("Why does Clojure need them or Ruby or Python (lists)").
It definitely doesn't fit as a hardcore number crunching language. People do regularly just setup their number-y code in C or similar and call out to it so you can handle the distributed stuff with a language that works well there and the low level stuff with a different one. Similar to numpy/etc. calling to C or FORTRAN libs.
> making it too much of a niche language.
No language will be the "one true language" that excels at everything.
If you need constant-time access by index to a sequence of fixed-size elements, you can use binaries. If you want constant-time access to a sequence of fixed size, you can use tuples. If you want both, you can get close with maps using integer keys, or with trees (which I believe are used by Clojure behind the scenes for arrays).
If it comes to that I can just use Erlang's implementation but my question is why the omission in Elixir when arrays are supported in Erlang? With Elixir's excellent macro support surely an Elixir wrapper around Erlang's arrays wouldn't be so difficult?
Elixir wrappers that don't significantly enhance what Erlang is already exposing are discouraged within the Elixir community. As I type this, I realize that this might enrage you even more than the lack of arrays ever could. :)
This isn't a defensive response, just a genuinely curious one.
For certain types of programming - generally in lower-level languages - arrays can be essential for performance. But then my assumption is that if you needed that kind of performance, you'd be using those languages.
So outside of performance, what's the functional difference between an actual array implementation - and a thing that looks, walks, and quacks like a duck?
Just to be clear, Javascript's arrays will, depending on the implementation, dynamically switch between sparse and compact implementations. A sparse javascript array literally behaves like a javascript object or a map with an integer index. Ruby and Python's arrays will grow as long as the allocator can alloc more memory onto the end, but has to copy the entire array if there's memory fragmentation...it's not quite the same as a C array.
Because of the immutable data, your best hopes are actually vectors or maps.
Btw, the Erlang library you mention implements arrays on top of a tree of 10-ary(-ish) tuples. It'd need to be measured, but I'd be willing to bet that the native maps are faster at access, insert, and delete. The maps are implemented in C, and the worst case for all operations would be O(log n) or similar.
Python's lists are lists in name only when compared with lists in functional languages. While Python may offer additional array/vector implementations, its lists are equivalent to arrays or vectors in other languages such as Ruby.
Since you refuse to be more specific, it sounds like you're just not willing to learn the language and the way problems are solved with it.
Expecting mutable C-type arrays in elixir/erlang is like expecting classes and lambdas in an assembly language. It's just not a good fit for what the language was designed to do.
Not the OP, but I've had a side project in Elixir, and often I wanted to browse/play with some of the last saved models in the REPL. When I get the collection using Ecto's `Repo.all`, I can't just do `bets[4]` or `bets[-2]`; I have to do weird gymnastics like `hd(tl(bets |> Enum.reverse))`.
I actually don't like Rails' convention-over-configuration school of thought. It makes everything implicit. For any large Rails app it's difficult to reason about how things work unless you learn all the conventions by heart (and by the way, these conventions don't seem to be well documented).
If you are a Rails developer, you know all the conventions, and they help you find code with only an editor and a file manager; no need for IDEs. It's very easy to learn a new application, unless the original developer wanted to be clever, which usually means it was one of his/her first Rails projects.
However, there are other schools of thought, and not everybody likes the Rails way. That's fair; it's a big world with space for every opinion and tool.
I never learned "all the conventions". There are countless cases where I had to dig into the source code to find out how a certain feature works, how it interacts with another feature, or why the abstraction doesn't behave the way I want.
I guess being implicit just makes it harder to find out which part of the abstraction is leaking when it happens.
But again, I'm not a good Rails programmer. I was just never a fan of how conventions play such a big part while not being well documented.
You could have! It was reverted, so we made https://github.com/rails-api in the meantime, and this was finally (literally four years later) pulled upstream now.
Is anyone aware of a site that tracks the memory footprints of default Rails apps? I know that this may not be the greatest indication of the memory footprint of an actual running app, but I feel it'd still be interesting data. It'd have to be segmented by Ruby version, of course.
It sounds like it wasn't so much a decision to keep it third-party as a decision not to combine it yet:
> The feedback I got during the proposal process for putting Channels into Django 1.10 (it did not get in before the deadline, but will still be developed as an external app with 1.10) was valuable; some of the more recent changes and work, such as backpressure and the local-and-remote Redis backend, are based on feedback from that process, and I imagine more tweaks will emerge as more things get deployed on Channels.
It seems to be taking more of a South approach, and I really like this. Shake out conceptual issues, find an API that works well, iterate a bunch. Django release cadence is pretty slow, so having it simmer outside of it makes a ton of sense.
If it's a big hit like South was, it'll get rolled in and we won't have to immediately start deprecating things due to an un-tried design.
That's fine, and I don't use Rails either - but this post was about Rails. People who use or want to use Rails are likely to benefit from the new features. Even if that number has declined somewhat, it's still a lot of people, that still merits development, it's still a topic of interest to others on HN, and it's easy to avoid if you find that you aren't personally interested.
Besides which, I can't see that a switch from Rails to Node is going to have any strong advantage if you are equally proficient in each. If they're really about the same, this would boil down to "I like JS more than Ruby," which is not about the technology so much as personal feelings.
http://railstutorial.org/book
Sales actually just launched on Tuesday (announcement here: https://news.learnenough.com/rails-5-edition-of-rails-tutori...), and you can pick up your copy of the new 4th edition here:
http://railstutorial.org/code/launch