For Discourse, which we host, we generally spend HALF the time in database calls and, say, 20-30% of the time in ActiveRecord bullshit.
Let's say we erased all of the app cost using some magic: we would still only be saving 50%, so twice as fast. Now erase some ORM bullshit for, say, another 50%, and at a super ambitious, totally unrealistic setting we could be 4x faster.
For applications that spend most of their time in the DB, I am not sure how switching languages would give you a 12x improvement.
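The back-of-the-envelope math above is essentially Amdahl's law; a quick sketch using the fractions quoted above (note that hitting 4x means eliminating 75% of total time, and 12x implies eliminating nearly 92% of it):

```ruby
# Amdahl's law: overall speedup when a fraction of total time
# is eliminated entirely and the rest is left untouched.
def speedup(fraction_removed)
  1.0 / (1.0 - fraction_removed)
end

speedup(0.5)   # erase the entire app half           => 2.0x
speedup(0.75)  # erase the app half plus half the DB => 4.0x
speedup(0.917) # what a 12x claim implies removing   => ~12x
```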
For non-dynamic stuff, what is behind the rendering doesn't matter anyway, because you can cache before the request even hits the app.
I would say languages that can leverage concurrency and do pooling well will generally put less pressure on the database for the same load.
Rails in particular is very wasteful in terms of connection management, typically checking out a database connection for the duration of the whole request. On a machine with 4 cores, a typical Rails deployment would start 4 processes, each with 10 database connections. So everything you can cache at the connection level (like prepared statements) is now spread across 40 connections in 4 different processes. I believe servers that rely on copy-on-write wouldn't share those caches at all, since each forked process must open its own connections.
Using Erlang/Elixir/Scala, you start a single instance that multiplexes requests across all cores and shares the same pool. This typically means you have smaller pools, especially if you are not checking out connections for the duration of the whole request, and you can leverage database caches more efficiently, effectively putting less pressure on the database.
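The connection arithmetic above is worth making explicit; a small sketch using the figures from the comments (4 cores, 10 connections per pool, and a hypothetical 100 distinct prepared statements):

```ruby
# Process-per-core Rails: each process owns a private pool, so
# per-connection state (prepared statements, etc.) is duplicated.
processes = 4
pool_size = 10
rails_connections = processes * pool_size   # 40 server connections

# Single multiplexing VM (BEAM/JVM): one shared pool for all cores.
# It can also be smaller when connections are held per query rather
# than for the whole request.
shared_pool = 10

# A statement prepared once per connection ends up cached this many times:
statements = 100
rails_cache_entries = statements * rails_connections  # 4000
beam_cache_entries  = statements * shared_pool        # 1000
```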
Not only that, proper connection pools can actually foresee when requests are not going to be served on time because the database is under high load, and avoid sending the load to the database in the first place. We are using such an implementation in our ad-serving Erlang system (https://github.com/fishcakez/sbroker); there is a lot of interesting queueing theory regarding sojourn times behind it. Those requests would time out anyway, but the pool can foresee that with high precision and avoid making the database even slower.
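As a loose illustration of the idea only (names and numbers here are hypothetical, and this is not sbroker's actual API or algorithm, which uses CoDel-style queue management on measured sojourn times): a pool can refuse work whose predicted wait already exceeds its deadline.

```ruby
# Load-shedding checkout sketch: estimate how long a new request
# would wait for a connection, and shed it up front if that wait
# already blows its deadline -- it would only time out later while
# adding load to an already overloaded database in the meantime.
class SheddingPool
  def initialize(connections:, avg_query_ms:)
    @connections = connections
    @avg_query_ms = avg_query_ms  # rolling service-time estimate
    @queued = 0
  end

  # true  => admitted (caller may wait for a connection)
  # false => shed immediately instead of queueing
  def admit?(deadline_ms)
    predicted_wait_ms = (@queued.to_f / @connections) * @avg_query_ms
    return false if predicted_wait_ms > deadline_ms

    @queued += 1
    true
  end

  def done
    @queued -= 1
  end
end
```

A real implementation measures actual sojourn times and applies proper queueing theory rather than a naive average, but the shape of the decision is the same.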
In other words, when you say "half of the time is spent in the database", it is always worth asking how much of that time is a consequence of how you are interacting with the database in the first place.
Another aspect to consider is whether the language (or framework) you are using allows you to depend less on the database: for example, by processing jobs concurrently instead of storing them in Redis/the DB, by building distributed clusters instead of pushing broadcasts to third-party systems, etc.
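As a toy example of the first point (hypothetical code, not a real library): instead of serializing every job into Redis for a separate worker process, a runtime with cheap concurrency can keep the queue in the VM itself.

```ruby
require "thread"

# In-process job queue: worker threads drain a queue that lives in
# the VM, instead of round-tripping every job through Redis/the DB.
class InProcessJobs
  def initialize(workers: 4)
    @queue = Queue.new
    @threads = Array.new(workers) do
      Thread.new do
        # Queue#pop blocks; a pushed nil is our shutdown signal.
        while (job = @queue.pop)
          job.call
        end
      end
    end
  end

  def enqueue(&job)
    @queue << job
  end

  def shutdown
    @threads.size.times { @queue << nil }
    @threads.each(&:join)
  end
end
```

(The obvious caveat: jobs held in the VM die with the VM, which is exactly where BEAM-style supervision and distribution start to matter.)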
Sure, but what is stopping you from choosing a more efficient architecture for your Rails app?
For example, we use pgbouncer transaction pooling, so our connection counts are low. Tons of caching is done in NGINX and in the first middleware in the stack, we are not afraid of using SQL where needed for performance reasons, and so on.
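For reference, the pgbouncer side of that setup is a few lines of config (values here are illustrative, not Discourse's actual settings):

```ini
; pgbouncer in transaction-pooling mode: many app connections
; multiplexed onto a small number of real server connections.
[databases]
discourse = host=127.0.0.1 port=5432 dbname=discourse

[pgbouncer]
pool_mode = transaction
default_pool_size = 20
max_client_conn = 400
```

One classic caveat: transaction pooling breaks session-level features such as named prepared statements, which interacts with the connection-level caching discussed upthread.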
For an application that is taking data from a database and turning it into JSON or HTML, spending 50% of the time in the database is not really a crime of bad architecture. In fact, even when we built Stack Overflow in superfast C#, we often saw a similar breakdown.
I just really object to the sentiment of "abandon ship, Y is 12x faster."
These are good points, but they regard the system as a whole. Is that what's "12 times faster"? I don't like big claims without seeing some more detailed numbers and methodology.
To clarify, I was not the one doing the claiming. My only point is that there is much more than meets the eye, so 12 times faster is possible. I haven't deployed Rails in a long while, so I have no data from the performance perspective.
That saving of $150 is equivalent to one hour of pro dev work time. It's hardly worth converting unless Elixir / Phoenix is much more productive and maintainable.
> That saving of $150 is equivalent to one hour of pro dev work time.
Fair point, unless you have more customers; then it can start to add up quickly. And by customers I mean requests as well. Being able to handle large traffic spikes and continuous connections is a game changer, from what I've found.
Moreover, a huge benefit is the BEAM VM. I use Erlang (but it's the same VM) and I have seen major advantages in being able to trace, debug, and hotpatch a running system. Some parts of the cluster could be crashing and restarting for a while, and there would be no need to wake everyone up; it's a fix-it-in-the-morning kind of deal.
An interesting observation to think about: the importance of fault tolerance grows as quickly as, or quicker than, concurrency. What I mean is, if you have a service that handles more concurrent requests, it becomes a lot more important to have solid fault tolerance, because a single failed request, maybe one causing a segfault, can kill all of them.
That is a bit hard to appreciate unless you see it in practice. You can think of it this way: it doesn't matter if the system can handle 100K requests/second. If it is not fault tolerant, when it crashes it handles 0 requests/second. Depending on how crappy the uptime is, you can average those numbers and get something that's pretty low.
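The averaging argument in concrete terms (numbers are illustrative):

```ruby
# Effective long-run throughput is peak throughput scaled by uptime.
def effective_rps(peak_rps, uptime_fraction)
  peak_rps * uptime_fraction
end

effective_rps(100_000, 1.0)   # never crashes:      100000 req/s
effective_rps(100_000, 0.5)   # down half the time:  50000 req/s
effective_rps(20_000, 0.999)  # slower but solid:   ~19980 req/s
```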
> Being able to handle large traffic spikes and continous connections is a game change from what I found
This isn't specific to Elixir/Phoenix/Beam.
> Some parts of the cluster would be crashing and restarting for a while, and there would be no need to wake everyone up
Like any well-designed system that separates concerns. Again, this isn't language- or framework-specific.
This stuff is what frustrates me most about the vocal early-adopting crowd. If it makes your team vastly more productive, that sounds like a great win for you. If you subjectively like it better, that's also perfectly valid. But citing resilience, efficiency, scalability, and handling traffic spikes seems thin -- these are all things that can easily be attained with just about any well-factored system in any number of languages/frameworks.
> these are all things that can easily be attained with just about any well-factored system in any number of languages/frameworks
Not sure what your point is. I shared my experience working with these systems; you can share yours (if you have any).
Yes, you can write any of this stuff in assembly. Erlang, Java, Rust: all can be written in assembly. You can serve web pages in that too, and even create a distributed system by twiddling bits directly in the network card's receive buffer.
> This stuff is what frustrates me most about the vocal early adopting crowd.
Lol, early adopters ;-) Erlang turned 30 this year. It is probably older than the median age of people reading this comment.
> Again, this isn't language- or framework-specific.
Again, point to where I said "nothing like this can be done in any other framework". Code reload -- can do it in Java, but you'll get paper cuts. Python -- can spawn an OS process to get fault isolation, but you can only spawn so many. Distribution -- spin up another instance in AWS and talk gRPC with it, but now you are paying more money and have another library to maintain.
> But citing resilience, efficiency, scalability, and handling traffic spikes seems thin
Well, would you prefer a whitepaper? This is an informal conversation. Do you ask for proofs / benchmarks / whitepapers every time you talk to someone at a meetup? That must be fun.
But if you want a fun reference, pick up your smartphone and navigate to a web page. Does it work? Good. There is a 50% chance it works because Erlang works. There you go: scalability, resilience and efficiency ;-) You can share that at the next meetup's beer hour.
Since this is a thread about RoR 5, perhaps the "early adopters" comment was targeted at the proponents of Phoenix/Elixir who fail to acknowledge any of the shortcomings of their chosen framework, rather than at people who have been relying on the reliability of their resilient BEAM to efficiently make a living for the past 30-something years.
> Like any well-designed system that separates concerns. Again, this isn't language- or framework-specific.
While it is true you can implement those systems in pretty much any Turing-complete language, we expect some languages and frameworks to make it simpler to write certain kinds of systems. For example, doing scientific calculations in Python or Julia is much easier than in Ruby. It is not impossible to do in Ruby, but that's just how things are today. Since Erlang was designed for handling millions of connections in a fault-tolerant and scalable fashion, we expect it to shine in the areas previously listed.
> citing resilience, efficiency, scalability, and handling traffic spikes seems thin -- these are all things that can easily be attained with just about any well-factored system in any number of languages/frameworks.
Easily attainable? Err... no? Depending on which system you want to build, it is hard in all languages, and Erlang (or Akka or whatever) is going to make it measurably less hard.
If your reference point is classical web applications that depend on the database, you still have a single point of failure (even when using a primary and replicas). You could use primary-primary replication, but that is still a world of pain and definitely not easy. Maybe that's fine for the applications you are building, but it does not even scratch the requirements of building distributed systems at scale (and your app server talking to the database is a distributed system).
But we don't even need to go that far to see the benefits of Erlang, Elixir, Clojure, Go, etc. The fact that those languages enable developers to use all of their machine resources efficiently should be enough reason to move on. Rails developers complain about slow boot times and slow test suites while using only 25% of their machine resources (1 of 4 cores, for example). The languages mentioned above have abstractions that make developers more productive, and yet they refuse to adapt. We saw this happen with the adoption of garbage collectors, and it is only a matter of time before we take concurrency for granted as we do memory management.
If we are talking web apps, all these great BEAM features don't mean much if your database doesn't have them. Today, scaling and providing redundancy is the easy part for a web app in most languages; having fault tolerance in the database is much harder.
And not just elixir/Phoenix, but the whole ecosystem around it.
I am relatively new to Rails and have been astonished at the number of high-quality gems that I can drop in to accelerate development. If I need to build something, my first step is always to Google for a gem and see if some kind soul has already solved my problem.
Drupal had some of the same rich libraries, but in my short time working with it, I felt less like a software engineer and more like a software mechanic, piecing together pre-built parts. For whatever reason, perhaps the emphasis on TDD, Rails doesn't make me feel like that (which makes work more enjoyable for me).
All this is to say: I don't have any idea what the Elixir/Phoenix ecosystem is like, but until it is sizable, Rails has an edge in my book.
Depends on who is building it, who is supporting it after it is built, etc. I have seen companies with "technology sprawl" where project after project was done in the shiny new thing. No thanks, it's an ops and maintenance nightmare.
Not to say you shouldn't choose this solution, just do it with an eye to the future.
If you spend $200, you could theoretically host the same product on $50 worth of hardware.