I would say languages that can leverage concurrency and do pooling well will generally put less pressure on the database for the same load.
Rails in particular is very wasteful in terms of connection management, typically checking out a database connection for the duration of the whole request. On a machine with 4 cores, a typical Rails deployment would start 4 processes, each with 10 database connections. So everything you can cache at the connection level (like prepared statements) is now spread across 40 connections in 4 different processes. I believe servers that rely on copy-on-write wouldn't reuse the cache at all.
Using Erlang/Elixir/Scala, you start a single instance that multiplexes requests across all cores and shares the same pool. This typically means you have smaller pools, especially if you are not checking out connections for the duration of the whole request, and you can leverage database caches more efficiently, effectively putting less pressure on the database.
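To make the pooling difference concrete, here is a minimal sketch (in Python, with stub connections; real pools such as Ecto's DBConnection add queuing, timeouts and connection ownership) of checking a connection out per query rather than per request:

```python
import queue

class Pool:
    """Minimal shared connection pool (illustrative only)."""

    def __init__(self, conns):
        self._idle = queue.Queue()
        for conn in conns:
            self._idle.put(conn)

    def run(self, fn):
        # Check a connection out only for the duration of one query,
        # not the whole request, so a small pool can serve many
        # concurrent requests.
        conn = self._idle.get()
        try:
            return fn(conn)
        finally:
            self._idle.put(conn)

# Four stub "connections" shared by every request in the process:
pool = Pool(["conn-%d" % i for i in range(4)])
result = pool.run(lambda conn: conn)  # checkout spans just this call
```

Compare that with one pool of 10 per OS process: the multiplexed runtime gets the same concurrency out of far fewer connections, and each connection's prepared-statement cache is reused by all requests.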
Not only that, proper connection pools can actually foresee when requests are not going to be served on time because the database is under high load, and avoid sending the load to the database in the first place. We are using such an implementation in our ad-serving Erlang system (https://github.com/fishcakez/sbroker); there is a lot of interesting queueing theory regarding sojourn times behind it. Those requests would time out anyway, but the pool can foresee that with high precision and avoid making the database even slower.
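The shedding idea can be sketched like this (Python, and only loosely inspired by the sojourn-time approach; sbroker's actual algorithms are more sophisticated):

```python
import time
from collections import deque

class SheddingQueue:
    """Sketch of deadline-aware load shedding for a pool's wait queue."""

    def __init__(self, timeout):
        self.timeout = timeout   # how long a caller will wait (seconds)
        self._waiting = deque()  # (enqueue_time, request) pairs

    def push(self, request):
        self._waiting.append((time.monotonic(), request))

    def pop(self):
        # Drop requests whose queue wait already exceeds the caller's
        # deadline: they would time out anyway, so sending them to the
        # database only makes it slower for everyone else.
        while self._waiting:
            enqueued, request = self._waiting.popleft()
            if time.monotonic() - enqueued <= self.timeout:
                return request
        return None
```

The point is that the shedding happens in the pool, before the query ever reaches an overloaded database.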
In other words, when you say "half of the time is spent in database", it is always worth asking how much of that time is a consequence of how you are interacting with the database in the first place.
Another aspect to consider is whether the language (or framework) you are using allows you to depend less on the database. For example, by processing jobs concurrently instead of storing them in Redis/the DB, by building distributed clusters instead of pushing broadcasts to third-party systems, etc.
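For instance, a concurrent runtime lets you run background work inside the web process itself instead of serializing jobs into Redis/DB for a separate worker fleet (sketch in Python; `send_welcome_email` is a hypothetical job):

```python
from concurrent.futures import ThreadPoolExecutor

def send_welcome_email(user_id):
    # Placeholder for real work (rendering and sending the email).
    return "emailed %d" % user_id

# The job runs on a thread pool in the same process; no queue table,
# no Redis round-trip, no separate worker deployment.
executor = ThreadPoolExecutor(max_workers=4)
future = executor.submit(send_welcome_email, 42)
result = future.result()  # in a real app you would not block like this
```

You give up the durability of a persisted queue, of course, which is why this trade-off only makes sense for work you can afford to lose on restart.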
Sure, but what is stopping you from choosing a more efficient architecture for your Rails app?
For example, we use pgbouncer transaction pooling, so our connection counts are low. Tons of caching is done in NGINX and in the first middleware in the stack, we are not afraid of using SQL where needed for performance reasons, and so on.
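For reference, the relevant pgbouncer settings look roughly like this (values are illustrative, not our production config):

```ini
[pgbouncer]
; Transaction pooling: a server connection is assigned only for the
; duration of a transaction, so many app processes share few backends.
pool_mode = transaction
max_client_conn = 400
default_pool_size = 20
```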
For an application that is taking data from a database and turning it into JSON or HTML, spending 50% of the time in the database is not really a crime of bad architecture. In fact, even when we built Stack Overflow in superfast C#, we often saw a similar breakdown.
I just really object to the sentiment of "abandon ship, Y is 12x faster."
These are good points, but they regard the system as a whole. Is that what's "12 times faster"? I don't like big claims without seeing some more detailed numbers and methodology.
To clarify, I was not the one doing the claiming. My only point is that there is much more than meets the eye, so 12 times faster is possible. I haven't deployed Rails in a long while, so I have no data from the performance perspective.