Both the submission and the target of the submission dismiss dynamic typing as a source of slowdowns, in the unfortunately haughty, completely unsupported manner that is too common in the industry. It is, in effect, the head-in-the-sand approach: if you don't want something to be true, simply keep saying it isn't and somehow that will become reality. So I guess we need to define what "slow" is, using as an example an empirical test (asm.js) that certainly serves better than some people waving their hands about innovations in JIT compilers.
The primary optimization that ASM.JS brings is its static type system. With that, and the obvious benefits to the code path, intensive activities can run at roughly 1/2 native speed, versus the 1/12th or so of native speed available with dynamic typing.
Is 1/12th native speed "slow"? In most cases -- where the code is the duct-tape that glues together native code, as on web servers, in admin scripts, etc. -- no, it isn't slow at all. But from a basic optimization perspective, asm.js sure seems to indict dynamic typing if you really care about squeezing the most out of every cycle.
No, the primary optimization asm.js brings (at least in OdinMonkey) is AOT compilation.
Fully typed code should produce the same code from IonMonkey without OdinMonkey: the problem is you have to get up to around 10k calls of a function (at least last I looked!) before you get there; but once you are there it is just as good. It's the build-up, through the interpreter and the less efficient code-generators (less efficient in terms of the code they generate, that is; they're more efficient in terms of their own runtime), which really hurts.
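To make that warm-up point concrete, here's a sketch (the `addAndDouble` name and the 20,000-iteration loop are mine, and the ~10k threshold is only the comment's recollection, not a documented constant): a type-stable, fully coerced function runs through the interpreter and baseline tiers for its first several thousand calls, and only then does the optimizing compiler take over.

```javascript
// A fully int32-coerced function in the asm.js style the thread discusses.
function addAndDouble(a, b) {
  a = a | 0;                 // coerce argument to int32
  b = b | 0;
  var sum = (a + b) | 0;
  return (sum + sum) | 0;
}

// Warm the call site well past a typical tier-up threshold, so the
// engine's optimizing compiler (e.g. IonMonkey) has a chance to kick in.
let acc = 0;
for (let i = 0; i < 20000; i++) {
  acc = addAndDouble(acc, 1) | 0;
}

console.log(addAndDouble(3, 4)); // 14
```

The semantics never change across tiers; only the generated code does, which is exactly the "build-up" cost described above.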
> No, the primary optimization asm.js brings (at least in OdinMonkey) is AOT compilation.
But it can do that precisely because of the static typing, which loops back to exactly what I said. Looser, more dynamic languages always have a cost associated with that flexibility.
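As one concrete sketch of that cost (the names and values here are mine, not from the thread): the very same property access can be specialized when the call site only ever sees one object shape, but a mixed-shape call site forces the JIT into a polymorphic inline cache or a generic lookup.

```javascript
// One function, two call-site profiles.
function sumX(objs) {
  let total = 0;
  for (const o of objs) total += o.x;  // property access the JIT must specialize
  return total;
}

// Monomorphic: every object has the identical shape {x}.
const mono = [{x: 0}, {x: 1}, {x: 2}, {x: 3}];

// Polymorphic: same values, but differing shapes and property orders.
const poly = [{x: 0}, {x: 1, y: 1}, {y: 2, x: 2}, {x: 3, z: 3}];

console.log(sumX(mono)); // 6
console.log(sumX(poly)); // 6 -- same answer, but a less optimizable call site
```

The results are identical; the flexibility to mix shapes is precisely what a static type system would forbid, and what the engine pays for at runtime.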
Arbitrary JS can be as statically typed as asm.js is (because, well, asm.js is just a subset). There's no reason why one couldn't compile AOT something like:
function add_and_double(a, b) {
a = a | 0;
b = b | 0;
var sum = (a + b) | 0;
return (sum + sum) | 0;
}
Yes, it'd be trivial to fall off the AOT-compilation path (especially without a defined subset that gets AOT compilation, which would hence likely differ between implementations), so static verification that you're on the AOT-compilation path is valuable (which asm.js provides), but it doesn't fundamentally alter the semantics of the language.
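For anyone unfamiliar with the idiom in the snippet above: `| 0` in plain JS truncates any number to a signed 32-bit integer (ECMAScript's ToInt32), which is exactly the annotation an asm.js validator keys on.

```javascript
// `| 0` (bitwise OR with zero) applies ToInt32 to its operand.
console.log(2.7 | 0);          // 2            (fraction dropped)
console.log(-2.7 | 0);         // -2           (truncates toward zero)
console.log(4294967296 | 0);   // 0            (2^32 wraps around)
console.log(2147483648 | 0);   // -2147483648  (wraps into int32 range)
console.log(NaN | 0);          // 0
```

Because these coercions run on every entry and exit, the function's values are provably int32 throughout, with no semantic change to the language.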
You seem to be saying that the AOT path requires constrained typing and a strict subset of the language semantics, which agrees with the parent's point that static type semantics matter in terms of performance.
But it is extremely specific to asm.js -- those optimizations are possible only because static typing is mandated. It is absolutely a subset of JS, but general JS can't make those optimizations because the guarantees aren't there.
> The primary optimization that ASM.JS brings is its static type system.
Well, that and not having objects, user-defined types, closures, strings, polymorphism, or even garbage collection. asm.js is fast because it's essentially BCPL, not just because it has a static type system.