
Just out of curiosity, why do you only consider gzipped size when determining bloat? Transmission size/time isn't the only factor; there's also the time it takes the JS engine to process the code. 43k gzipped is okay for transmission, but the engine is still parsing/processing the unpacked version, no? 145k minified for a foundational library is not insignificant. jQuery 3.0 minified, as a reference, is ~86k.
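As a rough illustration of the gap between the two numbers, you can compare a file's raw size with its gzipped size. The file below is a synthetic stand-in (not an actual library build), and being highly repetitive it compresses far better than real minified JS would:

```shell
# Synthetic stand-in for a minified bundle; real minified JS compresses
# less dramatically, but the raw/gzipped split is the same idea.
head -c 145000 /dev/zero | tr '\0' 'a' > bundle.min.js
minified=$(wc -c < bundle.min.js)         # bytes the engine must parse
gzipped=$(gzip -c bundle.min.js | wc -c)  # bytes sent over the wire
echo "minified=${minified} gzipped=${gzipped}"
```

The point: the engine always works on the first number, no matter how small the second one is.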


Yeah, this is a good point. We do consider time-to-interaction an important metric (especially for React Native), and indeed having less JS code to parse is going to give better results. I don't have a good answer other than that we definitely care about these metrics, and Facebook uses React more and more, so there will be more pressure on the team to improve them.

We are not in a perfect place right now, but we are not just brushing it off either. As I said in another comment, at least having a good explanation for these bytes is something we plan to do over the next couple of months.

There are some other startup-time improvements we are considering, such as bundling to a flat file and removing some dynamic overhead (https://github.com/facebook/react/issues/6351). The plan is also to have React go through significant changes this year (adding an incremental reconciler), so right now is not the best time to optimize the script size.

But yeah, this is on our radar. Community help identifying specific issues is also very welcome.


You're not wrong, it's just that transferring something across a network usually takes longer and is a lot less deterministic than processing it locally on your machine. While I'm pretty sure it takes longer to evaluate more code than less code, I would imagine that the time differential is negligible compared to the time savings you get from gzipping JS/CSS assets for transmission over a network. Otherwise, why would we gzip assets at all?


Parent is not saying not to gzip. They're saying that more JS takes more time to parse, regardless of compression. In other words, they are not talking about decompression overhead, they are talking about what happens after decompression.


If you're aiming for a really snappy app, and you've made sure everything is fully cached, in a ServiceWorker even for totally instant loading, then JS parse/evaluation time becomes significant.


Right: you can use caching / service workers to solve the network time, but there's nothing you can do to speed up the initial JavaScript parse/evaluation time.
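One quick way to see that parse cost scales with source size is to time the engine compiling a source string it never executes. A sketch for Node (`measureParseMs` is a made-up helper; absolute numbers vary a lot by engine and machine):

```javascript
// Time how long the engine takes to compile a source string.
// The code is only parsed/compiled here, never invoked.
function measureParseMs(source) {
  const start = process.hrtime.bigint();
  new Function(source);
  return Number(process.hrtime.bigint() - start) / 1e6;
}

const small = 'var x0 = 0;';
const big = Array.from({ length: 20000 }, (_, i) => `var x${i} = ${i};`).join('\n');

console.log('small source:', measureParseMs(small).toFixed(3), 'ms');
console.log('big source:  ', measureParseMs(big).toFixed(3), 'ms');
```

Even with gzip taken out of the picture entirely (the string is already in memory), the bigger source costs more before a single statement runs.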


Couldn't browsers keep a map of js-file-hash to cached AST?


It's not just parsing, it's evaluating all the code statement by statement, and the result of that is hard to cache.
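A toy example of why the result of evaluation resists caching: even with a cached AST, top-level statements still have to run on every load, and what they compute can depend on ambient state. The clock here is a stand-in for things like feature detection or locale:

```javascript
// Top-level module code runs on every load; its result depends on
// when/where it runs, so a cached snapshot of the evaluated state
// could be wrong the next time around.
const moduleState = {
  loadedAt: Date.now(),
  features: [],
};
if (moduleState.loadedAt % 2 === 0) {
  moduleState.features.push('loaded-on-even-millisecond');
}
console.log('module initialized at', moduleState.loadedAt);
```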


I think it suffers from the same problem as other asset caching. Think of all the sites you visit and how much JS they load. 1000 websites with 1MB of JS will hit 1GB of storage.


1GB isn't huge on the desktop, especially if the cache evicts entries LRU.


I don't think any of this concerns desktops anyway.



