Agreed. I'm really impressed by the professionalism shown by members of the React ecosystem. Just the other day I noticed a member of the reactjs org (which houses react-router, redux, etc.) acting very unprofessionally, telling an issue reporter that a discussion was "annoying them". I said it was a turn-off to see that from official members of the org, and Ryan Florence stepped up, said he would not tolerate it, and even reached out to me on Twitter to apologize and say they do not condone such tones.
+1 to this. High-quality code from @mwiencek right out of the gate, and he was super responsive to my comments when I asked for changes. We're really grateful for the contribution.
Hey, I didn't expect that to get as much love as it has, but that's really wonderful. Being highlighted in the release notes is super cool. :) I'll just say, it was thanks to the constructive/encouraging help I got on the PR that I was able to finish it.
I must be getting jaded - my first thought was that you were being sarcastic, so I was really pleasantly surprised when I browsed through. You're right, this is actually fantastic.
Yes, that's one prime motivation. In this particular PR, it's not clear that it could have been split up into any smaller logical commits since a lot of the code is interdependent.
Won't this break things that relied on the spans being there? I'm thinking it should have been opt-in initially. Or is the change acceptable because this is a major version?
Specifically, in the old way, there were spans, and we had css styles targeting the spans sometimes.
react.min.js is now 145.4kb. For comparison, angular.min.js is 155.2kb. One of React's biggest selling points was that it was much smaller than Angular/Ember/Backbone etc., but I'm not sure that argument holds anymore. I think React is a culmination of some great ideas, but in my view this is a big step back, especially since Angular/Ember/etc. offer a lot more tools out of the box.
I don't intend to tell anyone here that React is a bad framework because it's not, but I think it should be a big priority to tighten down the file size. When React is big pretty much all their selling points go out the window.
Perhaps they can split files up so you can just import the APIs you want. I wonder how much that would help with the bloat.
I feel like some of the smaller VDOM frameworks like Cycle/Mithril/Riot are a lot more appealing now, since they focus on small file size and low bloat.
I don't think we've ever promoted React based on file size. Smaller files are better but people tend to overemphasize JS file size -- gzipped, this latest release is 43k which is comparable to the size of an image or two on most websites.
(Also note that growing framework code can reduce your app code size overall; I don't want to make specific claims about React but at Facebook we often see code size increase when 10 people all try to avoid importing the same module and end up reimplementing 20% of it 10 times and increasing the overall size.)
I haven't looked lately at the biggest contributors to React's file size. We're certainly willing to try to modularize more when it makes sense, but we do that more with an eye towards developer flexibility rather than runtime efficiency.
This release gained 3k gzipped (it was 39k), in large part due to complete SVG support. In this case I think it was worth it because this was a longstanding pain point.
There may be some ways to drop some dead code that we haven't eliminated efficiently yet. Here is a recent example I found that drops 1kb: https://github.com/facebook/react/pull/6401
It would be cool if somebody in the community could spare some time to look into the biggest factors as well.
We definitely want to have a good explanation for where those bytes are coming from. That is, what React gives you for them. I plan to write a high level overview of React internal implementation so I might as well look at relative size cost of different pieces of it as part of my exploration process.
In any case don't forget 43k is less than many images.
Just out of curiosity, why do you only consider gzipped size when determining bloat? Transmission size/time isn't the only factor; there's also the time it takes the JS engine to process the code. 43k gzipped is okay for transmission, but the engine still has to parse/process the unpacked version, no? 145k minified for a foundational library is not insignificant. For reference, jQuery 3.0 minified is ~86k.
Yeah, this is a good point. We do consider time-to-interaction an important metric (it is especially important for React Native), and indeed having less JS code to parse is going to give better results. I don't have a good answer other than that we definitely care about these metrics, and Facebook uses React more and more, so there will be more pressure on the team to improve them.
We are not in a perfect place right now but we are not just brushing it off either. As I said in another comment, at least having a good explanation for these bytes is something we plan to do in the following couple of months.
There are some other improvements to startup time that we are considering such as bundling to a flat file and removing some dynamic overhead (https://github.com/facebook/react/issues/6351). Also the plan is to have React go through significant changes this year (adding an incremental reconciler) so right now is not the best time to optimize the script size.
But yeah, this is on our radar. Community help identifying specific issues is also very welcome.
You're not wrong, it's just that the time it takes to transfer something across a network is usually slower and a lot less deterministic than processing something locally on your machine. While I'm pretty sure it takes longer to evaluate more code than less code, I would imagine that the time differential is negligible compared to the time savings you get from gzipping JS/CSS assets for transmission over a network. Otherwise, why would we gzip assets at all?
The parent is not saying not to gzip. They're saying that more JS takes more time to parse, regardless of compression. In other words, they are not talking about decompression overhead; they are talking about what happens after decompression.
If you're aiming for a really snappy app, and you've made sure everything is fully cached, in a ServiceWorker even for totally instant loading, then JS parse/evaluation time becomes significant.
Right, but you can use caching / service workers to solve the network time, but there's nothing you can do to speed up the initial javascript load time.
I think it suffers from the same problem as other asset caching. Think of all the sites you visit and how much JS they load. 1000 websites with 1MB of JS will hit 1GB of storage.
> I plan to write a high level overview of React internal implementation
I would love this! I really like how you basically reimplement Redux in the first few videos of your egghead Redux tutorial. It really took my understanding to another level.
Somebody suggested a micro-kernel approach that loads modules on demand in production (e.g. SVG support, renderToString), which would further reduce the initial load time/size.
There's interesting work happening with tree shaking, which allows you to do static analysis at compile time to remove unimported code. The new JS `import` syntax is specifically designed to enable this, so it's just a matter of time until tooling and libraries catch up.
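As a toy illustration of the idea (not how any real bundler is implemented): because `import { map } from 'some-lib'` names its bindings statically, a bundler can compute the set of used exports at compile time and drop everything else:

```javascript
// Toy model of tree shaking. Real bundlers (Rollup, webpack 2+) work on the
// AST, but the core idea is the same: static import names let the tool keep
// only the exports that are actually used.
const moduleExports = {
  map:    'function map(f, xs) { /* ... */ }',
  filter: 'function filter(f, xs) { /* ... */ }',
  reduce: 'function reduce(f, z, xs) { /* ... */ }',
};

// Suppose the app source says: import { map } from 'toy-lodash';
const importedNames = ['map'];

function shake(exports_, used) {
  return Object.fromEntries(
    Object.entries(exports_).filter(([name]) => used.includes(name))
  );
}

const bundled = shake(moduleExports, importedNames);
console.log(Object.keys(bundled)); // filter and reduce never reach the bundle
```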
I feel it's important to point out that React is now an ecosystem. Relay is comparable in size to React, then we throw Immutable into the mix, and so on. These are all dependencies that will be used on practically every page on a website, so cannot benefit from code-splitting techniques.
It's certainly my experience that using React over a mess of jQuery and its plugins leads to smaller file-sizes when enhancing functionality on a single page, but when you're using React (with Relay, React Router etc) to build the entire site, you end up with enormous payloads.
Let's ignore the gzip size for now, since there's more to file-size than just data over the wire. A production build of a simple React + Relay project (i.e. one that does practically nothing) is around 1mb in size. We can't reliably do server-side rendering with Relay yet, so this is 1mb that must be downloaded and executed before you can see and do anything at all.
People should stop looking for a silver bullet. If you are using react + relay, you must be building a very complex solution, where the overhead for such abstractions is justified.
If you are at a point where your task doesn't justify these abstractions - don't use them.
I've found that the benefits in terms of developer experience, and being able to build good UI quickly, start accumulating almost immediately, even on tiny projects.
My initial reservations about the amount of boilerplate in setting up Relay, were in large part due to the server-side half of the equation. Since switching to building my backend in a different language (now using Elixir with absinthe, rather than node and graphql-js) I've found my productivity has skyrocketed.
At this point the overhead in terms of file-size (and the still pending support for server-rendering) are the only things giving me pause for thought.
For React you kind of have to use the minified file size, because the dev build is chock-full of dev-only validation, error messages, and other logic that is meant to be minified away. The prod build isn't even close to the same code.
With that said React + ReactDOM is 180kbish unminified.
I can't find Relay's size as a single file. Is it really over 800kb?
I'll need to do a webpack report to get the actual numbers out.
However, my dev build is 9mb and my production build is 1mb, so when I'm talking about baseline file size for a React + Relay project, I am talking about production builds. Not all of this 1mb comes from those two libs, but they contribute very heavily towards it.
I should add, that if there's a truly blessed CDN for these dependencies (cdnjs is the closest we have to that) that everyone uses, it puts us in a much better place. So we should actively be encouraging people to use them.
I'm very split on using these CDNs. I understand the advantage, but do I really want to make my site import Javascript from a third party source? Their downtime becomes my downtime, their security issues become my security issues, and I've now shared my visitor's usage patterns with a third party...
I see your point. Indeed this does not sound very good. I don’t know enough about Relay or their plans so I can’t really say much.
Nevertheless it is an ecosystem precisely because you get to pick the pieces you need. For comparison, Redux + Redux Thunk is about 3kb. Immutable is not required either. Of course your app code would have to include data fetching logic which would make it much larger. But those are all tradeoffs you have the power to make.
I agree it's about picking and choosing the right things, but as I think we've both mentioned elsewhere, an audit and explanation of what contributes to file-size in these libraries would be super useful, for a few reasons:
* It helps us justify to our bosses why a 350kb library is needed instead of a 3kb one.
* It helps us understand the overall picture of JavaScript today, what kind of problems we're having to solve that could eventually be better solved by browsers themselves.
* It just makes me more comfortable in my decision to send a lot of JavaScript over the wire.
Without such an explanation, we're essentially taking it on faith that these libraries are written by people who value good quality lean code.
Relay was built to address concerns that our developers faced with managing data in complex apps. Needing to reimplement solutions to the same complex problems (error handling, request coordination, caching, etc) could take time away from focusing on building products. Relay helps to solve these common cases, and it sounds from your comments above ("my productivity has skyrocketed") that you've seen the benefits too - great to hear!
> Without such an explanation, we're essentially taking it on faith that these libraries are written by people who value good quality lean code.
We built something that works for us, balancing feature set, runtime performance, maintainability, code size, etc. We're continually focusing on performance - see https://github.com/facebook/relay/blob/master/meta/meeting-n... for more about what we're focusing on - but so far file size has not been the most impactful thing to work on.
As far as understanding Relay's functionality, we've covered the more obvious aspects in talks (http://facebook.github.io/relay/docs/videos.html) and in a conceptual overview (http://facebook.github.io/relay/docs/thinking-in-graphql.htm...). But an analogy to React is probably simplest. React converts declarative UI descriptions into imperative updates on the DOM. Relay is superficially similar - it converts declarative data descriptions into imperative calls - except that there is no DOM for data, so Relay also implements the query representation, object graph for cached data, networking, mutation queuing & rollback, etc.
Would it be possible to split those libraries in smaller modules?
For example Preact (https://github.com/developit/preact) has the same basic API of React but it's just 3kb because a lot of features are stripped away in optional modules.
Also, using preact-compat you could replace it as-is* in your bundle right now.
Depends if I need the features in those optional modules. An app is essentially made up from architectural modules (frameworks, alongside anything else that you use everywhere), and then specific pieces of functionality (with dependencies that are used less frequently).
If the bulk of your bundle size comes from page-specific functionality, then you can easily benefit from code-splitting and tree-shaking techniques. But those techniques won't help you at all when it's your core architecture that is enormous, because every single page on your site depends on it.
Perhaps the React team didn't themselves promote based on file size, but many people have in Medium articles etc. I'm the kind of person who works very hard to keep my page size at a minimum, and there are definitely others like me. What do you think of the idea of splitting React up into several files, so you can import only the functionality you want? Do you think that's possible in React's code base? For reference, RxJS 5 does this, so you can import the flatMap operator only if you intend to use it.
React doesn't really have much of a public API surface. In Rx you can give up some operators but React doesn't have a similar concept.
That said we will continue exploring how to drop some weight from React further down the road. This is not a priority right now because some large changes are coming relatively soon to React (work on incremental reconciler) so it would be unwise to focus on size optimization before a big refactoring.
I think that we will be in better position to consider what can be removed or separated after we ship the incremental reconciler. Hopefully this will happen within a year. For now I don't have a good answer to this but I'll try to at least provide a summary of how large different pieces are when creating a high level technical overview of React next month.
> When React is big pretty much all their selling points go out the window
You can't be serious. React's selling point for me is that it enables me to write web apps in a whole new way. Coming from an Angular background it's amazing how much cleaner, better and more pleasant it is to work with React. To bring up file size as the main selling point is just ridiculous.
> When React is big pretty much all their selling points go out the window.
I strongly disagree. If React's API were to get all big and complicated, that would be a concern. The tight scope and clean interface are definitely selling points for React compared to many larger (in scope and interface) frameworks and rendering libraries. But I really don't care about the file size, as it's far too small to be anything but noise in almost any web app where it's worth using a tool like React at all, and it's going to matter less anyway as JS build tools get more sophisticated and connection speeds increase over time.
Additionally, since React is intended for highly complex, highly interactive applications, time-to-content is less important than on, say, a news reader. It might take 2 or 3 seconds to get the first page, but once you're there your interactions will be quick. Even if you do code splitting, well-bundled code will have a "commons" chunk, so fetching a new page won't redownload React and other large libraries.
Finally... I had to go so many comments down to find someone making a logical point. People will complain about anything. As someone who has built a massive app with React, well over 1000 components, it amazes me that people literally complain about a 43kb gzipped file size. Most likely none of these people have built a large application with React; they use it for stuff a static site would be good for, and then complain about loading time and rendering time. Also, on an app people actually use often, the files are cached anyway. The front-end/web/JS world doesn't have a fatigue problem, it has a chronic bitching-for-no-reason problem. React's benefits from a declarative API and such far outweigh file size; the people who complain about file size don't understand that and never will, probably because they haven't written anything used in production by real users. /rant
In my opinion, I don't think size is the big selling point for React. It's the simplicity and clarity of the programming model that it enables. For me, being able to write most of my code functionally (pure functions, immutable data) is a big win.
That said, I've been tinkering with this project on the side:
It is time to deploy common JS libraries out of band, like native libraries: download them once, precompile them once, and then use the precompiled version on any site that requires them.
That's basically the purpose of public CDN-hosted JavaScript, like cdnjs [1] and what most major JS libraries do [2].
However, I think this is becoming less common with the increasing popularity of module bundlers like webpack. It's still possible with webpack [3], but it's not as easy as a 'npm i react' and 'import React from "react";'.
That said, a module bundler could easily automatically swap out local modules for modules hosted on CDNs in production builds (with fallbacks in case the CDN is down). I'm a little surprised this doesn't seem to exist yet, at least for webpack.
It would be kind of nice if you could import modules directly from HTTP sources, like Go. For example:
import _ from package 'github.com/lodash/lodash/tree/4.8.2';
import $ from package 'jquery.com/version/2.2.0';
Your browser would make the OOB requests and cache the results in a shared directory, which any JS code that's running locally can use. This way, HTTP requests for 3rd-party components shared across websites aren't repeated over and over again for the same libraries. And it could even be extended to CSS:
Better idea would be to reference it via cryptographic signature. Then you could host your own copy of lodash for example, but still benefit if a client already has it downloaded from a different site.
That attribute is meant to verify resource contents. The strategy of using it as a cache ID is still being discussed, but last I looked into it, there were some cache poisoning concerns.
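For reference, the attribute under discussion is subresource integrity, which pins the exact contents of a CDN-hosted file to a cryptographic hash. A fragment for illustration (the hash is elided here; the real value is the base64 digest of the file, and the URL is just an example):

```html
<!-- The browser refuses to execute the file if its hash doesn't match. -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.0.0/react.min.js"
        integrity="sha384-..."
        crossorigin="anonymous"></script>
```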
Libraries on CDNs have drawbacks as well, like the additional dns-roundtrip.
I think what's needed is a heuristic webpack-loader, where you can specify both a self-hosted URL and a cdn-URL. It could work like this:
* Load the CDN version and fall back to the self-hosted version if it doesn't exist or takes too long.
* In certain cases (probability, origin of request, etc.) load the self-hosted version first, but load the CDN version in the background so it gets cached.
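The first strategy is essentially "try each source in order until one works". A plain-JS sketch, with the `load` function abstracted out (in a browser it would inject a script tag; here a fake loader keeps the ordering logic testable on its own):

```javascript
// Sketch of "try the CDN, fall back to self-hosted". The `load` parameter
// abstracts over actual script injection.
async function loadWithFallback(urls, load) {
  let lastError;
  for (const url of urls) {
    try {
      return await load(url);
    } catch (err) {
      lastError = err; // this source failed or timed out; try the next one
    }
  }
  throw lastError;
}

// Example with a fake loader where the CDN is down (hypothetical URLs):
const fakeLoad = (url) =>
  url.startsWith('https://cdn.')
    ? Promise.reject(new Error('CDN down'))
    : Promise.resolve(url);

loadWithFallback(
  ['https://cdn.example.com/react.min.js', '/vendor/react.min.js'],
  fakeLoad
).then((used) => console.log('loaded from', used)); // loaded from /vendor/react.min.js
```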
Is this related to the bandwidth costs you incur in aggregate? I guess I too would be mad after converting all of my image assets into vector and CSS3 concoctions only to have my site size foiled by a dependency. But in reality one image is much larger than this.
On mobile and in most of the world where fast internet isn't ubiquitous, cutting down on the size of your page can make your users' experience much better. It's hard to do that when your dependencies get bigger.
FWIW I used TypeScript in anger with React, and hated it so much I rewrote all the React code in regular JavaScript. TypeScript and React, IMO, are a poor fit.
To be clear: I do like TypeScript and use it almost exclusively on the server, so this isn't a case of me "not liking TypeScript". It's the combination of the two. Basically, all of the stuff I did to keep TypeScript happy when using React was worthless overhead and brought nothing but pain, warping the codebase horribly. The JavaScript version with React was smaller, lighter, and had way less cognitive overhead. YMMV.
What is exactly your problem with TSX? I've been using TSX for a while so maybe I can help. The only thing that TypeScript really adds to JSX is component parameter validation, which is quite neat to be honest.
TypeScript also validates the expressions that supply the values. And if you use Visual Studio you get intellisense and validation for HTML, and intellisense and instant validation of your javascript expressions. These benefits are awesome, and by the way, all this is independent of React. You can use TSX just as a templating library using UIBuilder, see: https://github.com/wisercoder/uibuilder
I realise it's a lot of work, but as more and more downstream projects come to depend on React, having a "community build" that would build those against your RCs and run some integration tests would be a great way to improve reliability of the overall ecosystem.
Also note, if anyone is using "reactify" and has used the spread operator in JSX, as 15.0.0 removed React.__spread, your build will break.
(Reactify uses the deprecated react-tools which uses React.__spread.)
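For context, JSX spread attributes used to compile to a React.__spread call; Babel now emits an Object.assign-style merge via a helper. A rough sketch of the semantics (not Babel's exact output):

```javascript
// Roughly what the props of <button {...defaults} disabled={true} /> desugar
// to: sources are merged left to right, so later values win.
const defaults = { type: 'button', disabled: false };
const props = Object.assign({}, defaults, { disabled: true });
console.log(props); // { type: 'button', disabled: true }
```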
Yeah. Putting a warning should solve the problem because people will start complaining to the transform vendors about this and hopefully they will deprecate their transforms or use Babel internally.
Never mind me, I just wrote some parts of the changelog :-).
In my opinion the biggest credit in this release goes to Ben who single-handedly turned around the internal implementation of React DOM to stop using ids. The big span change submitted by Michael was also possible thanks to Ben's work.
Yeah I wasn't too thrilled with Angular 2.x either. I had always been watching React and liking it, but I had no experience with it. So one day I decided to give it a whirl and see how far I could get converting the Angular 1.5 app I had already started and get it to the equivalent state in React.
Long story short, I took 40+ hours of Angular 1.5 code and converted it to React code in a single 8 hour work day without any prior experience in React. I was both impressed and giddy. The next day I merged that reactify branch into my develop branch, and continued on.
I feel that React does components better. Angular 1.5's component sugar on top of directives, and even components in Angular 2.x, just feel shoehorned in as an afterthought.
I couldn't even spend a day trying to figure out how to componentize this app in Angular 2.x, let alone 1.5.
Now I'm head deep in learning as much about the ecosystem as I can, Redux, Relay, GraphQL, all that good stuff. I've officially ditched Angular for the React ecosystem.
> I took 40+ hours of Angular 1.5 code and converted it to React code in a single 8 hour work day
That doesn't seem like a fair comparison considering that when you first wrote that code, you probably had to figure out things unrelated to whatever framework you were using and didn't have to deal with that aspect when porting to React.
For example, if I rewrote a Django app in Pyramid, it wouldn't take as long as writing the original app even with the overhead of learning Pyramid, because I've already figured out the functional aspects of the app.
Generally speaking, learning frameworks isn't the hard part of writing an app, unless the app is trivial.
I took it as a statement of "someone new to the framework, can quickly rewrite their old code" if they came from Angular.
Which is an important metric.
When I converted a Backbone app to Angular, it took 3x as long simply because of the amount of ceremony and process that I had to go through to do things the idiomatic Angular way.
React on the other hand, as a view layer, is so dead simple that you can internalize its workings within hours of study. The complexity with React comes from everything else: Redux, React-Router, GraphQL, CSS in React, etc.
None of that stuff is defined as part of the React framework itself, but that's the only way you can make a "fair comparison" to Angular which is a full-stack framework.
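A toy model of why the view layer is quick to internalize (this is a mental model, not React's actual implementation): a function component is just a pure function from props to a description of the UI.

```javascript
// Toy mental model: a component maps props to an element description (a plain
// object). React's job is diffing successive descriptions and applying minimal
// DOM updates; the component itself stays a pure function.
function Greeting(props) {
  return { type: 'h1', props: { children: 'Hello, ' + props.name } };
}

console.log(Greeting({ name: 'world' }));
// { type: 'h1', props: { children: 'Hello, world' } }
```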
I went through a similar experience with our code base and saw similar results. Our project is quite a bit larger (some 20k loc) but converting it to React has been a huge win. Performance has improved without all the custom code needed in Angular, bugs are easier to find and fix, and reasoning about the code is a ton simpler. I hate to think about all the time we spent trying to optimize the Angular digest cycle...
We also discovered that since writing components is so much easier in React than with Angular 1.5 we tend to write a lot more of them. This makes the components easier to test and has greatly improved reuse across the code base. Which in turn makes adding features possible that would have destabilized the old code base.
I did look at Angular 2 and just saw more of the same and didn't want to deal with it anymore. I found that I personally aligned well with the React ecosystem (Redux, JSX, CSS in Javascript) which probably colored some of my decision.
> I hate to think about all the time we spent trying to optimize the Angular digest cycle...
Yep. I spent a few days last week reimplementing a heavy part of our app in React, specifically because of this problem. "Make everything a component" is a great idea, but does not work so well in real-life Angular 1.5 when you're building a complex tree-like editor. The digest cycles grow longer and longer.
The code reimplemented in React is 30% smaller and 8x faster (with some other tweaks to cut down on the frequency of digest cycles). I'm really enjoying React so far.
Sounds like that is credit to both Angular and React for facilitating largely library/framework agnostic code.
I should mention that Angular 2 is very much a work in progress; even though it has the beta label, it is still far from production-ready IMO. Components in Angular 2 are largely well conceived, except for how inputs and outputs work between reusable components. An Angular team member told me that React is currently vastly easier to use for intercomponent communication. I trust the Angular team will eventually get it right, but for now there are certainly some holes.
Could you share more about the experience? I'm building a dashboard for a realtime web system and React seems to be a solid choice to keep things simple and tidy.
I recommend you look into Aurelia which was created by a member of the Angular team who believed Angular 2 was on the wrong track. Aurelia could serve as a replacement for Angular, and you can use it with React if you want to. Yesterday the Aurelia blog mentioned how their View Engine Pipeline can use React and Angular components all together in Aurelia:
I had no idea React used .innerHTML. That carries a significant performance penalty compared to other DOM APIs, since it forces the browser to re-flow/re-render so much more. Glad you guys updated :)
We only used .innerHTML when creating new subtrees, which requires a reflow anyway when they're inserted. There are reasons that document.createElement is faster for us, but that's not one of them.
Even inserting new subtrees should be significantly faster via appendChild() versus innerHTML. The browser has to do a whole heck of a lot more when you don't build the nodes first.
I'm curious though. I'll have to check out the source to see how it works in React now. You know, when I find some free time :)
The native API calls such as createElement are pretty expensive (in V8 it's like 10 function calls to finally reach the target C++ function) and you are making a lot more such calls when not using innerHTML.
Interesting, thanks. I had heard that calls from JS to C++ were cheap but calls from C++ to JS (such as if Array.prototype.map was implemented natively) were expensive.
Might depend on how you build the nodes. If you build all the nodes and then append it should be a lot faster. Maybe browsers have optimized for this use case better but I ran benchmarks on this a couple of years ago and createElement / appendChild was always a lot faster.
IE and Edge are much faster when appending nodes first (thus always appending empty nodes) but Chrome, Firefox, and Safari are similar speed regardless of the order. We do the former in IE and Edge only and the latter in all other browsers because it's more convenient (and makes more logical sense to me).
I think document.createElement could have been prone to memory leaks, as DOM objects cannot be freed on the C++ side while they are still reachable from JS, which can be difficult to judge sometimes.
The documentation says this is faster because the browser does not have to compute reflow during DOM construction. But is this also true if the DOM tree is built "bottom up" (i.e., first the leaves, then their parents, etc, and then finally hook everything into the document body)?
I just tried the new release. The data-reactid is indeed gone when rendering on the client side, but on the server side the HTML string generated by ReactDOMServer.renderToString still has data-reactid. Is that so the client side can validate the server result and decide whether it needs to re-render the page?
Not quite -- it's so that the client can know which node is which. (When rendering on the client, it can just store a direct reference to each created node.)
> There were a number of large changes to our interactions with the DOM. One of the most noticeable changes is that we no longer set the data-reactid attribute for each DOM node.
Nice. I currently use another functional reactive virtual DOM data binding mechanism and the data attributes everywhere was one of the things that didn't appeal about react. Since react is getting crazy network effects from community I might check it out for my next project.
Great to see React continue to improve and streamline as new versions come out. They've managed to avoid the huge bloat that happens to frameworks as they age. Kudos.
The thing is, they have bloated out the library quite a bit. One of their early selling points was how much smaller the React library was than Angular/Ember. Now the size of react.min.js has ballooned up to 145.4kb, which is only 10kb short of Angular, and with addons it's even bigger.
As someone who focuses heavily on small page size, that's a pretty harsh regression. Especially when compared to all the other VDom frameworks like Riot/Mithril/Cycle which are all much much smaller and achieve about as much (in my opinion).
I remember quite the opposite: by 2013 people were already quite concerned about React's size (I can't exactly remember its size, but I think it was somewhere close to what it is nowadays, or larger).
React 0.14.x was built with Babel 5.x; React 15.x is compiled with Babel 6.x. Must user JSX code now be compiled with Babel 6.x to be compatible with React 15.x?
Am I correct in assuming that you're only talking about the production build being faster?
Going from 0.14.8 to 15.0 RC2 there seemed (subjectively) to be a noticeable increase in lag in some pages we're working on with a moderately large amount of content being rendered (things like tables or SVGs with a few thousand elements) when using development builds. However, looking at the changelog and how many new warnings seem to be included now, it doesn't seem unreasonable that those would slow things down a little overall even with the other changes.
As long as those aren't going to affect production, or if there were some final changes that have gone in now but weren't in RC2, it's not a big deal. However, if the React experts wouldn't expect a net slow-down even in a dev build, maybe we should look into it a bit more.
That's possible. We haven't done much optimization on the dev build but I was planning to look for low-hanging fruit soon to see if there are any significant improvements we can make. I wouldn't have expected 15.0 to be much different from 0.14 though.
OK, thanks. It doesn't seem to be a huge difference or anything, just enough to be a bit noticeable in some pages we're staring at for a disturbing number of hours per week at the moment, so I was curious. Compared to things like implementing shouldComponentUpdate in the right places, the change is tiny.
It's great for...
- Static D3 components or other such libraries (things like Backbone should work too!)
- D3 components with simple state and event interaction, like tooltips on charts
- D3 components such as progress bars that can be animated using react-motion, for example
It's not so great for...
- Physics based D3 components or anything using a lot of DOM mutation and state
- Linked to the previous one, brushing and filtering of selections using the built in stateful D3 tools
- Essentially: anything with a lot of DOM mutation from timers, events or internal state will be hard to use
But I've mutated state in an onMouseOver event, redrawn and it's buttery smooth.
It will likely strengthen the case for having React manage the DOM while d3 takes care of any fancy math you need, like computing the d attribute for a <path> in a line chart. Previously there were some SVG elements that simply couldn't be expressed in React, so you had to either push that DOM management onto d3 or use dangerouslySetInnerHTML.
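That division of labor might look like the sketch below. The `linePath` helper is a hypothetical, hand-rolled stand-in for d3-shape's line generator (used here so the example stays dependency-free); React's only job is to render the resulting string as an attribute.

```javascript
// Minimal stand-in for d3.line(): turns [x, y] points into an SVG
// path "d" string. In a real app you'd use d3-shape's line() instead.
function linePath(points) {
  if (points.length === 0) return "";
  const [first, ...rest] = points;
  return (
    `M${first[0]},${first[1]}` +
    rest.map(([x, y]) => `L${x},${y}`).join("")
  );
}

// React then just renders the attribute, e.g. (JSX sketch):
//   const Chart = ({ data }) => (
//     <svg>
//       <path d={linePath(data)} stroke="steelblue" fill="none" />
//     </svg>
//   );

console.log(linePath([[0, 80], [50, 20], [100, 60]]));
// "M0,80L50,20L100,60"
```

The point is that d3 becomes a pure computation library here, with no DOM access at all, so there's nothing left for React and d3 to fight over.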
We updated the patent grant to be less restrictive in response to community feedback about a year ago (several months after the discussion you linked):
Since that change I've heard few complaints, and I know for a fact that several large companies who were previously unwilling to use it are now content with the language.
I noticed you italicized the word grant. In reality the "grant" isn't granting; it is taking away. Let me explain:
If React didn't have an explicit "grant" there would be an implicit grant. Is the explicit grant better than an implicit grant? It isn't, because the explicit grant has what is known as a strong retaliation clause.
React's patent "grant" gives you a license to React's patents. This sounds like a good thing, however, the "grant" has a "strong retaliation clause" which says that if you make any sort of patent claim against Facebook this patent grant automatically terminates.
Which means Facebook can now sue you for patent infringement for using React. You may think this is no worse than not having a patent grant at all. But that's not the case. If React didn't have an explicit patent grant then there would be an implicit grant which does not have any retaliation clauses, and cannot be revoked.
If you work for a software company and your company has patents then keep in mind that by using React you are giving Facebook a free license to your entire patent portfolio.
> If React didn't have an explicit "grant" there would be an implicit grant.
Please provide a citation to any court decision so holding, or to the text of any law that clearly implies this.
The reality is that a couple of lawyers have come up with the speculative theory that maybe, some day, a court will read the plain text of the BSD license as granting an implicit patent license, because some of the words it uses are kind of similar to the words used in a seemingly unrelated law.
This theory is...tenuous at best. More to the point, it's just a theory about what a hypothetical court might rule one day in a hypothetical case. (Plus, I mean, hypothetically the court might rule that an explicit grant can't limit the rights granted by an implicit grant in this case. Since we're just speculating about what new rules a future court might hypothetically make.)
> If React didn't have an explicit patent grant then there would be an implicit grant which does not have any retaliation clauses, and cannot be revoked.
I'm not a lawyer. My understanding is that this is hypothetically possible but stands on shaky legal ground and has no legal precedent. Whether or not an implicit grant exists seems to depend on who you ask. This is apparently why GPLv3 includes an explicit grant.
It could also be that there aren't any patents that cover React. I don't personally know of any that do.
>It could also be that there aren't any patents that cover React.
If that is the case, why have this controversial patent clause at all?
I haven't read them all, but Facebook seems to have a number of patents which at first glance are so broad they could apply to react:
https://patents.google.com/patent/US20160092096A1/en
"a method performed by one or more computing devices including defining a hierarchical structure for a user interface (UI) that includes defining one or more layers of the hierarchical structure, adding one or more objects at each layer, and specifying one or more relationships among particular objects"
https://patents.google.com/patent/US20150277691A1/en
"In one embodiment, as a user is scrolling through a first series of content items and reaches the nth content items from the first series of content items, display a visual indication that there are additional content items from the first series of content items existing after the nth content item."
https://patents.google.com/patent/US20150113066A1/en
"A communications system including one or more alert gates and an alert controller. Each alert gate is configured to detect a different type of alert feed corresponding to a particular kind of alert. The alert controller is connected to the alert gates and operable to receive detected alerts from the alert gates and to deliver the detected alerts to a user of the communications system."
The text of some is pretty much what you'd expect. It seems they have a patent on single page applications (SPAs) to-a-T [1], elaborating that an application may use mechanisms like XHR and append an identifier to the fragment portion of the URL to keep history.
Even if I subscribed to the idea that this was without prior art on the web in 2009, equivalent functionality already existed in desktop applications. To claim that it is a novel invention because it now works on a web page is disheartening. It's exactly this kind of behavior--claiming new ownership based on each abstraction--that leads to restrictions in some countries on patents which can only exist in software.
I think that rather strengthens the argument that Facebook's patent grant isn't a good reason to avoid React, given that almost any possible webapp written with any framework is likely infringing. :)
If React doesn't have any patents then Facebook could solve this issue by removing the "grant" or by removing the strong retaliation clause from the "grant".
I am not a lawyer either, but I think the idea of implicit grant is compatible with common sense.
The link you gave doesn't use the word implicit and actually says:
> Wherever possible, make sure you have an explicit license to any necessary patents held by the licensor.
Then it goes on to distinguish between weak and strong grants.
I think you mean weak, not implicit. AFAICT, there isn't an implicit grant with patent law, because the default is that you're infringing without an explicit grant.
The link in the original post was about strong vs weak retaliation clauses in an explicit license, but the parent is also talking about implicit patent licenses, which do have some case law and opinions surrounding them in the United States [1].
The theory behind implicit patent licenses is that the licensee (often the buyer) will have some idea about which rights they have to the thing they are licensing, absent any other agreement. For a software license like the 3-clause BSD, this means that if they are licensing the software from the holder of the patent, they can reasonably expect to not subsequently be sued for infringing on the patent while following the terms of the license.
Omitting any kind of patent mention from your product's license, and later claiming infringement for reasonably expected use, would look subversive and generally be unconvincing.
Yes, a year ago. The terms weren't really dangerous in the first place, but they updated them after the feedback to be much more explicit, and practically no one appears to take issue with them anymore.
Yes, it is. React uses it to track the identity of components in a list; it's especially significant when reordering children, and it affects both which DOM nodes get reused and whether this.state is preserved.
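A toy sketch of that matching step (my own illustration, not React's actual reconciliation algorithm): pair old and new children by key, so a reorder reuses the existing instances, and their state, instead of recreating them.

```javascript
// Toy key-based matching. Children with the same key keep their
// identity (here, literally the same object) across a reorder;
// unknown keys get a freshly "mounted" instance with empty state.
function reconcile(oldChildren, newChildren) {
  const byKey = new Map(oldChildren.map((child) => [child.key, child]));
  return newChildren.map((next) => {
    const prev = byKey.get(next.key);
    if (prev) {
      // Same key: reuse the old instance (state preserved, DOM node
      // would be reused), just update its props.
      prev.props = next.props;
      return prev;
    }
    return { key: next.key, props: next.props, state: {} };
  });
}

const a = { key: "a", props: { label: "A" }, state: { count: 1 } };
const b = { key: "b", props: { label: "B" }, state: { count: 2 } };

// Reorder [a, b] -> [b, a]: both instances (and their state) survive.
const result = reconcile([a, b], [
  { key: "b", props: { label: "B!" } },
  { key: "a", props: { label: "A" } },
]);
console.log(result[0] === b, result[0].state.count); // true 2
```

Without keys, a diff by position would pair `a` with the new "B!" child and clobber its state, which is exactly the bug keys exist to prevent.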