I'm disappointed that "No" and "I stopped using Rust" were merged into one answer for the "Do you use Rust?" question. Knowing the reasons people have never used Rust compared to the reasons people have used Rust but moved away would have been a really interesting result.
Yeah, that was a mistake. We were trying to cut out redundant questions, but it would have been so easy just to split this into three answers. We might be able to split this out based on secondary information, but we didn't finish that analysis in time for this post. Maybe we can get it into a future post if we trust the result.
Actually, I answered the survey with "No".
But really it was more like "It's too early for a high-level language, since the tooling is still lacking, but I did try it."
> Since this is apparently a self-selected sample of people interested in Rust, a 1/3 "No" report is a problem.
How do you know what usage level is a "problem" without any point of comparison?
I'm interested in JavaScript, for example, but I don't use it more often than a handful of times per year, if that. I wouldn't be surprised if a third of people answering a JS survey were in the same boat. And I'm not worried about the future of JS.
> If the survey was from a random sample of all programmers, the usage level would be much lower.
Is there any language which more than a small minority of all programmers in the world use?
"I'm interested in JavaScript, for example, but I don't use it more often than a handful of times per year, if that. I wouldn't be surprised if a third of people answering a JS survey were in the same boat. And I'm not worried about the future of JS."
This may or may not counter the point. Per various comments I've read here (especially vs the Go language), your language specifically targets programmers who demand high efficiency or low-level control, and who are basically using C, C++, Objective-C, Java, or C#, with most in the efficiency camp using the first two. So, if the data self-selected for those, then a third saying No could be a serious failure if the goal was replacing those languages with significant uptake. At the least, it would be a meaningful metric of whether Rust is succeeding.
Unfortunately, to be sure, we need the people responding to represent those groups. I suggest that whoever has the responses filter out everything but those languages above, then look at the No's or any problems reported. Then filter again to get just C, C++, and Objective-C, as they're the low-level ones, and look at the negative responses again. That could give information on uptake in the most important targets, plus generate ideas for actions to take for the key demographic.
First of all, it's not "my language". I haven't even worked on the compiler for over a year.
Second, think about C++. A year after it came out, were 90% (say) of C users who had heard of C++ using it? I highly doubt it.
There simply isn't enough data to compare to. So you can look at that number and support any narrative you want by making up what you think a "healthy" number "should" be.
Or it could just reflect strong interest in a relatively young language. I'd love to write some Rust in my day job, but it's rough getting support for using new toolchains and package managers (i.e., cargo). Even relatively modest changes (C++11, CMake) count as ambitious.
Anyway, if a suitable greenfield project or even a job offer surfaced, I'd be extremely interested.
Note that we deliberately posted the survey to places where it would be visible to more than the typical Rust community, and made it explicitly clear that we were looking for feedback even from people who didn't currently use the language (after all, we regularly get plenty of feedback from people who are actively using the language, via our discussion forums and RFCs). Perhaps the single biggest goal of this survey was to determine what is keeping people from being able to use the language, and that means reaching out to this crowd. The line from the OP, "We’re happy to report that more than a third of the responses were from people not using Rust", isn't being facetious!
In addition to the typical Rust places (users.rlo, /r/rust) and the typical generalist programming communities (proggit, HN) we also encouraged speakers to share the link to the survey at the end of any talks they gave (which gives us a good mix of both current users and non-users (especially if the talk is part of a generalist meetup or non-Rust-specific event)). We also included the survey in any official blog posts during this period (many of which, like http://blog.rust-lang.org/2016/05/16/rust-at-one-year.html , we expect to be general enough to appeal to more than just our usual users). And we also urged our community members to pass the link on to their friend groups, as well as any other communities that they may be a part of.
I'm curious if there were any significant responses to the tooling/ecosystem questions regarding packaging and integration with Linux distributions.
I'd like to provide an application written in Rust in major Linux distributions. "cargo install" will not work, both because it depends on network access, and because it pulls in dependencies from outside the distribution. Similarly, many distributions have policies against bundling library sources in the application. There needs to be a straightforward way to turn a source package with a Cargo.toml file into a package for Debian and Fedora that depends on other packages corresponding to its Cargo dependencies (and C library dependencies).
Cargo install was never intended to be used that way. It's for things useful to Rust developers, not for people who use a program that happens to be written in Rust.
There's a few things at play here. For "build from source" distros, we've been making some changes to make it easier to package rustc. Namely, instead of relying on a specific SHA of the compiler to bootstrap, starting with 1.10, it will build with 1.9, and 1.11 will build with 1.10. This is much, much easier for distros. As for Cargo dependencies on those packages, we'll see. There's a few ways it could go, but we need the compiler distribution sorted first.
A second is that we'd like to eventually have tooling to make producing a .deb or .rpm or whatever easier: https://github.com/mmstick/cargo-deb is an example of such a tool. This won't necessarily be good enough to go into Debian proper; I know they tend to not like these kinds of tools, and want to do it by hand. But for the "hey, thanks for visiting my website, here is a thing you can download and install" case, these kinds of packages can be made with this kind of tooling.
In general, it's complex work, since these issues also differ per distro or distro family. We'll get there :)
> This won't necessarily be good enough to go into Debian proper; I know they tend to not like these kinds of tools, and want to do it by hand.
No, not "by hand". By automation provided by distribution.
And you know what it takes for this automation to be able to build a DEB out of the source tree? A "compile" command that (a) does not touch the network, ever, and (b) does not expect dependencies to be downloaded into the source tree nor $HOME/.whatever, but allows them to reside somewhere in /usr/lib.
It's that simple, yet most of the language-specific package managers completely fail at this, or at least make this build mode awkward.
BTW, the same goes for RPMs, and virtually any binary packaging system.
> My understanding is that the _configuration_ is usually written by hand, and then the automation takes care of the actual build.
There's absolutely nothing wrong with auto-generating the packaging files in the debian/ directory, as long as the uploaded package contains the generated versions. I'd love to see a tool that can generate a Policy-compliant Debian source package from a Cargo crate, ready for upload.
> (a) is already covered, as is (b). That said, we're still working on making it easier.
I've seen the proposed plan for addressing (a) and (b), namely shipping the full source code of library crates in the -dev package. Has this been implemented, in a way that Cargo can use?
I would too. I didn't realize that it was achievable; even as a long-time Debian user, I haven't paid close enough attention to these policies, it seems.
Oh no, I meant that Cargo itself will already not hit the network, and you can configure exactly where it downloads sources. I wasn't aware of any "Cargo can use a .deb for source" plans.
I'm glad to read this. The only thing that remains, then, is to keep programmers (those writing applications in Rust) from breaking either of these points...
> That said, we're still working on making it easier.
...and to make the documentation clear on how to use the build tool in this manner, even for a sysadmin who doesn't write Rust code (I don't know the current state of Cargo's docs; maybe it has already achieved that).
Are disabled people "under-represented in technology"? Just a question... I have fused wrists from a motorcycle accident. It's inconvenient, and I cannot drive or play most sports, but I don't consider myself disabled. Am I under-represented? Do I self-identify as disabled? No. Would others in my position? Maybe... I guess that's a question about where the disability threshold lies...
Do wheelchair-bound people identify as "under-represented in technology"? I don't even know. All I know is that some of them may say yes, and some of them may say no. So we don't know who is in the 81%.
Gay - are gay people "under-represented in technology"? Somehow I doubt it. Again same problem. You're gay. You're part of a minority group, definitely. But are you "under-represented in technology"?
Wouldn't it have been better just to spell out the categories and ask people if they are in them? "Are you white", "how old are you?", "black", "gay", "trans", "female", "disabled" etc?
It is fine if people don't consider themselves under-represented; we just want to be proactive about discovering whether people end up self-selecting out of our community because they don't feel welcome. Furthermore, we are starting some initiatives focused on growing a diverse group of participants (https://github.com/rust-community/RustBridge), and so we need some baselines to see if we are successful or not.
We did have a number of categories that people could select from. Our plan for next year is to also identify groups from the free text. We are tracking our improvement plans here: https://github.com/rust-community/team/issues/28
I'm a minority who is often "not represented" in things, and that's neither good nor bad - it's simply neutral.
The idea of "representation" is rather silly and I'm always amused by how otherwise extremely intelligent software engineers completely throw out their scientific rigor when it comes to political issues and diversity. There's simply no evidence that deliberately assured diversity has any inherent value to a software project (or any kind of project).
And this is before bringing up the can of worms that is social engineering. The nanosecond you introduce deliberate manipulation of what would otherwise be organically formed groups, you unwittingly cross a line into the slippery slope of what is or isn't OK for you to manipulate, resulting in a relativistic spiral of argumentation toward authoritarian behavior.
Got it - sounds like I probably misunderstood the motivation, which appears to have been less about demographics than about identifying under-representation. Kudos on the whole survey and this particular section; my nitpick aside, I find the whole thing highly interesting and am personally impressed by your desire to know your user base in this way. I'm not a Rust user yet, as I am a scientific programmer and on the exploratory side I really need a REPL, but I will make an effort to look at Rust for the parallel side of our production code.
Yeah, we iterated a bunch on this question, and I still don't think we have it quite right.
On the REPL side, we've had a few prototypes [1] over the years. We'd love something someday. What do you use for your scientific computing? We also have some projects that are trying to ease using Rust with other languages ([2], [3]), so knowing what we should prioritize would help.
Python (Numpy/Pandas really) is our go-to for all the usual reasons on the exploratory side, with some R (but R is becoming too slow for our growing data sets). What we're really missing is a language that does parallel computing: not necessarily only SIMD (Cuda style), but something that can also be more flexible on algorithms which have step dependencies. Scala/Spark works here, but it's more about big-business batch data in our view, whereas we are about big non-linear optimizations in finance (fixed-income curve fitting on huge numbers of bonds), with a further requirement for soft real time. We're very excited about the Knights Landing Xeon Phi, which will, we think, give us good performance on SIMD, and much more algorithmic flexibility because it can also be treated as a 288-core Xeon (hopefully). I think Rust could be a very good fit here, because I predict that these "mega-core" chips will become very important. We can tolerate very little latency between processes, so we basically cannot go multi-node too easily. We are also relatively small, so we don't have supercomputer budgets.
I'm surprised there aren't more Java devs responding. Rust is characterized as a systems programming language, but Java has made more incursions down into the C/C++ "systems programming" space over the last decade than any other language, and it is ripe for disruption by Rust due to Rust's superior efficiency.
In agreement with the survey, I believe this will not happen until there's good IDE support, and until more hardened enterprise-y dependencies/libraries emerge (e.g., database access to things like Oracle via OCI or ODBC). After an extended run with Rust, I returned to the Java 8 world and was briefly shocked to find a) how much more productive a good IDE made me and b) how productive such an IDE made me at writing inefficient code! Therein lies an opportunity. Rust should be attacking Java from below. The waste in a typical Java stack should make us all weep. The only barrier I see is perception around the ownership learning curve.
In bouncing between languages, what Rust lacks (and what other languages have) is the ability to pass the sip test: the first experience. Out of the box, untuned Java screams thanks to the JIT and the IDEs; it's only far later, when you try to do something serious, that you feel the cost of the abstractions that help productivity, and find yourself in a forest of JVM tuning options, writing non-idiomatic Java code in order to get reliable performance and behavior. Rust requires too much faith at this point: that the microbenchmarks matter, and that investment in mastering the borrow checker will pay off with more stable performance and behavior later.
Other points: Cargo is underrated. Traits over primitives is a huge win over Java's boxed collections. Community involvement and health is underrated.
Someone posted a great question here that they later deleted. My full response also wasn't submitted (and lost), so I'm trying to recreate part of it here. It was actually a good question about whether Rust's efficiency has been tested.
I've done some testing; Java is faster in terms of the sip test for a lot of stuff, especially for cases where you micro-benchmark single processes without regard to overall CPU or memory usage. However, when you scale up (to the point where JVM tuning comes into play), the raw performance in terms of response time is comparable, while the overall system utilization (e.g., "time" measurements of user/sys) is much better for Rust: Java uses a lot more system time due to JVM housekeeping threads. So "more efficient" is not a win for Java, even though wall-clock benchmark time might look better for a single process (as long as you don't look at overall system utilization).
Java's performance comes at the price of more memory usage and higher variance in response time. You can attempt to constrain one or the other, but you can't constrain both at the same time. Microbenchmarks typically put a high enough ceiling on memory that they're relevant for very constrained loads. To be robust beyond the microbenchmark, incremental GC is required, which levels the playing field between Java GC and Rust's jemalloc. Still, even when you attain parity, the Java solution is using far more memory due to the JVM's OOP overhead.
Another example of a hidden "cost" of Rust comes from things like Unicode support. Recently I tested a regex-heavy algorithm that used burntsushi's regex library for Rust. In initial testing, Java blew it away. What I later realized was that the default Java implementation I used did not support Unicode characters. When I enabled that support, and enabled incremental GC (to support the scale of testing I was performing), the performance was similar.
This is another example of Rust being mischaracterized up front. Older languages took short cuts, and fare well for the low end of testing (the microbenchmark). Rust tends to look forward and to the bigger picture.
Anyway, sorry you deleted your legitimate question. It was a good one, and the kind that will make Rust better.
> Another example of a hidden "cost" of Rust comes from things like Unicode support. Recently I tested a regex-heavy algorithm that used burntsushi's regex library for Rust. In initial testing, Java blew it away. What I later realized was that the default Java implementation I used did not support Unicode characters. When I enabled that support, and enabled incremental GC (to support the scale of testing I was performing), the performance was similar.
Could you explain a bit more about this? I find it surprising. If you can't share the code, perhaps you could share the regexes? Which Java regex engine did you use? (There really should only be one case where Unicode support causes performance problems, and that's when you use word boundaries.)
Thanks! If you come up with an example I'd love to see it.
Generally, even though `\w` in Rust's regex library supports Unicode, it shouldn't result in a slow-down compared with the non-Unicode `\w`, assuming you're using find_iter. (Of course, Unicode support isn't free, but the primary cost here is memory and compile time, not matching performance.)
If you were indeed emitting `String` (a new allocation for every match) instead of `&str`, then that could certainly be a possible explanation for the slow down.
> My full response also wasn't submitted (and lost), so I'm trying to recreate part of it here.
Just an aside: that's exactly why I installed the browser extension "Lazarus: Form Recovery" years ago. It lets me easily recover/reuse anything I previously typed into a form. It hasn't been updated since 2014 (Chrome web store) but works fine.
Thank you for publishing the results of this survey. It has inspired a similar survey for the Nim programming language[1] that is still open, so please take a look if you have a spare couple of minutes.
It will be very interesting to see how the results differ between Rust and Nim.
My favorite part about this survey is that for many of the questions there is majority agreement. Many developers believe that tooling needs to improve, or that crates should work with stable features.
I think this is great for the Rust team going forward, knowing what they should act on.
A component of this that's also nice is that there weren't any real surprises: if you had asked me what needs to improve, I would have said the same thing the survey did overall. So it was really cool/nice to see our suspicions confirmed, and knowing that we are already working on some of this stuff.
It's early days for Rust; there will be general agreement on what needs work because it's usually obvious and everyone has roughly the same experience with the language.
I feel the difficulty comes when the language ages and the most apparent problems have been tackled. Take Python: what's needed? Async? A GIL fix? More functional features? Faster execution?
> Async? A GIL fix? More functional features? Faster execution?
Three out of these four boil down to "this isn't fast enough", which makes it seem as though the Python community is unified in understanding the problems it faces. :P
"Person of color": perhaps this answer had so few responses because few people identify as being a person of colour. The wording does seem rather twentieth century.
Thank you. This reddit thread discusses it some: https://www.reddit.com/r/rust/comments/4ikawg/launching_the_... plus there's this little gem[0]: 'Talking with my Korean friends here in Korea, I found that they didn't check "person of color" checkbox because they had no idea what that is. Typical response: "does it mean person with colorful personality?" Just for your information.'
That's actually a poorly constructed survey question, since a person can be part of several under-represented demographics in technology, which makes percentage breakdowns inaccurate.
This survey question allowed for people to describe themselves as they decided, and so people could select multiple choices if they wanted, as well as enter in free text. The percentages John used in this writeup should just be compared to the total number of people that replied, rather than across each of these demographics.
I've met most of my trans friends (three out of four) through Rust, it's indeed interesting that Rust apparently manages to appeal to such a specific demographic.
If items ("fn" definitions) implemented type inference instead of requiring the programmer to manually document something the compiler can figure out, the learning curve would be easier.
Especially with tooling that shows the type of any given expression (integrated with an editor).
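To make the suggestion concrete, here's a minimal sketch; the function name is just illustrative, not from the thread:

```rust
// Today, every `fn` item requires a full signature, even a trivial helper:
fn square(x: u32) -> u32 {
    x * x
}

// The suggestion is that something like `fn square(x) { x * x }` could
// compile, with the compiler inferring `u32 -> u32` from the call sites.
fn main() {
    println!("{}", square(7)); // prints 49
}
```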
> If items ("fn" definitions) implemented type inference instead of requiring the programmer to manually document something the compiler can figure out, the learning curve would be easier.
No, it wouldn't. Not when the compiler is spewing error messages about types you didn't even mention, especially if the compiler inferred that type on the basis of code in some random other file you didn't know about.
Guess our experiences are different. In F# I do this all the time, and I've never had it be an issue. In Rust, if I'm trying to play around with something, I have to figure out the signatures even when there's only one option.
I hear this argument that the inferred types will be confusing (from C#, F#, and Rust communities; I'm sure it exists in others) -- but they will be accurate! It's something the human has to figure out anyways, except without computer assistance. All because of some misguided attempt at enforcing documentation guidelines and verbosity on programmers, sight-unseen.
I guess users can wait for IDE support with a "show inferred types" option.
Edit: And just as a note, I really love Rust. My only core complaint is the verbosity. And that's a small price to pay since I'd either be paying with perf/memory usage (more expressive languages) or bugs and far more verbosity (C).
Just as an anecdote, the fact that Haskell does this (infers type signatures for functions) is one of the things I don't like about it, because it seems to often infer things that are surprising to me. I think inferring types in most places, but enforcing explicit types at the level of method definition strikes a really good balance. I care about the types required to satisfy interfaces, but enjoy not needing to care so much in implementations.
IDE support for creating method signatures by inferring types would be neat though.
Haskell _allows_ it. The programmer decides if they want it. Requiring a full signature inhibits natural code writing by introducing a high amount of overhead per function. Writing a 1-line inner function is no longer an obvious win. The annotation requirement applies to nested functions in Rust.
I don't really understand this issue. If I write a one-line function in Rust, nested within another function, I will use a closure. Which, incidentally, rarely requires me to annotate any types.
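For illustration, a tiny sketch of the contrast (the names are just for the example):

```rust
// A `fn` item needs a full signature, whether top-level or nested:
fn double_fn(x: i32) -> i32 {
    x * 2
}

fn main() {
    // A closure, by contrast, infers its parameter and return types from use:
    let double = |x| x * 2;
    assert_eq!(double_fn(21), 42);
    assert_eq!(double(21), 42);
}
```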
I can choose to use explicit type signatures for functions I write, but I can't choose to use them for all the functions I read. I read a lot more than I write.
We also may just disagree about what constitutes a "high" amount of overhead. Thinking about the contract of my method does not seem like overhead to me; I have to do that anyway.
> All because of some misguided attempt at enforcing
> documentation guidelines and verbosity on programmers,
> sight-unseen.
Characterizing the tradeoff in this way is very uncharitable. In Rust, function signatures are the absolute source of truth about the contract that a given function upholds. In systems with global type inference, the entire program is the source of absolute truth for any function whose parameters are inferred, and local reasoning becomes much more difficult.
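A tiny sketch of what that local reasoning buys (the function name is just for the example):

```rust
// The signature is the entire contract: the body is checked against it, and
// callers are checked against it, independently of each other.
fn add_one(x: i32) -> i32 {
    x + 1
}

fn main() {
    // If `add_one`'s body changes without changing its signature, this call
    // site never needs to be re-examined; with global inference, a body
    // change elsewhere could alter the inferred type seen here.
    assert_eq!(add_one(41), 42);
}
```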
My apologies, but that's what's been explained. Something to the effect of "it's best that people document/annotate their types, so we require it."
Your description of contracts in Rust vs other languages isn't reflective of how they can be used. With global inference, nothing prevents a programmer from deciding where the major boundaries of his program are and annotating where desired. This not only provides the benefits of both worlds, but allows the author to decide where the contract and documentation goes.
Nothing is gained by requiring the programmer to annotate every little helper function, nested or otherwise. And for people just playing around, it brings a significant barrier. They must not only sort of grasp how the borrow system works, they must be able to write a flawless annotation to boot.
At worst, lack of annotations should be a warning (the warning can include the inferred type!), like the casing/naming style ones. Perhaps with an option for public/exported functions only.
(Sorry if my tone comes off hostile; not my intention.)
> "it's best that people document/annotate their types,
> so we require it."
That may be how people interpret it (I'd be surprised to find that line in any official source), but the reason for this choice was ultimately technical, not philosophical. Having function signatures be the ultimate source of truth means that the typechecker can ignore function bodies entirely, which has large implications for the design of the type system and the layout of the compiler pipeline. More importantly, this feature enables fine-grained incremental recompilation at the function level: if the body of a function changes but its signature does not, then the compiler knows that consumers of that function do not need to be recompiled and can skip an enormous amount of work (assuming that inlining is minimal, which is indeed the case in debug mode, where incremental recompilation is intended to shine). We're not taking advantage of incremental recompilation yet, but it (along with speeding up the compiler in general, as corroborated by the survey) is the primary goal of the compiler team this year; the prerequisite overhaul of the compiler middle-end is just about to land: https://github.com/rust-lang/rust/milestones/Launch%20MIR%20...
"it's generally regarded as best practice to write out the types of your functions, hence Rust's choice here. ... Rust follows 'explicit over implicit'"[1]
If it's a technical limitation then that's far more understandable. I've just not heard this explanation until now -- not that I really was seeking it I suppose. Thank you.