With respect, and I say this having experimented with Haskell, OCaml, and many other functional languages as well: just, no. Those languages come at a MUCH higher mental cost than Go. Go is wonderful because writing it doesn't feel like I'm attempting a CompSci doctoral thesis. I can also say I have yet to see the Go runtime crash, and I am using it in production.
What do you mean by "mental cost" here? Cost of initial learning? Then I agree... But learning a language is an O(1) cost that takes a few weeks to a few months. We program with the language for many years.
I never said the Go runtime will crash, but your programs will, because of unsafe nullability, mutability in the concurrently shared state, and various other problems in Go.
Better concurrency in Haskell: In addition to having lightweight threads like Go and channels like Go, Haskell also has software transactional memory, which Go lacks. Haskell also has `par` annotations on pure expressions that allow parallelization with a guaranteed lack of effect on program semantics. Haskell also has data parallelism.
Haskell (and most of the other languages mentioned) is exceptionally clever. This, above all else, is its downfall. For me, especially when tasked with building a high-productivity development team, clever code is a ticking time bomb. It's easy to write, but hard to maintain and modify. It requires more mental RAM to analyze any given piece of code, and is much more difficult for multiple programmers to contribute to. It's tiny, to be sure, but, again, I think that is a negative, not a positive. Maximum clarity is not minimal code.
Go, on the other hand, is not clever. It's boring as hell, honestly. This is a Very Good Thing® when it comes to building out a dev team, and is, I feel, the single biggest reason Google put the resources into creating it.
As for the crashing, of course I see some dangerous areas. Educating developers on avoiding a small set of pitfalls is much easier than managing a team of clever coders, though.
I'm not a Haskell expert (obviously), but the time it takes me to parse things like this is time I would rather spend reading 5-10x the number of lines and getting the meaning right away.
let loeb x = fmap ($ loeb x) x in
loeb [ (!!5), const 3, liftM2 (+) (!!0) (!!1), (*2) . (!!2), length, const 17]
(btw, I have no idea what the hell this does. Something to do with spreadsheets, apparently. I found it on http://www.haskell.org/haskellwiki/Blow_your_mind, which has enough cleverness to make me want to cry)
I've seen cleaner and more readable code in production haskell, but this sort of thing happens enough that I'm very cautious.
I think the first paragraph of the page linked explains why that code is so impenetrable. You will not have to read or write Haskell like that, ever. Good find, though! On a similar note, the Haskell community is amazing. You can learn absolutely everything between the Haskell wiki, Freenode IRC, and Hackage. How great is it that EVERY lib/package/framework is documented on Hackage in exactly the same format? Very great. Coming back to JavaScript is a bummer :(
I don't think you'll ever find this kind of code in production.
With years of Haskell experience, I rarely encounter code that is hard for me to read. This is a good counter-example, and is not typical code.
I read 10 lines of Haskell code roughly as fast as I read 10 lines of Python code -- yet the 10 lines of Haskell code can pack much more useful information.
So Haskell is a great tool for more efficient communication between programmers, who can write shorter messages to each other to convey the same information.
But, then you gotta remember, "worse is better" (New Jersey style), and Go is worse. New Jersey style: (1) simplicity: be simple in both implementation and interface; (2) correctness: be correct, as long as it doesn't make things more complicated -- simplicity is more important than correctness; (3) consistency: be consistent when you can, but simplicity is more important; and (4) completeness: be as complete as you can, but realize completeness can be sacrificed for any other objective.
I think C and Go share a lot of New Jersey style -- Go is great in simplicity (remember, simple isn't easy). I think Go will continue to grow quickly because of these attributes.
> I never said the Go runtime will crash, but your programs will, because of unsafe nullability, mutability in the concurrently shared state, and various other problems in Go.
Personally, these problems are not a time-sink for me. The problems that I spend the vast majority of my time on tend to fall into two categories.
First, there are design problems: e.g., how do you model your data to be queryable, how do you architect services to make them resilient to machine failure, how do you monitor services, how do you route logs, etc. Haskell doesn't help with these. Neither does Go.
Second, there are operational problems: e.g., your FS might randomly corrupt some data you stored previously (remember to checksum), your caches might get out of sync (a fun problem if you ever go multi-DC), your service has shoddy backoff and DDoSes a failing downstream service, etc. Haskell doesn't help with these. Neither does Go.
Am I saying that there's no value to the fact that Haskell solves these at compile time? Of course not. Just that the relative amount of time that would save for me is not my deciding factor when picking a programming language.
Another way to think about it is that people have (I include myself in this category) written large-scale projects in dynamic languages which have all the problems you mentioned plus a few more. And yet, the developers at Twitter, or Facebook, or Reddit don't spend the vast majority of their time face-palming at type-errors or NPEs, or ConcurrentModificationExceptions. They have other concerns.
I think people tend to underestimate the amount of time they spend on problems when those problems are uninteresting. A single null dereference error may be trivial to fix, but the overhead around fixing any bug may be costly. For example, you might need to deploy a whole new version, rerun test suites, and have a bunch of meetings. After all of this, you might remember only the 10 minutes you spent fixing the bug itself.
When you implement a red-black-tree, do you not spend any time testing your invariants? Or figuring out the bugs? That's a good example of where the Haskell type system can simply give you compile time guarantees saving you from bugs and from having to test them.
I also worry about the problems you mention, many of which Haskell indeed doesn't help much with. But I don't see how these problems (which you solve once, usually by reusing a library) are what costs the majority of the time. A big, non-trivial implementation has bugs and will require testing. Haskell will have fewer bugs, and require fewer tests. This is a pretty big deal.
It's weird that you bring Twitter as an example, as they canned a dynamic language solution for a static language that is in many ways very similar to Haskell.
I've seen multiple large-scale projects in dynamic languages. They all fail to scale well, both performance-wise and maintenance-wise. Statically-typed systems scale far better along both of these axes.
> I think people tend to underestimate the amount of time they spend on problems...
Again, the implication is that people who use languages with looser type-systems than Haskell spend lots of time dealing with the problems that you mention. In my experience, that is not the case. You can claim that I'm underestimating the impact of such bugs if you'd like.
> When you implement a red-black-tree, do you not spend any time testing your invariants?
Testing is, as far as I can tell, the reason that these problems don't come up. In the process of solving all of those problems I listed above, you tend to write a bunch of tests that exercise the same code that is run in production.
> It's weird that you bring Twitter as an example, as they canned a dynamic language solution for a static language that is in many ways very similar to Haskell.
I assure you that this transition is far less complete than you might think, and even when complete, will have more Java than Scala. Note that both of these languages allow shared mutable data, both allow null pointers.
> They all fail to scale well, both performance-wise and maintenance-wise. Statically-typed systems scale far better along both of these axes.
Oh, I agree. Go is statically-typed. However, you're advocating going even further along the spectrum, and I'm saying that going further brings diminishing returns, and starts costing you in terms of available engineers, and your productivity in writing code. I think Go occupies a good point along this spectrum, where I can write robust code without arguing with a compiler.
They guarantee the correctness of the invariants of the Red Black Tree, and they easily replace hundreds of lines of test code which give no guarantee.
We might still need to write tests, but a lot fewer of them. Also, those we write will give us far more "bang for buck" because we can use QuickCheck property testing.
> Note that both of these languages allow shared mutable data, both allow null pointers.
Scala shuns null - and only has it for Java interop. All Scala developers I've discussed this with program as if null did not exist, and never use it to signify lack of a value.
> I think Go occupies a good point along this spectrum, where I can write robust code without arguing with a compiler.
When "arguing with a compiler", you're really being faced with bugs now rather than later, when the code is no longer in your head - or worse, in production. If the type checker rejects your program, it is almost certainly broken, and it is better to "argue with a compiler" than to just compile and get a runtime error later.
Availability of engineers is a good point, though as a Haskeller, I know both companies seeking Haskell employees and Haskellers seeking employment (preferably in Haskell).
Consider that the flip-side of engineer availability is the "Python paradox".
> Testing is a cost. It is more code to write, more code to maintain.
I'm not saying you write tests to protect against NPEs. I'm saying that you write tests to ensure correctness of your code, and as a side-effect NPEs are flushed out of your code. This is my theory explaining why NPEs are not a timesink for me.
> They guarantee the correctness of the invariants of the Red Black Tree, and they easily replace hundreds of lines of test code which give no guarantee.
Code with mathematical invariants seems like such a niche area, though. The average type of test that I write is "ensure your service calls service X after first checking for values in a cache; ensure that it can handle cache unavailability, service X unavailability, cache timeout, service X timeout". Maybe you could figure out a way to encode that in a type system, but I'd wager that it wouldn't be as readable as the equivalent written as a test.
> Scala shuns null - and only has it for Java interop. All Scala developers I've discussed this with program as if null did not exist, and never use it to signify lack of a value.
That's exactly right. Languages with nulls, and mutable shared state are perfectly reasonable to use if programmers do the right thing by convention.
> When "arguing with a compiler"...
I misspoke. I was thinking of the learning process, not the process of writing code.
Although my current focus is backend systems, I have worked across the spectrum in the past. I've worked (not including languages I dabble in at home) in JS (for browser UIs), Java (for Android UIs and servers), Obj-C (for iOS UIs), Python (for servers and scripts), Scala (for servers), Ruby (for servers), and C++ (for servers). These languages cover a wide range of the dynamic-to-static spectrum. I don't find myself writing much safer code when I go from JS to C++. The same for a shift from Ruby to Scala. The same for Python to Java. These shifts are relatively large on the type-system spectrum, as they go from dynamically typed languages to statically typed ones. You claim that a significantly smaller shift, removing nullability from a language, would be a big deal for reliability. That doesn't seem likely to me.
> I'm not saying you write tests to protect against NPEs. I'm saying that you write tests to ensure correctness of your code, and as a side-effect NPEs are flushed out of your code. This is my theory explaining why NPEs are not a timesink for me.
You never know when you have enough coverage to rule out NPEs or any other bug. And to get confidence about the lack of NPEs you want coverage of all lines involving dereferences, which means you need near 100% test coverage to have a reasonable level of confidence. With Haskell, I can be reasonably confident about my code with very few tests.
> but I'd wager that it wouldn't be as readable as the equivalent written as a test.
Generally types are far more concise and guarantee more than tests. I find 5 lines of types more readable than dozens or hundreds of lines of tests.
> That's exactly right. Languages with nulls, and mutable shared state are perfectly reasonable to use if programmers do the right thing by convention
I think Scala users will generally disagree with you. They'd prefer it if null was ruled out in the language itself. That said, Go convention is to use nulls, not shun them.
> I don't find myself writing much safer code when I go from JS to C++. ... Ruby to Scala ...
Your code is much safer simply by construction, so I am not sure what you mean here.
> You claim that a significantly smaller shift, removing nullability from a language, would be a big deal for reliability. That doesn't seem likely to me
Hitting type errors at runtime, null dereference crashes in Java and "NoneType has no attribute `...`" in Python is pretty common IME.
I do think non-nullability aids reliability, but that having sum types, proper pattern matching and parametric polymorphism aids it even more. And Go lacks all of these.
> You never know when you have enough coverage to rule out NPEs or any other bug. And to get confidence about lack of NPEs you want to have coverage of all lines involving dereferences, which means you need near 100% test coverage to have a reasonable level of confidence.
This is not true. In the example I spoke about above, if I take a CacheClient, and a ServiceXClient when my type is being constructed, assign them to local fields and then never modify that field again, then I don't need to exercise every dereference of these fields, just one. And again, I don't test that my code handles NPEs, I test that my code does what it is supposed to, and in the process of doing that, NPEs get flushed out.
> Generally types are far more concise and guarantee more than tests. I find 5 lines of types more readable than dozens or hundreds of lines of tests.
I think you are viewing this through red-black-tree colored glasses. Specifically, you believe that a lot of code has mathematical constraints the way that example did. To me, this is an extremely remote possibility. I think if you tried to encode even the smallest real-world example of this, say a service implementing a URL shortener, you would run into a wall.
> Your code is much safer simply by construction, so I am not sure what you mean here.
I should have said more reliable.
> I do think non-nullability aids reliability, but that having sum types, proper pattern matching and parametric polymorphism aids it even more. And Go lacks all of these.
Truly, it baffles me that people still harp on the reliability aspect. It is quite likely that every piece of software you use day-to-day is written in a language with nullability, without pattern matching, and without sum types. Most of that software probably doesn't even have memory safety (gasp!). Probably every website you visit is in the same sorry state. I'm sorry, but your arguments would be far more convincing if the world written in these languages were a buggy, constantly crashing hell. It's not.
I guess to progress from here we'd need to laboriously compare actual example pieces of code. For example a URL shortener is going to be easier to write safely in Haskell, where I am guaranteed by the type system not to have 404 errors in routes I advertise, or XSS attacks.
Also, in my experience, computer software is buggy, unreliable, crashing, and generally terrible. I think people who view software differently have simply grown so accustomed to the terribleness that they can't see it anymore.
Also, reliability is interchangeable with development speed; that is, you can trade one for the other. So if you start from a higher point, you can trade more for speed and still be reliable. In an unreliable language, reliability is typically achieved by spending more time maintaining test code, doing QA, etc. In a reliable language, more resources can be spent on quicker development, and fewer on testing and QA.
When you see a reliable project implemented using unreliable technology, you know it's going to scale poorly and require a lot of testing.
I'm a little confused here. You realize that Go is also statically typed, right? I'm not sure where any debate about dynamic languages started. The points you make about static vs. dynamic are valid, just not relevant at the moment.
It's funny, too, how you talk about "implementing a red black tree" like it's an everyday occurrence. I'm guessing you are a teacher/researcher (in which case this entire discussion makes much more sense). On any application team I've worked with (in the valley or out), implementing a binary tree from scratch would require extreme justification and literally have to be the only way possible to solve the problem.
On the dynamic-to-static axis, Go is much closer to the dynamic side than to Haskell's side.
I am not a teacher or researcher, I am a practicing programmer writing code that is used by critical systems as well as ambitious projects that will (hopefully) be used by many real people.
A red-black tree is just an example with invariants that everyone is likely to know, so it's a nice way to illustrate the point about the power of types. Known problems are solved problems, and unsolved problems are unknown problems -- so either my invariants example will not speak to you because you don't know it, or you will reject it because you can just reuse a library.
Ok, look, I understand that you spent a lot of time learning Haskell, and desperately want that time to not have been in vain. You have to back off the preaching, though. The conversation you joined wasn't about Haskell. You barged in and made it about Haskell. On the way, to justify your comments, you have put forth some pretty ridiculous claims: Go being dynamically typed, Haskell being 30-50% quicker to develop in, glossing over the valid points about it being more difficult to learn, etc.
We get it. You don't like Go. Some of us do, and would rather have a productive discussion about how to take advantage of its features and avoid the traps than get into a pointless debate about a language that we are highly unlikely to ever use. (After this discussion, I certainly never will.)
You are giving the Haskell community a bad image with this kind of behavior, and I kindly request that you not reply to any more of my comments with anything to do with Haskell.
I don't need to "want that time not to have been in vain"; I am already developing with Haskell and reaching extremely high productivity levels. What I want is for people to spend less time improving the ecosystems of poorly designed languages that repeat past mistakes.
Go is more dynamically typed than Haskell. It isn't a 0/1 thing. Parametric polymorphism is dynamically typed in Go. Nullability is dynamically typed in Go. These are huge parts of the languages.
Instead of engaging in a discussion, you're repeating mistakes in a condescending tone.
Your last part of the comment is only appropriately responded by "You are giving the Go community a bad image by pounding me with ignorance with every reply, I kindly request that you study the matter before replying further".
I can't remember the last time I implemented a data-structure in a web framework. You just use the built in dictionary/hash table and get on with life.
Well, whenever you write non-trivial code, your code is going to have invariants. These invariants can be partially tested or they can be fully type-checked. The latter is better and cheaper, when available. Haskell makes the latter available far more often, so you don't have to pay for the former. This is not just useful for data structures, but code in general.
This post kind of sums it up for me. No, I'm not thinking about invariants. We're talking about writing a web app here. Typing is pretty meaningless when 99.9% of your data is just strings.
I think you are vastly under-estimating the learning cost. We're talking about teams of developers, not a hobby project. A few months * multiple programmers adds up to man-years really quickly.
Go is much simpler. We deployed our first production Go service (admittedly a fairly minor piece) under a week after we made the decision to start using Go.
PS: State is only shared when you make it so. Go concepts like channels make it very easy to write, _and debug_, clear, decoupled, non-trivial parallel code.
If you had to pay the salaries of programmers for the next 3 years to develop some solution, and knew that 10% of that time would be spent learning a new technology that would make them 20%-50% more effective for the remaining 90% of the time -- would you do it, or avoid it based on the large cost of 10% of these programmers' time?
I also challenge your "better concurrency" claim.