This might be the most revolutionary technology in the world, but I really have no idea, because your landing page is full of the vaguest, most utopian promises. You might say, well, you should go read the "technical docs", but I have read about so many hyped technologies that came to nothing at this point that I am incredibly skeptical.
If you can't tell me what you are doing in a single paragraph, then you've got a problem. If you can, why doesn't your homepage reflect that? Show me! I don't want to scroll through another website with 16 point font.
We all love the Unix philosophy and puppies, too.
Please take this as the constructive criticism it is.
I don't disagree with your criticism. I read through the Urbit whitepaper [1] a while ago, and there's some meat and some novel ideas there. As for a concise description, here's my attempt, followed by some content from the whitepaper.
The key idea that distinguishes Urbit is its focus on deterministic computing. Urbit is a computing environment, like a virtual machine, with the distinction that the entire computational result is designed to be a deterministic function of the inputs. The inputs are represented as a sort of transactional log-structured file system, where computation is a pure function of the input events. They have provisions for multiple computers interacting over a network while handling inputs in that way.
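To make the "pure function of an event log" idea concrete, here's a minimal sketch in Python (the event shapes and function names are my own invention, not anything from Urbit):

    from functools import reduce

    def transition(state, event):
        # Pure: the same (state, event) pair always yields the same next state.
        kind, payload = event
        if kind == "put":
            key, value = payload
            return {**state, key: value}
        if kind == "delete":
            return {k: v for k, v in state.items() if k != payload}
        return state

    # The log is the only input; state is just a fold over it.
    log = [("put", ("a", 1)), ("put", ("b", 2)), ("delete", "a")]

    # Any machine replaying the same log computes the same final state.
    state = reduce(transition, log, {})
    assert state == {"b": 2}

Replay the same log anywhere and you get the same state; that determinism is the property the rest of their stack leans on.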
They implement a sort of combinator-based assembly language and virtual machine, a higher level language on top, and then an OS on top of that. Their virtual machine runs on a regular computer, but in theory you could run an Urbit VM on any machine and observe the same computational result. This is not a trivial problem to solve for an entire OS and network! They invented some interesting optimization techniques ("jets") and networking concepts to make it feasible. As an analogy (this is a stretch): it's like Plan9, except purely functional and deterministic.
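As I understand jets, the idea is roughly the following (a toy Python model; the registry mechanism is invented for illustration, and real jets operate on Nock code, not Python functions): the slow, formally specified function stays the source of truth, and the runtime may substitute a native implementation that must agree with it.

    JETS = {}

    def register_jet(spec_fn, fast_fn):
        JETS[spec_fn] = fast_fn

    def run(spec_fn, *args):
        # Use the registered native fast path if present, else the spec.
        fast = JETS.get(spec_fn)
        return fast(*args) if fast else spec_fn(*args)

    def spec_decrement(n):
        # Nock has no decrement primitive; the spec counts up from zero, O(n).
        k = 0
        while k + 1 != n:
            k += 1
        return k

    register_jet(spec_decrement, lambda n: n - 1)  # O(1) native fast path

    assert run(spec_decrement, 10) == spec_decrement(10) == 9

Decrement is their canonical example: trivially slow in the spec, trivially fast in the jet, and the two must be observably identical.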
> Urbit is a new clean-slate system software stack. A nonpreemptive OS (Arvo), written in a strict, typed functional language (Hoon) which compiles itself to a combinator VM (Nock), drives an encrypted packet network (Ames) and defines a global version-control system (Clay). Including basic apps, the whole stack is about 30,000 lines of Hoon. Urbit is a “solid-state interpreter”: an interpreter with no transient state. The interpreter is an ACID database; an event is a transaction in a log-checkpoint system.
Their goal is to make hosting this virtual machine easy enough that anyone could do it, while (if I recall correctly) decoupling it from the underlying physical machine it's running on, such that it could run anywhere you want it to. Then they speculate that, if many people hosted their own, people would be less reliant on cloud services, or at least could fully trust computational results from the cloud.
The ideas are unfortunately cumbersome to follow once the whitepaper and other technical content reach the point where they explain things with made-up, Urbit-specific terminology [2]. The authors explain their rationale for this choice (IIRC, something like: they have so many new concepts to name that it's better just to assign short codename-like words). I feel ambivalent about this, but I will respectfully mention that the choice might make more sense from the perspective of someone who lives and breathes Urbit every day. From the perspective of a beginner, it's a turn-off; I find it grating and it doesn't work for me. Haskell and Rust have terms to introduce, but their material doesn't leave me feeling bamboozled.
If an alien species had invented computing, what might it look like? I think Urbit is a credible answer to that question. I'm joking, but only partially: from the perspective of "throw out everything and start over from scratch to see what modern computing would be like", Urbit seems like it may be a sincere attempt. At one point in the past, the ideas appeared to be deliberately obfuscated, like performance art or like Brainfuck. They've subsequently made efforts to explain the ideas and build things up from basic concepts, and I think the authors are sincere, but don't hold me to that. In conclusion, I think it's technically neat, but I think it'd be a lot neater if it were documented and presented within a grounding of typical CS and software terminology/concepts. Or if they can't give up the custom terminology, I'd want to see ~10X more material (per unit concept) introducing and clarifying the ideas slowly from the ground up. If they're inventing their own alien computer science, then they need to write the alien "C Programming Language" (K&R) and alien "Structure and Interpretation of Computer Programs" [3].
It might not be easy in terms of connecting it to your existing knowledge, but it could be easy in terms of learning it for the first time. Beginner's mind.
> while (if I recall correctly) decoupling it from the underlying physical machine it's running on
> Urbit is a new clean-slate, full-stack server. It's implemented on top of the old platform, but it's a sealed sandbox like the browser.[0]
It appears that you do indeed recall correctly.
Also, thank you. The two quotes from your comment, plus your sentence following, do describe it in a nice single paragraph. Granted, I do have to research most of the technologies in the stack.
I hope they integrate a proper capabilities framework too, ideally one with cryptographic credentials that allows for reasonable assurances in distributed systems. Stuff like being able to provide a cryptographic guarantee of who has access to what data.
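To sketch what I have in mind (my own toy construction in Python, not anything Urbit ships; a real design would want public-key signatures, expiry, and delegation chains rather than a shared secret):

    import hashlib
    import hmac

    SECRET = b"issuer-private-key"  # stand-in for a real signing key

    def mint(subject: str, resource: str, rights: str) -> str:
        # A bearer capability: the token itself proves the grant.
        msg = f"{subject}|{resource}|{rights}".encode()
        tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return f"{subject}|{resource}|{rights}|{tag}"

    def verify(token: str) -> bool:
        subject, resource, rights, tag = token.split("|")
        msg = f"{subject}|{resource}|{rights}".encode()
        expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(tag, expected)

    t = mint("~sampel-palnet", "/notes", "read")
    assert verify(t)
    assert not verify(t.replace("read", "write"))  # tampering fails the check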
What are they talking about? Dry/wet arms? Batteries? Metal cores? Bridges? Gates? Superpowers?
To me it looks like they've just replaced common concepts with random "cool" words, deliberately obfuscated the syntax, and are trying to sell it as an innovation.
You jumped straight into the 'advanced types' section of their high-level language documentation, which is quite different from most "mainstream" programming languages, so of course a lot of concepts are going to be very foreign to you. You might want to check out their talk at LambdaConf for a better overview of these: https://www.youtube.com/watch?v=I94qbWBGsDs
Also, I doubt they can handle it; their scope is huge: a new OS, a new programming language, networking, encryption, time, compilation, virtualization, etc.
I don't believe a group of five unknown people can handle it.
Not only that, they are incorrect constants. The definition of :cet, for example, is only correct for non-leap centuries: 2000-2099 has an extra day compared to 1900-1999. Its partner :qad has the opposite problem and assumes any 4-year span contains a leap year.
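You can verify the claim with Python's datetime (assuming :cet is meant as days-per-century and :qad as days-per-4-years):

    from datetime import date

    # 2000 is a leap year, 1900 is not, so the two centuries differ by a day
    # and no single days-per-century constant can fit both.
    c19 = (date(2000, 1, 1) - date(1900, 1, 1)).days
    c20 = (date(2100, 1, 1) - date(2000, 1, 1)).days
    assert (c19, c20) == (36524, 36525)

    # Likewise, not every 4-year span contains a leap day.
    with_leap = (date(2004, 1, 1) - date(2000, 1, 1)).days     # spans 2000-02-29
    without_leap = (date(1801, 1, 1) - date(1797, 1, 1)).days  # 1800 is not a leap year
    assert (with_leap, without_leap) == (1461, 1460)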
Of course, the language has other problems too, like 'moar' being used as a serious part of the language's grammar, or worse, more nebulous verbs like 'snag', which make the language even more difficult to understand for non-native English speakers. It's just an utter mess.
Au contraire! They managed to almost eliminate any advantage that programmers who speak English usually enjoy.
What are those advantages?
When I was young and my English was really bad, I simply learned the keywords as mathematical terms (e.g. if, else, for, while, break, next, goto, gosub, signed, class, etc.). I mean, there aren't that many: say 40, and you've covered most mainstream programming languages that aren't really verbose (OK, COBOL is an exception, but who uses COBOL ;-) ). This really isn't much harder than learning mathematical terms like sin, log, cos, exp, cot, asec, etc., even if you know no word of English.
What really makes programming hard when you are not sufficiently fluent in English is that lots of documentation is only available in English (this was the main problem I had). And I don't see how Urbit is going to change that.
As a native English speaker, I'd like to politely disagree with this phrase! (And that was the GP's joke: since they're using so many made-up terms, it effectively isn't English.)
If you know about Yarvin's other beliefs, the fact that it seems obfuscated makes quite a bit of sense in my view. I think it's a superiority thing for him.
At its core? I think it's kind of interesting. Would have been more useful if he had made it simpler to understand/develop in. In its current form, it seems like a vanity project to me.
Discounting Yarvin feeling superior, or others perceiving that, I am in favor of the 'start from scratch' here. Worst-case scenario it fails, and only drags a few bleeding edgers with it. If it succeeds, we have a whole new way to proceed without necessarily abandoning the old.
Just at the hardware level, we are forever, it seems, stuck in the von Neumann architecture, with some recent hops to vector processing on GPUs and FPGA/ASIC/GPU hybrids. We need software for these new platforms if they divorce from the von Neumann machine. I am not calling for reincarnating Lisp Machines, but maybe it's time to build and test new (not the same old) hardware and software, to see how it competes with this path we're on. Why not? Experimentation is fruitful one way or the other; you learn from your mistakes too.
I am interested in Urbit, and I can separate myself from Yarvin's political writings, the same way I could still concede that 1 + 1 = 2, even if Hitler had written a paper saying so without any of his views contained therein.
Yarvin admits that the charge of obfuscation (using different terms, or swapping 1 and 0 from their usual boolean meaning) 'may hold water', per the bootleg YouTube video of the talk he just gave. I personally think when you are trying to usher in something different, it is helpful to shed old terms. So much weight is carried by words, good and bad. It sometimes helps to freshen up the lingo with the new or slightly-altered concepts. After having read a little Wittgenstein and the late Umberto Eco's works on semiotics, I am convinced language or signs carry significant biases that are useful to put aside or rename if it helps you to think of an old concept in a new light.
>After having read a little Wittgenstein and the late Umberto Eco's works on semiotics
Do you have any suggestions on good starting points? I found Tractatus Logico-Philosophicus to be a rough introduction to Wittgenstein. I'm also curious what Eco you'd recommend; I've only read his novels and essays.
I've only read two of Eco's fictional works: The Name of the Rose, and Foucault's Pendulum. I loved them both, but then I picked up The Island of the Day Before, and didn't finish it. I have also read his essays.
The book I was referring to was Theory of Semiotics by Eco. I picked it up in 1982 or 83, after having read The Name of the Rose, and I honestly didn't know what Semiotics meant until I spent 20 minutes reading it in the bookstore (no Googling then!).
I had read Tractatus Logico-Philosophicus around the same time, 1983 or so. I read most of it, some pages multiple times, but I didn't finish it. I only grasped enough to know I wasn't going to change my major to Linguistics. I was hopping off of references in the bookstores the way I now follow links down rabbit holes.
I should revisit these works now that I have some more years on me!
Wittgenstein really only produced two major works: the Tractatus and the Philosophical Investigations (published posthumously), in which he completely revises his approach. Read the latter.
Elitist language can be used by any group to partition themselves from others. I'm sure many groups (formal and informal) do it.
That said, I believe there has to be a floor to the language level you choose, and it's reasonable to use "high school English" as that floor. Besides, I checked out that link (history section) and didn't see anything that screamed "overly clever"...
The comparison to Marxist texts is more pointed if they were unapproachable not just to the layman, but to people proficient in (non-Marxist) political economics -- which is arguably the case.
I'm not sure why that's ironic. Extremists tend to have a lot in common in their overall approaches, even if they may be violently opposed on the specifics.
I dived into Urbit a while ago, and the way it is described makes sense. It's a completely new paradigm, and trying to fit naming from popular programming languages would only lead to confusion. You have some new things, built on top of other new things, and so on. When trying to describe a higher-level abstraction layer, you can't afford to spend two sentences describing each element from the lower one. So sure, you could go with some smart-sounding names, but they would need to be much longer, and they would make things seem more complicated than they are in reality.
I agree. What it really needs is an 'example' page, or a walkthrough of how/what you'd use it for.
I just spent 10 minutes reading the docs and I'm not sure I get it. Is it like Owncloud? Diaspora*? IFTTT? Is it an interface to mirror all my content from other social media sites, and interact with them? You say I can use it to authenticate with sites - is it an OpenID provider? Something something blockchain? All of the above?
This page explains a little bit [1], but a walkthrough or video or screenshots or something would help immensely
It's personal computing, explained in the context of a VC pitch to people who have never heard of an OS, think the browser is their computer, and don't seem to comprehend software that gets installed.
Look at the Evernote example - wouldn't it be great if you could replace the Evernote UI with a different one for $10? Well, you could do that today on any OS with an app and a REST API, but it either doesn't exist (because it's not that great) or it does already exist (so what's revolutionary?).
The same thing applies to almost every single thing lauded. Your data is saved locally! Great, my UrbitTwitter clone keeps my data locally in some undocumented format. But that's ok, I'm not beholden to the app developer in case they die (yes, that's in there), because I can get someone else to reverse engineer that format and build me a new twitter clone. How does Hoon make that special?
Gmail works locally! And <other Web service>! Translation: someone (lots of someones) will build APIs and SDKs for our platform to interface with those services, and they'll be used just like in any other OS to make apps that run on your machine.
It lauds "you can install software, and it runs locally, forever!" as something new and great, when really that's existed since we got off mainframes in the 70s.
Everything it offers is stuff that was solved the first go around in personal computers, but this time with a cryptographic identity and an obtuse language. Maybe there's something I'm missing, but I've been looking at urbit for 2.5 years, know (good) people that worked on it, and still can't find a compelling reason to think of it as something other than a hipster OS, reinventing decades old ideas because this time they'll do it right.
> the ideas appeared to be deliberately obfuscated
It reminds me of the early days of http://21.co, the Bitcoin computer, with the same ambiguous wording. I assume the libertarian-socialistic aspects of this genre of technology come prior to its technical and practical aspects.
Actually, I've been following Urbit for a while - it's decidedly not designed to be socialist or libertarian; it is instead feudalistic. Specifically, the network protocol, once you get past the level of "two computers communicating", is based around a system where users are given land (a "planet") by a landlord (a "star"), and are then tied to that landlord in terms of infrastructure and trust.
The system is also, weirdly, specifically designed so that not every person in the world can have their own planet.
Sure, but as a global personal computing infrastructure, the idea of absolutely ensuring that some people's identities will be worth more than others (not only in terms of monetary worth, but in terms of trust, etc) based on an artificial limit is a bit weird.
Urbit people make too much effort to justify urbit extrinsically. Hey, we can use it to make P2P facebook, and to render websites that resemble a ruby-powered startup launch site. OK but urbit is also good for urbitting. Why stoop to using the idiom of today's crappy browser apps, and why bother justifying urbit to people who are apparently happy with the way things are on Earth now? Forget them, I say!
Below the intro paragraph on the homepage are two links: "Learn more" and "jump to the technical docs".
Given your aversion to the technical docs, I'd imagine you might click "learn more". This scrolls the page down to another lead-in paragraph with links to "overview" and "beliefs and principles".
So sure, it's two clicks, not one, but are you seriously telling me that https://urbit.org/posts/overview/ (posted on HN the other day) isn't a well-written explanation of what Urbit is / does / why you'd be interested in it?
>This might be the most revolutionary technology in the world,
Don't worry, it's not. It's a VM with a bent towards distributed computing. That's all.
I mean maybe one day it'll have an ecosystem of libraries and applications that makes that interesting but it's not today and it's not inherent in the technology.
I think it's a piece of software. That's all I know. I don't know if it's something you run on your own computer or distributed on a blockchain or something. I guess it's supposed to be obvious. It's not obvious to me.
I dunno. This thing is weird, but they put a lot of work into it, and you can download something that runs.
What they want to build, from the user perspective, seems to be a federated social network. Like Diaspora, only with some of the problems solved. The two big user-level problems they claim to solve are 1) spam, and 2) being tied to a service provider.
The solution to 1) is that you have to buy an identity from someone. You can't create identities for free. This is a profit center for someone, although I'm not clear for whom. It's not clear how much a personal identity costs, but there are only 2^32 of them.
The solution to 2) is that you can pick up your ball and go home - take the entire state of your online presence and move it to another server. The routing gets fixed somehow. Sort of like cell phone number portability.
Those are both good features. Right now, they apparently power only an online chat system and the ability to host web pages driven by programs in their language. Somebody could potentially build a Facebook-like system on top of that.
The terminology and the cult-like aspects are seriously annoying. It reminds me of Xanadu and its team. (I knew that crowd. Mostly extreme libertarians. Everything is pay per view in Xanadu.)
I wonder if this could be used as a lightweight container system for server-side web applications. It has a container system, and those containers can serve web pages and talk to other containers. Unlike, say, Docker, you don't have to lug around a whole Linux environment in your container. Being able to move your container to a new hosting service very quickly would force hosting services to be competitive.
It's a cult creation technology: anyone who expends enough effort to launch a ship will have convinced themself that this is really cool, and that they are very much smarter than the average bear.
Actually, it's very easy to just launch a ship, if you're not up for compiling stuff.
`docker run -i -t --name ship-container yebyen/urbinit`
This is not a part of the Urbit project, but something that I maintain. I think there's a reference to this method of installing somewhere in the official setup docs.
If you don't like docker, there's also a set of instructions for building Deb packages that are somewhat less clear. (Sorry, I'm not doing this for my day job...)
NB: at the time of this post, I took a bite of my own dogfood and saw that I could not reach ~zod, the network's leader. It is possible that it is swamped by HN visitors, or also possible that there is something wrong with my local network, or something wrong with my docker which is a few versions behind tip...
Not sure... would be great if someone else can take a look, I'm on the clock.
From a purely academic standpoint, I find this project and its goals intriguing... And I'd be interested in playing around with the environment, if only it weren't for the "Hoon" language. I'm fine dusting off Lisp, or Erlang, or any of the myriad imperative languages I know to play around, but I have no desire to learn an esoteric language that only works in one esoteric system! If someone ever makes a cross-compiler from a sane language, let me know!
>We should note that in Nock and Hoon, 0 (pronounced "yes") is true, and 1 ("no") is false. Why? It's fresh, it's different, it's new. And it's annoying. And it keeps you on your toes. And it's also just intuitively right.
This kind of sums up the approach taken here. I'm out. I'm too dumb for this project.
The current urbit.org documentation has been considerably bowdlerised from the originals, which were 200-proof Moldbug, so strong it crawls up the side of the glass.
I met with the founders of Urbit over dinner a few years back, part of a community meetup at the Thiel Fellowship Finalists Round in 2014 (yeah, bring out the pitchforks), and they seemed like pretty amazing guys.
What I don't understand, however, is the need to romanticize software. There's too much magic in this. I get the feel that this is something very important, but I don't understand what it is. Anyhow, best of luck to the team!
Thank you, that second video was very instructive. If people want to see Urbit working, jump to around the two-minute mark and there's a demo of networked program execution as well as distributed version control.
All I got from this is "we want to sell you a domain name that is unrecognized by all existing software and infrastructure, please give us money"
If I wanna join a cyber-utopian hacker-hippie commune, I'm not going to pay a cult to do so. Especially not an inarticulate cult. I'll build it myself? It sure doesn't sound understandable to regular people if the HN crowd doesn't know what they're talking about.
I was thinking the same thing. Pretty bold move for a startup with an aim to provide a digital identity for everyone in the world with their stack.
But if you really dig (and I mean really dig) into their site:
> In the general case, a ship is actually a 128-bit number.
...
> A comet's [128-bit] address is the hash of its initial public key. Anyone can launch a comet. Nothing stops you from using a comet as your identity, except that the name is a mouthful and everyone will assume you're a bot. [0]
So, I mean, there will be a market for names I guess.
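As a rough illustration of "address = hash of the initial public key" (my own toy derivation in Python; the real scheme's hash and key format surely differ):

    import hashlib

    public_key = b"\x04" + b"\x42" * 64  # stand-in for real key bytes
    digest = hashlib.sha256(public_key).digest()
    comet_address = int.from_bytes(digest[:16], "big")  # truncate to 128 bits

    # 32 hex digits: you can see why nobody would want this as a social identity.
    print(f"{comet_address:032x}")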
> By keeping addresses scarce, we make spam and abuse expensive
The entire population of earth wouldn't be able to sign up. Just over half could. What happens in 50 years when all of those 4.2 billion people are dead? Does the entire infrastructure die along with those who were "lucky" enough to win the lottery?
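For reference, the arithmetic behind "just over half" (assuming a ~7.4 billion world population, circa 2016):

    addresses = 2 ** 32              # 4,294,967,296 purchasable identities
    population = 7.4e9               # assumed 2016 world population
    print(addresses / population)    # ~0.58, i.e. roughly 58% coverage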
A good summary of some unhappy comments is that it's an experimental reboot of a vast new system. Most of the people who are unhappy went into it expecting something very small, like "Uber for the JVM" or "the Twitter of functional programming" or similar.
Progress in IT / CS has dramatically slowed over my life. Urbit feels much like 80s home computing, where forklift upgrades of everything you knew were "normal" in the extremely rapid transition from an Altair running CP/M to an Amiga to a SunOS box or whatever path. Nothing has been new in the last 20 years, at least compared to the incredibly rapid pace of change in the first 10 years I was into computers. I'm just saying that however unthinkable a forklift upgrade is culturally in 2016, in 1986 it was considered "great fun", not a problem or a downside.
I haven't read the whitepaper, but from what I've gathered, urbit is a virtual machine that runs on a network, maybe with support for untrusted nodes? If that's the case then that's amazing, and it's understandable how esoteric it all seems. But that's quite an extraordinary development and I'm skeptical.
It would be nice if they would give a clear description of what is they've made. Does this enable running a server jointly with a partner you don't trust, with neither party having physical access, and without involving any third party?
The scope of the project is big. It mentions Gmail, IFTTT, and others. API integration with third parties is measured in years, due to breaking changes and the sheer amount of work. How is the Urbit team managing that?
- the other members of the urbit team are not Curtis. 'Folks' is incorrect.
- this is an extremely biased article whose publisher's political views are directly the opposite of those being written about. This is not an objective primary source by any measure.
I'm afraid that doesn't say much. Are they striving for more centralized power (a dictator holding the power over everybody) or less centralized power (no one having the right to decide over other people)?
There is no reason to gensym all of your concepts like this. It is different just for the purpose of being different: apparently you can't sell people on a "revolutionary technology" without appearing to be extremely different.
Nock is also not a good virtual machine. Recognizing blessed sequences of bytecode and replacing them with opaque blobs of code is not a valid approach to optimization. No one can actually run a pure Nock VM, so what is the point of having Nock in the first place?
Someone else on HN gave the best summary of Urbit I've seen yet: an elaborate cup and ball game, meant to give the impression of innovation and technical excellence.
> We should note that in Nock and Hoon, 0 (pronounced "yes") is true, and 1 ("no") is false. Why? It's fresh, it's different, it's new. And it's annoying. And it keeps you on your toes. And it's also just intuitively right.
Focus on the "intuitively right" part.
I think it makes sense for 0 to be yes/true.
Think of Unix exit codes, and Go return values.
No news is good (yes/true) news.
edit: as to the "annoying", that's just Yarvin being cheeky.
I really don't think it is too serious.
Zero is good for "absence of failure". It's a "yes" in the sense that "this worked". Whereas any one of the myriad non-zero codes is "no, it didn't work".
If we have a situation in which there are many numeric codes, exactly one of which means "success", it's probably best if that one is assigned zero.
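That's how it plays out with process exit codes; a quick Python illustration (Unix-only, since it shells out to true/false):

    import subprocess

    ok = subprocess.run(["true"])    # /bin/true always exits 0
    assert ok.returncode == 0        # the single distinguished "it worked" value

    bad = subprocess.run(["false"])  # /bin/false exits 1
    assert bad.returncode != 0       # one of many possible failure codes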
Punning means that we have two types, and we somehow interchange them; use an object of type A, as if it were of type B. We thereby leave behind the type system and take responsibility for that being correct.
If a language doesn't have a Boolean type, and some other type serves for indicating truth/falsehood, then that is a representational technique distinct from punning.
A language with no Boolean type can be statically typed. It just means that the conditional operators work with some non-Boolean type, like integer, yet according to well-understood rules. For instance, suppose we have some "if expr foo bar" such that either foo or bar is evaluated based on whether or not expr is zero. This would be statically well-typed if expr has integer type and foo and bar have the same type.
Of course, we can't do pattern-matching whereby a value is classified as Boolean or integer in separate cases. (Unless we use the language's type construction ability to define a Boolean type which wraps integer, or whatever).
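A minimal sketch of such a typed integer conditional, in Python with type hints for illustration (the name iff is made up):

    from typing import Callable, TypeVar

    T = TypeVar("T")

    def iff(expr: int, foo: Callable[[], T], bar: Callable[[], T]) -> T:
        # Branches are passed as thunks so only the chosen one is evaluated,
        # like a real conditional. Statically well-typed: expr is an integer,
        # foo and bar agree on a result type, and no Boolean is involved.
        return foo() if expr != 0 else bar()

    print(iff(3, lambda: "nonzero", lambda: "zero"))  # -> nonzero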
I don't want to quibble over terminology, so fine: Hoon doesn't pun integers and booleans, it conflates them. Whatever you call it, it's still a horrible hack, and IMHO if you put in your language you surrender your snark privileges when comparing your language to Lisp.
Common Lisp conflates all types with Boolean. NIL, the sole instance of the NULL type in Common Lisp, is false, and an instance of every other type is true. (Of course, this situation is quite good.)
I think that's debatable. Javascript, for example, does something similar, but it takes the empty string as false, along with zero, null, undefined, and NaN. (Not that I am necessarily holding up Javascript as an example of good language design. My point here is just that there is no consensus on how (or whether -- cf. Scheme) to conflate booleans with other types.)
The salient differences between CL's boolean conflation and Hoon's are:
1. CL's conflation design is justifiable in terms of how it simplifies coding recursion over a list, which is the single most important Lisp coding idiom.
2. Hoon tries to justify using zero as true by saying that non-zero integers can carry information about the nature of a failure. (At least I've heard some people try to justify it in this way; I'm not sure if Curtis does or not.) But now you are no longer conflating integers with booleans, you are conflating integers with multiple enum types, one for every possible set of failure types. Flouting convention is one thing. Flouting convention in order to enable conflation of integers and enums undermines Hoon's claim of being strongly statically typed (at least in any interesting way).
CL's conflation also means that we can use NIL to indicate the absence of an object (in the usual situation where NIL isn't a valid value). We can test for this absence glibly, using a conditional. This is at least as useful as the benefits in list recursion.
The Javascript design means that we cannot test for the absence of an integer in this manner, because a negative result could mean that a zero is present.
It could be handy from time to time. I propose a NAUGHT function for Lisp which returns true for empty sequences, empty hashes, zero (integer, real or complex) and any other nothing you can think of.
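A sketch of that NAUGHT, in Python for illustration (the name and the exact set of "nothings" are just the proposal above):

    from collections.abc import Sized
    from numbers import Number

    def naught(x) -> bool:
        # True for any "nothing": None, zero of any numeric type,
        # empty sequences, empty mappings. False otherwise.
        if x is None:
            return True
        if isinstance(x, Number):
            return x == 0
        if isinstance(x, Sized):
            return len(x) == 0
        return False

    assert naught(None) and naught(0) and naught(0.0) and naught(0j)
    assert naught([]) and naught({}) and naught("") and naught(set())
    assert not naught([0]) and not naught(1) and not naught("0")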
Sure, but if you want to make that argument then you have to deal with the fact that there is no false value which indicates the "absence of an object" in the case where the value being tested is a list. Having multiple false values really can be a feature. (They just shouldn't be integers!)
I'm sorry, but this is... madness. Referring to return codes, which are completely arbitrary in nature, as intuitive, and thus 0-as-true as intuitive, feels like a complete logic breakdown.
Yes, if this language existed in a complete vacuum, where no other languages existed and nobody had learnt anything else before, then sure, 0 as true and 1 as false would be fine, I guess. Still no more or less intuitive, but one could agree on using it. But in a world where every other programming language uses 0 as false (or some equivalent), and where the chance that someone would use Nock as his one and only programming language is very much 0%, this feels like utter bullshit.
Also, the versioning scheme based on Kelvin temperature? Seriously?
And then look at the Hoon syntax. No keywords, just ASCII symbols. Just look at it.
> Referring to return codes, which are completely arbitrary in nature, as intuitive, and thus 0-as-true as intuitive, feels like a complete logic breakdown.
Does the opposite approach make sense though? Generally, something being "true" means a program can happily move along. Something being "false" generally requires more introspection on why exactly it's false, i.e. error handling, exception handling, with the resulting changes in execution flow.
It makes sense because it's de facto standard, and intuition is experience, and intuitive is what we're used to. I'm not sure how it can be explained more basically.
That's not an argument that scales well, as then we'd have a single programming language, anything else being un-intuitive. Every single non-obscure language that exists today had to go against the (previously acquired) intuition, introduce new concepts and then work its way up to the point where new concepts are the (newly acquired) intuition.
I think there is a sound theoretical basis for pronouncing the inversion of the traditional true/false bit assignment a mistake. Yarvin as much as admits this in his LambdaConf recording on YouTube.
It's OK to set yourself apart. But for some reason this feels very app.net in a way. Not to disrespect the projects or creators, but the scope and ambitions were rather big.
App.net at least had a couple clear use cases in mind (twitter clone, notifications/pub/sub service), even if they weren't unique/compelling enough to sell people on the service.
AFAICT, this platform has a broad scope without any kind of clear vision as to what they want to do with it. Maybe I haven't found the right part of their website yet?
I'm not entirely sure I buy the arguments that "made-up words" are a bad thing. All words were made up at some point. And the English language on the surface seems extremely redundant in its vocabulary, but each element of a cluster of words can contain separate nuance, so we keep them around. Perhaps this doesn't have a place in computing, but perhaps it does. Consider, for example, the most jargon-y field of mathematics of all, category theory:
Proofs = Types = Categories; all related, all translatable in terms of one another, yet all different. New words in a technical vocabulary let you be both concise but also on occasion familiar.
I find arguments about obfuscation maybe a bit more credible. It's hard to say. I'll venture that getting people to pay attention to your ideas by throwing you off of previous convention could potentially work, but time will tell.
> Proofs = Types = Categories; all related, all translatable in terms of one another, yet all different.
Proofs = Some Types and Type Theories = Some Categories, so I think even taken as an attempt to paint the situation in broad strokes what you wrote is at best misleading. Many (commonly studied) categories and classes thereof are not interpretable as type theories, and type theories contain plenty of terms that would not be typically interpreted as proofs.
Also, even in category theory, papers and books take quite a bit of care to be clear when they introduce a new term. Urbit's documentation looks like it took every conceivable opportunity to invent new words for things where we already have words. The complaint is not about made-up words per se, but scores of unnecessary made-up words. It's like they expect to trademark the name of every little internal concept inside Urbit.
Although that's a benefit I didn't think of, my charitable interpretation is that it allows for the use of terms that plug in to a social setting. Of course this all derives from Moldbug's political philosophy. It's technology that's engineered for a certain set of behaviors based on his framework.
The terms put the technology in the context of the philosophy, giving hints as to how it solves the problems within it. I can see why that would turn some people off, but for its purpose I believe it's effective. It grounds previously ephemeral concepts in metaphors that are tight, like land = server + addressing space or code = law. A lack of good metaphors (rather than an abundance of poor ones, as is the state of things) may be a part of the reason we've become so disoriented with how we handle privacy. When you think of privacy issues as, "my identifying documents are on someone else's land", suddenly it becomes more clear as to what the problem is. By contrast "The Cloud" is a shit metaphor: the cloud is just someone else's computer. It isn't public, universal, or neutral, like real clouds are. And where in the cloud is there a network?
If one were to derive Unix in terms of, say, the operations of an anarchist society, as opposed to problems that are mostly mundane or technical, I would be saying the same thing. Arguably the last time this happened was during the Mother of All Demos with Doug Engelbart's models of the knowledge economy. Since then we've either been footnotes to Xerox PARC's visions, or have been otherwise haphazard.
As for the Proof/Type/Category equivalences, you're right, they're only equivalent for some structures in all three theories. So they're only equivalent up to locally cartesian closed categories, or wherever. But moving between those three perspectives even under this constraint still leaves you with distinctions, like syntax vs semantics. So in that sense they maintain the notion that different views lead to different nuances even if they cluster around the same space.
I do not believe IQ is heritable -- racially, or otherwise. Full stop.
Intelligence is, by my estimation, entirely a function of nurture, not nature, the exceptions being situations where it is lacking due to handicap.
Which is to say, everyone's brain has the same chance at brilliance (barring fetal alcohol syndrome, neglect, etc.) if they are not intellectually handicapped, and are provided the same child-rearing and temperament.
But, if you believed IQ to be heritable, as many of you do, it would seem a fair guess that it would be distributed unequally among races, as are height, hormone levels, muscularity, and so on.
I don't believe this, as I don't believe IQ is heritable, but I don't see how one could possibly buy into the heritability of IQ while vehemently denying that it could be spread unevenly among races. You guys, to me, all seem to be grappling with two wildly incompatible ideas -- that race can't affect intelligence, and that intelligence is heritable.
You'll need to choose which it is.
I'm happy, even having read his (wrong) ideas on IQ, to entertain Urbit because he seems no more wrong than the rest of you. Cheers.
Heritability of traits can be measured by twin studies. Some people have spent quite a lot of time doing this for IQ. Others have even identified some of the genes that are correlated with high IQ, TOR1A being one of the more interesting ones. I'd be interested to hear your thoughts on this area of research.
Regarding population genetics, I don't think it matters as much as people seem to think. Han Chinese are short, but Yao Ming is tall, and there's no contradiction in that.
Regarding Urbit, I certainly hope that it isn't heritable, or for that matter infectious, because it seems totally opaque.
>"Heritability of traits can be measured by twin studies."
No, correlations between traits can be measured in twin studies, but they do not prove a causal link.
In the realm of intelligence, it could be (and I would argue) that looking like a nerd generally precedes becoming one. People become social outcasts, and then become intelligent as a defense mechanism. That is, once in that nerd social caste, a greater percentage of people will find themselves using their brains more often. As a result, they get smarter, and score better on IQ tests.
Because twins are liable to look the same way, have the same temperament, same physique, etc. they are liable to be pushed into the same sorts of social groups, and as a result the same sorts of interests, ultimately pushing their IQs up or down together.
This would not mean that IQ is heritable, but that traits which can have a forcing effect on nurture (and thus IQ) are heritable.
How about torsion dystonia? Increased IQ can be observed before the onset of the disease, even when matched to comparable members of the same population.
One, torsion dystonia could just as easily be affecting something else which affects IQ.
Two, the studies on torsion dystonia only looked at people who were not showing symptoms. This means they were selecting around heredity, since those who show symptoms and those who don't were not divided on gene expression -- there is only one allele in play as far as we know.
Three, the study had only 14 subjects. Removing a single person from the sample could have swung the data to say the opposite.
Yao Ming was potentially part of a breeding program, of a sort. This is an argument that population genetics does matter, in that you generally get whatever the past has selected for unless you make a deliberate effort otherwise. Otherwise it's a no show.
If you have any reasoning or research to back up your view, I would genuinely like to see it.
For sure, we don't understand everything about the brain, but we certainly have found correlations between physical characteristics of the brain and IQ.
BTW, try to convince a dog breeder that temperament is 100% nurture and 0% nature. I would like to hear that person's answer.
Yes, there are correlations between genetics and IQ.
There are undoubtedly hereditary factors which will affect nurture, which will subsequently affect IQ. That is not synonymous with IQ being hereditary.
For example, there are hereditary factors which will affect career choices: temperament, physical strength, race, gender, height, and so on. Thus, career choices will correlate with genetics. That doesn't mean that career is a hereditary trait. This is an important distinction.
Likewise, twins being adopted and raised by other families does not correct for nurture. People are still treated differently depending on their genetics: temperament, physical strength, race, gender, height, facial structure, etc. As a result, seeing a correlation between intelligence in separated twins is not enough to prove a genetic cause.
Asking me to see researching backing up my view is asking me to support the null hypothesis. That's not how science works.
Wikipedia:
"In inferential statistics, the term 'null hypothesis' usually refers to a general statement or default position that there is no relationship between two measured phenomena, or no association among groups. [...] The null hypothesis is generally assumed to be true until evidence indicates otherwise."
Until IQ Heritability researchers can prove that twin studies aren't just measuring nurture as a result of genetics, the burden remains on them.
Depends what you mean by produced. If you mean it happens in the brain, then the following would also be true:
- I prefer strawberry ice cream. Preferences happen in the brain, which is a physical organ. Ice cream preference must be physical.
- I speak English, which is a language, and language happens in the brain, which is a physical organ. English must be physical.
If by produced you are just repeating that you think it is physical, that is begging the question.
Do you argue that every process in a computer is physical, since it all runs on top of hardware? Do you have no conception of software when it comes to the human mind?
Yes, I have a conception of software when it comes to the human mind. Even assuming the "software" is identical across humans (which seems unlikely), if one human has 10% or 20% better "hardware" due to genetics, wouldn't we expect differences in intelligence (accounting for differences in nurturing and other potential confounders)? Especially given how general human intelligence is -- it shouldn't arbitrarily cap out. It seems inconceivable to me that all humans would be born with exactly the same mental hardware, when that isn't true for any other attribute. I could go on...
I don't want to get into a lengthy debate -- plenty of other commenters have already addressed it all fairly sufficiently. I do appreciate your patience and well articulated comments, even if I disagree with them.
The left, even across cultures and vast geographic distances, has an unusual constitutional weakness against that specific mental disease.
Not implying that the right has no unusual mental-illness susceptibilities of its own, although that's a bit off topic.
Arguing with someone suffering from a bad case of Lysenkoism is about as effective as trying to argue my coworker out of being a type I diabetic. You can tell him all day long that it's long-term ineffective, or environmentally wasteful, or wrong, or unhealthy, or non-vegan, or unpopular to inject insulin, but all that hot air is never going to wake up his pancreas. Ditto pointing out to people suffering a bad case of Lysenkoism that they're diseased: it is not an effective way to cure them. What does work is curing their political views; when that's possible, most of the time the patient regains their lost scientific reasoning capacity in that specific corner of genetic research.
I think you're making a good point overall. If a trait is heritable then it will vary by populations. People should understand the consequences of the position they're arguing.
However, why do you believe that IQ is not heritable? It's hard to believe there is no genetic component that is heritable. Humans evolved over time from other species, and in the process of evolving into Homo sapiens sapiens, their intelligence was inherited and selected for strongly. There's nowhere else for the foundation of human intelligence to come from in the first place other than inherited genetic traits.
As a peer comment asked, "If IQ is not heritable, then how did it evolve?"
I suppose it's possible that a large component of intelligence could come from society and nurturing. Humans have gotten more intelligent as their diet has improved, as their environment has become more stimulating, and with writing and education. Still, the fundamental capacity to have a verbal or written culture, and to teach and learn, arises from our baseline intelligence, which is possible due to our genetics.
To be fair, you might believe that all living humans have indistinguishably similar IQ at birth. There are reasons to believe it's not true, but it's a consistent belief. However, it would not be sensible to believe that IQ is stable over time, since we know that humans evolved IQ that was not previously present in proto-humans and ancestral species. Therefore IQ must be subject to genetics, and therefore must be partially heritable. From this position you might believe that IQ is heritable but does not genetically vary to a meaningful degree in modern populations. This position makes sense.
However, by comparison, it would be fairly incredible to believe that IQ is not heritable whatsoever, especially given that intelligence is affected by physical traits such as the size of the skull and brain, and by metabolic pathways and disease and disease resistance. Brain size, for example, is correlated with intelligence [1]. A number of diseases are known to affect intelligence, such as phenylketonuria. I understand in your comment you acknowledged that such diseases might impair someone and drag their intelligence down -- but just as different people have different potential as athletes, as measured by their VO2max, why wouldn't you believe that the physical systems underlying intelligence could be just a little bit different in one population versus another, thus resulting in a little bit of an edge in IQ? Humans are so different in height, weight, muscle, and skin/hair/eye pigment that it would be incredible if we were all exactly, immeasurably alike on some trait that also distinguishes humans from other species.
Science has explored this topic, and my understanding is that modern research on this topic suggests that IQ is heritable. Studies examine factors like twins who were separated at birth, and grew up in different households. Could you share the reasons why you think it is not?
> Various studies have found the heritability of IQ to be between 0.7 and 0.8 in adults and 0.45 in childhood in the United States. A 1994 article in Behavior Genetics based on a study of Swedish monozygotic and dizygotic twins found the heritability of the sample to be as high as 0.80 in general cognitive ability; however, it also varies by trait, with 0.60 for verbal tests, 0.50 for spatial and speed-of-processing tests, and 0.40 for memory tests. In contrast, studies of other populations estimate an average heritability of 0.50 for general cognitive ability.
[1] "Overall, larger brain size and volume is associated with better cognitive functioning and higher intelligence. The correlations range from 0.0 to as high as 0.6, and are predominantly positive." https://en.wikipedia.org/wiki/Neuroscience_and_intelligence#...
The change in the brains from early protohumans to Homo was a change of kind, not quantity. We evolved new mental apparatuses.
On the other hand, from Homo habilis/erectus to modern humans, I do not believe there was a drastic change in kind of mental capacity, but rather a slow build in the technology of language, made possible by our new mental apparatuses, which finally provided us a means for expressing and exploring complex ideas.
The brain is a very complicated system. Like other complicated systems, it is very easy for it to break from small changes (and thus we see mental disorders). On the other hand, it is nigh impossible to make a very complicated and robust system more effective -- a small change here or there generally won't do it.
So, barring us finding major physical differences in the brains of the general populace, they would seem to me to operate with roughly a uniform capability.
The difference that people ignore between the evolution of the brain and visible traits like skin color, is that skin color is a very simple system. We can expect it to change quickly. On the other hand, for human mental capacity to change would require enormous amounts of time because of the complexity involved.
It's possible there are modicums of difference in their operation, but it is unlikely to make a measurable difference between humans when compared with differences in the software of nurture, of culture, of 'thought'.
As with computers, you'll get more out of a little software tuning than you will by shrinking transistors by a nm, and software tuning is much easier.
You've made a number of interesting observations that I never quite considered before. Do you have a blog/book/other source you would point to that encapsulates these ideas?
I also have a question: if what you're saying is true then why is general formal education so poor and ineffective? Is it that we're not tinkering at the right level with the 'firmware'?
I understand your point, and I think I understand that of the original parent comment. I don't have much to add, except that believing intelligence is not heritable strikes me as a well thought-out and, in my opinion, largely constructive decision. Up until this very moment, I have always been interested in the reality of this idea; that is, what is true and what is false. Just now it has occurred to me that there is also a political component to this idea, as well as a very real moral issue.
After reading through the comments and trying to suss out the logical conclusions of the idea "intelligence is heritable", I have decided to make the same decision.
From here on out, my position is that intelligence is not heritable. This is what I will tell my children and grandchildren. When they ask why, I will tell them that I simply believe it to be true. That it "feels right" to me and the alternative feels truly wrong.
In terms of the science and what is actually happening in the physical world, what measurements and testing actually prove: I am not interested and will not take part in the discussion as I find the idea that intelligence is heritable to be morally wrong. Maybe there is some giant leap humanity could make, and maybe millions of lives could be saved, but I don't care; I think the idea is abhorrent enough that I personally do not want to take the risk.
I'm not trying to preach, but point out that there's more to it than the science and what is objectively real.
I would argue that it is dangerous to hold an idea just because it 'feels' right.
I'm on the not heritable side because I think that's what the science will show, given my understanding of how complex systems work.
It would be immoral not to understand intelligence heritability if it were true -- how could you help those who were disadvantaged if you were not aware of it?
You make a good point. If we were to understand how intelligence works and if it were to be heritable, it may be possible to ensure that everyone had some baseline of intelligence. That may be beneficial.
Maybe I've lived in the United States for too long, but it seems inevitable to me that we'll end up with a tiered system. Perhaps there will be some baseline intelligence level everyone is entitled to, but the very wealthy will surely be able to pay for even more intelligence. Likewise, if intelligence was this well understood, a measure would be applied to everyone. Perhaps it would be like your credit rating and this could easily be used to discriminate in a wide variety of ways.
IMHO, some people like to box others off and say: these are lesser than me. Should science support this opinion, this will only encourage the behavior. To me, this is the basis of the argument we're seeing on this post. Some see intelligence as heritable and, inevitably, somehow favoring one or more ethnic backgrounds.
In terms of the "just feels right", I think that the science on this issue is less than clear. Some things point towards heritability, others do not. The idea that some ethnic backgrounds may engender more intelligence than others is to me an idea so poisonous that I simply reject it on moral grounds.
I agreed with you at first, but reading through Urbit's docs and the Vox article linked in the thread clarified the relationship between the system design and the author's political ideology.
Many of the design choices did not make sense to me until I understood that the author basically leads a thought current around a return to aristocracy.
With that, there does seem to be some reason to avoid the software simply because it may be designed to produce a particular political outcome.
Ah ha! I forgot an earlier comment of mine arguing that one's political views shouldn't impact conference attendance.
In this case, when talking about a project (its specific nature, its obfuscation, and its seemingly random need to make things harder to understand than they should be), my opinion on the project is the opposite of my view on a speaking engagement.
Devil's advocate here. What nice things would we have if nobody gave money to people who believe that? Or, if that is what you meant, if the belief itself were extinct?
Just wait until you read the political views of those who are developing the philosophy of which Urbit is an instantiation. It's a political philosophy designed to appeal to the Silicon Valley elite who want to imagine themselves as kings:
I readily admit I haven't even read the content in this case. I am just attacking the source: most of the times I have read their content on a subject that I am familiar with, it has been grossly misleading.
They definitely could be correct on this topic. I'm not familiar enough with it to say. However, I want to strongly encourage everybody to verify with other sources instead of lending this one even the credibility you'd give Wikipedia.
Yeah, but RationalWiki isn't an encyclopedia and doesn't pretend to the claims of objectivity that Wikipedia has, opting for a Snarky Point of View policy instead of a Neutral POV, and it is consequently critical of ideas it believes have a pseudo-scientific vibe. It was born in response to Conservapedia, and its mission statement says it intends to "explore fundamentalism and authoritarianism", whereas NRx has a neo-fascist appeal.
I picked that article because it seemed much more comprehensive than that of Wikipedia.
The software is heavily based on the author's political views - and allows the author to shape the network into whatever they want, being based on a system of "digital landlords" who have significant control over the users who work on their "land" - so it makes sense to look into said political views.
I'm not sure where you're getting this. The system is explicitly designed such that the author can not ultimately shape the network into whatever he wants. The only real control stars and galaxies have over the rest of the userbase is that if you want to send a packet to someone who you don't already have a direct connection to, you need at least one star willing to route your traffic. The ownership of these pieces of network infrastructure is intentionally fragmented to prevent the kind of control you're talking about.
What you're describing sounds more like the web as it currently exists under the benevolent rule of King Verisign.