Hacker News | jove_'s comments

But it's hard to argue the machine at the end is stateless. We can endlessly do this. You can construct lambda calculus with Turing machines and Turing machines in lambda calculus.

There seems to be this weird idea in the functional community that the existence of some construction of one thing in another shows that one of those things is "more fundamental" than the other, when in reality this is often a circular exercise. e.g. Functions can be formalized as sets and sets can be formalized as functions.

Even worse in this specific case, the Church-Turing thesis tells us that they're equivalent, which is the only sensible answer to the question of which is more fundamental. There's an oft-quoted phrase, "deep and abiding equivalencies," and it bears pointing out just how deep and abiding these equivalencies are: from a formal perspective they are the same. Yes, arguments could be made that typed lambda calculus and its relation to logic are important, and that's true, but it's not a formal argument at all, and I think it's best to be clear on that.


> You can construct lambda calculus with Turing machines and Turing machines in lambda calculus.

I realize that these models of computation are equivalent. My point was rather that the imperative paradigm collapses into the functional paradigm in practical programming when I disregard the admissibility of arbitrary side effects.

> e.g. Functions can be formalized as sets and sets can be formalized as functions

I can derive functions from set theory in my sleep, and I can kickstart set theory without functions, but I wouldn't know how to define the concept of a function without sets. And even if I did: I can't even specify the characteristic function of a set without resorting to the inclusion relation.

> But it's hard to argue the machine at the end is stateless.

I'm not really that interested in the relationship between the various paradigms and the machine. What interests me most is how well I, as a human being, can write non-trivial programs. To me, it is immediately obvious that the composition of purely functional program units is conceptually simple enough to be done by a child, while unrestricted side effects can very quickly make things very complicated. However, I don't want to get involved in the discussion on this topic. I have accepted that others see it differently, although I find that completely baffling. I don't want to take away anyone's for loops, etc. To each their own.


> I realize that these models of computation are equivalent. My point was rather that the imperative paradigm collapses into the functional paradigm in practical programming when I disregard the admissibility of arbitrary side effects.

But in practical programming with imperative languages, arbitrary side effects can't be disregarded, so they don't collapse into the functional paradigm. In fact, from a physical perspective, every possible CPU has states, so the most physically fundamental model of computation (something like register machines, or GOTO programs) is imperative and more fundamental than functional models, like untyped lambda calculus. Functional models might be more mathematically elegant though.

> I wouldn't know how to define the concept of a function without sets.

Whitehead and Russell showed how to define functions just in first-order logic with identity, without requiring any set theoretical axioms, by defining an n-ary function via an n+1 place relation. See here top right: https://mally.stanford.edu/Papers/rtt.pdf
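Sketching their construction: an n-ary function becomes an (n+1)-place relation required to be total and single-valued. For the binary case (notation mine, not the paper's):

```latex
% f is represented by a ternary relation R satisfying
\forall x\,\forall y\,\exists! z\; R(x, y, z)
% and the notation f(x, y) = z is then read as shorthand for
f(x, y) = z \;\leftrightarrow\; R(x, y, z)
```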

This is quite natural, because predicates (properties and relations) already occur in natural language, while sets do not; they are a mathematical abstraction. For example, sets can be empty, or arbitrarily nested, or both arbitrarily nested and otherwise empty, which has no analog in natural language.

> I can't even specify the characteristic function of a set without resorting to the inclusion relation.

If you define sets in terms of functions, then functions are in that context assumed to be more fundamental than sets, so you don't need to define functions. The inclusion relation is then simply defined via the characteristic function; you don't need to define that function either. Just as, in the reverse case, you don't need to define sets if you want to define functions via sets.


> But in practical programming with imperative languages, arbitrary side effects can't be disregarded, so they don't collapse into the functional paradigm.

I'm sorry to have to say this so bluntly, but I think you understand as well as I do that in a language such as C#, it is entirely possible to write large amounts of purely functional yet useful code, just as you would in Haskell. That's why it's possible in SICP to wait until Chapter 3 to introduce the special form set!. That is the issue I was concerned with.
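As a minimal sketch of that point (Python here rather than C#, and the amounts and 20% tax rate are invented for illustration): useful code in a mainstream imperative language, written with no mutation or side effects at all.

```python
from functools import reduce

def with_tax(amount_cents):
    # hypothetical 20% tax, computed in integer cents to stay exact
    return amount_cents * 12 // 10

def total_owed(orders):
    # pure composition: map then fold, no assignment statements anywhere
    return reduce(lambda acc, x: acc + x, map(with_tax, orders), 0)

print(total_owed([1000, 2000]))  # 3600
```

Nothing about the host language forces set!-style assignment; using it is opt-in.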

> from a physical perspective

I already mentioned that this is not the perspective that interests me. I don't care at all about the physical substrate for computation.

Thanks for the paper. I might take a look at it, although I've already been given a good tip elsewhere with map theory. I'm not convinced by the claim that properties and relations occur in natural language but sets supposedly do not.

The last paragraph isn't very helpful either. I'm not sure who is misunderstanding whom here, but we don't need to hash it out. This isn't a conversation I'm enjoying.


You're looking for Grue's map theory for formalizing sets in terms of functions.


Thanks! It seems that in the metatheory, one can resort to type theory in order to avoid having to fall back on set theory in a circular manner. Unfortunately, I don't know anything about that, but I'll take a closer look at it.


Also, while I haven't read any further yet, based on the table on page 3, their big O performance is pretty bad.


As everyone has pointed out, this does not count. Note that the idea that regex can't parse HTML is specific and proven. What it means is that you can't write an expression that matches both an opening tag and its corresponding closing tag: there's no way to handle arbitrarily nested tags within a single regex. It's only possible to write a regex that matches up to a finite nesting limit.


I think this is the difference between the theoretician and the practitioner. You see, your interpretation is the obvious one for the former. But as any practitioner can tell you, a regular expression can't even parse a regular language!

See, normally the whole point of parsing something is to get data out, right? And the way a regex gets data out is through capture groups. But herein lies the issue: a capture group can only capture one piece of information!

Consider a simple regular language: a non-empty sequence of comma-separated positive integers. We would like to get the integers out. An attempt:

  (\d+)(,(\d+))*
The first group captures the first number. The second group is just something we introduced for the purpose of writing the regex; we don't care about its value. The third (inner) group should ideally capture all the subsequent numbers separately. But it doesn't! If you run that regex on 1,2,3,4,5,6,7,8,9, you will find that group 1 matches 1 and group 3 matches 9. Where did all the other numbers go?!

So really, you have to give the regex some outside help: maybe an outside loop, maybe splitting on a regex rather than parsing with one. Even for this simple language!
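In Python's re module (one concrete engine; others behave the same way), the lost captures and the usual workaround look like this:

```python
import re

s = "1,2,3,4,5,6,7,8,9"

m = re.fullmatch(r"(\d+)(,(\d+))*", s)
# A repeated capture group only retains its last repetition:
print(m.group(1))  # '1'
print(m.group(3))  # '9'

# The "outside help": let the engine loop for you, instead of
# trying to pull everything out of one matching pass.
print(re.findall(r"\d+", s))  # ['1', '2', '3', '4', '5', '6', '7', '8', '9']
```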

And when you are already doing that, the step to giving it a bit more help, perhaps a stack, is quite small.


That is true of "theoretical" regexes, not of the ones actually used by modern languages.


If you aren't aware that contra- and covariance are typically taught under the banner of the "Liskov substitution principle" rather than under the name variance, then you don't know OOP better than OOP programmers do. You just convinced yourself you did because you use different terminology. I think maybe the guy you're responding to is correct.


Size also matters for this problem. The effect of it is to slash your processor's cache size. If your problem is small enough that it fits in the cache anyway you won't see the effects. That said, unless the cache is hot, it shouldn't make a difference in the first place. Moreover I'm pretty confident in guessing that with a cold cache, the difference between the algorithms would vanish.


Yeah, DDA got started when some people from the dwarf fortress forums forked the original Cataclysm by Whales. There's also Bright Nights which forked from DDA a few years back and focuses on fun over realism.


So, if people go out and protest outside a theater that's cancellation and it's good? But someone makes a post online saying they don't like it that's censorship and it's bad?


Really good. I'm a computer scientist who takes a passing interest in the subject, and usually I'm just disappointed by the computer science errors and the wild claims they lead to. This, however, cuts to the heart of the issue: how do you measure consciousness? It's the same question Turing tried to answer with the Turing test, but now that language models are starting to pass it, despite being way below a dog according to our intuition, I think it's a question that needs asking again, and not an easy one.

There is one mistake in there, more of an oversight really, and one that's tangential to the main point, but I think it's important to understanding the issue in general. To say the brain computes a function is meaningless on its own. When I say my computer is computing a function, let's say 1+1=2, implicit in that is the idea that high voltage on a wire means 1 and low voltage means 0, and that we can arrange these into binary to make a 2. Computer science is pure mathematics. It's only capable of symbolic manipulation. It can do 1 and 0, but not electricity or neurons. Any relation from the second to the first must be defined, and this is true of bringing any mathematics into practice. For most of science and engineering this means units of measurement. We use bits of information, but it's not at all clear how that should be applied to a brain. The first question is: what properties of a brain matter for the calculation we care about? As one of the other commenters said, "There's no "pure information", it's always information over the space of events we care to distinguish as different."

Worse, any relation is arbitrary. Computers are all very neatly arranged but they don't need to be. I could watch any physical process and map each of its states to the states in any Turing machine and call it a computer performing a computation of my choice and from a theoretical perspective there's nothing to distinguish that from me declaring my computing is computing 1+1=2. Moreover, humans have a pressing need to keep things practical, ordered and understandable for other humans. Evolution needs only practical.

This actually comes up in practical computing more than you might expect. Imagine my 1+1=2 is part of a program keeping track of flea populations. It's very hard to track fleas individually, so we round to the nearest thousand. Did I calculate 1+1=2, or was it ~1000+~1000=~2000? It's both, really: there's a symbolic level where it's just 1+1 and a practical level where it's a thousand fleas. To further confuse things: if I got my C compiler out and wrote some natural code that included a function that, given a person, returns their age (as references in both cases, for the pedants), and then wrote a different function that, given an integer, adds 4 to it, there's a very real chance those two functions would produce identical machine code.
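A rough Python analogue of that point (the function names are invented for illustration, and the "person" is reduced to an integer offset so the sketch stays self-contained):

```python
import dis

def age_of(person_ref):
    # stand-in for an age lookup: just an integer offset here
    return person_ref + 4

def add_four(n):
    return n + 4

# Apart from variable names, both compile to the same instruction sequence:
ops_a = [i.opname for i in dis.Bytecode(age_of)]
ops_b = [i.opname for i in dis.Bytecode(add_four)]
print(ops_a == ops_b)  # True
```

At the compiled level, the semantic difference between "a person's age" and "an integer plus 4" has vanished; it lived only in the interpretation we imposed.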

The computational stack leading from metal to Firefox is presented as a series of big question marks, but it isn't one. It's very well known; we literally made it up ourselves. Humanity knows it as well as Tolkien knew Middle-earth. Not only could we describe its makeup and workings at every level of detail, we prescribed them in meticulous detail. The same can't be said of brains.

I've heard a lot of this referred to as the triviality argument. That is, it's trivial to say something is a computer doing a computation. But then I find a lot of the conclusions drawn from this to be off. It's presented as if it undermines the idea of the brain as a computer, when on the contrary it confirms it. The computer science behind explaining this gets a bit heavier, but I think even without it you can smell something is off here. It's the same as with Zeno's paradox: even if you know nothing about math, you know the tortoise moves. Brains are not merely a trivial computer. They look like maybe very alien, but still genuinely structured and practical, information-processing systems, and these workings are clearly strongly related to our behaviors and consciousness.

Going into the heavier comp sci: brains are definitely not Turing machines. Neither is my computer. Turing machines can't exist. The part where it says "infinite tape" is a hard requirement; any finite limit implies that a system has no more computing power than a finite state automaton. Fun fact: such systems are incapable of doing basic arithmetic. If you're having trouble believing that your computer can't do arithmetic, try adding something to 2^n, where n is the number of bits of memory you have. This may sound bad, but it's great. They're still powerful enough to do whatever they like within their finite bounds, and things like the halting problem or Gödel's incompleteness theorem? Not applicable. Given some assumptions, like that the universe is finite and consciousness can be represented symbolically, it becomes very difficult to argue that a relation between the two couldn't be represented computationally.
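The 2^n point can be made concrete with a toy sketch (the 8-bit width here is an arbitrary stand-in for "all the memory the machine has"):

```python
N_BITS = 8
MASK = (1 << N_BITS) - 1  # the machine's entire state space: 0..255

def add(a, b):
    # a finite machine has nowhere to put the carry out of its top bit,
    # so addition wraps instead of producing the true sum
    return (a + b) & MASK

print(add(255, 1))  # 0, not 256
```

Within its bounds the machine adds perfectly; the failure only appears at the edge of its finite state space, which is exactly the finite-state-automaton limitation.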

But it's not enough to know that such a mapping must exist or that it must be representable with simple computation. The idea that there are a vast number of possible mappings is still somewhat relevant, but no more than saying to a physicist "I could invent a world where gravity works differently" or even "There are many different models for reality" and like the physicists it's going to take experiments to sort out which one is the correct one. The key to any experiment is good measurement and so we come inexorably back to the heart of the issue. How do we measure this?

