
Short answer: A graph.

All data structures are simplified graphs.

The state of the physical universe is a massive graph in which interconnected objects are themselves massive assemblies of graphs of atoms and the atoms are graphs of subatomic particles. It's graphs all the way down. The properties of all systems - physical, chemical, economic, biological - emerge from the interactions between simple connected elements.

My opinion is that all knowledge is representable as a connected graph. The disconnect between our computers and our minds arises from the fact that brains are categorically not numerical machines but graph processing and pattern recognition engines. Neural networks are the underlying hardware and, with the typical elegance of nature, these are also graphs.

It should be possible to build a graph-based language. The basic "Elements" [SICP] are easy to realise:

1. Primitive Expressions are graph nodes. They have identity and not much else.

2. Means of combination. Graphs can be added, subtracted etc.

3. Abstraction. A graph can be abstracted into a single node. We have no problem looking at a complex assembly of components as a single entity.
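The three elements above can be sketched in a few lines of Python. Everything here (the `Node`, `Graph`, and `collapse` names, the choice of union/difference as "addition" and "subtraction") is my own invention for illustration, not any existing graph language:

```python
class Node:
    """1. Primitive expression: a node with identity and not much else."""
    def __init__(self, label):
        self.label = label

class Graph:
    def __init__(self, nodes=(), edges=()):
        self.nodes = set(nodes)
        self.edges = set(edges)  # edges are (node, node) pairs

    # 2. Means of combination: graphs can be added (union of nodes/edges)...
    def __add__(self, other):
        return Graph(self.nodes | other.nodes, self.edges | other.edges)

    # ...and subtracted (drop the other's nodes and any dangling edges).
    def __sub__(self, other):
        nodes = self.nodes - other.nodes
        edges = {(a, b) for (a, b) in self.edges
                 if a in nodes and b in nodes}
        return Graph(nodes, edges)

    # 3. Abstraction: view the whole assembly as a single entity.
    def collapse(self, label):
        return Node(label)

a, b, c = Node("a"), Node("b"), Node("c")
g1 = Graph({a, b}, {(a, b)})
g2 = Graph({b, c}, {(b, c)})
combined = g1 + g2                     # 3 nodes, 2 edges
component = combined.collapse("abc")   # the assembly as one node
```

A real graph language would of course need `collapse` to remember its contents so the abstraction can be opened back up; this sketch only shows the shape of the three elements.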

Since Google and Facebook are two massive platforms whose value arises from direct interaction with planet-scale graphs with billions of nodes, would these platforms be easier to build if our computers were more graph-oriented? I would like to believe so.



And a graph is a relation, and a relation is a function, etc. Just because you can model everything with graphs doesn't make them special. You can model everything with lots of things.


A correction. A relation is not necessarily a function, but a function is a relation. A function is a restriction of a relation f between A and B such that each thing_a in A is paired with exactly one thing_b in B.


A relation between X and Y is a function X -> Y -> Bool.
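In code, that curried view of a relation is just a function returning a function; `divides` below is an example of my own choosing:

```python
# A relation between X and Y as its curried characteristic
# function X -> Y -> Bool.
def divides(x):
    return lambda y: y % x == 0

divides(3)(9)    # True:  3 relates to 9
divides(3)(10)   # False: 3 does not relate to 10
```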


Yes, sorry, you are right. You were talking about "representable by" or "modelled by", which I was conflating with "equivalent to". A subtle distinction I missed.


I believe we are considering an abstract data structure here.


It should be possible to build a graph-based language.

Gremlin is a graph-based language (http://gremlin.tinkerpop.com/).


How do you account for the fundamentally non-deterministic nature of the universe?


Actually representing the entire state of the universe sounds rather ambitious. However, the very act of 'imaging' it would force the entire system into a consistent state by quantum collapse. The universe would be represented but changed; the original multitude of superposed states would be lost.

Heisenberg's uncertainty principle implies that some things are just unknowable. However, only the most basic constituents of physical reality demonstrate noticeable quantum behaviour. The macroscopic universe is fairly deterministic. You don't need to jump in front of a moving bus to learn that Newtonian mechanics applies 100% of the time in a very deterministic way.


This may be a nitpick, but how can something be "fairly deterministic"? Is it possible for there to be degrees of determinism? I would consider determinism to be binary: either something is deterministic, or it is not. If a thing cannot be demonstrated to be deterministic 100% of the time, then by its very definition it is non-deterministic. By that logic, I would actually conclude that the entire universe does in fact behave deterministically. If it didn't, then I don't see how science would even be possible.


I was avoiding an absolute statement because macroscopic objects are entirely capable of behaving in unpredictable ways; the odds against it are just so long that the chance of this happening is infinitesimal.


But Newtonian mechanics doesn't apply 100% of the time, right? It applies most of the time, from your perspective, because most of the sizes, distances and speeds you deal with are very large, short, and slow compared to their relevant universal constants. But there's a very long tail of very small, far, fast things that do concern you, and there, rarely, your "100%" approximation falls down.

You say that the "macroscopic" universe is fairly deterministic, meaning things that are about the same size as your brain. But is that an observation about the universe, or about your brain?


It's an observation about the universe. Yes, things about the size of his brain appear to have deterministic behavior, but so do mountains, oceans, moons, planets and stars - which are, needless to say, nowhere near the size of his brain.

The appearance of "randomness" at very small scales can be explained as non-determinism, or as a deterministic effect of some property that we haven't yet detected.

The point being that the universe has not been shown to have a "fundamentally non-deterministic nature".


Sorry, but your brain (~10^-1m) is way, way closer in size to the Sun (~10^9m) than it is to the Planck length (~10^-35m).

Never mind that, though... So you have a general solution to the n-body problem? I'm being facetious, of course. The hardness of the n-body problem isn't necessarily an expression of fundamental randomness rather than technical uncertainty. But one way or another, aren't they both an expression of the same thing?

Modeling an n-body system in the physical universe exactly means modeling every piece of information in the universe. If you don't do that, the unknowns will multiply into significant divergence at some t, however distant. Quantum theory suggests that even if you did have a computer the size of the universe that didn't affect the universe, it is intrinsically impossible to make an accurate prediction.
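The "unknowns multiply into significant divergence" claim is easy to demonstrate with a toy chaotic system. A sketch using the logistic map (my choice of stand-in; a real n-body integrator behaves the same way in its chaotic regimes):

```python
# Two trajectories of the chaotic logistic map x' = 4x(1-x),
# starting a mere 1e-10 apart. The tiny unknown in the initial
# condition doubles at each step until the two "predictions"
# have nothing to do with each other.
def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4 * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.3, 200)
b = trajectory(0.3 + 1e-10, 200)
gaps = [abs(p - q) for p, q in zip(a, b)]
# gaps[0] is 1e-10; within a few dozen steps the gap is of order 1.
```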

I can't know where a particle will be in a second, or even if it will exist at all. I can't know if the Earth will be hit by an asteroid in a hundred years. I can't know who will win the Presidential election next year. The more accurately I model these systems, the more their outcome (or rather the outcome of the abstract macro-system of which I become aware) becomes dependent on the few things I don't know-- to the point that simply the act of checking the accuracy of the prediction has an unpredictable effect.

At that point, it seems like splitting hairs to say "Yes, but it's still really deterministic." What "real" are you talking about? Certainly none that I have experience with. But that doesn't matter either, because even granting that:

The universe might be deterministic, and it might be nondeterministic. It's unpredictable. So how does determinism become the default?


In advance, sorry about the late reply.

> The hardness of the n-body problem isn't necessarily an expression of fundamental randomness rather than technical uncertainty. But one way or another, aren't they both an expression of the same thing?

No, they are not. Randomness is randomness, and uncertainty is uncertainty. It's entirely possible to have bounded uncertainty about the amount of randomness in a contrived system.

> The more accurately I model these systems, the more their outcome (or rather the outcome of the abstract macro-system of which I become aware) becomes dependent on the few things I don't know

No. The accuracy of your model of a system has no effect on what influences the system.

The uncertainty in the outcome of your model depends on the uncertainty in the inputs, but that's axiomatic.

> to the point that simply the act of checking the accuracy of the prediction has an unpredictable effect.

If and only if your checking mechanism is part of the system. Now, every checking mechanism we're likely to deal with is part of the universe, but that doesn't make the universe nondeterministic, merely impossible to isolate.

> Modeling an n-body system in the physical universe exactly means modeling every piece of information in the universe. If you don't do that, the unknowns will multiply into significant divergence at some t, however distant.

Yes, modelling a deterministic system requires modelling a deterministic system. If you model it except for the influence of some parts, your results will be what the model would have been in the absence of those parts.

If you don't model everything, you won't model everything. This is not an argument in favor of nondeterminism.

> Quantum theory suggests that even if you did have a computer the size of the universe that didn't affect the universe, it is intrinsically impossible to make an accurate prediction.

That is one interpretation; there are a number of interpretations of quantum theory that are deterministic.

> At that point, it seems like splitting hairs to say "Yes, but it's still really deterministic." What "real" are you talking about? Certainly none that I have experience with.

Consider a system with only two values, which we'll call "N" and "time" for the sake of my sanity. We can imagine a purely deterministic system where N = time * 1000. We can also have a nondeterministic system where N = Nprev + (1000 +- 1). If our measurement apparatus can only measure N's value relative to the previous value with an uncertainty of 10, it will be unable to distinguish between the two systems. This does not imply that the systems are the same, and it is not splitting hairs to consider them different. The first is "really" deterministic, and the second is "really" nondeterministic.

If our measurement apparatus improved such that we could measure with an uncertainty of .1, we would be able to establish that the first system was consistent with both nondeterminism and determinism, and the second system consistent only with nondeterminism. Nondeterminism can never be epistemologically ruled out; this does not mean we should conclude that it exists in reality.
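The two toy systems can be simulated directly; this sketch follows the numbers in the comment (the 1000, the ±1 noise, the uncertainties of 10 and 0.1), with the function names and noise model being my own:

```python
import random

def deterministic(steps):
    # N = time * 1000
    return [t * 1000 for t in range(1, steps + 1)]

def nondeterministic(steps, seed=0):
    # N = Nprev + (1000 +- 1)
    rng = random.Random(seed)
    n, out = 0, []
    for _ in range(steps):
        n += 1000 + rng.choice([-1, 1])
        out.append(n)
    return out

def measured_increments(ns, uncertainty, seed=1):
    # Measure each N relative to the previous value,
    # with the given measurement uncertainty.
    rng = random.Random(seed)
    prev, readings = 0, []
    for n in ns:
        readings.append((n - prev) + rng.uniform(-uncertainty, uncertainty))
        prev = n
    return readings

d, nd = deterministic(50), nondeterministic(50)
coarse_d = measured_increments(d, 10)    # all roughly 1000 +- 10
coarse_nd = measured_increments(nd, 10)  # also roughly 1000 +- 10
fine_d = measured_increments(d, 0.1)     # all within 1000 +- 0.1
fine_nd = measured_increments(nd, 0.1)   # strays outside 1000 +- 0.1
```

With uncertainty 10 the two systems produce indistinguishable readings; only at uncertainty 0.1 does the nondeterministic one reveal increments inconsistent with N = time * 1000.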

> So how does determinism become the default?

Because for all the cases where we have enough data and processing power to test, determinism has been shown to be consistent with the data. If that were not the case, it wouldn't be the default.

Besides which, earlier you asked about the 'fundamentally non-deterministic nature of the universe'. My point is merely that no such nature has been demonstrated, nor is there any evidence to suggest it. There could be - if things often seemed to happen without cause, there would be plenty of evidence of cases where determinism is inconsistent with data. The closest we get is with quantum measurements, and it's far from demonstrated that these are genuinely a case of non-determinism.


First, non-determinism would have to be demonstrated.


The problem is, all the things we don't understand could be labeled as non-determined (e.g. quantum particles moving "randomly").


I don't see how. "Not predictable given what I've observed so far" is a long way from "not predictable".


Weird... because I think those are the same.


They only appear non-determined when you think of them as particles or waves. They're neither.


The explanation is fair enough. Moreover, it is convenient to model it as a graph.



