Mathematical proof is a social compact (quantamagazine.org)
179 points by digital55 on Aug 31, 2023 | hide | past | favorite | 216 comments


> But is this foolproof? Is a proof a proof just because Lean agrees it’s one? In some ways, it’s as good as the people who convert the proof into inputs for Lean.

Having studied CS in school I'm sort of triggered by this. Might be the editorialization, but this is a statement I have a problem with.

I am not one of those who think that computers will save us all. My point of view is that computers are meaningless machines that do meaningless operations on meaningless symbols unless proven otherwise. This is what gets drilled in your head in any semi-decent CS program and it's a point of view I came to agree with completely.

But we have proven otherwise.

Proof checkers like Lean, Coq, Matita, Isabelle and the like are not like normal code, they are more similar to type-systems and code expressed in their formalisms is directly connected to a valid mathematical proof (see the Curry-Howard isomorphism).

They are usually constructed as a tiny, hand-proven core, and then built on their own formalism, ensuring that what is exposed to the end user is perfectly grounded.

If a program is accepted by the proof checker, then by construction it must be a valid mathematical proof.
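To make the Curry-Howard point concrete, here is a minimal sketch in Lean 4 (an illustrative example, not from the article): the proposition is a type, the proof is a program inhabiting that type, and the kernel's type check is the proof check.

```lean
-- Under Curry-Howard, proving "p ∧ q → q ∧ p" means writing a program
-- of that type; if the kernel accepts the term, the theorem holds.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun h => ⟨h.right, h.left⟩
```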

Of course, computers are physical machines that can potentially make all sorts of mistakes. Hardware failures, cosmic rays, whatever. But the probability of that happening on a large scale is the same probability that seven billion people collectively hallucinated elementary number theory and it's in fact not true that there are infinitely many prime numbers.

edit: Just a clarification: it's only this particular statement I have a problem with. The article is very much worth your time. Do not get derailed by the title, it's not some sort of "math is racist" nonsense.


Once upon a time, the professor of a class on logic I was taking pointed out that, if you proved some famous open conjecture (I think Fermat's was the example) using some simple proof that everyone had just missed so far, mathematicians would pat you on the head, maybe give you an award or something, and you (and the conjecture) would become another forgotten footnote.

Major mathematical advances come from difficult problems when a new branch of mathematics is invented to solve them. Think of calculus, non-Euclidean geometry, and Turing machines (and their equivalents).

The problem I see with Coq, the proof checker I've worked with the most, and the others I've seen is that the proofs they result in are neither good, understandable programs nor good, understandable proofs. They are as complicated as complex programs, but written in something that isn't a half-way clean programming language, but they're also not written as a traditional proof. That goes back up to the article's point that the purpose of a proof is to convince and enlighten others.

Sure, I'd have to say a proposition proven in Coq is true, but how do you get from that to the ideas in the proof letting you go on to other problems?


I don't think the point is to either have good math or good code.

When you develop software in Coq and the like, it is library code. It is high-impact code that is guaranteed to be correct. It is written once and not maintained – you would never build the new Twitter in Coq.

When you develop math in Coq, you build abstractions. You only need to understand the theorem you are working on. Then you pull in the lemmas in Coq to help you prove that theorem. You do not need to understand the lemmas Coq uses to help you prove your theorem – Coq handles that for you. So your math is extremely concise. This is in opposition to normal math, where you need to be able to explain the hierarchy your theorem works on top of.


"It is written once and not maintained..."

Be careful, this is how you get the Ariane V first flight.

"You do not need to understand the lemmas Coq uses to help you prove your theorem..."

If I wanted to generalize your theorem or move it to some other base abstraction, the first thing I need to do is to read and understand your proof. If that proof is made up of Coq tactics chosen based on "whatever works", I'm going to have a bad day.

Now, many results especially in CS are made by reducing a new problem to an existing theorem, but I don't know how well that works as a general case.


Normal math also concisely cites pre-existing theorems


Ah but normal math is a social compact in a way that computer-aided proofs are not. Very few handwritten proofs are actually written down in detail, and there are various cases where attempting to do so actually failed or required new work to actually close the gap.


Proof assistants assist with normal math. They just make the normal math formal and reliable.


I guess the hope is that eventually there will be a language which is roughly as easy to read/write as normal proofs in English but can be machine checked.

Lean comes closer than Coq, but is still far from this ideal. It may be that AI will need to be involved (as in, there could be a tactic that generates proof terms using a transformer or something).


> I guess the hope is that eventually there will be a language which is roughly as easy to read/write as normal proofs in English but can be machine checked.

That's what "declarative proofs" are for (as seen in Mizar, Isar, and formerly in the Coq plugin C-zar). Most proofs in interactive proof assistants are written as "proof scripts", which cannot be understood without playing them back in the proof assistant (in fact, specifically the version of the proof assistant that the "script" was originally written for) and checking how each step in the "script" relates to the current proof state.

Declarative proofs are more self-contained, in that the assumptions and conclusions involved in each proof step can be ascertained by just reading them in the code. They can also be naturally simplified into structural "proof sketches", where missing steps in the proof could potentially be filled by a proof-searching "AI".
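A rough illustration of the difference (in Lean 4 rather than Mizar or Isar, so take it as a sketch): the same lemma as an opaque tactic script versus a more self-contained term-style proof.

```lean
-- Tactic-style "proof script": each line only makes sense when replayed
-- against the assistant's hidden proof state.
example (p q : Prop) (h : p ∧ q) : q ∧ p := by
  constructor
  · exact h.right
  · exact h.left

-- Term style: the assumptions used and the value produced are visible
-- in the source itself, closer to what declarative proofs aim for.
example (p q : Prop) (h : p ∧ q) : q ∧ p := ⟨h.right, h.left⟩
```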


> in fact, specifically the version of the proof assistant that the "script" was originally written for

Are Isar updates guaranteed not to break proofs?


Like Cobol?

Coqobol?


I think you'll get machines that can check human proofs first.


I think the only way to do this that anyone will find convincing is to have an LLM turn the human proof into a machine checkable one, so all the research into Lean etc. will still be useful.


There is something similar in Nima Arkani-Hamed's amplituhedron lecture, where he talks about Feynman diagrams: pages and pages of equations being reduced to a simple representation.

@56:26 in https://youtu.be/sv7Tvpbx3lc


Ironically, I consider the amplituhedron to be very unclear.

Every presentation by its author goes: "here's a triangle, here's a simplex, and now... ta-da... the amplituhedron is the same!"

He skips 15 steps every time. It's literally the "here's a circle, then another circle, and now just draw the rest of the owl", but in terms of theoretical physics.


He says in the lecture that physicists arrived at the simple representation by studying Feynman diagrams and looking at how terms cancel each other. And the amplituhedron also gives rise to those same representations. This is the reason why he considers the amplituhedron a good research path. He cites a similar interpretation change that happened before in physics [0].

[0] : https://youtu.be/sv7Tvpbx3lc?t=2664


I think the quote is about the fallibility of the humans who convert theorems into statements of type theory. You could end up with a valid theorem, but not the theorem you meant to prove. This would be a bug in the statement of the theorem, not the proof.

For example, you might want to prove that a certain sorting algorithm is correct. You formalize the specification as "for all indices i <= j, output[i] <= output[j]" and prove that outputs of the algorithm satisfy this spec. However, this is not a correct characterization of sorting, since the algorithm might return the empty list.
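The gap described above can be made concrete with a small Python sketch (hypothetical function names, chosen for illustration): an algorithm that throws away its input still satisfies the sortedness-only spec.

```python
from collections import Counter

def satisfies_sortedness_spec(output):
    """The (too weak) spec above: output is nondecreasing."""
    return all(a <= b for a, b in zip(output, output[1:]))

def bogus_sort(xs):
    # Vacuously "correct": an empty list is trivially nondecreasing.
    return []

def satisfies_full_spec(inp, out):
    # The missing half of the spec: the output must also be a
    # permutation of the input (same elements with multiplicity).
    return satisfies_sortedness_spec(out) and Counter(inp) == Counter(out)

print(satisfies_sortedness_spec(bogus_sort([3, 1, 2])))   # True - the spec bug
print(satisfies_full_spec([3, 1, 2], bogus_sort([3, 1, 2])))  # False
```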


1. Sure, but verifying a spec is still much easier than verifying a whole proof (as we traditionally do).

2. In that example that you gave, you would have additional evidence: if you use your system and every sort returns an empty list, you'd probably notice it quite quickly. You can also do manual or automated testing to see that it does the right thing for example inputs. If you then consider it to be unlikely that someone writes code that would exactly only work for example inputs and only satisfy the specification in the dumbest possible way, then you get some really good evidence that your algorithm is actually correct. 100% certainty is never possible, but there are still degrees.


Agreed. I think the interviewee is a little pessimistic when he says "I’m just not sure it’s any more secure than most things done by humans." about computer assistance. If the ABC conjecture were to be proven in Lean, I'm pretty sure it would be accepted as true by an overwhelming majority of the mathematical community.


The ABC conjecture "proof" is rejected because it uses definitions and arguments no one understands.

It would not necessarily be hard to program in those incoherent definitions and get them to prove a result. That wouldn't mean much.

This is different from just programming in the conjecture and letting Lean find a complete proof.


The proof is not rejected because people don't understand it, it's rejected because people don't think the proof is correct.

If the proof was translated to Lean, then it would mean a lot if it managed to prove the ABC conjecture because it would mean the proof was correct. It doesn't matter if the proof used crazy definitions. The statement of the ABC conjecture would still be understandable and so the ABC conjecture would be solved.


> The ABC conjecture "proof" is rejected because it uses definitions and arguments no one understands.

I would like to point out that it seems like the general consensus now is that Mochizuki's proof is incorrect.

And, as people pointed out early on, one of the problems with the Mochizuki paper is that it provides no new tools or results for other problems in the space. This is quite unusual and was a significant part of the skepticism about the proof being correct.

Contrast this to Wiles' proof of the Taniyama-Shimura-Weil conjecture for semistable elliptic curves, which proves Fermat's Last Theorem as a consequence and then led to a full proof, now known as the "modularity theorem".

If Mochizuki had used the tools from his paper to come at something already proven from a different direction, that would have lent a lot more credence to his claims.


> 1. Sure, but verifying a spec is still much easier than verifying a whole proof (as we traditionally do).

I agree.

In the Metamath community, we also check all proofs with a collection of verifiers written by different people in different languages, all based on a very small kernel. That provides extremely high confidence in its correctness.


It's an important problem and people in formal verification are aware of it. One example of tackling this is that the Lean LTE team accompanies the proof with "several files corresponding to the main players in the statement" and "[they] should be (approximately) readable by mathematicians who have minimal experience with Lean [...] to make it easy for non-experts to look through the examples folder, then look through the concise final statement in challenge.lean, and be reasonably confident that the challenge was accomplished".

Details in this blog post: https://leanprover-community.github.io/blog/posts/lte-exampl...


I'm with you almost everywhere, but it's worth pondering this:

> If a program is accepted by the proof checker, then by construction it must be a valid mathematical proof.

I'd phrase it as "If a program is accepted by a _correct_ proof checker, then ...". That makes it clear that we can't escape having to consider how and why the proof checker is considered correct.

Edit: this is not entirely academic. from what I understand, the unsoundness of Hindley-Milner in the presence of polymorphic references was not immediately known (https://en.wikipedia.org/wiki/Value_restriction#A_Counter_Ex...).


To be ruthlessly, uselessly pedantic - after all, we're mathematicians - there's reasonable definitions of "academic" where logical unsoundness is still academic if it never interfered with the reasoning behind any proofs of interest ;)

But: so long as we're accepting that unsoundness in your checker or its underlying theory are intrinsically deal breakers, there's definitely a long history of this, perhaps somewhat more relevant than the HM example, since no proof checkers of note, AFAIK, have incorporated mutation into their type theory.

For one thing, the implementation can very easily have bugs. Coq itself certainly has had soundness bugs occasionally [0]. I'm sure Agda, Lean, Idris, etc. have too, but I've followed them less closely.

But even the underlying mathematics have been tricky. Girard's Paradox broke Martin-Löf's type theory, which is why in these dependently typed proof assistants you have to deal with the bizarre "Tower of Universes"; and Girard's Paradox is an analogue of Russell's Paradox which broke more naive set theories. And then Russell himself and his system of universal mathematics was very famously struck down by Gödel.

But we've definitely gotten it right this time...

[0] https://github.com/coq/coq/issues/4294
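Tangentially, the "Tower of Universes" mentioned above is directly visible in Lean 4 (a minimal sketch): to avoid Girard's paradox, no universe may contain itself, so each one lives strictly in the next.

```lean
-- `Type : Type` would be inconsistent (Girard's paradox), so instead
-- the universes form an infinite hierarchy, each a member of the next.
#check (Prop : Type)      -- Prop lives in Type (= Type 0)
#check (Type : Type 1)    -- Type 0 lives in Type 1
#check (Type 1 : Type 2)  -- and so on, forever
```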


> there's reasonable definitions of "academic" where logical unsoundness is still academic if it never interfered with the reasoning behind any proofs of interest ;)

I wouldn't really say so. If you were relying on the type system to tell you about any errors you made in your program, the revelation that your type system was unsound is not "only academic" simply because your program coincidentally didn't have errors. Just like it wouldn't be "only academic" if you discovered that your car's airbags weren't functional for the last 10 years even though you never had any accidents.


I'd also note that the correctness of the proof checker not only lies in the high level software code, but also the full stack of hardware down to the nanoscale.

Eg.

- Are you sure the compiler is correct?

- Are you sure the OS is behaving correctly?

- Are you sure the CPU isn't randomly glitching out?

There's a difference between believing that these things are correct 99.999% of the time and claiming that it's mathematically proven to behave correctly. Mathematical proof isn't statistics, so either you're willing to claim and defend the claim that the whole stack of the proof checker is correct, or you don't have a proof at all.

How one would prove that a physical modern CPU conforms to a formal spec I have no idea TBH. I presume the QA of chip manufacturers uses statistical methods.


> Are you sure the compiler is correct?

Use multiple different compilers and compare results

> Are you sure the OS is behaving correctly?

Use multiple different OSes and OS-versions

> Are you sure the CPU isn't randomly glitching out?

Use multiple different CPU types.


This is a great point. In a computer proof system, you produce judgements like "the term x has type T", where the type "T" encodes the statement of a theorem and "x" is the proof. We must be certain that whenever the type checker verifies that "x has type T", it means that the theorem "T" is actually true. Such a meta-proof must be done on pen and paper, before any code is written at all. Any bug in this meta-proof would destroy our ability to rely on the computer proof system.


> Such a meta-proof must be done on pen and paper, before any code is written at all.

Not so fast; it could be done on a _different automated system_ (say, a smaller one) than the one you're building, in which case you're now relying on that system's correctness. Or it could even not be proven at all. This is not too different from what mathematicians do when they (often) say "assuming P != NP", or "assuming GRH", or "assuming FLT", etc etc. It's simply that it's worth being careful and precise.


> the same probability that seven billion people collectively hallucinated elementary theory

Just for fun, it's not unprecedented that the entire human race is disposed to mislearn basic truths about the universe which have to be unlearned when you get to the point where it matters. See quantum mechanics.

One hopes that conscientious mathematicians are careful to select axiomatizations (such as ZF set theory or the type system of Lean) which faithfully reflect reality. And, within such a system, we can be extremely confident in what we're doing. But the distinct possibility remains that the axiomatization doesn't actually match our universe and, say, admits a proof of false or implies something not physically realizable, such as that a certain computation should terminate, when no real life physical machine obeying the laws of physics can ever do so.


Be careful with things like "faithfully reflect reality". :-)

Euclidean geometry is intuitively good and seems to work on relatively small areas of the Earth's surface, but non-Euclidean geometries are inconsistent with it and more faithfully reflect a reality that lacks infinite flat planes.


This is not true.

They are incompatible sets of axioms, but they can describe the same set of shapes, although one system is going to be a lot more inconvenient than the other. The differences are logically interesting, but about as incompatible as choosing to make the origin of a euclidean system one corner of the room instead of another.

In contrast physical theories contradict each other by making different predictions which can be falsified.


At that level of pedantry even the halting problem is solvable. Just choose a suitable representation for the Turing machines in question.

Describe/represent the ones that halt using 1; and the ones that don’t halt using 0. This produces the pairs (TM1, 1), (TM2, 1), (TM3, 0) etc.

Using this encoding the problem becomes trivial. It’s all other encodings which are unwieldy, complex and inconvenient.


That's no longer the halting problem: formally, the input to a halting oracle is an index for some fixed choice of admissible numbering of the set of Turing machines, meaning one that can be computed from the standard numbering induced by some universal Turing machine. Your encoding is not admissible.


>That's no longer the halting problem

Just your luck. Under Univalence identity is equivalent to equivalence and it's computationally decidable.

So you should be able to produce a decider which determines whether any alternative encoding/representation is equivalent to the halting problem; or not.

Formally speaking, Turing machines are formal language recognizers, so in which formal language is the formal statement of the Halting problem expressed?

I'd also like to see the decider for "admissible" vs "inadmissible" encodings. Input validation is an Interesting problem-domain for sure.

(Of course, this is all for the sake of maximum pedantry)


> Just your luck. Under Univalence identity is equivalent to equivalence and it's computationally decidable. So you should be able to produce a decider which determines whether any alternative encoding/representation is equivalent to the halting problem; or not.

Moving to a constructive setting lets you say things like "all functions are computable" because it has a restricted notion of function. It does not give you any new information about classical objects.


You say "restricted", I say "better defined". Either way, a constructive setting is more expressive.

So express your English adjectives in Mathematics.

What do you mean by “restricted” when you are characterising a function?

Show me the decider… for “classical” and “non-classical” objects.

That is the definition of information, is it not? The answer to a yes/no question.

Is the object classical? Yes/no.


> You say “restricted” I say “better defined”.

Sure, whatever - the point is that they're different objects, and results about one are not results about the other.

> What do you mean by “restricted” when you are characterising a function?

I mean whatever the word "function" means in a given setting. A classical function X -> Y is a relation such that if `(x, y1)` and `(x, y2)` hold then `y1 = y2`. A function in intuitionistic type theory is a well-typed lambda term. A function in the categorical semantics of a type theory is an exponential object. And so on.

> Show me the decider… for “classical” and “non-classical” objects.

No such thing: it's a metatheoretical judgement, not a theorem. Same story as type errors: within the language, `stringToUpperCase (5 :: Int) :: String` isn't a false statement, it's just inexpressible nonsense. There is no such object as `stringToUpperCase 5` and so nothing can be said about it internally. When we talk about it, we're talking, from outside the language, about a syntactic construct.

> That is the definition of information; is it not? The answer to a yes/no questions

No. Self-information of a given outcome with respect to a random variable, which is probably the most common sense of the word, is the negative log of its probability. Shannon entropy, also often called information, is the expected self-information of a random variable. Mutual information is the KL divergence of a joint distribution and the product of the respective marginals. There are other notions.


>Sure, whatever - the point is that they're different objects, and results about one are not results about the other.

Sorry, I don't understand what you mean by "same" and "different".

By "same" do you mean =(x,y). And by "different" do you mean ¬=(x,y)

Or do you mean something like same(x,y) = { 1 if ??? else 0 }

>A classical function X -> Y is a relation such that if `(x, y1)` and `(x, y2)` hold then `y1 = y2

You seem to be confusing syntax and semantics here, and the infix notation isn't helping...

What does =(y1, y2) mean? What does =(x,x) mean?

>No such thing: it's a metatheoretical judgement, not a theorem.

What do you mean? Judgments are first-class citizens in Univalent Mathematics. a:R is a judgment that object a is of type R. This literally means "a proves R", and the particular expression "first class citizen" has well-understood meaning in Programming Language design ( https://en.wikipedia.org/wiki/First-class_citizen ).

>`stringToUpperCase (5 :: Int) :: String` isn't a false statement, it's just inexpressible nonsense.

So I must be a miracle worker then? Expressing the inexpressible nonsense...

   In [1]: def stringToUpperCase(x): return(str(x).upper())
   In [2]: stringToUpperCase(int(5))
   Out[2]: '5'
>No. Self-information of a given outcome with respect to a random variable, which is probably the most common sense of the word.

Speaking of randomness in a classical setting, this function exists, right?

f(x) = { 1 if random(x), else 0 }


No, the comment above mine is the one being pedantic. We can absolutely talk about physical correspondence of mathematical concepts, even though there are multiple equally good logical systems for describing them.


Correspondence is a relation.

Relations are functions.

Can you tell us more about this function which relates Mathematical concepts to physical reality?


> Relations are functions.

That is how they are defined in set-theory-based math. The mathematical definition is not the only meaning of a word. When we start to talk about the physical world, we need additional concepts. You are welcome to read the Wikipedia page about correspondence theories of truth.


The fact is you are talking about the meaning of words. Be they English words; or Mathematical words.

If you are going to talk about the correspondence theory in the same breath as you are talking about any language - you need to solve the symbol-grounding problem.

What function/mechanism/relation (or if you don't like any of those words - pick your own) grounds symbols?

https://en.wikipedia.org/wiki/Symbol_grounding_problem


What is the sum of the interior angles of a triangle?


Let's recall your original comment. You said we can't talk about reality correspondence of math because multiple logical systems can describe physical space equally well.

A triangle in a euclidean system is not the same shape as a triangle in spherical, etc. They are only equivalent in that they have an analogous logical definition in their separate systems.

They don't look the same. They don't describe the same physical phenomena either. In other words, the physical correspondence is not the same, so there is no contradiction.


What? I said nothing of the sort.

In what we laughingly call "normal" geometry, given a line L and a point P not on L, there is exactly one line L' through P parallel to L. In this case, the interior angles of a triangle sum to 180 degrees.

In hyperbolic geometry, there is more than one line L', L'', etc., through P that are parallel to L. In this case, the interior angles of a triangle sum to <180 degrees.

In elliptic geometry, there are no lines L' through P parallel to L. (You also have to alter Euclid's 2nd axiom, defining a line as extending indefinitely.) And in this case, the interior angles of a triangle sum to >180 degrees.

Triangles are triangles; they have three points and three line segments. Their properties, however, are inconsistent in the three axiomatic systems.

If you are operating in a relatively small area, say a bronze-age farmer's field, Euclid's axioms satisfy all your geometrizing needs. If, however, you are on the somewhat-spherical Earth and are dealing with a large enough area, you can make a triangle with three 90 degree interior angles. You can tell which geometry your "physical space" is dealing with, and if you use the wrong one then you will get wrong answers.

For more than 2000 years, geometers assumed their space was essentially Euclidean. It turns out that is not really true, and it also turns out that there are (now, common) situations where the difference matters. So when you do any applied math, you assert that your axioms "faithfully reflect reality". And you may be wrong. And you may not have any way of knowing you are wrong. And you may not even have the mathematical tools to know what "wrong" means.
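The three-right-angle triangle mentioned above is easy to check numerically (a small sketch): take the north pole and two equator points 90 degrees of longitude apart on a unit sphere, and apply Girard's theorem.

```python
import math

# The "octant" triangle on a unit sphere: north pole plus two equator
# points 90 degrees of longitude apart. All three interior angles are 90°.
angles = [math.pi / 2, math.pi / 2, math.pi / 2]
angle_sum = sum(angles)

# Girard's theorem: on a unit sphere, area = (sum of angles) - pi.
area = angle_sum - math.pi

print(math.isclose(angle_sum, 3 * math.pi / 2))  # True: 270°, not 180°
print(math.isclose(area, math.pi / 2))  # True: 1/8 of the sphere's 4*pi area
```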


> Be careful with things like "faithfully reflect reality". :-)

Let's be careful with our terms!

There is the universe, and then there's reality. They are not the same thing, they only seem that way because that is the model everyone's been raised on, and that "fact" is reinforced and defended daily.


Or even Newtonian Physics, which faithfully reflects reality—as long as you're not too large, or you're not traveling too fast. And then it starts to break down.


I think his argument is exactly that, those formal systems can suffer from errors (even though they've been checked), because the compiler could have errors, the processor could have errors, etc

So in the end, it's always a question of "community" and "belief that things seen by sufficiently many people are right", eg that after 10 years of using a CPU, we've identified and fixed most bugs, and so the program is even more likely to be valid


You're still performing meaningless operations on meaningless symbols. All you've done is picked a subset that looks - to you - as if it's a little less meaningless.

It's a problem of epistemology, not logic.

Logic itself is subjective. We have a set of heuristics we use to decide if something is "true" in a mathematical and scientific sense. But fundamentally they rely on concepts like "truth" and "logic" which we experience subjectively.

We label a certain set of psychological experiences "true", a certain related set "logical", and a different related set "experimentally valid."

But all we really have is a set of experiences that are subjectively consistent (more or less) and shared.

IMO it's quite likely we're missing a lot. Far from having god-like insight into the ultimate nature of truth and reality our minds are actually quite small and limited.

Many relationships, operations, and elements of objective reality are literally invisible to us because they're far too complex for us to imagine, never mind experience and understand.

This matters because potentially AI may not have the same limitations. We're just getting started with associative cognitive modelling based on ML data compression as opposed to mechanical machine building - which is what traditional code is.

That's a fundamental difference of approach, with the potential to work with a much larger conceptual and symbolic space than we're capable of.

I suspect we'll see some quite interesting outcomes.


> Logic itself is subjective. We have a set of heuristics we use to decide if something is "true" in a mathematical and scientific sense. But fundamentally they rely on concepts like "truth" and "logic" which we experience subjectively.

I agree that it is this way, but does it have to be? Is it physically necessary to perceive defection from this dilemma as "pedantic" (see below, how surprising!), or might it only be culturally necessary?

Take your claim for example: you are not distinguishing between logic itself and our experience of logic. You are objectively correct that there is in fact no distinction on one level (experience), but that is not the only level that exists.

How are questions of this nature (essentially: existence) answered in the physical realm, where we have high levels of skill and experience?


> Logic itself is subjective

Not really, by your logic (heh) everything is subjective: including your own argument about logic being subjective (which in turn uses logic).


This problem is why I strongly dislike social relativism arguments. It's sloppy reasoning and I can't make heads or tails of what they're asserting. I also disagree with the interviewee for this reason, I think his argument is also sloppy, he is conflating two different issues.

(One would think I'm some kind of conservative; I'm a gay, Asian American leftist who happens to have a STEM background, and so I hate reading about "socially constructed" arguments--which is what this article is a variation of.)


OK, so you dislike relativistic arguments.

Relative to which objective standard for reasoning are you asserting that any given argument is "sloppy"?

It's a turning point in one's life when we come to acknowledge that we don't have to like the fact that relativism is true.


For literal centuries we have known absolute epistemological relativism is childishly easy to self refute.


Maybe (spot the computational/monadic joke).

In what objective system for evaluating proofs will you be presenting this “refutation”?

This whole “we have known” smells of a bandwagon fallacy already. Who do you represent?

I find it mildly ironic that I have to explain evaluation (and its dependency on a meta-theory) on a forum full of computer scientists…


Let’s think in nano baby steps:

A. In what system are you presenting your argument here in your comment? Is that system objective or subjective.

B. If subjective, then I declare your argument to be false for me. End of story.

C. If objective, then congrats, you refuted yourself.

I find it wildly ironic that you condescend and resort to veiled ad hominem on top of a Logic/Philo 101 fallacy known for literal centuries.

Also, the statement “there is an objective system X” doesn’t imply “we fully know the system X”

I represent scientists and logicians and philosophers that have known this for, once again, centuries. People call X background or classical logic.

https://plato.stanford.edu/entries/logic-classical/


A. I am not presenting arguments. Only observations and inferences based on my understanding of the evidence.

B. If you declare my “argument” subjective (and therefore false for you, but true for me) you agree with my overall claim Q.E.D

C. Is it this one then? How would you prove this?

D. It is mighty presumptuous of you to assume that I subscribe to classical logic; or even a consistent logic for that matter. Or that I subscribe to the same logic at different times of day.

E. I am impressed neither by your religion nor by the writings of your prophets. Nullius in verba


> I am not presenting arguments. Only observations and inferences based on my understanding on the evidence.

That is what counts as an argument! "Argument" is a technical term from logic and includes inferences and observations (observations are single-step inferences). My bad on using logic jargon, but yes, inferences/arguments/proofs all apply here. So what system are you presenting this inference in? Is that objective or subjective?

> B. If you declare my “argument” subjective (and therefore false for you, but true for me) you agree with my overall claim Q.E.D.

No, the real beauty of subjectivity is that it allows me to disagree at any level. And I disagree with you on your point above, invoking the full power of subjectivity.

In fact, let me do 100% subjectivity on the whole set of comments here, and declare all your statements false for me.

(Also, the above is a proof by cases. And the conclusions of steps B and C don't carry out to the outer level on their own, as you seem to assume. You have to combine them both).

Now, I should be getting paid for doing a logic 101 in the comments ;)


You can’t tell if my inferences are objective or subjective? Why does the distinction even matter then?

The more of my comments you declare false, the more evidence you provide in support of relativism. It is a peculiar way to agree with my point (via disagreement).

Is that what you call “logic”?

It looked like you were making up the interpretation as you went along…


Sounds like you are on the path towards understanding that what we call “objectivity” is an inter-subjective construct.


I know you are being sarcastic, but this isn't a strange concept and this is something philosophers have known for generations.

Let us say you claim "Nothing is objective". Then it doesn't take a genius to assert that your claim "Nothing is objective" isn't objective.


I have no idea what your metric for "strangeness" is.

Using similar reasoning we can also assert that "The speed of light is objective" is not an objective claim.

And we can further assert that both the inner and outer statements are true without arriving at a contradiction.

It doesn't take a genius, merely a very thick skin and a grasp of contextuality, to hold this perfectly reasonable conclusion in a society dominated by naive understanding of non-contradiction.

Objectivity being an unattainable philosophical ideal doesn't prevent us from using the term in a more pragmatic/cultural/inter-subjective sense. A white lie, if you will, that only a determined pedant would scrutinize to such a philosophical extent.

Just as well, such overt pedantry deserves mockery and contempt.


Logic is subjective because it takes a lot of focused effort to align it with reality.


Here we go in circles. Is your statement about logic subjective or objective? Do you use logic to arrive at that conclusion?


The thing that these systems offer is "having to check less code". In some situations, that's huge. In others, it's not worth the effort it takes to get there and maybe doesn't actually reduce the complexity of checking that it all makes sense.

In other words, maybe you proved something beyond reasonable doubt, but that's completely useless if you misunderstand what it was that you proved.


"Beyond reasonable doubt" is the level of proof required for convicting somebody.

Math generally has a higher bar (for better or worse..)


The way I would phrase it, the difference is in what kind of doubt is considered "reasonable".


For me, the entire article can be summarised in the following quote from it:

>> So there’s a big mess. We are limited in what we can do.

We are limited in what we can do. All our formal systems are flawed. That includes arithmetic: Gödel proved that any system powerful enough to represent arithmetic will always be flawed. The machines that compute our formal systems with us (and it never stops doing my head in that it's possible to create physical machines to calculate what exists only in our thoughts), our computers, are flawed too, because they are based on the same flawed formal systems and share the same limitations as ours. We cannot use such machines to escape their own, our own, limitations.

>> But we have proven otherwise.

Have we now. To whom?


> We are limited in what we can do.

It's not wrong that we are limited in what we can do, but we are limited in the same sense a Turing machine is limited by the halting problem. We are limited in a very specific and precise sense.

> All our formal systems are flawed.

This treads dangerously close to the pop-culture take that math is flawed. I can accept this statement only if we take 'flawed' to mean 'incomplete' in the Gödel sense. We don't say that integers are flawed because they aren't a field, that's how integers work. We should avoid needlessly loaded words.

> Have we now. To whom?

We don't prove things to someone. Proofs just are.

There's a sleight of hand at play here. A proof in the technical sense (let's call them Tproofs), as a series of deductions, is a mathematical object, not unlike the number five. A proof in the common sense (let's call them Cproofs) is an informal argument to convince other human beings that there exists a Tproof with certain characteristics. In this sense it's easy to see how Cproofs are social phenomena. They are basically ways to communicate with other human beings. Not unlike how banks create money when they issue loans, backed by nothing but the community's trust, a mathematician can write a Cproof standing for a Tproof, backed by the trust of his colleagues. Mathematics could not meaningfully function without this socially constructed web of trust, exactly like the economy could not function if our only means of exchange were physical coins.

I have no problem accepting that, in this sense, mathematics is a community effort and is strictly dependent on such a community existing.

But therein lies my argument. A proof-checker accepting a proof offers a very strong guarantee that an equivalent Tproof exists, stronger than any number of peer reviewers could offer.

Therefore, to the extent that we deem it epistemologically possible to mathematically prove a statement at all, a proof being accepted by a proof-checker is a mathematical proof.

The radical skeptic's argument that it's not, in fact, possible to mathematically prove statements in any meaningful sense is as always both perfectly irrefutable and perfectly uninteresting.


>> We don't prove things to someone. Proofs just are.

I think you need to do a bit of reading, to see what others have said on this subject, because you seem convinced you have the right attitude and the right frame of mind- but you only manage to trip yourself into making absurd statements like the one above. Proofs just are? Where?

I suggest you start here:

https://www.cs.umd.edu/~gasarch/BLOGPAPERS/social.pdf

And here, for a bit more context:

https://m-cacm.acm.org/magazines/2021/7/253448-program-verif...


Program verification, while interesting in its own regard, has nothing to do with what we're talking about here.

Proofs as studied by mathematical logic are mathematical objects, not something you "prove to someone", that's the entire and only point, and it's extremely well-grounded in existing theory. It's not even a contentious point, it's not something reasonable people can disagree on.


>> Proofs as studied by mathematical logic are mathematical objects, not something you "prove to someone", that's the entire and only point, and it's extremely well-grounded in existing theory.

Which theory is that? You're saying there's some theorem that proofs are mathematical objects that "just are"? What theorem is that? Can you point to it?


A tree of derivations is a mathematical object that formally represents a proof, which is connected to the way programs are written in proof checkers via the Curry-Howard isomorphism.

When we say "proof" in logic we mean that.

In common parlance, a "proof" is instead an informal argument.

I know that you know that, because I read your messages before and I know you are not an idiot. You are just dwelling on a rhetorical flourish for no particular reason.
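To make the formal sense concrete, here is a minimal Lean 4 sketch (the theorem name is my own, purely illustrative):

```lean
-- Under Curry-Howard, this term *is* the derivation: the kernel
-- type-checks it against the stated proposition.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun h => ⟨h.right, h.left⟩

#print and_swap  -- displays the underlying derivation term
```

Checking the proof is literally type-checking that term; no human judgment is involved at that stage.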


I can't say whether I'm an idiot or not but, indeed, my background is in logic programming, first order logic and automated theorem proving (and other fields, not related to our discussion). Thus, I know that there is no formal concept of "proof" in logic. Even in automated theorem proving we formally define derivations, or deductions (which can be represented by the "mathematical objects" in your comment), but not proofs. As far as I know there is no mathematical or computational procedure that can construct "a proof", only arrive at a result that can be considered a proof, when the result is inspected by a human mathematician or computer scientist.

The concept of "proof" is, to my knowledge, only an informal concept, that only makes sense, and is only communicated, between humans. In mathematics that is even more so than in computer science. Certainly I wouldn't expect any expert on logic and proofs, to say that proofs "Just are". That sounds mystical, practically Platonic.

To give a concrete example (and you will excuse, I hope, my penchant for rhetorical flourishes, or at least terminological clarity, when I speak of the matters of my expertise) in Resolution-based automated theorem proving the derivation of the empty clause during a Resolution-refutation of a goal clause can be considered as a proof-by-refutation of the complement of the initial goal. That is so because we know that, if a set of clauses, Π, is unsatisfiable, then the empty clause is a logical consequence of Π. Therefore, if we can derive the empty clause from the union of a goal, G, and a set of clauses, Σ, we can assume that G is false, with respect to Σ (or, more precisely, that G is not a logical consequence of Σ) and that, therefore, ¬G must be true. We can do all that because we have fuzzy, human-readable proofs that convince us of the relation between Σ and G, and the derivation of the empty clause from their union; for example, in the Resolution theorem by J. Alan Robinson [1]. It would not be possible to set up an automated theorem proving system without such a human-readable, and human-convincing proof. Because nobody would trust it.

In fact, the Resolution Theorem, essentially a statement of the soundness of Resolution, is a statement of such irrefutable simplicity that there can be no doubt of its truth and, as far as I know, it has only been proven once, by Robinson, with his proof then cited by everyone else interested in the matter. The completeness of Resolution, on the other hand, is more complicated to show and there have been many different proofs of it, in different forms (e.g. the Refutation-completeness of Resolution, and the Subsumption Theorem, proved by Robinson and others in different ways). Given that there are now many different proofs of the completeness of Resolution, arriving at it from different directions, we can also be reasonably secure in the knowledge that Resolution is complete. We may still be wrong. Some proofs have been shown to be wrong [2].

That is how proofs work in practice.
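For the propositional case, the refutation loop described above can be sketched in a few lines of Python (a toy only; Robinson's procedure is first-order and needs unification, which this omits, and the clause encoding is my own):

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses, where a clause is a set of integer
    literals and -n denotes the negation of n."""
    return [frozenset((c1 - {lit}) | (c2 - {-lit}))
            for lit in c1 if -lit in c2]

def refutes(clauses):
    """Saturate under resolution; True iff the empty clause is derivable,
    i.e. the clause set is unsatisfiable."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:           # derived the empty clause
                    return True
                new.add(r)
        if new <= clauses:          # fixpoint reached: satisfiable
            return False
        clauses |= new

# Sigma = {P -> Q, P}; to "prove" Q, refute Sigma together with ¬Q.
# Literals: 1 = P, 2 = Q.
print(refutes([{-1, 2}, {1}, {-2}]))  # True: ¬Q is inconsistent with Sigma
print(refutes([{-1, 2}, {-2}]))       # False: without P, Q is not entailed
```

Deriving the empty clause from Σ ∪ {¬G} is what licenses concluding G, but, as argued above, the licensing step is the human-proved soundness result, not the program itself.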

I commented to help you understand where you are wrong, and because I, in turn, was "triggered" by your arrogant insistence about things you clearly do not know as well as you think you do. I suggested you do a bit of reading to avoid polluting our common space on HN with bullish ignorance. You reacted with offense. That is a natural reaction, and I have experienced it myself, but I hope that you can nevertheless take something useful from our exchange. I have studied long and hard to acquire the knowledge that I have and I consider it my responsibility to share this knowledge. Perhaps I should refrain from doing that in the future though, if it ends up being more "triggering" than informative.

________________________

[1] A machine-oriented logic based on the Resolution principle, J. A. Robinson, journal of the ACM, Jan. 1965.

https://dl.acm.org/doi/10.1145/321250.321253

A text that I recommend to anyone interested in automated theorem proving, and proofs in general.

[2] https://homepages.cwi.nl/~rdewolf/publ/ilp/ilp95.pdf


You say your intent is to teach and to inform.

I don't doubt that this is the case. The reason why I found your response triggering is that you did not come off, in this exchange, as genuinely having that intention, from the very beginning.

To be completely frank it just looked as though you were looking for a fight.

I recognized your username among the responses. I knew you had that kind of background and wanted to hear from you.

I am not an expert, nor do I claim to be (mine are just faded memories), but as far as I know using "proof" to mean "derivation" is not uncommon; see for instance [1][2][3]. I was very particular about separating the "formal" and "informal" kinds in any case. I don't think I can be accused of playing fast and loose with definitions.

If you are trying to correct me on something else, I don't understand what it is.

Since you brought up that point yet again, by "proofs just are", an unfortunate choice of words, I do not mean anything mystical or platonic. I never had any intention of making such claims and I certainly don't stand by anything like it. I wanted to say that derivations (I used the "proof-in-the-formal-sense" language to which you object, I am amending it here) are mathematical objects, proofs (meant as you do, informally) are social processes.

I do not understand your response as saying anything different.

As far as I understand, all you're objecting to is my use of language. This is perhaps fair, but not terribly interesting. If there's something else, I'm afraid I failed to understand what it is.

In any case, thank you for this exchange.

I would like to encourage you to keep sharing your knowledge if you can make the time. I am sure this common space would benefit from it.

---

[1] https://www.ams.org/notices/200811/tx081101395p.pdf [2] https://link.springer.com/referenceworkentry/10.1007/978-3-0... [3] https://monks.scranton.edu/files/courses/Math299/math-299-le...


Thank you for your kind comment and for your patience. I think my comments above were awkward and didn't quite get my point across, and that's obviously not your fault but mine. Hopefully we'll have the time to discuss this issue again in the future. I'm sorry that we started off on the wrong foot.


> The radical skeptic's argument that it's not, in fact, possible to mathematically prove statements in any meaningful sense is as always both perfectly irrefutable and perfectly uninteresting.

Interesting to me is whether this "the" "radical skeptic" you're mocking is as it seems to be. I'd say it has the odor of a very convenient rhetorical strawman.

When practicing logic, don't forget the importance of ontology, as one could learn from epistemology. But then, that's a lot of work, and that sort of thinking "is" "pedantic".

Welcome to Planet Earth, circa 2023, {the culture you live in}, please enjoy your stay.


> But the probability of that happening on a large scale is the same probability that seven billion people collectively hallucinated elementary theory and it's in fact not true that there are infinitely many prime numbers.

That's the fundamental problem. We're building the machines to verify an internally consistent idea that we're all pretty pleased with. For example, type systems don't tell you that your data is true or correct, they just provide a framework of assumptions to work from that turn out to be really useful. Each type system will provide a different view on your data/process/etc.
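A tiny Python illustration of that point (hypothetical function; the type checker is satisfied either way):

```python
def fahrenheit_to_celsius(f: float) -> float:
    # Passes any type checker: float in, float out. The types constrain
    # the shape of the data, not the truth of the computation.
    return (f - 32) * 9 / 5  # wrong formula; should be * 5 / 9

print(fahrenheit_to_celsius(212.0))  # prints 324.0, not the correct 100.0
```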


Most senior, non-applied mathematicians in academia (including those with fancy titles at prestigious universities) outright refuse to adopt even the simplest technological advances, like version control, let alone think about a possible large-scale transformation of mathematics. I wouldn't take anything that they say about technology too seriously.


That particular quote isn't questioning Lean, but "convert[ing] the proof into inputs for Lean." GIGO


That's not how it works. If you input garbage into Lean or another proof checker, it will just tell you it's wrong. You would have to input garbage into the statement you want to prove for it to accept a wrong proof (which must still be a correct proof of that garbage statement!). To rule this out you just have to manually check the translation of the statement, which is most often much, much smaller than the proof itself.
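A toy Lean sketch of that remaining risk (the theorem name is deliberately misleading):

```lean
-- Lean accepts this: the proof is correct *for this statement*.
-- The garbage is in the translation: the grand name does not match
-- the trivial proposition actually proved.
theorem infinitely_many_primes : 2 + 2 = 4 := rfl
```

Checking that the stated proposition really says what the name claims is the small manual step that remains.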


You disagree then agree.

Here's an elaboration of the point: https://news.ycombinator.com/item?id=37341439


>If a program is accepted by the proof checker, then by construction it must be a valid mathematical proof.

Huh? How? Truth is an empirical fact, you can't possibly define it into existence.


You have a point, but to be fair the person you replied to didn't use the word "truth". They said "valid mathematical proof". And in the context of type theory or formal logic "valid proof" is indeed something that has been "defined into existence" (in type theory "valid proof" ≈ "well-typed term"). Whether it represents "truth" is a different matter. Though such systems can (ideally) be proven sound in and with respect to "normal" mathematics. At that point I guess the "truth"ness depends on your overall belief in math :)


All mathematical "truths" are defined into existence as you call it. All mathematical proofs are essentially of the form If A Then B. B is only a "truth" since we defined A a certain way and accept it as True. Due to its very nature, there are no empirical facts in mathematics.

You can argue that there is no Truth in mathematics, which is a perfectly reasonable position to take.


Yeah any statement that goes back and forth a bit too fast between Lean and ChatGPT raises my suspicions.


It’s unfortunate that "semiotics" has become a joke word. C.S. Peirce did seminal work tying it to logic.


Compilers can have bugs. Hardware can have bugs too. Why are formal systems that run on a computer assumed to be bug-free?


They aren't assumed to be bug-free. It's simply known that the vast majority of bugs that occur in practice, and that you have to care about, aren't compiler or hardware bugs. So formal systems have far fewer bugs.


I will always remember when I first got into serious proof-based math in the first semester of college, and I had to work hard to develop my sense of what counts as a sufficient proof. I would read the proofs in the textbook and hit the QED at the end and not understand why it was enough. Eventually I came to understand something like the framing here, which is that a proof is about persuasion, persuasion is about judgment, and judgment (by definition, maybe) can't be pinned down to clear rules.

The University of Chicago has a special math class format called Inquiry-Based Learning designed around this idea, where you work together in class to put proofs together and work out a shared understanding of what is sufficient. I didn't take it but I wish I had. You can read some people's experiences with it here[0].

[0]: https://www.reddit.com/r/uchicago/comments/i1id9e/what_was_y...


I recently read this book review [0] where a mathematical proof was described as a dialogue between two semi-adversarial but collaborative actors, the Prover and the Skeptic, who together aim to acquire mathematical insight.

I thought it was an interesting perspective.

[0] https://jdh.hamkins.org/book-review-catarina-dutilh-novaes-t...


This is Game Theoretic Semantics. See Hintikka's Principles of Mathematics Revisited and his framing of Independence Friendly Logic:

https://plato.stanford.edu/entries/logic-if/


This is also what philosophical discussion/debate is.


It definitely takes a second to develop that intuition, I remember being quite confused as well. As I got more into CS, I began to think of proofs similar to how I do unit tests: you externalize or “fix” as many of the inputs as you possibly can, and then abuse the remaining degrees of freedom as much as possible to show that your solution is complete.


I was lucky enough to take two IBL courses from Dr. Michael Starbird at UT Austin. Both were wonderful courses, very engaging and very fun. I never collaborated so much with other math students as I did in those courses. In particular this was my intro to topology and I’ve been hooked on it ever since.


By late undergrad, it was intuitive to me that "proof" means "all mathematicians who read this agree with it". Mathematics is unique in that, mostly, the field can achieve consensus on results, which we then call "proofs".

But similarly, it makes sense that, even if a result is "true" in a universal, objective sense, if the mathematician cannot communicate this in a convincing fashion to the rest of the mathematics world, I don't think we can call that result "proved".


Not that you were, but I don't quite understand why people get so caught up on this fact. There are objective facts about the nature of reality, and we all (or at least competent practitioners in the field) are thoroughly convinced that we have identified a subset of these facts.

These presumed facts have helped us do things like go to the moon and build skyscrapers, but then someone comes along with the old "but how do you actually know" argument of a college freshman, and then we get into a conversation about the potential social relativism of math.

All the while, people will see a half-assed psychology study with a questionable procedure, statistics that are weak at best and erroneous at worst, and therefore conclusions that are tenuous at best, and this study is taken to be "true" and might legitimately impact notable institutions. Yet when we're talking about extremely complicated topics that exist on the edge of the horizon of human intuition, no matter how obvious the impact, some people just refuse to accept things as objective simply because they fail to intuitively understand them.

Foundational fields like mathematics and physics are as objective as we can get. If you don't accept that, your belief about what is objectively true ends at cogito ergo sum and that's that. This has always been such a pointless conversation in my mind.


> There are objective facts about the nature of reality

This is a pretty bold claim and you would have to do a bit of work to make it more convincing. Besides, it's not really how science works. Different theories wax and wane all the time. What we're really doing with science is coming up with ever-better models that give us greater predictive power.

You could argue that at the macro scale we're asymptotically approaching some kind of objective truth with the whole endeavor of science, but you can't simply tunnel across that gap and make the leap to say that we know there is an objective truth.

The best that we can do is probably say something along the lines of "these are the best methods of getting closer to the truth that we have available - if anyone claims to have better methods, they are very likely wrong", but you still need to have the humility to accept that even the best models that we have to date are not infallible. They do not give us the objective truth, nor do they promise to, but they are the most effective tools that we have available to us at the current time. This is critically not the same as claiming that they give us the objective truth.

You could say that for all intents and purposes/everyday usage, "sure, these are objective facts about reality" - but I don't actually think that helps anyone and it serves to foment mistrust towards science and scientists. I think the best chance we have at preserving the status of science and scientists in our society starts by being honest about what it actually is giving us - which is quite a lot, but let's not oversell it for the sake of convenience or whatever.


As Heisenberg said, "What we are seeing is not nature, but nature exposed to our mode of questioning."

And the mode -- we invented it as it is because of a whim of history, because it is a game, and we like the game, and it's useful for us. But as far as facts go, Nietzsche summed it up the most concisely: "there are no facts, only interpretations."


If by "objective truth" we mean the qualities of nature that exist irrespective of any individual's perception, then I think the continued reliability of our scientific knowledge in producing effective and consistent results is at least some measure of that.

> The best that we can do is probably say something along the lines of "these are the best methods of getting closer to the truth that we have available - if anyone claims to have better methods, they are very likely wrong", but you still need to have the humility to accept that even the best models that we have to date are not infallible.

This last sentence slightly conflates the scientific method with the models it produces. I am not claiming that the models are "true", I am claiming that the scientific method is the only reasonable means of gaining a reliable understanding of the objective nature of reality, assuming it exists; and that you cannot pick and choose what you believe in based on your intuition.

Quantum principles have been proven in experiments that have as tight a margin of error as measuring the width of the United States to one human hair, producing shockingly consistent and effective models that were absolutely critical to the development of modern technology. Yet some people somehow refuse to accept these models as an "accurate" reflection of reality, whereas they'll take, at face value, psych/sociological/economic studies that are frankly nothing short of pathetic in comparison.

In regards to science, I am saying that there is a hierarchy of belief. You can draw the line wherever you like in terms of what you think is "true", but you cannot reorder this hierarchy and believe these sorts of psych studies while at the same time questioning the physical models that power the technology that is used to publish them.

And this isn't speaking about math, which is a particularly special case given that it is not scientific but still produces shockingly effective results.


I think I would agree with pretty much all of the above. There is a sort of hierarchy of belief, and quantum mechanics is probably close to the top of that. However, even so, it is still not infallible. It is possible for us to discover a regime where it breaks down and we need a new theory to supplant it.

This is basically what I'm arguing - no matter how accurately our theories line up with observation, we can never be sure that we have reached "the final theory" AKA the truth. I think this is where a lot of misunderstanding and mistrust for science originates. It will never deliver to us the truth - if it did, how would we ever know?

It is a method of getting closer and closer to what we believe is the truth. But there is still a gap there, however small it might be in the case of quantum mechanics. The scientific method by its very construction is unable to bridge that gap.

Still, to date there does not exist a more effective method we know of as a species at getting closer to what we believe to be the truth. I think the above is a maybe-subtle distinction that is worth pointing out and educating people on: the distinction between the process and the results. It is the best process we have, but even so, it cannot cross that gap and definitively say "this is the truth". That is a gap we have to choose whether to cross ourselves with a leap of faith (or sometimes a very tiny hop of faith, in the case of quantum mechanics). I think that might help people cement their faith in the process even if they don't necessarily place their faith in the results (in the case of questionable psych studies, for example).


You're conflating "objective reality exists" with "we fully know what objective reality is".


If objective reality exists, which is still a pretty big if last time I checked, not only do we not know what it fully is, we don't even know what any part of it is. The best that we can do is get better and better at modeling it in ways that are useful to us (which is what science is doing for us).


"Science" (scientists) studies the physical realm almost exclusively, and where it does venture somewhat into the metaphysical, it brings a lot of baggage that works excellently in the physical realm, but is often detrimental in the metaphysical. Plenty of scientists, I'd bet money even most, find[1] metaphysics to be ~silly...except of course when they are whining about it (politics, economics, society, etc..."the real world").

[1] The context, or set and setting, makes a big difference in how they will behave. But the range of behavior is actually pretty simple, far simpler than much of the programming problems we breeze through on a daily basis, without even thinking twice about it. I'd say the problem isn't so much that it's hard, it's more so that it is highly counterintuitive, a lot like Einstein's relativity when first encountered (or even after understood).


> This is a pretty bold claim and you would have to do a bit of work to make it more convincing. Besides, it's not really how science works. Different theories wax and wane over time all the time. What we're really doing with science is coming up with ever better models that give us greater predictive power.

Yes, but is mathematics like that? Is it even science?


> Besides, it's not really how science works.

You reject objective facts but respond with a claim about objectivity.


I wouldn't reject objective facts, but I also wouldn't believe they exist any more than I would believe Santa Claus exists, unless someone can successfully argue for their existence. AFAIK this has yet to be done by philosophers (though there have been many attempts).

E: I should mention that it's not just a binary yes or no here, there is a 3rd option of "I don't know" and I would rabidly defend the "I don't know" camp until someone can convincingly argue one way or the other. All of this has nothing to do with the actual usefulness of science which is unquestionable in my opinion.

This is strictly talking about science "overreaching" into the philosophical realm if you will, where even from the start, methodologically it doesn't have the right tools to answer these questions. You don't prove scientific theories "true", you just accumulate more and more supporting evidence. It never hits a magical moment where the neon lights turn on and a sign says "Your theory is now true! Congratulations!". And even if it did, it would be fleeting anyway because there are no sacred cows here - your theory can just as easily get supplanted by a better theory in light of more evidence.


> All of this has nothing to do with the actual usefulness of science which is unquestionable in my opinion.

That's a neat trick(s)! ;)


Is it a bold claim?

On that account, do you lean more towards flat earth theory?


Yes it's a bold claim philosophically. How would you justify it?

No, flat earth "theory", if you can call it that, has close to zero supporting evidence and AFAIK has no actual predictive power. Stick with consensus science if you want actually useful theories, but that is very different from claiming they are giving you objective truth.

Let me ask you this, when a theory that was previously accepted as consensus science loses support in light of new evidence and gets supplanted by a new and better theory, does that mean that the objective truth changed?


It’s frustrating when nearly every discussion of hard math in an open forum devolves (in part or whole) into endless, pointless, off-topic epistemological navel gazing.


What? The objectivity of math is literally what the original article discusses. Why are you even in here if you're not willing to discuss it?


This fact is interesting because many people grew up with an idea of mathematics as having discovered fundamental truth, and some of us, when we get deep into the field, realize that Math is made up of human ideas, exactly the same as a very complex boardgame we play in our heads, and that the belief that these boardgames represent something fundamental about reality is its own leap of faith.


How is it a leap of faith when mathematical models that describe reality demonstrate continued reliability in making predictions? Does this not demonstrate that there exist mathematical relationships between certain physical quantities?


> All the while, people will see a half-assed psychology study with a questionable procedure, weak at best, erroneous at worst statistics and therefore tenuous at best conclusions, and this study is taken to be "true"

...

> Yet when we're talking about extremely complicated topics that exist on the edge of the horizon of human intuition, no matter how obvious the impact some people just refuse to accept things as objective simply because they fail to intuitively understand them.

I think the intersection of these two groups is the null set.

> Foundational fields like mathematics and physics are as objective as we can get.

Objective? Yes. But objective does not equate to "true"[1]. One requires data, and the other lives only in the mind. It is not at all problematic to ponder over whether mathematics is "true" - most mathematicians have an opinion one way or another, and they are not unanimous in their opinions.

[1] True w.r.t reality, not true in the logic sense.


Quite the opposite. The majority of people will believe some random bogus psych or nutrition study in the news, while denying quantum mechanics results.


I have no idea what "true" means if it does not mean "a quality of a statement as being an accurate description of objective reality". While mathematics itself may be a human invention, it clearly describes a fundamental truth as evidenced by its continued reliability in producing effective models that describe the measurable world, which must be taken as a reflection of objective reality. Again, you can feel free to deny this, but all you are left with is cogito ergo sum.


> While mathematics itself may be a human invention, it clearly describes a fundamental truth as evidenced by its continued reliability in producing effective models that describe the measurable world, which must be taken as a reflection of objective reality.

It often describes the fundamental truth, because much of its origins was for that purpose, not because all mathematics is fundamentally true.

There's plenty of mathematics that is not modeled by the measurable world - almost by design. Just pick a different set of axioms and you'll get mathematical truths that may conflict with reality.

According to many mathematicians, uncountable infinities don't exist, and they focus on doing math without relying on them. There goes the real number line. From a scientist's perspective, either they exist or they don't. Both cannot be true. From a mathematician's perspective, both are truths.

Similarly, either the Axiom of Choice is true or it isn't. Yet the bulk of mathematics is fine with or without it.


> There are objective facts about the nature of reality

Such as?

> "but how do you actually know" argument of a college freshman

Epistemology has been studied by some of the greatest thinkers since the ancient greeks ( probably even before ) and not just college freshmen.

> no matter how obvious the impact some people just refuse to accept things as objective simply because they fail to intuitively understand them.

If you have to intuitively understand them, it isn't very objective is it?

> Foundational fields like mathematics and physics are as objective as we can get.

What in math are objective facts about the nature of reality? Where in nature is the number 1? Also do you realize that many mathematicians don't even accept the 'reality' of real numbers.

I think as you think more deeply about these topics, you will change your tune.


> If you have to intuitively understand them, it isn't very objective is it?

They were talking about things that you don't have to (and indeed can't) intuitively understand.

> What in math are objective facts about the nature of reality? Where in nature is the number 1?

In all sorts of places, not least counting things. Mathematics is about equivalences; the point of saying 2 + 2 = 4 isn't to make some funny marks on paper or pray to the platonic void, it's to say that if you have something that's 2-like and combine it with something that's 2-like in a way that's +-like, then the result will be =-like to something 4-like, in the same sense of "like".


> They were talking about things that you don't have to (and indeed can't) intuitively understand.

He was talking about objective things that people fail to intuitively understand.

> In all sorts of places, not least counting things.

So where? Just one. Give me where 1 exists so I can check it out.

> Mathematics is about equivalences;

No. Mathematics is about theorem generation. Going from axioms to theorems via proofs. Though some go from theorems to find axioms.

> it's to say that if you have something that's 2-like and combine it with something that's 2-like in a way that's +-like, then the result will be =-like to something 4-like, in the same sense of "like".

What you are describing is just arithmetic. That's the interpretation or model for one particular set of axioms, theorems, etc. What's objective about 2-like? Feels abstract.

Funny I asked a simple question and yet you rambled on about nonsense you don't even understand.


> So where? Just one. Give me where 1 exists so I can check it out.

Like I said, multiple objects objectively exist in nature.

> What you are describing is just arithmetic. That's the interpretation or model for one particular set of axioms, theorems, etc.

That the theorems are true under those axioms is an objective fact.

> Funny I asked a simple question and yet you rambled on about nonsense you don't even understand.

As the great philosophers say, no U.


> Such as?

I'm not specifying them, I'm referencing their existence. You can take them not to exist, but in that case good luck explaining the measurable consistency that every human on earth observes. In fact, if you truly believed this, there would be no point in even leaving a comment, because then you could not believe in the ability of language to communicate ideas or even in the existence of language itself.

> Epistemology has been studied by some of the greatest thinkers since the ancient greeks ( probably even before ) and not just college freshmen.

Yes, but it somehow always seems to be the college freshmen who raise this question for the fields that have contributed the most to humanity, like mathematics and physics, all the while not feeling the need to question any number of academic fields built on an absolutely pathetic scientific process in comparison. It is simply because of their failure of intuition and understanding of mathematics and physics that they raise these questions - I'm not saying that these questions are not part of a worthwhile conversation in general.

> If you have to intuitively understand them, it isn't very objective is it?

I'm not understanding your argument here. The failure of a human mind to understand something does not mean that this thing is not objective. Nobody on Earth understood why lightning occurs for most of human history. This does not mean that the existence of lightning and the reasons for its occurrence are not objective qualities of nature.

> What in math are objective facts about the nature of reality?

The existence of concepts that map to reality in producing models that yield consistent, effective, and measurable results.

If there were no conscious beings, the Earth would continue its orbit around the sun and this order is what is captured by the human-invented language of mathematics. Saying that "mathematics" isn't objective may be true depending on how you define mathematics, but there is no denying that there are objective relationships which exist beyond human experience. If you deny this, you are left with cogito ergo sum which was my original argument.


We communicate with words, and people as a whole are used to being lied to and gaslit regularly especially by those in power. It's true that mathematics and the hard sciences have mechanisms for understanding that are on a different scale than, say, ethics and morality. However, it takes time for people -- especially those currently engaged in questioning the nature of their reality[1] -- to accept that in this specific instance lying and gaslighting are a lot harder[2].

The people who eventually accept and internalize the distinction around things that can be objectively shown to be true are those who by and large have done some of the work to understand these things themselves. Godel's Incompleteness Theorem is beautiful but it takes work to understand, and if it didn't, it wouldn't be much of a meaningful breakthrough. Nobody is proving that 3+5=8 and then 4+5=9.

So what the average person sees is a high level language they can't speak with people being absolutely positive that this thing is special and true and incontrovertible. That raises red flags when you're dealing with folks talking about normal everyday stuff, doesn't it? It's a lot harder to say "but I don't understand" and a lot easier to say "but what if you're wrong" socially.

[1] As all college first years do, right? [2] Let's face it, lying to people is never impossible, it's just harder to be successful when you can be fact checked.


After the article, my opinion is going to be that people do this because they are politically privileged by a social relativistic argument. The professor in the interview is advantaged by the narrative that proofs are socially constructed.

Indeed, mathematics and any other STEM field is deeply political. People are politics. But to confuse that--that STEM as practiced is socially constructed and political--and to make a sloppy ontological conclusion about what STEM/math is, is a deeply neoliberalizing argument. So I'm inclined to make the argument that it is actually the neoliberal intellectuals who try to spin this as fundamentally only a human enterprise, because it privileges their social standing as the "correct" practitioners of this compact: capitalism requires that mathematicians be able to "sell" each other the truth of their proofs.

Indeed, the article mentions the Mochizuki controversy but I don't think they understood the social problem. See, it doesn't matter if Mochizuki is right or wrong, or if people understood his 500 pages or not. It is that in principle it could be shown that he is right or wrong. And that's unlike, say, the Bible, which is also a social compact. Reduction of STEM to social construct throws the baby out with the bathwater.


> Mathematics is unique in that, mostly, the field can achieve consensus on results, which we then call "proofs".

No. See Brouwer–Hilbert controversy.

They can't even agree on what makes mathematical logics.


No, no, I think you're missing the point. Mathematics is such that all mathematicians, in particular Brouwer and Hilbert, will read a proof and say "yes, this follows" or "no, it doesn't follow". The agreement is on the argumentation, i.e. what can be said given a set of assumptions.

This doesn't mean there are no disagreements in the Philosophy of Mathematics. The place where Brouwer and Hilbert disagree is about what can be assumed at all times. The disagreement is on the foundation that mathematics lies on. This is also not the only disagreement; there are various models of what mathematics really is.

You can ask N philosophers of mathematics (who are also trained in math) about a proof and they'll all agree that the mechanism of the proof follows. What they may disagree on is whether the assumptions are sound, and what "mathematics" really is. Some will focus on the social aspect and point out that "proof" ultimately means "convinces all mathematicians". Some will focus on the psychological aspect. Some will push mathematics outside of the material realm, etc.


> Mathematics is unique in that, mostly, the field can achieve consensus on results, which we then call "proofs"

Hard sciences are the same, yes?

Water is two parts hydrogen, one part oxygen.


That's like claiming all historians agree that Hitler was born in 1889 ergo historians achieve consensus on results.

If you study a broad range of subjects, you'd realize in mathematics the level of convergence is quite high relative to the other subjects, even the "hard" sciences. I wouldn't say it's 100% though.


The idea of mathematics positing a single objective truth is....mostly true but only mostly.

E.g. Axiom of Choice, ZFC, constructivism.

1 + 2 + 3 + 4 + ... = -1/12 [1]

[1] https://media.tenor.com/Mbl5heTtzDoAAAAC/obi-wan-kenobi-cert...


Oh, on that I totally agree.

Math kind of lost me at the uncountable sets. Given the results from Gödel, the justification of law of excluded middle really needs a serious reconsideration.

But that's why I'm not a mathematician. (I suspect this aspect of self-selection facilitates convergence, but I don't see why it wouldn't apply to almost all fields of study)


I find his scepticism about proof assistants like Lean a bit weird. Of course, there is never absolute certainty, but there are degrees. A proof in Lean is a quite strong guarantee, you'd basically have to have a bug in Lean's core for it to be wrong, which is possible but less likely than a flaw in a proof that hasn't seen much scrutiny because it's rather unimportant (of course, "big" results get so much scrutiny that it's also very unlikely that they were wrong).


I wouldn't characterize him as being sceptical about Lean; this is what he says:

> Some terrific developments have happened with proof verification. Like [the proof assistant] Lean, which has allowed mathematicians to verify many proofs, while also helping the authors better understand their own work, because they have to break down some of their ideas into simpler steps to feed into Lean for verification. […] I’m not saying that I believe something like Lean is going to make a lot of errors. […] They can be a very valuable tool for getting things right — particularly for verifying mathematics that rests heavily on new definitions that are not easy to analyze at first sight. There’s no debate that it’s helpful to have new perspectives, new tools and new technology in our armory.

In between (the "[…]"s above), he says:

> But is this foolproof? Is a proof a proof just because Lean agrees it’s one? In some ways, it’s as good as the people who convert the proof into inputs for Lean. Which sounds very much like how we do traditional mathematics. […] I’m just not sure it’s any more secure than most things done by humans. […] I’m afraid I have a lot of skepticism about the role of computers. […] what I shy away from is the concept that we’re now going to have perfect logical machines that produce correct theorems. You have to acknowledge that we can’t be sure things are correct with computers. Our future has to rely on the sense of community that we have relied on throughout the history of science: that we bounce things off one another. That we talk to people who look at the same thing from a completely different perspective. And so on.

The latter part is less about correctness and more about community: the main goal of mathematics is not merely "true" theorems, but (human) mathematical understanding. I wrote an answer about this on math.SE in 2014 (mostly quoting Thurston and Rota): https://math.stackexchange.com/questions/632705/why-are-math...


I agree that a big part of proving things is understanding intuitively "why" they are true. That's part of the reason why I don't think we'll just write proofs in Lean in 50 years and leave it at that (the other reason is that, at least for now, those systems are very "stupid" and need a lot of handholding to prove facts that any maths undergraduate would understand to be true).

But when he says that he doesn't believe that a proof in Lean is "more secure than most things done by humans" I fundamentally disagree.


My reading is that for an individual theorem or proof, having it checked by computer may well be more "secure". But a culture where all mathematics is done (only) by computer would be worse, because you'd have outsourced the thinking and quest for understanding which was the point in the first place.


The way theorem provers work right now, it's still a human that has to come up with the proof, the theorem prover just verifies it.

But yes, I wouldn't want computer proofs to "replace" the proofs we have now.
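To make that division of labor concrete, here's a tiny illustrative sketch in Lean 4 (the theorem name is mine): the human supplies both the statement and the proof term, and the kernel merely checks that the term has the stated type.

```lean
-- The human writes the proposition and supplies the proof;
-- Lean's kernel only verifies that `Nat.add_comm a b`
-- really is a proof of `a + b = b + a`.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Tactics and automation can help discover such terms, but the trusted check is the same either way.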


I agree very much with you... but it seems to me like theorem provers / computers can be a really helpful tool in building trust and understanding between people or between groups of people, in ways that printed paper can't quite match.


I read it more as him saying that at the end of the day a human is inputting symbols into Lean, and a human is interpreting the meaning of the proof that Lean has verified. Not that the mechanics of the proof checking are in doubt, but that that meaning of it all is still a social consensus.


> A proof in Lean is a quite strong guarantee, you'd basically have to have a bug in Lean's core for it to be wrong

That, or an error in the specification (ie.: not the implementation).

(I very much am with you that his attitude / way of framing it is a bit weird)


I will go further (since there are certainly bugs in the kernels of proof assistants, more frequently than you might imagine). Even if there is a bug in Lean's kernel, it will almost never invalidate a Lean proof, because bugs in the Lean kernel (or the kernels of other proof assistants) are, just like with normal programs, almost always bugs dealing with edge cases testing the boundaries of mathematics or syntax, not things that show up in everyday proofs. Coq has fixed numerous proofs of False and, to my knowledge, these fixes have never ended up invalidating a "real" proof that wasn't specifically designed to test these kinds of edge cases. Proofs are far more often broken by changes to the surface syntax (e.g. inference algorithms or the tactic language) than by changes to the kernel.

Obviously, this is not foolproof against malicious "adversarial" mathematicians, so knowing something was proved in Lean still requires some degree of checking. But it does make it pretty extraordinarily unlikely that a false proof in a proof assistant was arrived at by accident, which is far from the case with the "old" way of doing mathematics.

(I'd also add that the concern about getting the specification of a design wrong is something that rarely lasts longer than attempts to actually use it to prove anything nontrivial, since the proofs won't go through, but there at least can be a lot of debate about how to interpret specifications in some fields [e.g. computer science] or whether certain specs are "actually" equivalent to the statements someone wants to prove, especially for stuff like category theory where manual proofs have a good deal of handwaving... but these are minor technical issues compared to the viewpoint of the article).


Your "adversarial" mathematician phrasing made me chuckle at the thought of a future where you have cybersec jobs in automated journals to prevent shady academics from using kernel bugs to spam fake submissions.


It's the same "computers will never take MY job" attitude that you see from programmers, artists, doctors, etc.


I think his argument was restricted to a human-produced mathematical result being ported to a Lean program where one would be just as likely to commit a mistake. However I disagree as well, I recall the difficulty of expressing what I wanted to Coq being a barrier to expressing it incorrectly.


The thing about proof checkers is that they won't accept invalid proofs (assuming their internal axioms are consistent...), so if you make a mistake there it will simply reject your proof. The only place where it won't catch mistakes is in the proposition you want to prove, because the only way for it to know what you want to prove is for you to tell it, but that's much easier to humanly verify.
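A minimal Lean 4 sketch of both halves of this point (theorem names are illustrative): an invalid proof simply fails to typecheck, but the checker has no way of knowing whether the proposition itself is the one you meant.

```lean
-- Rejected: this does not compile, because `rfl` cannot
-- prove a false equation.
-- theorem wrong : 2 + 2 = 5 := rfl

-- Accepted: a valid proof of a true proposition...
theorem fine : 2 + 2 = 4 := rfl

-- ...but Lean cannot tell whether `2 + 2 = 4` is the statement
-- you actually intended to formalize. That part stays human.
```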


Having worked with a proof assistant is what separates you from Granville (the interviewee in the article). If he had formalized at least a couple of proofs he would not have written things like "people who convert the proof into inputs for Lean". One does not "convert the proof into inputs for Lean" for a simple reason: a proof written in a formal proof language usually contains much more information than the informal prototype. At best one can try to express similar ideas, but it is far from converting a proof from one form to another. If he had formalized a couple of hundred proofs he would have gained a different opinion on the quality of informal proofs as well. A nice list of typical mistakes can be found in this [answer](https://mathoverflow.net/a/291351/163434) to a MathOverflow question.


It would be interesting to see some computer-aided proof expert write a possible counter-response to the article.


Yeah it's likely he hasn't worked much with Lean and may have some misconceptions around it.


Whenever I want to think about "what does it mean to prove something?" I dig into my copy of Imre Lakatos's "Proofs and Refutations", a Socratic dialogue-style story where a bunch of aspiring theorem-provers try to prove a particular theorem about the Euler characteristic of polyhedra and discuss what they are actually doing and why it works or doesn't.

I originally picked up the book because I was trying to understand Euler's polyhedral formula better -- which in retrospect is kind of like reading Zen and the Art of Motorcycle Maintenance because you wanted to fix a motorcycle.

I'm not wired for pure math -- I loved real analysis then quit while I was ahead. Still it's fun to pretend sometimes, and Lakatos does a great job of making you feel like you're learning inside knowledge about what mathematicians do. He introduces fun concepts like "monster-barring" (the way people sometimes carve out special cases in a proof when they encounter counterexamples).

I've never made it the whole way through, but I like to go back every few months and absorb a little more.

edit to add: I just now skimmed the author's Wikipedia entry and the Stanford Encyclopedia of Philosophy's story about the author's interaction with someone named Éva Izsák and I have a ton of questions.


I had to read that for a first-semester discrete math course in undergrad. It was... challenging.


> Moreover, one could play word games with the mathematical language, creating problematic statements like “this statement is false” (if it’s true, then it’s false; if it’s false, then it’s true) that indicated there were problems with the axiomatic system.

I hope this isn't too far off topic, but can someone clarify exactly how this problem indicts axioms? As an uninformed and naive musing, it occurs to me that an issue with the statement "this statement is false" is this. The whole of the statement, that is, the thing having truth or falsehood, cannot be addressed by one of its components.


This is probably the editorialization.

He's alluding to the fact that, unlike what one might naively intuit, it's impossible to formulate a set of axioms that can completely and consistently capture all of mathematics.

Because of Godel's incompleteness theorems, any formal system that's sufficiently powerful to formulate basic arithmetic (see for instance Robinson arithmetic) and consistent (that is, it cannot prove false statements) is both incomplete (meaning that it's possible to formulate a wff within that system that the system itself can neither prove nor disprove) and unable to prove its own consistency.

This fact is often portrayed in popular media as being a "bug" or a "missing foundation" of mathematics. That is inaccurate -- it's just a property of how logical systems work -- but it does prove that the search for the holy grail of a grand unified system of axioms for all of mathematics is destined to remain fruitless.

Modern mathematics is most often implicitly assumed to be formulated in a formal system called ZFC, but there are alternatives.


I think the second theorem is the "worse" result for mathematics. It shows that it's completely impossible to use a weaker system, which is hopefully self-evident, to prove the consistency of a stronger, but weirder system like ZFC.

Hilbert wanted to show that, even if you don't "believe" in ZFC, we could at least try to convince you that it doesn't lead to contradictions, but that fundamentally didn't work.


Oh this is interesting, I didn't know about this historical detail.

It makes sense, Godel's first could be mostly seen as a version of the halting problem, while the second is "game over" for a Hilbert-like foundation program.

p.s. Just wanted to point out that it's funny how we wrote two mostly identical sets of comments. Great minds think alike? :)


> Oh this is interesting, I didn't know about this historical detail.

That's the characterisation I got from Peter Smith's book about Gödel's theorems. I didn't verify original sources or anything, but it sounds very plausible to me.

And it also kind of answers the question about why we should care: if a system can't prove its own consistency, well that's not terribly interesting (even if it could, we would have to believe the system a priori to trust its own consistency proof). But if it also can't prove a stronger system consistent, then that's much more interesting.

> Great minds think alike?

That would be a bit too self-aggrandizing for me. :D I'm very curious about logic (and maths in general), that's why I know some stuff about it, but I'm no maths genius.


It's not a "bug" but it is disappointing.

To mathematicians and non-mathematicians.

Being unable to prove or disprove certain statements is a rainy cloud. Like maybe the Riemann hypothesis is just impossible to prove.

It doesn't mean math is "broken" but it is disappointing, no?


Is the fact that these systems cannot prove their own consistency actually a feature of the incompleteness theorem? I thought it effectively boiled down to "you can keep either consistency or completeness, not both". It's been a while since my metamathematics course ...


It depends on exactly what you mean with those terms.

When you say "you can keep consistency or completeness but not both" you are essentially stating Godel's first.

Morally speaking, Godel's first is "no system is complete", but there are two exceptions: if the system isn't powerful enough to formulate the Godel sentence then the proof doesn't work, and any inconsistent system is trivially complete because ex falso quodlibet sequitur, i.e. any wff is provable.

The part about a system not being able to prove its own consistency is Godel's second theorem.

But the theorem only does what it says on the tin: the system not being able to prove its own consistency doesn't mean that it is inconsistent! [1] This is the case for ZFC, for example. We can't prove ZFC's consistency within ZFC: that would violate Godel's second, and we know that either ZFC is inconsistent (that is, you can derive a contradiction from the axioms) or ZFC is incomplete (that is, there exists a well-formed formula of ZFC that cannot be proved or disproved within ZFC).

We don't know which one it is.

[1] This implication and similar ones are often made in pop-culture presentations of the foundation problem. I generally vigorously object to them because they're not only imprecise -- that's understandable -- but they leave a layman with a completely wrong impression.


> the system not being able to prove its own consistency doesn't mean that it is inconsistent!

Another funny thing that can happen is that a system proves its own inconsistency, despite being consistent. The short summary is to never trust a system talking about its own consistency.


You're precisely right that it's the self-referentiality that gets us in trouble. Systems which don't have that don't suffer from Gödel-related problems (e.g. propositional logic, or arithmetic without multiplication).

Unfortunately, what Gödel showed is that any logical system that is strong enough to capture addition and multiplication over natural numbers - and nothing more - includes self-referentiality.

That includes the language of arithmetic - you can write down formulas that refer to themselves - and Turing machines (via the recursion theorem, any Turing Machine is equivalent to a machine that knows its own source code). The constructions are a bit technical, but it's possible.


Yeah, I always thought that the construction of Gödel numbers was the weakest part of the proof / the biggest leap of faith / the part your prof would just hand-wave as being a valid move.

Of course once you get into Turing machines it all flows more naturally, what with all of us being accustomed to "code is just data".


I agree that Turing Machines feel more natural to programmers than first-order logic (although "natural" doesn't necessarily mean "rigorously proven"), but there are no leaps of faith involved in Gödel's construction.

You can write down "P is provable" as a first-order sentence of arithmetic (which involves some number theoretic tricks), and you can also do the diagonalisation trick that gives you self-referentiality. That's really all you need.
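For the curious, both ingredients have a compact statement. Writing $\mathrm{Prov}(x)$ for the arithmetized provability predicate and $\ulcorner\psi\urcorner$ for the Gödel number of $\psi$, the diagonal lemma says every formula of one free variable has a "fixed point":

```latex
% Diagonal lemma: for any formula \varphi(x) there is a sentence \psi with
\vdash \psi \leftrightarrow \varphi(\ulcorner \psi \urcorner)

% Instantiating \varphi(x) := \neg\mathrm{Prov}(x) yields the Goedel sentence G:
\vdash G \leftrightarrow \neg\mathrm{Prov}(\ulcorner G \urcorner)
```

So the self-reference is not assumed, it falls out of the arithmetic encoding.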


The article oversimplifies a bit in summarizing the axiomatic crisis as a problem with sentences like “this statement is false“.

This being said, your intuition is absolutely correct, the crux of the issue is with ‘this‘. What mathematicians realized is that if you are not careful with your choice of axioms, the resulting logical system becomes too “powerful” in the sense that it becomes self-referential: you can construct sentences that refer to themselves in a self-defeating manner.

As others have mentioned, this is the idea underlying Gödel's incompleteness theorem but also, to some extent, Russell's paradox, which came before and is what the article is referring to. In Russell's paradox, the contradiction comes from constructing the set of all sets that do not contain themselves.


Maybe in analogy to Russell's paradox and how you can "fix" it by distinguishing sets and classes, you can "fix" a Gödel sentence by adding it as an axiom, but then you'll just get a new Gödel sentence... and so on.


We would hope that the axioms fully characterised the thing they're meant to describe. Can we even talk about "the" natural numbers at all? If you have two copies of the natural numbers, one red and one blue, are they both the same? Well, trivially they're not: one's red and one's blue. But (we'd hope) all the statements in the language of the natural numbers (which doesn't talk about redness or blueness) that are true of the red copy will be true of the blue copy, so it doesn't matter which copy you use, they're both "the" natural numbers.

E.g. why did we care about the parallel postulate? Why not just have geometry without a fifth axiom? Well, because without it the axioms are, well, incomplete: there are statements in the language of geometry that cannot be proven from the other four axioms. You can have spherical geometry, planar geometry, and projective geometry, and they all conform to the first four axioms, but sometimes one of these geometrical statements will be true in one and false in another. So none of them is "the" geometry, it matters which specific version of geometry you work in, and we need that fifth axiom to have a complete theory of (planar) geometry.

We'd hope to avoid that kind of situation with the natural numbers - if we need any additional "parallel postulate", we'd like to know about it, and if we don't, we'd like to be able to prove that we don't need one rather than just assuming it because we haven't stumbled across it yet. But Goedel proved that we cannot have a provably complete theory of the natural numbers with addition and multiplication, because he found a way to encode a statement akin to "this statement is unprovable" in the language of the natural numbers. Which indicts any reasonable axiomatization of arithmetic - if it proves this statement, it proves a falsehood (so it's unsound); if it proves the statement's negation, it's broken in a subtler way (omega-inconsistent); and if it proves neither, it's incomplete.

> As an uninformed and naive musing, it occurs to me that an issue with the statement "this statement is false" is this. The whole of the statement, that is, the thing having truth or falsehood, cannot be addressed by one of its components.

Well, yes, getting around that is the clever part of Goedel's proof :).
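
The clever part is essentially the same trick a quine uses: a program can reproduce (and thereby "refer to") its own source code without literally containing a copy of itself. A minimal Python sketch of the diagonalization idea (my own illustration, not from the article):

```python
# A quine: its output is exactly its own source code. Goedel's diagonal
# lemma plays the same trick, letting a sentence assert something about
# its own encoding without "containing itself" as a subterm.
src = 'src = {!r}\nprint(src.format(src))'
print(src.format(src))
```

Running it prints the two lines above verbatim; swap "prints its own source" for "asserts its own unprovability" and you have the shape of the Gödel sentence.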


IANA mathematician, but I read "axiomatic system" broadly as including not just the axioms but also the logic in which they are expressed. My understanding is that a common interpretation is that the ZFC axioms are a list of strings (or schemas of strings) of symbols, which only acquire some sort of meaning when you pick a logic that gives meaning to these symbols. But I think also that this particular understanding of what axioms are ("formalism") is just one way of understanding truth in mathematics, and there are others. https://en.wikipedia.org/wiki/Philosophy_of_mathematics

As for this particular issue I think this wikipedia page is relevant: https://en.wikipedia.org/wiki/Impredicativity


This idea of giving "meaning" to a set of axioms is precisely captured by the notion of "interpretation" in logic [1]. The rough idea is to map the symbols of the formal language to some pre-existing objects. As you say, this gives one way of formalizing truth: a sentence (string of symbols that respect the syntax of your language) is true if it holds for the objects the sentence is referring to (via a chosen interpretation). This notion of truth is sometimes referred to as semantic truth.

An alternative approach is purely syntactic and sees a logical system as a collection of valid transformation rules that can be applied to the axioms. In this view, a sentence is true if it can be obtained from the axioms by applying a sequence of valid transformation rules. This purely syntactic notion of truth is known as “provability”.

Then the key question is to ask whether the two notions coincide: one way to state Godel's first incompleteness theorem is that it shows the two notions do not coincide.

[1] https://en.wikipedia.org/wiki/Interpretation_(logic)


> Then the key question is to ask whether the two notions coincide: one way to state Godel's first incompleteness theorem is that it shows the two notions do not coincide

It's even more subtle than that. They do coincide in a sense, which is proven by Gödel's completeness theorem (well, at least in first-order logic). That one just says that a sentence is provable from some axioms iff it's true in every interpretation that satisfies the axioms.

So one thing that Gödel's first incompleteness theorem shows is that it's impossible to uniquely characterise even a simple structure such as the natural numbers by some "reasonable"[0] axioms - precisely because there will always be sentences that are true in some interpretations but not in others.

Unless you use second-order logic - in which case the whole enterprise breaks down for different reasons (because completeness doesn't hold for second order logic).

[0] reasonable basically means that it must be possible to verify whether a sentence is an axiom or not, otherwise you could just say that "every true sentence is an axiom"
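
In the propositional fragment, where completeness is uncontroversial, "true in every interpretation" is something you can check by brute force. A small Python sketch (my own illustration):

```python
from itertools import product

def tautology(formula, num_vars):
    """Semantic truth, brute-forced: true under every valuation.
    For propositional logic, the completeness theorem guarantees this
    coincides with syntactic provability."""
    return all(formula(*vals)
               for vals in product([False, True], repeat=num_vars))

# Peirce's law ((p -> q) -> p) -> p, writing p -> q as (not p or q):
peirce = lambda p, q: (not ((not (not p or q)) or p)) or p
```

`tautology(peirce, 2)` is True, so Peirce's law is provable; `tautology(lambda p, q: p or q, 2)` is False, matching the fact that "p or q" is not provable from no assumptions.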


Agreed, thanks for the clarifications. Another result worth mentioning, which also shows that you cannot hope to uniquely characterize a structure by "reasonable" axioms is the Löwenheim–Skolem theorem which predates Godel's incompleteness (although the history of these results is somewhat convoluted).

There, the obstacle is in some sense of a simplest nature: if your set of axioms admits a countable model, then it admits models of all infinite cardinalities. In other words, it shows that there is something fundamentally impossible in trying to capture an infinite structure (like numbers) by finite means (e.g. recursively axiomatizable).


> In other words, it shows that there is something fundamentally impossible in trying to capture an infinite structure (like numbers) by finite means (e.g. recursively axiomatizable).

Let me just point out for other readers that Löwenheim-Skolem applies to ANY first-order theory in a countable language (or in an uncountable language, if stated in the form that a theory with an infinite model of cardinality at least that of the language has models in all cardinalities at least as big as that of the language); it doesn't care how complex the axioms are from a computability point of view.


You're correct, but I think the spirit of what GP wrote is still true.

You can't capture an infinite structure fully by "finitary" methods, either you use FOL and then you run into L-S, or you use higher-order logic (for which L-S doesn't apply) but then you don't have a complete proof system anymore.

To tie it all together, L-S and incompleteness are about different flavours of "not being able to capture something". L-S is about models of different cardinalities. These models do still all satisfy exactly the same sentences. Incompleteness is about different models actually satisfying different sentences.


Löwenheim-Skolem is such a mind-blowing result. It's a shame it's not talked about more often.


See section 7.1 and 10 of http://www.imm.dtu.dk/~tobo/essay.pdf


I guess the problem is that the axioms do not rule out "this statement is false" as an invalid statement.


There are weaker formal systems like Presburger arithmetic (Peano without multiplication) and Skolem arithmetic (Peano without addition) that have been proven to be complete and consistent. Tarski also showed that there are formal systems for the real numbers (hence also geometry) which have the same properties. (Although the real numbers include the integers, the integers alone have a lot more structure, so Tarski's result does not imply the same for Peano.)

There are also extensions to these (e.g. Presburger extended with multiplication by constants) that are also known to be complete and consistent.

These systems do not require any social compact. Any theorems proven through them are absolute truth, although the range of statements that these systems can express is limited.
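
As a toy illustration (my own, not a full Presburger decision procedure): sentences of the form "there exist naturals x, y with a·x + b·y = n" are Presburger-expressible, since multiplication by a constant is just iterated addition, and each one is decidable by a bounded search.

```python
def representable(a, b, n):
    """Decide the Presburger-expressible sentence
    "there exist naturals x, y with a*x + b*y = n" (assumes a, b >= 1).
    Multiplication by a *constant* is iterated addition, so this stays
    inside Presburger arithmetic; a bounded search settles it."""
    return any((n - a * x) % b == 0 for x in range(n // a + 1))
```

`representable(3, 5, 7)` is False, while `representable(3, 5, n)` holds for every n >= 8 - the classic Frobenius/"Chicken McNugget" fact. A real Presburger decision procedure (quantifier elimination or automata) handles arbitrarily quantified sentences, but the flavour is the same: no social compact, just computation.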

One may require a social compact for Peano, ZFC, and such other powerful formal systems.

That software implementations like Coq and Lean are bug-free may also require a social compact, if being bug-free cannot itself be formally proved, although determining this seems like it should be an easier problem.


Whether a task is meaningful or meaningless depends on how we think about meaning. I tend to think about it in terms of correspondences between structures. Such structures are usually what we think of as information, so this comment has meaning when interpreted using the corresponding knowledge of English in your brain. A computer simulation of the weather has meaning to the extent that it corresponds to actual weather, and its behaviour. The environmental mapping data structures in a Roomba’s memory has meaning to the extent it corresponds to the environment it is navigating.

So meaning is about correspondences, but also about achieving goals. We communicate for many reasons, maybe in this case to help us understand problems better. The weather simulation helps us plan activities. The Roomba’s map helps it clean the room. So there’s also an element of intentionality.

What is the intention behind these mathematical proofs? What are their correspondences to other information and goals?


I mean... you can abuse language and math notation in ways that you can't do for computer code. Math notation is actually terribly and surprisingly informal compared to code. I'd just argue that many[*] of these "social disagreements" would go away within a computational framework. I think the future of mathematics is in computer proofs.

[*] Note that the claim is "many" (ie.: a subset), not "all".


>What AI can do that’s new is to verify what we believe to be true

This is not AI.

Combining Theorem Provers with AI is promising:

https://leandojo.org/


Nice interview / thoughts. There was a great article almost exactly 10 years ago, revolving around the same example (Mochizuki's claimed proof of the abc conjecture) that this article starts with: http://projectwordsworth.com/the-paradox-of-the-proof/ (It's still worth reading, even as background to the posted article.)


Good teacher, his Number Theory book felt really good, though I have nothing in Number Theory to compare it to. I must say Number Theory and Combinatorics are the most difficult topics I got acquainted with in undergrad.


Number Theory is really weird because the definitions and theorems are so often straightforward and easy to understand (you could explain Fermat's Last Theorem to a bright school kid), but the proofs can be devilishly complicated.


Catarina Dutilh Novaes has been arguing much the same thing. She has a 2021 book out, The Dialogical Roots of Deduction, which is on my list but I haven't gotten there yet.

I also haven't watched this video, but I'm linking it because I can't find an earlier one where she points out that the basis of logic is in rhetoric.

https://youtu.be/0IOhYneseiM?si=vEfJ-JFCPF05zzNO


Off-topic question: I've been seeing this "si" URL parameter pop up a lot in shared youtube links lately. Which app exactly pulled the URL for you?


The share button under the YouTube video? :-)


> Initially, Newton and Leibniz came up with objects called infinitesimals. It made their equations work, but by today’s standards it wasn’t sensible or rigorous.

It would be great if mathematics was widely presented and taught in this messy, practical, human, truthful way rather than the purist, mystical, mythical manner.

Calculus wasn't "discovered", it was painstakingly developed by some blokes that were trying to solve physical problems.


Actually, in all likelihood it was plagiarized from Indians like Madhava and spun out as 'white European' invention.


Could you provide evidence for your claim?


Doesn't matter. All mathematics is derived from other's work.


Semantics of "Proof" are complicated.

Is an incorrect proof still a "proof", just one which is incorrect?

If Curry-Howard holds then programs are "proofs". To write a proof is to write a program.

But what about a program that has a bug in it? We say it is an incorrect program. But we still call it a program. By the same logic an incorrect proof is still a "proof" as well.

And typically it is a proof - of something. Every formally valid proof proves SOMETHING, right?

Instead of asking "Is this proof correct?" we should ask: "Does this proof prove what we claim it proves, or something else"?
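
And under Curry-Howard the identification is literal. A minimal Lean 4 sketch (my own illustration, assuming current Lean 4 syntax):

```lean
-- The type is the proposition; the program inhabiting it is the proof.
theorem k (P Q : Prop) : P → Q → P :=
  fun hp _ => hp

-- A "buggy" proof is a term that fails to type-check: the kernel
-- rejects it outright. An *accepted* term always proves exactly the
-- statement in its type -- which may not be the statement we meant.
```

Which is exactly the parent's point: the checker settles "is this a proof of its stated type", while "does the stated type formalize what we claimed" remains a human question.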


I'm not a mathematician but I enjoy maths both as an anchor to an objective reality and as an art of expression (or rather compression of axiomatic ideas into tight and elegant equations). A shift in the framework of thinking can radically change viewpoints we have adopted so far (if not the other way around), though it often comes off as controversial.

It reminds me of my old days in high school when I was intrigued by unconventional ways of doing maths. I often had the habit of mixing many different domains into one derivation (which often led to people interpreting it as nonsense). I realize that even if I were "right", whether I was or not depended on a subjective, mutual, and shared convention.


My brain does not like the phrase "social compact" probably because I've heard "social contract" so often, and rarely if ever hear "compact" used in this way. On the other hand I hear "pact" much more often.


It's especially weird because "compact" has a very specific meaning in mathematics.


Construct seems more fitting. I also read the headline and thought it sounded like a mistake.


I read a "social compact" as more of a "social agreement," which gets at the bottom of consensus that he's talking about.

A "social construct" is quite different, that's something entirely constructed by society, like manners or nationalism.


I made the same comment as you in response to parent commenter (now deleted)... But then I looked at some definitions of "social construct" and it seems to have broader semantics than I had thought.


Seems that Lean could be considered as a programming language for mathematical proofs.
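
It is, quite literally. A small Lean 4 sketch (my own, assuming current syntax; `Nat.add_succ` is the core library lemma `a + Nat.succ b = Nat.succ (a + b)`):

```lean
-- A statement is a type; a proof is a program of that type,
-- here "programmed" in tactic mode via `by`.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```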


What is the meaning of the word 'Compact' in the title here?


One metaphor that I have seen to be ubiquitous in nature is Consensus Reality. Fiat currency, quantum systems agreeing on each other's value, the social contract, etc. It's everywhere, and now the social concept of proof.

Here is Daniel Dennett talking about it. https://youtu.be/32u12zjgJww

I like the no-miracles argument in favour of science: if our scientific theories don't track reality, how are they so successful in predicting the future?

Similarly, the social aspect of mathematical proofs can be replaced with requirements like being in congruence with established facts (i.e. explanatory power), predictive power, and efficient representation (i.e. elegant compression), which means faster programs / fewer computational steps required.



Oh huh, I never saw it used like that before


Wasn't Wittgenstein right? Words get meaning through their usage and context.


The best reference for proofs as social constructions is:

DeMillo, Lipton, Perlis

Social Processes and Proofs of Theorems and Programs

https://gwern.net/doc/math/1979-demillo.pdf



I personally view maths as just another language. It’s just a medium to transmit ideas from one person to another with the most efficient symbol set possible. It can’t be anything else - it doesn’t exist without some kind of intelligent life form to decode it.


A couple of observations.

It's odd that the article doesn't spell out Zermelo–Fraenkel, and instead uses the shorthand ZFC axioms. At a fundamental level, axioms are simultaneously solid and slippery. The entire purpose of a set of axioms is to serve as a backstop. Without axioms there are no proofs, and instead you wind up riding infinite regress; turtles all the way down. In short, proofs without axioms are impossible.

But once you have a set of axioms, proofs are precisely that, proofs. The notion that mathematics is a social compact is aggrandizing the role of the axiom, and minimizing the rest of mathematics. It is certainly the case that axioms are in fact a "social compact" in the sense that if we cannot agree that a set of given axioms are true without proof, then we can prove nothing. But the goal of axioms is to make that compact as narrow and clear as possible.

The brief discussion of Gödel's incompleteness theorem elides the fundamental nuance that incompleteness has only been demonstrated in the context of self-reference. There is a longer and better discussion of how this relates to logic, science, and consciousness in Hofstadter's seminal work "Gödel, Escher, Bach".


There have been "natural" theorems which have been shown to be unprovable in Peano Arithmetic.

https://en.wikipedia.org/wiki/Paris%E2%80%93Harrington_theor...


What about Pinot Arithmetic, where you drink wine until you're sure your conjecture is a fact?


Math is the description of how one number relates to another number.

I'm not a math person but this Shinichi Mochizuki sounds like a hack job.


The vast majority of academic math has little or nothing to do with numbers.


Mochizuki is an extreme case, but the problem of "bullshitting" is omnipresent in academia, and particularly (pure) mathematics which is a) perceived as consequential and b) impossible for laypeople to verify.


Professional pure mathematician here, so I'm curious if you have many examples of this bullshitting you seem to think is omnipresent. In my experience, the field is reasonably efficient at policing this, as in the case of the one example (Mochizuki) you cite. In fact, the reason he got any attention at all is because of his initial record of nonbullshit and quite non-trivial work on the anabelian program.


I also have a degree in mathematics and am familiar with how the field operates.

> In fact, the reason he got any attention at all is because of his initial record of nonbullshit

Sure, if he didn't have a reputation to burn he would be dismissed a priori as a crank. No one should have the time to parse and debunk random 500-page manifestos.

My issue with pure math is that, for any given (sub)*field, there are at best a handful of experts, and the probability of that small clique of experts being willing to spend time debunking/criticizing each other is low. Quid-pro-quo back-scratching has far better payout. We have an environment with low accountability, but also low stakes.

Basically, my position is that pure math is chock full of arcane nonsense (bullshit is admittedly a strong term that implies bad faith). But, it also doesn't cause harm, so it's not a big deal. Still, it behooves anyone entering or established in mathematics to understand the reality of the game. You will deal with charlatans and it's acceptable if your own work isn't perfect or even remotely correct or applicable.


I like to view mathematical proofs as code for a virtual machine that is executed in the minds of the readers of the proof.


I wonder what people who are working on foundations think of this.



