This article gets history approximately wrong. Yes, Cyc failed at its original goals, but it's still going, kind of, and as far as I know was never intended for image recognition. Fuzzy logic isn't used much in science because probability and statistics are the more usual ways to handle uncertainty.
Given its inaccuracy so far, I'm not sure the article is worth finishing?
> attempting to assemble a comprehensive ontology and knowledge base that spans the basic concepts and "rules of thumb" about how the world works (think common sense knowledge but focusing more on things that rarely get written down or said, in contrast with facts one might find somewhere on the internet or retrieve via a search engine or Wikipedia), with the goal of enabling AI applications to perform human-like reasoning and be less "brittle" when confronted with novel situations that were not preconceived.
In a talk in the 1980s, Douglas Lenat explained that his AI efforts had reached a point where, metaphorically, there was a mattress on the road blocking him, and he decided to do something about it. As the Wikipedia article (https://en.wikipedia.org/wiki/Cyc) alludes to in passing, it started out as an effort to capture the information needed to understand all the entries in a two-volume encyclopedia; "Cyc" itself comes from "encyclopedia".
Image recognition? Well, I suppose once you identify an object, Cyc might be useful to reason about it, but....
It isn't - it is a sophistic and sophomoric press piece, probably out for personal advancement. It attempts to scapegoat tech for not obeying a set of demands for "ethics" that is theoretically and practically impossible. All while wrapped in the overwrought vapidness of a stereotypical arrogant philosophy major speaking about something they know nothing about.
I remember reading about fuzzy logic in a discrete math book. It was exactly the same as basic probability theory. I think the article is overblowing it a little...
This article goes off the rails when it conflates fuzzy set theory with connectionism, and then goes over the cliff when it starts talking about ethics without reference to any of the underlying philosophical theories of ethical reasoning under risk and uncertainty; at least a nod to Nozick would have been appreciated. (The idea that probabilistic set theory somehow destroys ethical certainty is nonsense that doesn't need to be addressed.) The author, I think, may be a bit out of his field; his PhD is from UCLA's college of library science, so he's opining on technology and philosophy without coming out of either discipline.
There are crisp sets, such as the set of French people. People are either French or not French. It doesn't make sense to say that someone's not very French, for example. But it does make sense to say that there's an 80% chance that someone's French, given other information about them.
The set of tall people is a fuzzy set. It does make sense to say that someone's not very tall. Tallness is a mapping from height to degree of membership of the set of tall people. So, for example, tallness(1.6m) = 0.0, and tallness(2.0m) = 1.0. To find out whether someone's quite tall, you could use the square root of tallness; to find out whether someone's very tall, you could use the square; and to find out whether someone's not tall, you could subtract their tallness from 1.0. And you might want to put a value on whether someone's quite tall but not very rich, for example, or make deductions given this.
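The mapping above can be sketched in a few lines. This is a minimal illustration, assuming a linear ramp between 1.6 m and 2.0 m (the actual shape of a membership function is a modelling choice):

```python
def tallness(height_m):
    """Degree of membership in the fuzzy set of tall people.
    Hypothetical linear ramp: 0.0 at 1.6 m and below, 1.0 at 2.0 m and above."""
    return min(1.0, max(0.0, (height_m - 1.6) / 0.4))

def very(mu):  return mu ** 2      # hedge: "very" concentrates membership
def quite(mu): return mu ** 0.5    # hedge: "quite" dilates membership
def not_(mu):  return 1.0 - mu     # fuzzy complement

mu = tallness(1.8)          # halfway up the ramp, ≈ 0.5
print(very(mu), not_(mu))   # ≈ 0.25 and ≈ 0.5
```

So a 1.8 m person is "tall" to degree 0.5, "very tall" only to degree 0.25, and "not tall" to degree 0.5, which matches the intuition that hedges sharpen or soften a vague predicate.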
Maybe at some deeper level fuzzy logic does reduce to probability theory, but the idea is quite different. It's about vagueness rather than about uncertainty.
I've heard it said that probability is quantifying the uncertainty you have about an outcome, fuzziness is knowing the outcome and quantifying your uncertainty about what to call it.
Maybe that example could be reduced to probability by saying you had an ensemble of different classifiers, each with its own hard cut. Then, you would ask, "what is the probability that a randomly selected judge would call Sally tall?"
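That ensemble can be simulated directly. A minimal sketch, assuming (purely for illustration) that each judge's hard cutoff for "tall" is drawn uniformly between 1.6 m and 2.0 m:

```python
import random
random.seed(0)

# Each hypothetical judge applies a crisp cutoff; the cutoffs vary
# across judges (assumed uniform on [1.6, 2.0] metres).
cutoffs = [random.uniform(1.6, 2.0) for _ in range(100_000)]

sally = 1.8  # Sally's height in metres
p = sum(1 for c in cutoffs if sally >= c) / len(cutoffs)
print(round(p, 2))  # close to 0.5
```

With a uniform distribution of cutoffs, the probability that a random judge calls Sally tall reproduces a linear membership curve, which is exactly the point: the number comes out the same, but its interpretation has shifted from degree-of-tallness to probability-of-opinion.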
This doesn't really reduce it though, it just changes the problem from degree-of-tallness to probability-of-opinion.
If I round up a million people and survey them about whether -1, 0 and 1 are natural numbers, the probabilities of their opinions don't change whether -1, 0 or 1 actually are natural numbers.
The probabilities of judgement might be 0.1, 0.55 and 0.92, with confidence intervals. But each of those probabilities is itself a crisp number. In fuzzy set terms, -1, 0 and 1 belong to the set of natural numbers with memberships 0.0, 0.0 and 1.0 (taking the naturals to start at 1).
The useful concept that fuzzy sets brought to the table was being able to reason formally over vague values. Classical logic operates on crisp sets: all men are mortal, Socrates is a man, therefore Socrates is mortal. Fuzzy sets allow you to retain all the tools of logic over a wider range of statements.
This is widespread in control systems, where being able to reason formally about vague concepts like "too fast" is very helpful.
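A control rule of that kind is easy to sketch. This is a toy illustration, with made-up membership ramps and Zadeh's standard operators (AND = min, OR = max, NOT = 1 - x):

```python
def ramp(x, lo, hi):
    """0.0 below lo, 1.0 above hi, linear in between (hypothetical shapes)."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

# Made-up membership functions for a cruise-control-style rule.
def too_fast(speed_kmh):  return ramp(speed_kmh, 80.0, 120.0)
def close_ahead(dist_m):  return 1.0 - ramp(dist_m, 10.0, 50.0)

# Rule: IF too fast AND obstacle close ahead THEN brake.
# Firing strength uses fuzzy AND (min).
brake = min(too_fast(100.0), close_ahead(20.0))
print(brake)  # min(0.5, 0.75) = 0.5
```

The output of such rules is typically a graded actuation (here, "brake with strength 0.5") rather than a crisp on/off decision, which is why fuzzy controllers tend to behave smoothly near thresholds.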
It's also common in caselaw, which is absolutely jampacked with fuzzy sets. "The purchaser at arm's length without notice". "The reasonable person, similarly circumstanced". "A duty of care". Different sets of facts in a case are ultimately resolved into binding rulings by a judge, but the argument itself has to follow logically from facts, legislation and precedent.
The law is also rightly aware of probabilities, especially in civil cases. "The balance of evidence" is sometimes called "the balance of probabilities". In criminal trials, "beyond reasonable doubt" is mostly about deciding on what follows from the probabilities of the facts presented.
'French' is more a culture than a genetic diaspora (i.e. though most French people might have some genetic 'closeness' that distinguishes them from other groups, one does not need that genetic approximation to be French)
And it's perfectly reasonable to say someone is 'not very French'.
I lived in France, I speak French fluently, and could 'get by' as a French person, but it would be fair to say that I am 'not very French'.
I think Fuzzy logic, as a perspective, maybe should be appreciated.
I can be 'a little this and a little that, and mostly some other thing' - it makes sense to describe many things as such.
Though pragmatically, when it comes down to numbers ... it may simply look a lot like probability.
Or perhaps, to read the previous post more charitably, you define being French as having French nationality, which is boolean and comes from a single authority (France!).
I'm really not sure about using fuzzy logic or fuzzy sets for AI, but I did use it for control and, of all things, some social applications. It worked really well and was very simple and understandable. The article does seem to be a bit overblown and more concerned with the word "fuzzy" than with what was in the fuzzy logic papers.
> I remember reading about fuzzy logic in a discrete math book. It was exactly the same as basic probability theory.
When is a pile of sand a pile of sand? When I remove a grain of sand, is it still a pile of sand? If I remove all the grains, is it still a pile of sand? If I pour millions of tonnes of sand onto it, is it still a pile of sand, or has it become something else?
Probability does not really answer these questions.
It might be counterargued that I can still say the 'probability that it is a pile of sand' and then we're back to probability theory.
Except we're not: I've just buried the problem a bit. Saying 'probability that it is a pile of sand' still requires a function that can distinguish whether it is a pile of sand in order to assess the probability.
I am given to understand that probability and fuzzy sets are part of a more general mathematics of uncertainty and that their relation to each other is hotly debated.
As Cyc wasn't designed to recognize objects in images, or play chess or go, it's hardly surprising that it proved incapable of those tasks.