Hacker News | hackinthebochs's comments

I've always found it strange how Americans like to validate their ideals using their kids as vehicles. Instead of teaching kids how to be successful in a less-than-ideal world, we teach them our ideal view of the world. Like teaching kids that violence is never the answer, instead of that some situations do call for it. We raise kids for a world that doesn't exist. It's up to the kid/adult to unlearn those obviously bogus ideals after they make contact with the world. It's just odd how we're so practiced at setting up our children for less success in the real world.

How did you arrive at this being uniquely American? I would say it's Western society more generally.

I mainly said America because I only feel qualified to speak on America. But I do think there is something uniquely American about seeing the march of "progress" as an ultimate ideal and stagnation in any form as a defeat. Economic and social progress is basically a founding ideal of American society and is a major driver of our success over the centuries. It permeates our culture in so many ways, e.g. the idea that your kids should have it better than you. So shaping the next generation by way of shaping the views of your kids, despite the potential mismatch between the ideal and the reality, is seen as just a part of the march of progress.

Yes, let me send a picture of my ID to every app on the internet. That's so much better than having the device I own attest to my age anonymously.

They want it because it absolves them of responsibility for what their app does to kids. They can then just point to the existence of an already working mechanism for parents to intervene. The alternative would be for each app to implement stringent age verification or redesign itself to avoid addictive patterns. Neither option is good for their earnings.

The internet and the surrounding context changed so fast that it made little sense to cling to old email addresses made in the old context. Gmail represented the 'new internet' and old patterns became obsolete (less subversive, more mainstream/corporate). When there's a seismic shift in usage patterns that's when all bets are off regarding where everyone lands. Being the first mover means little here. If the way people interacted with AI underwent a massive shift, OpenAI would likely get left behind. The only safe bet is to invent your own killer.


What are neurosymbolic systems supposed to bring to the table that LLMs can't in principle? A symbol is just a vehicle with a fixed semantics in some context. Embedding vectors of LLMs are just that.


Pre-programmed, hard and fast rules for manipulating those symbols, that can automatically be chained together according to other preset rules. This makes it reliable and observable. Think Datalog.

IMO, symbolic AI is way too brittle and case-by-case to drive useful AI, but as a memory and reasoning system for more dynamic and flexible LLMs to call out to, it's a good idea.
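To make the "preset rules, chained automatically" idea concrete, here is a minimal forward-chaining sketch in Python (a hand-rolled illustration, not a real Datalog engine; the `parent`/`ancestor` encoding is invented for the example). Each derivation step is a fixed rule firing on explicit symbols, which is what makes this kind of system reliable and observable:

```python
def forward_chain(facts, rules):
    """Repeatedly apply each rule to the current fact set until no rule
    derives anything new (a fixed point). Every derived fact is traceable
    to a rule firing on explicit symbols."""
    facts = set(facts)
    while True:
        derived = set()
        for rule in rules:
            derived |= rule(facts) - facts
        if not derived:
            return facts
        facts |= derived

# Datalog's classic example:
#   ancestor(X, Z) :- parent(X, Z).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
def base_case(facts):
    return {("ancestor", x, z) for (p, x, z) in facts if p == "parent"}

def transitive(facts):
    return {("ancestor", x, z)
            for (p, x, y) in facts if p == "parent"
            for (a, yy, z) in facts if a == "ancestor" and yy == y}

facts = {("parent", "ada", "ben"), ("parent", "ben", "cyd")}
result = forward_chain(facts, [base_case, transitive])
# ("ancestor", "ada", "cyd") is derived via the transitive rule
```

The brittleness shows up in how every relation must be spelled out by hand, but the same fixed-point machinery is exactly the kind of auditable memory/reasoning backend an LLM could call out to.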


Sure, reliability is a problem for the current state of LLMs. But I see no reason to think that's an in-principle limitation.


There are so many papers now showing that LLM "reasoning" is fragile and based on pattern-matching heuristics that I think it's worth considering that, while it may not be an in-principle limitation (in the sense that if you gave an autoregressive predictor infinite data and compute, it'd have to learn to simulate the universe to predict perfectly), in practice we're not going to build Laplace's LLM, and we might need a more direct architecture as a shortcut!


Henry Molaison was exactly this.


Why should odd failure modes invalidate the claim of reasoning or intelligence in LLMs? Humans also have odd failure modes, in some ways very similar to LLMs. Normal functioning humans make assumptions, lose track of context, or just outright get things wrong. And then there are people with rare neurological disorders like somatoparaphrenia, a disorder where people deny ownership of a limb and will confabulate wild explanations for it when prompted. Humans are prone to the very same kind of wild confabulation from impaired self-awareness that plagues LLMs.

Rather than a denial of intelligence, to me these failure modes raise the credence that LLMs are really onto something.


Because it is not an ‘odd failure mode’, it is a consequence of how it generates text without understanding.


This is called begging the question. I don't know why I keep expecting more from HN.


Load extensions in developer mode so they can't silently install malware on you


The dimension of this issue that never gets air time is that we've made having kids almost completely intentional. The richer a country becomes, the more intentional having kids becomes. The dynamic we see with rich countries is that as having kids becomes more intentional, there's also an increase in the reasons why people would choose to delay or forgo having kids.


I think you're spot on. And all of the various theories and analysis are pretty laughable if one has any sort of historical context.

- "People don't have kids because they're afraid of climate change": Wildly overestimates the number of people who figure climate change into their life plans, and it discounts the numerous catastrophes people have feared and experienced in the past while continuing to have high birth rates.

- "People don't have kids because everything is too expensive": My father-in-law has 11 siblings and they grew up in a 2 bedroom, 1 bathroom home. His story is not unique.

"having kids is almost completely intentional"... in countries where this is the case due to birth control, abortion, feminism (and other cultural shifts), the birth rate plummets.

Delving into the reasons why people, consistently across races, religions, cultural backgrounds, etc., opt to have fewer or no children when given the choice would be a book-length endeavor, but to me it really is that simple. There are numerous reasons someone wouldn't want to have more children, and they tend to find one of them when given the choice.


Yes, this absolutely appears to be the main reason, both in practical terms through birth control and in cultural terms, in that it's now seen as a choice rather than as an obvious thing you do. To change this course, we probably need to change the culture first so that a birth control ban will be supported. That's currently not looking likely, so population collapse it is.


> To change this course, we probably need to change the culture first so that a birth control ban will be supported.

This is the most casually psychotic thing I've read on HN.

You're advocating forcing women to bear children they don't want and taking away control over their own body.


US voters chose to put people with that view in charge of the federal US government. 1990s and 2000s me would have never believed it.


We need to strike the right balance between personal benefit/freedom and societal benefit.

Clearly we are way too on the individualistic side of the pendulum now.


Why don't you have a few extra children for societal benefit, being aware of the importance thereof?


Oh I'm on it don't you worry son


Proofs or didn't happen.


That's far from clear.


How is forcing people to have children they don’t want a good outcome?

How about we set up society in such a way that choosing to have children is a more appealing option.


> birth control ban will be supported

Wtf... totally the wrong tool to change the calculus of intentionally having children.


A model of a model of X is a model of X, albeit extra lossy.


An LLM has an internal linguistic model (i.e. it knows token patterns), and that linguistic model models humans' linguistic models (a stream of tokens) of their actual world models (which involve far, far more than linguistics and tokens, such as logical relations beyond mere semantic relations, sensory representations like imagery and sounds, and, yes, words and concepts).

So LLMs are linguistic (token pattern) models of linguistic models (streams of tokens) describing world models (more than tokens).

It thus does not in fact follow that LLMs model the world (as they are missing everything that is not encoded in linguistic semantics).


At this point, anyone claiming that LLMs are "just" language models isn't arguing in good faith. LLMs are a general-purpose computing paradigm. LLMs are circuit builders; the converged parameters define pathways through the architecture that pick out specific programs. Or as Karpathy puts it, LLMs are a differentiable computer[1]. Training LLMs discovers programs that reproduce the input sequence well. Tokens can represent anything, not just words. Roughly the same architecture can generate passable images, music, or even video.

[1] https://x.com/karpathy/status/1582807367988654081


If it's an LLM it's a (large) language model. If you use ideas from LLM architecture in other non-language models, they are not language models.

But it is extremely silly to say that "large language models are language models" is a bad faith argument.


No, it's extremely silly to use the incidental name of a thing as an argument for the limits of its relevance. LLMs were designed to model language, but that does not determine the range of their applicability, or even the class of problems they are most suited for. It turns out that LLMs are a general computing architecture. What they were originally designed for is incidental. Any argument that starts off "but they are language models" is specious out of the gate.


Sorry, but using "LLM" when you mean "AI" is a basic failure to understand simple definitions, and also is ignoring the meat of the blog post and much of the discussion here (which is that LLMs are limited by virtue of being only / mostly trained on language).

Everything you are saying is either incoherent because you actually mean "AI" or "transformer", or is just plain wrong, since not all problems can be solved using, e.g., single-channel, recursively applied transformers, as I mention elsewhere here: https://news.ycombinator.com/item?id=46948612. The design of LLMs absolutely determines the range of their applicability, and the class of problems they are most suited for. This isn't even a controversial take; lots of influencers and certainly most serious researchers recognize the fundamental limitations of the LLM approach to AI.

You literally have no idea what you are talking about and clearly do not read or understand any actual papers where these models are developed, and are just repeating simplistic metaphors from blog posts, and buying into marketing.


In this case this is not so. The primary model is not a model at all, and the surrogate has bias added to it. It's also missing any way to actually check the internal consistency of statements or otherwise combine information from its corpus, so it fails as a world model.

