Hacker News | new | past | comments | ask | show | jobs | submit | ironborn123's comments

Even a teleoperated version can command a huge market. Think millions of robo butlers operated by gig workers from low income countries.


I don't think so.

Who gives unknown people full access to their homes?


Unknown people who work for a large service company? With panopticon-grade surveillance? I bet people could be convinced.


You mean a subcontractor of a large company, which will most likely share the best bits of the video surveillance material.


Wasn't there a paper a few months back, "Textbooks Are All You Need"? Yes, found it: https://arxiv.org/abs/2306.11644

So search engines in their traditional sense will be obsolete anyway.

1) GPT-4 and other such LLMs will generate textbooks and manuals for every conceivable topic.

2) These textbooks will be 'dehallucinated' and curated by known experts on particular topics, who have reputations to maintain. The experts' names will be advertised by the LLM provider.

3) People will search for stuff by chatting with the LLMs, which will in turn provide citations for the chat output from the curated textbooks.


So any salad that contains chopped cabbage/broccoli, eaten daily, should do the trick?


There are weaker formal systems like Presburger arithmetic (Peano without multiplication) and Skolem arithmetic (Peano without addition) that have been proven to be complete and consistent. Tarski also showed that there are formal systems for the real numbers (hence also geometry) with the same properties. (Although the real numbers include the integers, the integers alone have a lot more structure, so Tarski's result does not carry over to Peano.)

There are also extensions to these (e.g. Presburger arithmetic extended with multiplication by constants) that are also known to be complete and consistent.

These systems do not require any social compact. Any theorem proven in them is absolute truth, although the range of statements these systems can express is limited.

One may require a social compact for Peano, ZFC, and such other powerful formal systems.

Trusting that software implementations like Coq and Lean are bug-free may also require a social compact, if bug-freeness cannot itself be formally proved, although determining this seems like an easier problem (their trusted proof kernels are deliberately small).
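To make the decidability point concrete, here is a minimal sketch in Lean 4: a statement expressible in Presburger arithmetic (addition and multiplication by a constant only), which the `omega` decision procedure for linear integer arithmetic can settle mechanically, with no creativity required. The particular example (every natural number is even or odd) is mine, not from the comment.

```lean
-- Presburger-expressible claim: every natural number is even or odd.
-- We supply x / 2 as the witness; `omega` (a decision procedure for
-- linear arithmetic over Nat/Int) discharges the resulting goal.
example (x : Nat) : ∃ y : Nat, x = 2 * y ∨ x = 2 * y + 1 :=
  ⟨x / 2, by omega⟩
```

Full Peano arithmetic, by contrast, has no such decision procedure, which is exactly where Gödel's incompleteness (and the "social compact") enters.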


While willpower may work for some people, what actually works for me (and, I believe, the majority of people) is self-deception or distraction.

To fall asleep, use white noise, rhythmic music, or a soothing voice.

To climb a mountain, tell yourself your next goal is just to reach that particular rock about 100 metres higher.

In the gym, make a friend and chat and joke with them while doing your exercises

While sprinting, divide 32 by 13 to many decimal places, as Joey from Friends once suggested.


For me, to get good sleep, I need to have a surprising amount of exercise during the day and have tried hard at something so that I'm mentally tired. That plus not eating any calories at all for the 4 hours before bed and I'm pretty guaranteed to get good sleep.


Rather than asserting that current LLMs are at their tail end, or that AI isn't good enough, it is much more instructive to examine the bottlenecks or constraints to further progress, and what would help remove them.

They can largely be divided into three buckets:

1) Compute constraint - Currently, large companies using expensive Nvidia chips do most of the heavy lifting of training good models. Although chips will improve over time, and competition from Intel/AMD will bring down prices, this is a slow process. A faster breakthrough could be training via distributed computing over millions of consumer GPUs. There are already efforts in that direction (e.g. Petals/SWARM parallelism for finetuning/full training, but the Eastern European/Russian teams developing them don't seem to have enough resources).

2) Data constraint - If you rely only on human-generated text data, you will soon exhaust this resource (maybe GPT-4 already has). But the TinyStories dataset generated from GPT-4 shows that if we can have SOTA models generate more data (especially on niche topics that appear less frequently in human-generated data), and have deterministic/AI filters to segregate the good from the bad quality data thus generated, data quantity would no longer be an issue. Also, multimodal data is expected (with the right model architectures) to be more efficient than single-modal data at training world-grokking SOTA models, and here we have massive amounts of online video data to tap into.

3) Architectural knowledge constraint - This may be the most difficult of all: figuring out what the next big scalable architecture after Transformers is. Either we keep trying newer ideas (like the Stanford Hazy Research group does) and hope something sticks, or we get SOTA models a few years down the line to do this ideation for us.


One (quite convincing) theory is that anything that can be achieved by a carbon-based neural network (e.g. the human brain) can also be achieved by a silicon-based neural network. The hardware may change, but the expressiveness of the software it can run shouldn't be affected, unless there is a fundamental chemistry constraint.

Since human brains during dreams (lucid or otherwise) can generate coherent scenes and transform individual elements within a scene, diffusion-based models running on CPUs/GPUs should eventually be able to do the same.


> one (quite convincing) theory is that anything that can be achieved by a carbon-based neural network (eg. human brain) can also be achieved by a silicon-based neural network.

That the human brain is exactly equivalent in function to our current model of a neural network is a huge, unproven hypothesis.


How can a declining population selling mostly commoditized goods support ever-increasing property prices, especially when property is already overleveraged?

This crash was always on the cards. Just a matter of when, and the when may have finally arrived.


> How can a declining population selling mostly commoditized goods support ever-increasing property prices

It's pretty simple.

If you believed China was going to keep growing at 10% per year indefinitely - then the property prices were a bargain.

I think you needed to be mathematically challenged to believe that, but a lot of people did, or at least thought everyone else just assumed it would happen and they could get in and out before anything bad happened.

You also have to realize it wasn't foreign investment, and the Chinese are astoundingly nationalistic.


It was more like there was no alternative. Capital controls prevent regular Chinese investors (those without special Party connections) from taking their money out of the country. The domestic stock market is a joke. Banks pay low interest rates. So, the only thing they can do with their money is speculate in residential real estate.


The stock market in China is likely to look better than the property market when the latter implodes on leverage, and definitely better than bonds and savings.

People didn't want to miss out. That's it.

You're not going to buy a house on 3:1 leverage if you think it's overpriced by 30% just because you think the stock market is overpriced by 50%. You'd just keep your cash in savings.

The problem is - they believed it would keep going up - because it had been for almost everyone's entire adult life.


Exactly this.

Consequence of capital controls.


I get the feeling the article preaches to the choir.

The serious sources have always portrayed NIF's work as technical achievements. But they are read mostly by scientist and engineer types.

Mass media which hypes things is read, well, by the masses, who don't have the patience or inclination to delve into technical details.

This dichotomy will always exist. I remember once reading a Chekhov story where two intellectuals discuss how the townspeople are more interested in silly affairs and scandals than in recognizing intellectual achievements.


I like the fact that Andrew is very transparent and honest about his experiments.

Although he didn't explain 1) how the Fe impurities got localized into certain regions of the overall batch, or 2) whether the impurities in the magnetically susceptible shards were quantitatively sufficient to cause the half-levitations he demonstrated.

Hope he keeps running and reporting some side experiments, instead of completely going back to his day job.

