Investors immediately lose confidence in the entire space. Anyone without other revenue streams (unlike, say, Google, Apple, Microsoft, or X) probably goes under or sells shortly thereafter. The aforementioned big tech companies pull back investment considerably, because shareholders no longer want to see money spent on AI; they see it as a waste of capital that could be spent on things that actually make money. Those companies go for the simplest, lowest-cost implementations and largely abandon advancement. Billions if not trillions of dollars in data center plans and hardware purchases are cancelled, causing significant pain in the hardware sector. Hardware manufacturers try to pivot to the next thing, but it will be a slow, multi-year process.
I think the way Sam Altman talked about AI, the framing of it, was all cleverly orchestrated: that they had to hold back the real version because it was just too powerful, that they didn't even know how it was working, that it was already doing these incredible things that would change the world, but they can't / won't release it.
I think the only logical conclusion is that many of these tech leaders are liars or have absolutely no idea what they are talking about. Maybe it's somewhere in between.
On here and elsewhere there are people who see AI for what it is and are absolutely blown away by it, and who defend these people without realizing that they are regularly promising investors something far greater than can ever be fulfilled. The idea that LLMs can ever reach any sense of true AGI is delusional.
Regardless of how bullish you are on AI, I think it is fair to say there has been incredible over-investment. AI is never going to do what some investors have imagined it will do based on favorable, and in some cases misleading, best-case demos.
I don't think the current approach leads to general AI in any practical sense, and I don't think LLMs are reliable enough, nor will they be, to integrate across systems and hand decision-making authority to. E.g., "Hey AI, book me a flight to Miami for next Wednesday." You may be able to do something like this, but it would require as many steps as doing it through the airline website, and the chance of it booking an undesirable flight is high versus just doing it yourself. I bring this up because this is always a demo. It was a demo during the voice assistant boom / craze, and it is a demo with these LLM models. The problem is that AI works 80-90 percent of the time for simple tasks and pretty much 50-50 for most complex tasks. That gap will close a bit more, but it needs to be 99.99% reliable to be trusted, and anything much short of that means it is effectively untrustworthy for anything important.
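To see why the gap between 90% and 99.99% matters so much, consider how per-step reliability compounds when an agent has to chain several steps (search, pick a flight, enter payment, confirm). This is a hypothetical back-of-the-envelope sketch with made-up success rates, assuming steps succeed independently, not a measurement of any real system:

```python
# Illustration (hypothetical numbers): per-step reliability compounds
# multiplicatively across a chain of independent steps, so an agent that
# is "pretty good" at each step can still be untrustworthy end to end.

def end_to_end_success(per_step: float, steps: int) -> float:
    """Probability that all `steps` independent steps succeed."""
    return per_step ** steps

for per_step in (0.90, 0.99, 0.9999):
    for steps in (1, 5, 10):
        p = end_to_end_success(per_step, steps)
        print(f"{per_step:.2%} per step x {steps:2d} steps -> {p:6.2%} end to end")
```

Even a 90% per-step success rate collapses to roughly 35% over a ten-step task, which is why anything short of near-perfect per-step reliability makes multi-step automation hard to trust with anything important.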
Many demos have been proven to be faked or cherry-picked to provide a scenario where the AI would succeed under those very specific prompts but fail on any deviation. Just do a search: Google, OpenAI, and many others have faked or exaggerated features and capabilities.
I can tell you that investors believe, from the demos, some of which have been proven to be faked, that this leads to general AI that can do anything, completely autonomously; that it will be able to do what it does for basic coding and press releases for literally everything. It can't, and it won't. And what it can do, it does very expensively.

Look at driverless cars: one of the first big problems we tried to solve with machine learning, and we still can't reliably trust cars to drive themselves without a lot of upfront work for each specific city. Don't get me wrong, where we are with driver assists and robotaxis is incredible, but the investment has been far greater than the return and may always be. And once investors understand that fully, once they realize that the technology IS incredible but the economics will almost never work out, they are gone. Once they are gone, OpenAI and Anthropic, with their multi-billion dollar burn rates, will quickly need to cut costs and / or find a buyer. The only buyers who can afford to run them will be Google, Apple, Amazon, and Microsoft, and they too will be looking to reduce costs and exposure when the bubble bursts, so they will focus on the efficiency of models even at the cost of function and features.
The Middle East conflicts have all been follies because there is no real victory condition without completely seizing territory and claiming it as your own. I'm not saying this would be a good or moral position, but half measures, at best, only kick the can down the road or, at worst, exacerbate the situation.
I was right for all but one. High frequencies give it away. I can tell the difference, but it was certainly close enough that I am not sure I care anymore.
When I was on antidepressants I noticed people were much more likely to approach me and start up a conversation. I think I might have been more at ease and confident, and also more likely to smile and make eye contact with strangers myself. So I think self-confidence and general openness play a big part too.
In my experience, people who are compulsive liars or those who are willing to make large or repeated deceptions for personal gain never change. It is as natural to them as breathing. Some of them I am quite convinced believe their lies, but the net result is the same.
I don't know Trevor Milton. I have never met him. Maybe he isn't a compulsive liar but just got in over his head and was trying to make it work. But I know I would never invest in something he is doing.
Isn't this also in line with recent proclamations by at least two venture capitalists that they do not reflect / introspect / dwell on consequences in any way?
No because you don't understand what Andreessen means by reflection / introspection.
He obviously thinks you should learn from your mistakes and that you must be an avid and quick learner.
But learning skills is not what introspection / dwelling is.
It's spending time on thoughts like "What should I be doing with my life?" or "I can't believe how much of a victim of the system I am."
And he specifically contrasted it against doing stuff.
Writing code >>> walks in the woods.
Obviously reflection is necessary to recognize mistakes of the past. What Andreessen was saying is that you should spend the majority of your time acting, not reflecting. Not that you should spend zero time reflecting.
Did we not understand when he said introspection was something made up in the past few hundred years? I was aghast when he said it, right in front of my copy of Meditations, given how much these guys also obsess over the Roman Empire.
There are those who would argue doing anything is better than doing nothing while you try to figure out what to do. I'm in a position where I know what my passions are and am sufficiently constrained by resources that I can't afford major mistakes; but if I were sufficiently wealthy and indifferent, throwing darts at a board with a couple of ideas on it and just doing whatever has a certain romance to it.
Don't do nothing while figuring out what to do. But you should still spend time thinking about what to do.
If you're wealthy, time and personal energy are the most valuable, irreplaceable resources you now possess. Why squander them on fruitless, random pursuits when you could think strategically and do something that really matters?
I have followed Trevor for many years. And I think anybody who has done the same will tell you, lying is very very central to his inner core. He lies even when he has zero need to. He just cannot help himself. It satisfies some inner need.
I grew up with someone like this. And otherwise he was a nice likable person. And his lies were benign, but he lied almost any time you talked to him. Most people didn’t even notice, but once you did you couldn’t unsee it. A couple of times we both witnessed the same event and he would have a completely different recollection of events that favored him and I think he believed those lies himself. I think for some people it is some kind of defense mechanism.
The greatest failing of modernity is its refusal to accept an uncomfortable reality uncovered by biology and psychology: that certain strongly negative personality traits are built-in pathologies which nature tries out in exploring what is possible. The neural pattern that is "Trevor Milton" is not him without those intensely compulsive lying behaviors.
The social taboos of cultures around the world are fighting a ceaseless battle to rein in these endemic outliers.
I think what people want out of an EV is the Honda Civic and CR-V of EVs: nice, practical, reliable, low-cost EVs that don’t feel cheap or weird. The Tesla Model 3 and Y are so close, but there is a weirdness to them that a lot of consumers aren’t really interested in, and that is before you factor in the polarizing nature of Elon himself.
Maybe we aren’t there yet. The Model 3 and Y are probably still too expensive without incentives.
I think it shifts the skillset of executives a little bit. At publicly traded companies, the quarterly shareholder meetings and the preparation that goes into them become such an outsized portion of the job that being good at that one thing is highly valued. I don’t think moving from quarterly to semi-annual reporting changes that much, besides making the CEOs', CFOs', and some other folks' jobs a bit easier.