I have never consciously wrapped Axios or fetch, but a cursory search suggests that there was a time when it was impossible for either to force TLS 1.3. It's easy to imagine alternate implementations exist for frivolous reasons, but sometimes there are hard security or performance requirements that force you into them.
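For what it's worth, in modern Node you can force this by handing Axios a custom https.Agent; a minimal sketch, assuming a Node version with the minVersion/maxVersion TLS options and a reasonably current Axios (the URL is a placeholder):

    // Restrict connections to TLS 1.3 only; peers that can't negotiate it
    // fail the handshake instead of silently downgrading.
    import https from "https";
    import axios from "axios";

    const tls13Agent = new https.Agent({
      minVersion: "TLSv1.3",
      maxVersion: "TLSv1.3",
    });

    async function check(): Promise<void> {
      const res = await axios.get("https://example.com/health", {
        httpsAgent: tls13Agent, // Axios hands this agent to Node's https module
      });
      console.log(res.status);
    }

    check().catch(console.error);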
AI was trained on Axios wrappers, so it's just going to be wrappers all the way down. Look inside any company "API Client" and it's just a branded wrapper around Axios.
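The pattern is nearly always the same handful of lines; a hypothetical sketch (the client name, base URL, and header are all invented):

    // The typical "company API client": a branded layer over axios.create.
    import axios, { AxiosInstance } from "axios";

    export function createAcmeClient(apiKey: string): AxiosInstance {
      return axios.create({
        baseURL: "https://api.acme.example", // placeholder base URL
        timeout: 10_000,
        headers: { Authorization: `Bearer ${apiKey}` },
      });
    }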
I really liked those books, for all the creative ideas... it's fine that they don't all work, but the Dark Forest has to be among the worst of them. It's unfortunate that it's the one that got highlighted.
Some rebuttals, going point by point...
1. you can know the intentions of other entities by observing and communicating with them.
2. technology explosions, like pretty much all exponential phenomena, are self-limiting. They necessarily consume the medium that makes them possible.
3. and 4. civilizations aren't necessarily sentient (ours certainly isn't) and don't have agency, much less goals. Individuals have goals, and some may work for the survival of the civilization they belong to. But others may decide they can profit if they work with the aliens.
4. Multiple civilizations may well come into competition over resources, but that's more of an argument about why the forest would not be dark.
Practically speaking, a civilization that opts to focus on massive, vastly expensive efforts to find and exterminate far-flung civilizations because they may become rivals in the future may easily be outcompeted by civilizations that learn to communicate and work with the ones they encounter.
1. You are assuming a lot, in the sense that you assume the presence of intention -- not something guaranteed to be a feature of an alien civilization, which is, well, alien. People think that anthropocentrism only applies to body shape and having legs, because in popular culture it tends to express itself as robots on legs and human-shaped aliens.
And the same point applies to communication; just assuming you could is a big leap.
2. Bold assumption that they are self-limiting. I think the real question is what, exactly, tends to limit them. I think the answer tends to be resources, which is the foundation of the dark forest theory to begin with.
What I am saying is that it is not the rebuttal you think it is.
3. :D yes
4. You may again be imposing a human perspective on a scale that goes a little bit beyond it.
I will end with a... semi-optimistic note. I am not sure the dark forest theory is valid. We are speculating mostly based on human tendencies. By the same token, I posit that we are about as likely to be turned into an art exhibit by a passing alien artist, not unlike the ants that have had molten metal poured into their nests [1].
You can observe patterns of behavior, develop theories and understanding, attempt/experiment with interactions, and refine based on the results. That's communication (and it doesn't assume anything about the other alien civilization).
Now, civilizations may be more or less willing to do this and more or less successful at it, but that's not the same thing as saying no one will dare try, as the dark forest theory wants.
(Personally, I think civilizations that are better at this will outcompete ones that are worse or refuse, though that's just my own opinion.)
> Bold assumption that they are self limiting.
Name the exponential phenomena that aren't self-limiting -- that don't consume the medium which allows them to exist in the first place.
> I think the answer tends to be resources, which is the foundation of dark forest argument theory to begin with.
Well, yes. One of the reasons the dark forest theory isn't coherent.
> Any real alien reasons would be alien to us.
Yes, but this doesn't back up the dark forest theory. It also doesn't mean aliens cannot be understood at any level or interacted with in any way.
(The dark forest theory makes very strong claims about the logic, intentions, strategies, and resource use/governance of alien civilizations, BTW, and requires these to be uniform amongst them... even though the one civilization we actually know of doesn't adhere to them.)
Cleansing is basically free for advanced civilizations in the books. The alien (Singer) who wipes out Sol in the third book doesn't even have to answer any questions from their manager about doing it; that's how cheap it is. While it's true that individuals desire cooperation, I think you can assume that civilizations will keep a lid on people who would completely destroy them (or, failing that, be destroyed).

It seems like expansion of civilizations is not really an option. Singer's civilization has only one colony world, and they're already in some kind of extremely destructive war with it. Presumably the idea is that once your own people expand multiple light-years away, all the logic about aliens applies to them too. On the other hand, if you can't expand, why not run scorched earth on the galaxy?
There definitely is some weirdness about observation and communication: Singer's civilization can wipe out Sol with a flick of the wrist, but while they can observe the number and type of Earth's planets, that seems to be their limit. The sophon enables FTL communication and observation between Earth and Trisolaris, but the more advanced civilizations don't seem to make use of them? You could be absolutely certain of someone's threat level and intentions with one. Maybe something about the technology can be traced back to its origin system, so they are too risky to use.
I think it's all reasonable in the books, especially as a self-reinforcing state. It definitely does require a highly specific set of universal laws / technological constraints, though. If the FTL drive didn't also broadcast your position to the whole universe, for example, it would crack everything wide open.
> unless you believe in magic, it's only a matter of time until we reach the point at which machine intelligence is indistinguishable from human intelligence.
I'm sure it will be possible, but it may well be very expensive. If it is, why would anyone spend the resources?
AI evolution will certainly follow the money, which is not necessarily the same as the path to AGI.
> or the boilerplate, libraries, build-tools, and refactoring
If your dev group is spending 90% of their time on these... well, you'd probably be right to fire someone. Not most of the developers but whoever put in place a system where so much time is spent on overhead/retrograde activities.
Something that's getting lost in the new, low cost of generating code is that code is a burden, not an asset. There's an ongoing maintenance and complexity cost. LLMs lower the maintenance cost, but if you're generating 10x the code you aren't getting ahead. Meanwhile, the cost of unmanaged complexity goes up exponentially. LLMs or no, you hit a wall if you don't manage it well.
Even the most enthusiastic AI boosters I've read don't seem to agree with this. From what I can tell, LLMs are still mostly useful for building greenfield projects, and they are weak at maintaining brownfield projects.
From my experience, greenfield/brownfield is not the best dichotomy here. I have observed the same tooling generate meaningless slop on a greenfield project and a 10k LoC change (leading to an outage) on an existing project in one person's hands, while in another's it built a fairly complex new project and fixed a long-standing (years-old) bug with a two-line patch.
And I have more examples where I, personally, was on both sides of that fence: the difference was defined by my level of understanding of the same problem, not by the tooling.
> Not most of the developers but whoever put in place a system where so much time is spent on overhead/retrograde activities.
Dude, that's everybody in charge. You're young, you build a system, you mold it to shifting customer needs, you strike gold, you assume greater responsibility.
You hire lots of people. A few years go by. Why can't these people get shit done? When I was in their shoes, I was flying, I did what was needed.
Maybe we hired second rate talent. Maybe they're slacking. Maybe we should lay them off, or squeeze them harder, or pray AI will magically improve the outcome.
> its going to take Microsoft a long time to row back
They won't actually move back to a user-focused OS at all. It's nice for them to declare they will, but their culture and business pressures will prevent any kind of sustained effort. (Their users aren't their customers.)
That’s corporate-speak. They say "improve," but it’s perfectly well understood internally to mean "drive costs down."
There’s no problem with doing that at the expense of the customer as long as you can get away with it. (Seems like here they were going for a boiling-the-frog approach but moved too quickly.)
It seems like some companies may be unaware that not only are they interviewing prospective employees, but candidates are also interviewing prospective employers.
I guess if your goal is just to hire desperate people who currently have no better choice (and who will leave as soon as they do), then you can flaunt how little you care about the candidates or the process. But if you're hoping for something better than that, I wouldn't go out of my way to run off as many candidates as possible.
I mean, this is probably a time-saving way to filter out a flood of poor candidates, but you're also going to be filtering out good candidates at a very high rate.