The forever-backwards-compatible promise of C++ was a tremendous design mistake that has resulted in mindshare death as of 2026. It might suck to have to modify your code to keep it working, but it's the right long term approach.
Mindshare death is a very large overstatement given the massive amount of legacy C++ out there that will be maintained by poor souls for years to come. But you are right, there used to be a great language hiding within C++ if the committee had ever dared to break backwards compat. But even if they did it now it would be too late and they'd just end up with a worse Rust or Zig.
The biggest problem with C++ is that while everyone agrees there is a great language hiding in it, everyone also has a remarkably different idea of what that great language actually is.
I don't agree there's a great language hiding in C++. My high level objections would be that the type system is garbage and the syntax is terrible, so you'd need a different type system and syntax and that's nothing close to C++ after the changes.
After many years of insisting that "dialects" of C++ are a terrible idea (despite the reality that most C++ users already have a specific dialect they use), Bjarne Stroustrup has endorsed essentially the same thing, but as "profiles" to address safety issues. So for people who think there is a "great language" in there, perhaps in C++29 or C++32 you will be able to find out for yourselves that you're wrong.
The C++ standards committee’s antiquated reliance on compiler “vendors” holds it back. They should adopt maintenance of clang and bless it as the reference compiler.
Rust existed nearly entirely on paper until 2009, when Mozilla started funding researchers to work on it full-time. It wasn't announced in any sort of official capacity until 2010, and had no official numbered release until 2012. It was less than three and a half years between 0.1 and 1.0, and in that time, hard as it is to believe, it underwent more overall change than Zig has.
- Cranelift applies fewer optimizations in exchange for faster compile times, because it was developed to compile WASM (wasmtime), but it turns out that's good enough for Rust debug builds.
- Cranelift does not support as wide a range of platforms (AFAIK just x86_64 and some ARM targets)
I am no expert in compilers, but in the database space there are multiple prominent projects which are gaining traction and creating an ecosystem, making Rust a strong contender against C++'s dominance.
Same reason Android and Chrome and git and Linux weren't written in Rust when they started. Rust didn't exist. All of these projects integrate Rust now, after being single language projects for the longest time.
It's notable that the projects you mentioned mostly don't need to deal with adversarial user input, while the projects I mentioned do. That's one area that Rust shines in.
Rust will never be in Android user space, because it's not competing with Kotlin. Kotlin is already excellent there. Rust will replace the parts of Android written in C++ gradually. That was always the plan. It feels weird and cope-y to move the goalposts to say it's not a big deal unless Rust also replaces Kotlin.
Chrome only needs to replace the parts of their codebase that handle untrusted input with Rust to get substantial benefits. Like codec parsers. They don't need to rewrite everything, just the parts that need rewriting. The parts that are impossible to get right in C++, to the point where Chrome spins up separate processes to run that code.
Rust is the future for Android, and it will become an important part of Chrome and Linux and git (starting with 3.0). That's just the way it is.
Looking at LLVM build times, I seriously believe that C would have been the better choice :/ (it wouldn't be a problem if LLVM weren't the base for so many other projects)
Same for the Metal shading language. C++ adds exactly nothing useful to a shading language over a C dialect that's extended with vector and matrix math types (at least they didn't pick ObjC or Swift though).
...that's just because of the traditional-game-dev Stockholm syndrome towards C++ (but not too much C++ please!).
> Khronos future for Vulkan
As far as I'm aware Khronos is not planning to move the Vulkan API to C++ - and the 'modern C++' sample code which adds a C++ RAII wrapper on top of the Vulkan C API does more harm than good (especially since lifetime management for Vulkan objects is a bit more involved than just adding a class wrapper with a destructor).
> now they make use of C++20, including modules if desired.
It's in line with many other shitty design decisions coming out of Khronos, so I'm not even surprised ;)
IMHO it's a pretty big problem when the spec is on an entirely different abstraction level than the sample code (those new samples also move significant code into 'helper classes', which means all the interesting stuff is hidden away).
Hilariously, they broke this compatibility. std::auto_ptr was an abomination, but removing it from the language was needless and undermined the long term stability that differentiates C++ from upstarts.
Tainted? Because they refused to change a contract that was already signed to allow for surveillance of Americans and fully autonomous kill bots? I guarantee, if a sane and non-fascist administration ever takes power again Anthropic will be forgiven. Being attacked by this administration is an honor. OpenAI on the other hand…
can anyone compare the $200/mo codex usage limits with the $200/mo claude usage limits? It’s extremely difficult to get a feel for whether switching between the two is going to result in hitting limits more or less often, and it’s difficult to find discussion online about this.
In practice, if I buy $200/mo codex, can I basically run 3 codex instances simultaneously in tmux, like I can with claude code pro max, all day every day, without hitting limits?
My own experience is that I get far, far more usage (and better quality code, too) from Codex. I downgraded my Claude Max to Claude Pro (the $20 plan) and now use Codex on the Pro plan exclusively for everything.
I haven't tried the $200 plans, but I have Claude and Codex at $20 and I feel like I get a lot more out of Codex before hitting the limits. My tracker certainly shows higher tokens for Codex. I've seen others say the same.
Sadly comment ratings are not visible on HN, so the only way to corroborate is to write it explicitly: the Codex $20 plan includes significantly more work done and is subjectively smarter.
I've only run into the codex $20 limit once with my hobby project. With my Claude ~$20 plan, I hit limits after about 3(!) rather trivial prompts to Opus :/
This is marketing. The same way Apple cares about your privacy so long as they can wall you in their garden.
Not a value judgment, just saying that the CEO of a company making a statement isn't worth anything. See Googles "don't be evil" ethos that lasted as long as it was corporately useful.
If Anthropic can lure engineers with virtue signaling, good on them. They were also the same ones to say "don't accelerate" and "who would give these models access to the internet", etc etc.
"Our models will take everyone's jobs tomorrow and they're so dangerous they shouldn't be exported". Again all investor speak.
> researchers are more valuable than any military spend or any datacenter. It does not matter how many hundreds of billions you have - if the 500-1000 top researchers don't want to work for you, you're fucked; and if they do, you will win because these are the people that come up with the step-change improvements in capability.
This is a massive cope imo. The reason that the AI industry is so incestuous is just because there are only a handful of frontier labs with the compute/capital to run large training clusters.
Most of the improvements that we’ve seen in the past 3 years are due to significantly better hardware and software, just boring and straightforward engineering work, not brilliant model architecture improvements. We are running transformers from 2017. The brilliant researchers at the frontier labs have not produced a successor architecture in nearly a decade of trying. That’s not what winning on research looks like.
Have there been some step-change improvements? Sure. But by far the biggest improvement can be attributed to training bigger models on more badass hardware, and hardware availability to serve it cheaply. To act like the DoD isn’t going to be able to stand up pytorch or vllm and get a decent result is hilarious: the reason you use slurm and MPI and openshmem is because national labs and DoD were using it first. NCCL is just gpu accelerated scope-reduced MPI. nvshmem is just gpu accelerated scope-reduced openshmem.
If anything, the DoD doesn't have the inference throughput requirements that the unicorns have and might just be able to immediately outperform them by training a massive dense model without optimizing for time to first token or throughput. They don't have to worry about whether the $/1M tokens makes it economically feasible to serve, which is a primary consideration for the unicorns today when they're choosing their parameter counts. They can just rate limit the endpoint and share it, with a 2 hour queue time.
The government invented HPC, it’s their world and you’re just playing in it.
> Generally, the defense crowd have a somewhat inflated sense of self worth.
Sure, the architecture is from 2017. But the gap between GPT-1 and frontier models today is not simply "more FLOPs", nor as simple as "standing up PyTorch and vllm": there are thousands of undocumented decisions about data, alignment, reward modeling, training stability, and inference-time strategies, and lots of tribal knowledge held by a small group of people who overwhelmingly do not want to work on weapons systems.
The dense model argument is self-defeating long term. Sparsity (MoE etc.) lets you build a smarter model at the same compute budget, so going dense because you can afford to waste FLOPs is how you fall behind: you never come up with the step-function improvements needed.
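Rough arithmetic behind the sparsity point (illustrative numbers, not from this thread): per-token cost scales with active parameters, so a sparse model can carry far more total capacity at the same compute budget.

```latex
% Forward-pass cost per token is roughly proportional to active parameters:
%   Dense: P_total = P_active = 40B        -> ~8e10 FLOPs/token
%   MoE:   P_total = 600B, P_active = 40B  -> ~8e10 FLOPs/token,
%          i.e. ~15x the stored capacity at the same per-token budget.
\mathrm{FLOPs/token} \approx 2\,P_{\mathrm{active}}
```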
Sure, the DoD invented HPC, but it also invented the internet, and then the private sector made it actually useful.
Did you guys really think that the jurisprudential issues that became endemic after 9/11 suddenly disappeared because we discovered LLMs?
Let’s put pressure on our government to fix the FISA issues. Let’s rein in the executive branch. But let’s do it through voting. Let’s not give up on our system of government because we have new shiny technology.
You were naive if you thought developing new technologies was the solution to our government problems. You’re wrong to support anyone leveraging their control over new technology as a potential solution or weapon of the weak against those governmental issues.
And, to be clear, the way you effect change in a democracy is coalition building, listening to others, supporting your allies in their aims, and in turn having them support you, even when you don’t fully agree or understand. There’s no magic wand, none of us are right, there’s no big picture, just a bunch of people working together.
While I agree that we should be voting in people who will respect the power and authority they're given, I can't imagine we will vote away all these problems.
We would need to vote in a president and 60%+ into congress that is willing to throw away their own power and authority. I just don't see that happening, especially not in a political system so corrupted already.
The US needs an organization doing the equivalent of the National Popular Vote Interstate Compact, but for candidates and for fixing the US voting system. Get politicians who are running for office to sign on: if 60% of signatories are in office, they'll table and vote for a specific, already-spelled-out constitutional reform for more representative voting.
The goal being more than two parties in government, so that Democrats and Republicans can fracture into more functional bodies (MAGA, RINOs, neo-liberals, progressives, etc.), people can vote closer to their issues/beliefs, and multiple parties mean one party isn't running roughshod over the others.
Take a step back: Americans voted for this. They want unaccountable police and courts for the Dirty Harry legal system: maximum indiscriminate violence against those designated as criminals.
I've never seen this on a ballot and, maybe with the exclusion of Trump, never heard a candidate campaign on anything similar.
You probably could make the case that Trump did campaign on it so I'll grant that, but this problem started well before he was even firing people on TV.
George Wallace has been dead for something like 30 years, but yes he was very blatant. I have family that knew him in Montgomery, friends of friends kind of a situation. They don't have good things to say about him.
I don't remember Rudy running on such ideas, but maybe he did. Arpaio was running as a sheriff; I would never have voted for him, but agreed, people did absolutely vote for him in a law enforcement capacity with pretty clear views.
I don't know enough about Gosar or Gohmert to comment well about either.
Nobody is saying that Anthropic has to shut down. They’re just saying that nobody taking government money can pay Anthropic for their service as a part of that contract. Anthropic still has the right to exist on their own terms, but their business model is based on rapidly-increasing enterprise subscriptions, which included public sector spending.
If Anthropic can survive on open source contributors shelling out $200/mo and private sector companies doing the same, the government wishes them well. But surely you agree the government has a right to determine how its budget is appropriated?
Well, it depends. Given that the federal government constitutes 20% of the US economy, telling federal agencies they cannot contract with someone because that someone is adversarial to the USA is indeed pretty severe. When in reality they are not adversarial. We have no choice but to pay taxes and make the federal government 20 percent of our economy. No single company or any other entity comes close. And extending the ban to everyone who has a government contract probably covers the majority of the economy. So it is not at all equivalent to a private company making a choice.
This is obviously subjective, and the only subject that matters in this case is the leadership at the DoD.
> We have no choice but to pay taxes and make the federal government 20 percent of our economy. There is no single company or any other entity that is close. And extending it to everyone who has a government contract probably makes it the majority of the economy.
I, too, hate big government and the all-powerful executive branch. Welcome to my tent. Let’s invent a time machine together so we can elect Ron Paul in 2008 and nip this in the bud.
> But surely you agree the government has a right to determine how its budget is appropriated
I think the government doesn't have rights; it is my elected representative. And I do not agree with it trying to punish a company for not agreeing to contract terms.
That’s because no company who has ever sold weapons to the government has ever been brazen enough to tell the government how they can and cannot use their purchase. It’s unprecedented because most companies that sell to the government are publicly traded and have a board that would never let this happen. It’s unprecedented because Anthropic is behaving like a reckless startup.
> the existing contract included the language on usage. Other companies also have such language about usage.
The existing contract is only a few dozen months old. It didn’t hold up to scrutiny under real world usage of the service. The government wants to change the contract. This is not the kill shot you think it is. It’s totally normal for agreements to evolve. The government is saying it needs to evolve. This is all happening rapidly and it’s irrelevant that the government agreed to similar terms with OpenAI as well. That agreement will also need to evolve. But this alone doesn’t give Anthropic any material legal challenge. The courts understand bureaucracy moves slowly better than anyone else, and won’t read this apparent inconsistency the same way you are.
> The government isn't banning Anthropic because using it harms national security. They are banning it in retribution for Anthropic taking a stand.
You might be completely right about their real motivations, but try to steelman the other side.
What they might argue in court: Suppose DoD wants to buy an autonomous missile system from some contractor. That contractor writes a generic visual object tracking library, which they use in both military applications for the DoD and in their commercial offerings. Let’s say it’s Boeing in this case.
Anthropic engaged in a process where they take a model that is perfectly capable of writing that object tracking code, and they try to instill a sense of restraint in it through RLHF. Suppose Opus 6.7 comes out and it has internalized some of these principles, to the point where it adds a backdoor to the library that prevents it from operating correctly in military applications.
Is this a bit far fetched? Sure. But the point is that Anthropic is intentionally changing their product to make it less effective for military use. And per the statute, it’s entirely reasonable for the DoD to mark them as a supply chain risk if they’re introducing defects intentionally that make it unfit for military use. It’s entirely consistent for them to say, Boeing, you categorically can’t use Claude. That’s exactly the kind of "subversion of design integrity" the statute contemplates. The fact that the subversion was introduced by the vendor intentionally rather than by a foreign adversary covertly doesn’t change the operational impact.
But there will always be deficiencies in testing, and regardless, the point is that Anthropic is intentionally introducing behavior into their models which increases the chance of a deficiency being introduced specifically as it pertains to defense.
The DoD has a right to avoid such models, and to demand that their subcontractors do as well.
It’s like saying “well I’d hope Boeing would test the airplane before flying it” in response to learning that Boeing’s engineering team intentionally weakened the wing spar because they think planes shouldn’t fly too fast. Yeah, testing might catch the specific failure mode. But the fact that your vendor is deliberately working against your requirements is a supply chain problem regardless of how good your test coverage is.
You expect hyperscalers to play chicken with the DoD?
The courts have historically been pretty consistent about giving the DoD whatever the fuck they want, going back to WW2 and even longer for the predecessors of the DoD. I agree that the next administration might reverse it, but the thing is, the government will stay irrational longer than Anthropic will remain solvent.
The US government told every American company to stop doing business with Huawei and they all did it overnight, even when it cost them billions. TSMC stopped fabricating for them, Google pulled Android licensing… The machinery of sanctions compliance is extremely well-oiled and companies fold instantly because the outcome of noncompliance is literally getting thrown in prison.
So is it actually sanctions? I believe Huawei was on the entities list. Such a list comes from the fact that the government can require export licensing. Since Anthropic is in the U.S., I do not believe it’s the same thing as Huawei.
Huawei did eventually end up on the entities list, but there was a gap between when it was initially announced and when it became law, and the divestment from contractors started immediately overnight.