You're going to make life much harder for yourself, not easier, because you'll still need German legal advice but now you need an expensive multi-national lawyer/firm. Anyone cheaper will refuse to touch it.
Germany cares about where the management of your company actually happens, not just where the entity is incorporated. So you're not going to avoid German bureaucracy; it's going to be worse, not better.
Maintainers need to keep a wall between package publishing and their public repos. Currently what people are doing is configuring the public repo as a Trusted Publisher directly. That means package publication can be triggered from the repo itself, and the public repo is a huge attack surface.
Configure the CI to make a release with the artefacts attached. Then make the Trusted Publisher an entirely private repo that can't be triggered automatically. The publisher repo fetches the artefacts and does the PyPI/npm/whatever release.
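As a rough sketch of that split (the repo names, secret name, and trigger here are hypothetical, not a drop-in config): the public repo's CI attaches built artefacts to a release, and a separate private repo, configured as the Trusted Publisher, is run only by hand to fetch and upload them.

```yaml
# Hypothetical workflow in the PRIVATE publisher repo.
# Runs only on manual dispatch, never on events from the public repo.
name: publish
on:
  workflow_dispatch:
    inputs:
      tag:
        description: Release tag in the public repo to publish
        required: true
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # OIDC token for PyPI Trusted Publishing
    steps:
      - name: Fetch release artefacts from the public repo
        run: gh release download "${{ inputs.tag }}" --repo example-org/public-repo --dir dist
        env:
          GH_TOKEN: ${{ secrets.PUBLIC_REPO_READ_TOKEN }}
      - name: Publish artefacts in dist/ to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
```

Because the dispatch lives in the private repo, nothing an attacker does in the public repo (malicious PR, compromised workflow) can reach the publishing credentials directly; a human still has to pull the trigger.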
The point of trusted publishing is supposed to be that the public can verifiably audit the exact source from which the published artifacts were generated. Breaking that chain via a private repo is a step backwards.
Pardon my limited understanding, but my read of the suggestion was to perform exactly the same operation the public would perform to verifiably audit the source when generating the official published artifacts; the point was just that there'd be no automation to do so directly from the public repo.
This kind of compromise is why a lot of orgs have internal mirrors of repos or package sources, so they can stay a few versions behind latest and avoid picking up a freshly compromised release. I've seen it with internal pip repos, apt repos, etc.

Some will even audit each package in there (kind of a crap job, but it works fairly well as a mitigation).
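A minimal version of that setup on the pip side, assuming a hypothetical internal index host: point every machine at the mirror instead of PyPI, and let the mirror lag upstream by whatever review window the org wants.

```ini
; Hypothetical pip.conf (e.g. /etc/pip.conf or ~/.config/pip/pip.conf).
; All installs resolve against the internal mirror, which syncs from
; PyPI on a delay and only after packages pass whatever audit applies.
[global]
index-url = https://pypi.internal.example/simple
```

The same idea applies to apt by pointing sources.list at an internal snapshot mirror rather than the upstream archive.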
The corruptions of this administration are legion, but this isn't one of them. Unless you can point to something Lutnick did to create this outcome, I don't see how he had a better view of the whole thing than anyone else.
Who cares who came up with the illegal tariff implementation, the point is that a member of the government profited off of their own administration's incompetence.
It's obscene. I don't care whether a law was broken or not.
You want to profit from government incompetence? Stop being part of the government then.
Isn't Lutnick literally the chief architect of Trump's tariff policy? I can hardly imagine anybody more responsible for creating this outcome besides Trump himself, who would presumably have appointed somebody else if Lutnick hadn't been available.
> I don't see how he had a better view of the whole thing than anyone else.
Given the above, you really don't think Lutnick had a "better view" of the likely outcomes and timelines, including the Trump admin's planned and gamed out responses to certain outcomes, than the average Joe on the street? I think that's extremely, uh, naive.
The actions of the US government here are openly corrupt.
The point of the supply chain risk provisions is to denote, you know, supply chain risks. The intention is not to give the Pentagon a lever it can pull to force any company to agree to any contract it wants.
Hegseth doesn't even pretend that Anthropic is actually a supply chain risk. The argument for designating them so is that _they won't do exactly what the government wants_.
People use the term "fascism" a lot and people have kind of tuned it out, but what do you call a government that deals itself the power to compel any company to accept any contract, and declare it a pariah on thin pretext if it objects?
By taking the deal under these conditions OpenAI is accepting this. They're saying, "Well, sucks to be them, life goes on". They're consenting to the corruption and agreeing to profit from it. But they'll be next, and if the next company in line has the same stand then yeah, the government can force any company to do anything. There's nothing normal about this.
I don't understand how any sort of deal is defensible in the circumstances.
Government: "Anthropic, let us do whatever we want"
Anthropic: "We have some minimal conditions."
Government: "OpenAI, if we blast Anthropic into the sun, what sort of deal can we get?"
OpenAI: "Uh well I guess I should ask for those conditions"
Government: blasts Anthropic into the sun "Sure whatever, those conditions are okay...for now."
By taking the deal with the DoW, OpenAI accepts that they can be treated the same way the government just treated Anthropic. Does it really matter what they've agreed?
It looks like Anthropic wanted to be able to verify the terms of their own volition, whereas OpenAI was fine with letting the government police itself.
From the DoD perspective, they don't want a situation where, say, a target is being tracked and then the screen goes black because the Anthropic committee decided this is out of bounds.
> From the DoD perspective, they don't want a situation where, say, a target is being tracked and then the screen goes black because the Anthropic committee decided this is out of bounds.
Anthropic didn't want a kill switch, they wanted contractual guarantees (the kind you can go to courts for). This administration just doesn't want accountability, that's all.
It was OpenAI that said they prefer to rely on guardrails (the kind that stop the AI from working if you violate the terms) and less on contracts. The same OpenAI that has now been awarded the contract.
I don’t know why more people don’t see this. It’s a matter of providing strong guarantees of reliability for the product. There is already mass surveillance. There is already taking of life without proper oversight.
I think it's a bit more nuanced than that. The government (however good or bad, just bear with me) already has oversight mechanisms, laws in place to prevent mass surveillance, and policy about autonomous killing.

So the government's stance is: "We already have laws and procedures in place; we don't want, and can't have, a CEO also be part of those checks."
I don't think this outcome would have been any different under a normal blue government either, though definitely with less mudslinging.
If you think a blue government would even consider threatening to falsely accuse a company of being a supply-chain threat in order to gain leverage in a contract negotiation, you're insane. There's nothing remotely normal about this; it's not something you see in any western democracy.
Government's free to not like the terms and go with another provider. That's whatever.
Government's not free to say, "We'll blow up your business with a false accusation if you don't give us the terms we want (and then use defence production act to commandeer the product anyway)". How much more blatantly authoritarian does it get than that?
This is wise analysis. To summarize: appeasement of the Trump administration is a losing strategy. You won’t get what you want and you’ll get dragged down in the process.
You should quit because the only reasonable thing for your leadership to have done is to refuse to sign any agreement with DoW whatsoever while it's attempting to strongarm Anthropic in this fashion.
It doesn't even matter if OpenAI is offered the same terms that Anthropic refused. It's absurd to accept them and do business with the Pentagon in that situation.
If you take the government at its word, it's killing Anthropic because Anthropic wanted to assert the ability to draw _some_ sort of redline. If OpenAI's position is "well sucks to be them", there's nothing stopping Hegseth from doing the same to OpenAI.
It doesn't matter at all if OpenAI gets the deal at the same redline Anthropic was trying to assert. If at the end of this the government has succeeded in cutting Anthropic off from the economy, what's next for OpenAI? What happens next time when OpenAI tries to assert some sort of redline?
What's the point of any talk of "AI Safety" if you sign on to a regime where Hegseth (of all people) can just demand the keys and you hand them right over?
They're labelling Anthropic a supply chain risk, without even the pretense that this is in fact true. They're perfectly content to use the tool _themselves_, but they claim that an unwillingness to sign whatever ToS DoW asks marks the company a traitor that should be blacklisted from the economy.
I think LLMs are net helpful if used well, but there's also a big problem with them in workplaces that needs to be called out.
It's really easy to use LLMs to shift work onto other people. If all your coworkers use LLMs and you don't, you're gonna get eaten alive. LLMs are unreasonably effective at generating large volumes of stuff that resembles diligent work on the surface.
The other thing is, tools change trade-offs. If you're in a team that's decided to lean into static analysis, and you don't use type checking in your editor, you're getting all the costs and less of the benefits. Or if you're in a team that's decided to go dynamic, writing good types for just your module is mostly a waste of time.
LLMs are like this too. If you're using a very different workflow from everyone else on your team, you're going to end up constantly arguing for different trade-offs, and ultimately you're going to cause a bunch of pointless friction. If you don't want to work the same way as the rest of the team just join a different team, it's really better for everyone.
I'm interested in this. Code review, most egregiously where the "author" neglected to review the LLM output themselves, seems like a clear instance. What are some other examples?
Something that should go in a "survival guide" for devs that still prefer to code themselves.
Well, if you take "review the LLM output" in its most general way, I guess you can class everything under that. But I think it's worth talking about the problem in a bit more detail than that, because someone can easily say "Oh I definitely review the LLM output!" and still be pushing work onto other people.
The fact is that no matter whether we review the LLM output or not, no matter whether we write the code entirely by hand or not, there's always going to be the possibility of errors. So it's not some bright-line thing. If you're relatively lazier and relatively less thoughtful in the way you work, you'll make more errors and more significant errors. You'll look like you're doing the work, but your teammates have to do more to make up for the problems.
Having to work around problems your coworkers introduced is nothing new, but LLMs make it worse in a few ways I think. One is just, that old joke about there being four kinds of people: lazy and stupid, industrious and stupid, smart and lazy, and industrious and smart. It's always been the "industrious and stupid" people that kill you, so LLMs are an obvious problem there.
Second there's what I call the six-fingered hands thing. LLMs make mistakes a human wouldn't, which means the problem won't be in your hypothesis-space when you're debugging.
Third, it's very useful to have unfinished work look unfinished. It lets you know what to expect. If there's voluminous docs and tests and the functionality either doesn't work at all or doesn't even make sense when you think about it, that's going to make you waste time.
Finally, at the most basic level, we expect there to be some sort of plan behind our coworkers' work. We expect that someone's thought about this and that the stuff they're doing is fundamentally going to be responsive to the requirements. If someone's phoning it in with an LLM, problems can stay hidden for a long time.
I'm currently really feeling the pain with the sidebar stuff, the non-"application" code/config.

Scripts, CI/CD, documentation, etc. The stuff that gets a PR but doesn't REALLY get the same level of review because it's not really production code. But when you need to go tweak the thing it does a few months or years later... it's so dense and indecipherable that you spend more time figuring out how the LLM wrote the damn thing than you'd spend doing it all over yourself.

Should you probably review it a little more harshly in the moment? Sure, but that's not always feasible with things that are, at the time, "not important" and only later become the root of other things.

I have lost several hours this week to several such occurrences.
AI-generated docs, charts, READMEs, TOE diagrams. My company’s Confluence is flooded with half-assed documentation from several different dev teams that either loosely matches, or doesn’t match at all, the behavior or configuration of their apps.
For example they ask to have networking configs put into place and point us at these docs that are not accurate and then they expect that we’ll troubleshoot and figure out what exactly they need. It’s a complete waste of time and insulting to shove off that work onto another team because they couldn’t be fucked to read their own code and write down their requirements accurately.
If I were a CTO or VP these days I think I'd push for a blanket ban on committing docs/readmes/diagrams etc along with the initial work. Teams can push stuff to a `slop/` folder but don't call it docs.
If you push all that stuff at the same time, it's really easy to get away with this soft lie, "job done". They can claim they thought it was okay and it was just an honest mistake there were problems. They can lie about how much work they really did.
READMEs or diagrams that are plans for the functionality are fine. Docs that describe finished functionality are fine. Slop that dresses up unfinished work as finished work just fucks everything up, and the incentives are misaligned so everyone's doing this.
Software to date has been a [Jevons good](https://en.wikipedia.org/wiki/Jevons_paradox). Demand for software has been constrained by the cost efficiency and risk of software projects. Productivity improvements in software engineering have resulted in higher demand for software, not less, because each improvement in productivity unblocks more of the backlog of projects that weren't cost effective before.
There's no law of nature that says this has to continue forever, but it's a trend that's been with us since the birth of the industry. You don't need to look at AI tools or methodologies or whatever. We have code reuse! Productivity has obviously improved; it's just that there's also an arms race between software products in UI complexity, features, etc.
If you don't keep improving how efficiently you can ship value, your work will indeed be devalued. It could be that the economics shift such that pretty much all programming work gets paid less, it could be that if you're good and diligent you do even better than before. I don't know.
What I do know is that whichever way the economics shake out, it's morally neutral. It sounds like the author of this post leans into a labor theory of value, and if you buy into that, well... you end up with some pretty confused and contradictory ideas. They position software as a "craft" that's valuable in itself. It's nonsense. People have shit to do and things they want. It's up to us to make ourselves useful. This isn't performance art.