Have you worked in a true client services model? Based on your statement, my guess is you have not. This is NOT how things work at a consultancy or in a client services model, regardless of industry, unfortunately.
>if it was a jobs program, it would be way better staffed..
You're saying it's not comparable to the size of the New Deal, the biggest jobs program ever in the US.
That doesn't disqualify it from consideration as a jobs program as there are many jobs programs much smaller.
Adding 60k to ~3 million is significant because it's permanent. These are low-skilled workers (and security theater, as you astutely say), mostly concentrated in large cities.
Whereas the New Deal was temp jobs that disappeared once the grants and funding dried up.
I lead a team of Data Engineers, DevOps Engineers, and Data Scientists. I write code and have done so literally for my entire life. AI-assisted codegen is incredible, especially over the last 3-4 months.
I understand that developers feel their code is an art form and are pissed off that their life’s work is now a commodity; but, it’s time to either accept it and move on with what has happened, specialize as an actual artist, or potentially find yourself in a very rough spot.
I wonder if your background just has you fooled. I worked on a data science team and code was always a commodity. Most data scientists know how to code in a fairly trivial way, just enough to get their models built and served. Even data engineers largely know how to just take that and deploy to Spark. They don't really do much software engineering beyond that.
I'm not being precious here or protective of my "art" or whatever. But I do find it sort of hilarious and obvious that someone on a data science team might not understand the aesthetic value of code, and I suspect anyone else who has worked on or with such a team can probably laugh about the same thing - we've uh... we've seen your code. We know you don't value aesthetic code lol. Single-letter variable names, `df1`, `df2`, `df3`.
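To be concrete about the pattern I mean, here's an invented, minimal example (hypothetical data and names, not anyone's real pipeline) showing the anonymous-frame style next to the same pipeline with names that carry intent:

```python
import pandas as pd

# The anti-pattern: anonymous intermediate frames whose roles you
# have to reverse-engineer from the operations applied to them.
df1 = pd.DataFrame({"user_id": [1, 2, 1], "amount": [10.0, 25.0, 40.0]})
df2 = df1[df1["amount"] > 15.0]
df3 = df2.groupby("user_id")["amount"].sum()

# The same pipeline, where each name states what the frame contains.
orders = pd.DataFrame({"user_id": [1, 2, 1], "amount": [10.0, 25.0, 40.0]})
large_orders = orders[orders["amount"] > 15.0]
spend_per_user = large_orders.groupby("user_id")["amount"].sum()

print(spend_per_user.to_dict())  # {1: 40.0, 2: 25.0}
```

Both versions compute the same thing; only the second one is readable six months later without stepping through it.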
I'm not particularly uncomfortable at the moment because understanding computers, understanding how to solve problems, understanding how to map between problems and solutions, what will or won't meet a customer's expectations, etc, is still core to the job as it always has been. Code quality is still critical as well - anyone who's vibe-coded >15KLOC projects will know that models simply cannot handle that scale unless you're diligent about how it should be structured.
My job has barely changed semantically, despite rapid adoption of AI.
I'm a software engineer and _I_ don't understand the aesthetic value of code. I'm interested in architecture and maintainability, but I couldn't give a rat's ass about how some section of code looks, so long as it conforms to a style guide and is maintainable.
lol this is not why people do "df1", "df2", etc, nor are those polymorphic names but okay.
> it's coming... some places move slower than other but it's coming
What is coming, exactly? Again, as said, I work at a company that has rapidly adopted AI, and I have been a long time user. My job was never about rapidly producing code so the ability to rapidly produce code is strictly just a boon.
I understand that you’re trying to apply your experience to what we do as a team and that makes sense; but, we’re many many stddev beyond the 15K LOC target you identified and have no issues because we do indeed take care to ensure we’re building these things the right way.
I have worked at many places and have seen the work of DEs and DSs that is borderline psychotic; but it got the job done, sorta. I have suffered through QA of 10000 lines that I ended up rewriting in less than 100.
So, yes; I understand where you’re coming from. But; that’s not what we do.
Yes, but then you said that you do what I'm suggesting is still critical to do, which is maintain the codebase even if you heavily leverage models. " we do indeed take care to ensure we’re building these things the right way."
Which parts of it exactly? I've considered for loops and if branches "commodities" for a while. The way you organize code, the design, is still pretty much open and not a solved problem, including by AI-based tools. Yes we can now deal with it at a higher level (e.g. in prompts, in English), but it's not something I can fully delegate to an agent and expect good results (although I keep trying, as tools improve).
LLM-based codegen in the hands of good engineers is a multiplier, but you still need a good engineer to begin with.
In general I don't disagree that it is a multiplier in the hands of good engineers but it also seems to be a multiplier in the hands of bad engineers (multiples of bad). The question is in larger organizations is having 5x the good commits and 5x the bad commits stable? The answer seems TBD from my perspective.
What I don't get is why the focus hasn't now shifted to the actual engineering part of software development. I have arguments with people about code quality, styling, how "structured" some work is, when in reality, in an engineering discipline, we should focus on whether we can improve the software by functional or non-functional metrics.
Like what about performance optimization or security analysis? Shouldn't AI be the CAD of coding tools? Idk.
My problem is that the C-suite equates AI-assisted dev with “vibe coding”, when what you actually need is spec-driven dev.
Spec-driven dev is good software engineering practice. It’s been cast aside in the name of “agile” (which has nothing to do with not doing docs - but that’s another discussion).
My problem is writing good specs takes time. Reviewing code and coaxing the codegen to use specific techniques (async, critical sections, rwlocks, etc.) is based on previous dev experience. The general perception in the C-suite is that neither is important now, since “vibing” is what’s in.
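Knowing when a shared counter even needs a critical section is the kind of judgment call I mean. A minimal, hypothetical Python sketch (invented example, not anyone's production code):

```python
import threading

# Hypothetical sketch: a counter bumped from several threads.
# "counter += 1" is a read-modify-write, so without the lock the
# threads can interleave and lose updates; the lock turns the
# increment into a proper critical section.
counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:  # critical section
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4 threads x 10,000 increments = 40000
```

Codegen will happily emit the unlocked version if you don't know to ask, and it passes every single-threaded test.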
My problem with the code the agents produce has nothing to do with style or art. The clearest example of how bad it is was shown by Anthropic's experiments where agents failed to write a C compiler, which is not a very hard programming job to begin with if you know compilers, as the models do, but they failed even with a practically unrealistic level of assistance (a complete spec, thousands of human-written tests, and a reference implementation used as an oracle, not to mention that the models were trained on both the spec and reference implementation).
If you look at the evolution of agent-written code you see that it may start out fine, but as you add more and more features, things go horribly wrong. Let's say the model runs into a wall. Sometimes the right thing to do is go back into the architecture and put a door in that spot; other times the right thing to do is ask why you hit that wall in the first place, maybe you've taken a wrong turn. The models seem to pick one or the other almost at random, and sometimes they just blast a hole through the wall. After enough features, it's clear there's no convergence, just like what happened in Anthropic's experiment. The agents ultimately can't fix one problem without breaking something else.
You can also see how they shoot themselves in the foot by adding layers upon layers of defensive coding that get so thick they themselves can't think through them. I once asked an agent to write a data structure that maintains an invariant in subroutine A and uses it in subroutine B. It wrote A fine, but B ignored the invariant and did a brute-force search over the data, the very thing the data structure was meant to avoid. As it was writing B, the agent explained that it didn't want to trust the invariant established in A because it might be buggy... Another thing you frequently see is that the code they write is so intent on success that it has a plan A, plan B, and plan C for everything. It tries to do something one way and adds contingencies for failure.
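That invariant failure looks roughly like this (a reconstructed, hypothetical sketch, not the agent's actual output): subroutine A keeps the list sorted, which is exactly what would let B do a binary search, but the agent writes B as a linear scan "in case the invariant is buggy":

```python
import bisect

class SortedBag:
    """Container whose whole point is that _items stays sorted."""

    def __init__(self) -> None:
        self._items: list[int] = []

    def add(self, x: int) -> None:
        # "Subroutine A": insert in position, maintaining the invariant.
        bisect.insort(self._items, x)

    def contains(self, x: int) -> bool:
        # What the agent wrote for "subroutine B": O(n) brute force,
        # refusing to trust the sortedness it itself maintains.
        return any(item == x for item in self._items)

    def contains_fast(self, x: int) -> bool:
        # What the invariant actually buys you: O(log n) binary search.
        i = bisect.bisect_left(self._items, x)
        return i < len(self._items) and self._items[i] == x
```

Both lookups return the same answers, which is what makes the defensive version so insidious: every test passes, and the data structure is quietly pointless.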
And so the code and the complexity compound until nothing and no one can save you. If you're lucky, your program is "finished" before that happens. My experience is mostly with gpt5.4 and 5.3-codex, although Anthropic's failed experiment shows that the Claude models suffer from similar problems. What does it say when a compiler expert that knows multiple compilers pretty much by heart, with access to thousands of tests, can't even write a C compiler? Most important software is more complex than a C compiler, isn't as well specified, and the models haven't trained on it.
I wish they could write working code; they just don't.[1] But man, can they debug (mostly because they're tenacious and tireless).
[1]: By which I don't mean they never do, but you really can't trust them to do it as you can a programmer. Knowing how to code, like knowing how to fly a plane, doesn't mean sometimes getting the right result. It means always getting the right result (within capabilities that, for humans, are usually known in advance).
The thing is for most places the kind of code they write is good enough. You have painted an awfully pessimistic picture that frankly does not mirror reality of many enterprises.
> What does it say when a compiler expert that knows multiple compilers pretty much by heart, with access to thousands of tests, can't even write a C compiler?
It does not know compilers by heart. That's just not true. The point of the experiment was to see how big of a codebase it can handle without human intervention and now we know the limits. The limitation has always been context size.
>By which I don't mean they never do, but you really can't trust them to do it as you can a programmer. Knowing to code, like knowing to fly a plane, doesn't mean sometimes getting the right result. It means always getting the right result (within your capabilities that are usually known in advance in the case of humans).
Getting things right ~90% of the time still saves me a lot of time. In fact I would assume this is how autopilot also works in that it does 90% of a job and the pilot is required to supervise it.
> The thing is for most places the kind of code they write is good enough.
The kind of code they write is the kind of code that will be unsalvageable after 10-50 changes. That's throwaway code, although it looks good. I don't think that's good enough for most places.
Of course, if you really take the time to slowly and carefully review what they write (that many people say they do, but the results don't look like it) you can keep the agents on course with a lot of babysitting and a lot of "revert everything you did in this last iteration".
> You have painted an awfully pessimistic picture that frankly does not mirror reality of many enterprises.
Why pessimistic? The agents are truly remarkable at debugging, and they're very good at reviews. They just can't really code. Interestingly, if you ask codex to review other codex-written code it will often show you just how bad it is, it's just that if you loop coding and review, the agents don't converge.
> It does not know compilers by heart. That's just not true.
It is true. The models can reproduce large swathes of their training material with pretty good accuracy.
> The point of the experiment was to see how big of a codebase it can handle without human intervention and now we know the limits.
What they produced was 100KLOC, which is 5-10x larger than some production C compilers, but even 100KLOC isn't a big codebase. And the amount of human intervention in that experiment was huge: humans wrote specs, thousands of tests, a reference implementation and trained the model on all of those. In most software, at least two or three of these four efforts are not realistic.
What they didn't have is close and careful supervision of every coding iteration. If you really do that - i.e. carefully read every line of plausible-looking code and you think about it - fine; if not, you're in for a nasty surprise when it's too late.
> The limitation has always been context size.
I don't buy it because human context size - especially in this case, where the model has been trained on everything - is smaller, and yet writing a C compiler isn't hard for a person to do.
> Getting things right ~90% of the time still saves me a lot of time.
They might get things right ~75% of the time when they write no more than a few hundred lines of code (unless we're talking a mechanical transformation). Anything beyond that is right closer to 10% of the time. The problem is that it works, at first, close to 90% of the time, but not in a way that will survive evolution for long. So if you're okay with code that works today but won't work a year from today, you might get away with it. I think some people are betting that the models a year from now will be able to fix the code written by today's models. Maybe they're right.
But the agents certainly save a lot of time on debugging and review. Coding - not so much, except in refactorings etc..
~Construction and agriculture also run on diesel~
(edit: OP's comment was germane to the thread, and correct; logistics is by and large the majority of diesel usage in this report).
Check your info bubble. The US has a superb freight rail system that transports massive amounts of goods. If you’re talking about diesel fuel, you’re talking about freight, and we absolutely do have mass transit for freight… one of the best in the world.
I get mine with ads via VZW with HBO Max for $7/month; if it were not for that, I wouldn't keep it. Sure; I sometimes watch things on there, but it's really an awful library these days.
> According to Michael Doser, a prominent particle physicist at CERN, "one 100th of a nanogram [of antimatter] costs as much as one kilogram of gold."
Those aren't comparable costs. The cost given for antimatter is the cost of producing it from nothing. The cost given for gold is the market price of buying gold that already exists.
Consider the cost of producing one kilogram of gold from nothing.
(Consider also the cost of ownership. Gold has a higher-than-average cost of ownership; you have to provide security or it will be stolen. Antimatter's cost of ownership is far, far beyond that.)
The relevant cost for the buyer is how much they need to pay to obtain the object. So far we haven't discovered any primordial antimatter deposits that we could mine, so creating it from scratch is the only way.
I am arguably a successful employee in a tech-focused role. I enjoy my job and others seem to feel I'm good at what I do.
That said: I am NOT at all interested in identifying myself in social situations by my job. When someone asks what I do, I respond that I work in tech. I am not interested in giving more details nor talking in-depth about what I do to others I have just met.
Why? Because that's not at all what makes me...me. I am far more interested in what I do outside of work (reading...a lot, listening to music, spending as much time w/my family as possible, traveling, spending time at my lake home, etc). That is what I work to do; enjoy my life.
I realize this is an uncommon opinion, but I find it SO VERY ODD that folks are OBSESSED about their jobs and make it a central point of their existence to those outside of their specific industry. I do NOT care what someone does for their day-to-day; it's unlikely it will have any impact on me or my friendship with them. I want to know what they bring to the table in our current or potential social situation and the fact that they make PowerPoint presentations for whomever to look at, ask a few questions answered in the presentation's appendix, and never think about again doesn't do anything to further any of that.
I’d much rather know and learn about someone’s passion for woodworking, hill walking, flower arranging, whatever they enjoy doing in their free time, rather than having to talk about their (or my!) work.
Yeah! IMHO "What are you into / what do you care about or do for fun?" should replace "What do you do? [ie, what's your profession / where do you work]" as the default ice-breaker. More interesting, less reductive or competitive.
So you are saying that your job does not have any impact on your personality, despite being there for 8+ hours a day?
The environment you are in for hours (even if it's great, you are forced to be there) does not shape who you are?
And regarding social interactions: is it really no different for you to interact with your like-minded crowd as opposed to, say, someone who runs a gun-shop chain? Sure, a constructed example, but I'd say there is surely some difference in how you act with the different groups.
> So you are saying that your job does not have any impact on your personality, despite you are there for 8+h a day
(Not OP) It's not a core part of it, no. I'm a person who likes solving problems and has an attention to detail. If I see that something is wrong I have a desire to fix it, regardless of whether it's my responsibility or not. This could be finding an outdated piece of documentation at work or finding a piece of litter on the street.
These traits make me an effective software engineer (up to the senior level, then I have to fight against those parts of my personality and focus on specific high-impact things if I want to succeed at Staff+), but they are a part of who I am totally independent from my career.
Software engineering is a field that I am good at and that pays exceptionally well, but I could be happy utilizing these traits in any number of careers. Were I financially independent, my dream career would probably be something closer to the people who design and build elaborate contraptions for stage shows such as Cirque du Soleil.
I like asking both, but these days a lot of the "what do you do for fun" answers are just consumption hobbies (e.g. I watch X show on Netflix) that people use to switch off after a long day of work. It's easier to think of interesting follow up questions about someone's work than about these kinds of hobbies. Even if (especially if) the work is something completely different from what I'm doing.
As the sister comment said: "Not if they work outside of tech…"
And not even then, in many cases. I know exactly what I do, but having to explain that to anyone, including people in tech, is difficult.
And, you know, it's not interesting to talk about. Talking about that is fine at the job; that's what we do. I have no interest in talking about it when I'm not working. Instead I want to talk about other things: hobbies, activities, music, books, whatever. Enquiring about someone's job will not lead to that at all.
After suffering Jira at two previous employers, when it was being considered at a third org I lobbied against it, pretty much begged, and cried along with many other colleagues who had had it inflicted upon them previously. Yes, we indeed ended up with Jira, and with another Atlassian monstrosity.
Confluence? I know most people really want a hard-to-use wiki with a special markdown flavor to write up things that instantly go stale, never to be reviewed again. Or, at least that's the only way I've really seen Confluence used?
You can fail to maintain a wiki written in any software. The value of Confluence is when everyone uses it, so there’s one place to find info to answer questions like “why the hell did we do it this way?”
Yes, but it's easier to fail when the markdown (or NIH markdown, in the case of Confluence) is far removed from the code it describes. Which is why you should document closer to the product. Markdown files living next to your code, or even generated from it, are way better than any experience I've had with Confluence (which is closing in on two decades now).
I used Confluence a mere decade ago and, if anything, the 10 years after you used it only magnified its flaws relative to what else was by then available, so you didn't miss much, and I suspect we haven't missed much since, except more bloat.
I love confluence because it has the worst search engine I've ever used. I used to think TikTok search was the worst: no matter what you typed, you would get only videos of people dancing. World's largest rock? Here's 12 videos of people dancing the renegade. 2020 election? Here's a video of someone dressed like Donald Trump dancing the renegade. Gatorade? Surely you meant the renegade, right? Here's some videos of people dancing the renegade.
But TikTok actually fixed that, so now Confluence is back on top. Good on you, Atlassian.
> That's how the police in America operate now; even for the most common interactions w/the public.
You cannot generalize police forces across the entire country that way. I've never had such an interaction with a police officer, presumably because the police department in my city is run better than that.
Wow. I know people are talking about how they are using 1984 as an instruction manual. But I'd hope we wouldn't be stupid enough to try and re-enact Minority Report, especially without the sci-fi psychic humans.
It's more likely because affluent people live in that area who can afford to drop $100k on a lawyer and sue the department. If you are in an area where people are poor and can't drop that kind of money on a lawyer to get the case pushed to state or federal courts, the cops know they won't get in trouble for being pieces of shit, and they see citizens as an easy source of funding to exploit.
Roger Stone had a history of making violent threats and long association with an armed paramilitary group. There was also the tape recording of him appearing to plan violence against two Democratic elected officials.
- the warrant was for distribution of narcotics and kidnapping.
If I were to guess at a list of the most dangerous warrants to execute, those two would be near the top.
If you note in the video, he jokingly plays around with the drugs part. I am not sure where the kidnapping part comes from, but Afroman is not necessarily a household name amongst middle-aged white police officers, so I imagine they just saw "drugs and kidnapping" and went for it.
The kidnapping claim was that there was a sex dungeon in the house. The house does not have a basement.
And all of this was obviously pretextual, unless you believe drugs and women were hidden between cds or in whatever pieces of the kitchen they destroyed.