Anecdotally, the person on my team producing the most output, at the highest quality, explicitly shuns LLMs and is also relatively young. Other team members embracing AI spend their day being misled by chatbots, trying to get their agents to work properly by toying with contexts (it just needs a little more knowledge!), and producing verbose code that's hard to review and has obvious bugs once you actually think about what it's doing. My favorite is how everything is excessively commented, yet more than once I've caught the code not matching what the comment said it would do!
FWIW, I find it useful if I know exactly what I want and it's quicker to prompt it than to type it myself. It's also generally good for research and building understanding. I still catch it being wrong on details if you're really paying attention, or literally contradicting itself between prompts. That gives me a lot of pause about trusting things it told me that I just accepted as fact without having enough knowledge myself to question them.
Fire the middle management, HR, etc. that have been enthusiastically using AI to do their jobs for the past two or three years already. 90% of them could be replaced by an agent with access to an email account.
Tbh, if companies want to use AI to lay off and cut costs, that's exactly where they should be doing it, not engineering.
How much bloat and bureaucratic bottleneck is sitting in middle management, whose favorite pastime is wasting everyone's time on meetings that could have been an email? HR? Not the execs, but the HR drones who do nothing but answer employee questions about policy could have been replaced a long time ago, and not even by an AI, just an old-school chatbot.
Instead of cutting engineers, cut the non-tech jobs, flatten the structure.
Oh, they laid people off so they could outsource more to India, despite managers reminding them that it's 1/3 the cost but 3x slower. American devs were 5x more productive overall.
My experience with outsourcing over 20+ years (Russia, Romania, India, South America) is that you just move money around when you do it.
It takes more planning, more specification, more coordination, more QA. The quality is almost always worse, and remediation takes forever. So your BA, QA and PM time goes way up and absorbs any cost savings.
Exactly, the $20 Codex is such good value that it's irresponsible not to give it to everyone. Claude Code at $20 is, on the other hand, pointless; the limits are only good enough for 10 minutes of work twice per business day.
Every business that's taking AI seriously is giving their team enterprise accounts to AI services. Otherwise you have no control over where your code, data, company info, etc is going.
Someone deciding to drop a spreadsheet of customer data into their personal AI account to boost their productivity would be catastrophic for the business, so you need rules. And rules mean paying for enterprise AI tooling.
"Bring your own tools" is not exactly novel in the workplace. Maybe so for office workers, but not more generally. Anyway, these particular tools are cheap enough that it hardly even matters who is expected to pay for them.
The $20-a-month tier in particular is a trivial expense, on par with businesses that expect their workers to wear steel-toed shoes. Some give workers a little stipend to buy those boots, some don't. Either way, it doesn't really matter.
Just because it's not novel doesn't mean it's right. I also don't agree with, for example, many mechanics being forced to buy their own tools (especially given how little they get paid).
I don't do tech outside of 9-5, so either my employer pays for it all, or I don't use it. Simple as that. Thankfully, they do pay for it, but I couldn't imagine working somewhere that says "You need to use AI" and then not providing it on their dime.
Quite frankly, it should be regulation: if a W2 employee needs something to perform their job duties, the employer must provide it.
I'm not a tech CEO but people who are anti-LLM for programming have no place on my team.