Wait a minute, what? What I read from your comment is that on your work machines the screen savers display ads? I mean, I’d heard Windows was getting bad with the ads, but surely it doesn’t work that way out of the box.
For the type of buyer you describe, this vehicle parked in the garage may, to speculate, be capable of doing double duty as an automated battery backup for the nearby estate: storing energy during times of excess grid capacity and discharging during periods of high demand or grid interruptions. I would be interested to know whether the vehicle includes this capability, or whether it could be easily modified to offer it. It is probably preferable to an onsite diesel generator, for example, if only due to lower local emissions, even if the two are not exactly comparable.
You've got to be kidding. The people who can afford multiple luxury cars aren't going to mess around using them as backup batteries just to save a few bucks on generators for their mansions.
It's not a killer feature, granted. But I would be willing to bet that the cost of engineering and supporting this as a default capability across the whole fleet would be less than the value of the energy saved, amortized over the lifetime of all relevant vehicles.
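To put rough numbers on the amortization argument, here is a back-of-envelope sketch in Python. Every input (fleet size, energy cycled per day, peak/off-peak price spread, vehicle lifetime, engineering cost) is an assumed illustrative value, not data from this thread:

```python
# Back-of-envelope sketch of the amortization argument above.
# All inputs are assumed illustrative values, not real figures.

fleet_size = 500_000            # vehicles shipped with the feature (assumed)
kwh_cycled_per_day = 5          # energy shifted per vehicle per day, kWh (assumed)
price_spread_per_kwh = 0.10     # peak vs. off-peak price gap, $/kWh (assumed)
vehicle_lifetime_years = 10     # service life over which value accrues (assumed)
engineering_cost = 50_000_000   # one-time cost to develop and support, $ (assumed)

# Value of shifted energy per vehicle over its lifetime.
value_per_vehicle = (kwh_cycled_per_day * price_spread_per_kwh
                     * 365 * vehicle_lifetime_years)
fleet_value = value_per_vehicle * fleet_size

print(f"value per vehicle over its lifetime: ${value_per_vehicle:,.0f}")
print(f"fleet-wide value: ${fleet_value:,.0f} vs. engineering cost ${engineering_cost:,}")
```

With these made-up inputs the fleet-wide value dwarfs the one-time engineering cost, which is the shape of the bet being made; real utility tariffs, battery-degradation costs, and uptake rates could easily change the conclusion.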
To clarify: even if it is not strictly "spying" by some particular definition, the scope and scale are so large, and OSINT methods are so effective at unveiling potentially relevant targets toward which actual "spying" resources can then be directed, that the lines really blur.
I would like the opportunity to consider a decentralized consensus algorithm that can accommodate nation-state adversaries as a matter of course. Not simply something cryptographically secure and distributed, but something that can retroactively route around nodes that are temporarily bad due to external circumstances.
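For context on what accommodating adversaries costs in the classical setting: Byzantine fault tolerant protocols (PBFT and its descendants) require n ≥ 3f + 1 total nodes to tolerate f arbitrarily misbehaving ones, with quorums of 2f + 1. A minimal sketch of that quorum arithmetic (the retroactive rerouting asked for above goes beyond this static baseline):

```python
# Quorum arithmetic from classical BFT: tolerating f Byzantine nodes
# requires n >= 3f + 1 total nodes; quorums then have size 2f + 1.

def max_tolerated_faults(n: int) -> int:
    # Largest f such that n >= 3f + 1.
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    # With n = 3f + 1, a quorum of 2f + 1 guarantees any two quorums
    # intersect in at least one honest node.
    return 2 * max_tolerated_faults(n) + 1

for n in (4, 10, 100):
    print(f"n={n}: tolerates f={max_tolerated_faults(n)}, quorum={quorum_size(n)}")
```

So a 100-node network can tolerate at most 33 Byzantine nodes; anything that also ejects and later readmits temporarily compromised nodes needs machinery on top of this, which is exactly the gap the comment is pointing at.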
Speaking as someone on a team with a less stringent code review culture: AI-generated code creates more work when used indiscriminately. It is good enough to get approved but full of non-obvious errors that cause expensive rework, which only gets prioritized once the shortcomings become painfully obvious, usually months after the original work was "completed" and after the original author has forgotten the details or, worse, left the team entirely. That is not to say AI-generated code is never valuable, just that it isn't for anything intended to be correct and maintainable indefinitely by other developers. The real challenge is people using AI-generated code as a mechanism to avoid fully understanding the problem that needs to be solved.
Exactly, it's the non-obvious errors that are easy to miss, doubly so if you are just scanning the code. Those errors can create very hard-to-find bugs.
So between the debugging and the many times you need to reprompt and redo (if you bother at all; skipping it just adds more debugging time later), is any time actually saved?
I think the dust hasn't settled yet because no one has shipped mostly AI-generated code for a non-trivial application; they couldn't have, given its current state. So it's still unknown whether building on incredibly shaky ground will actually work in real life (I personally doubt it).