That's how AI is supposed to be used, no? That's what the providers advertise - it increases development speed a lot, it replaces devs, and so on.
But I guess it's only ok when you work on regular-Joe-facing projects, where the consequences of bugs fall on powerless users. If the consequences fall on Google, well, that's not acceptable now, is it?
If Google and OpenAI and the rest said this as loudly as they praise their models, I would never write comments like that. But this is the fine print, buried somewhere. And so we need to bring it up, because, lo and behold, it matters.
> That's how AI is supposed to be used, no? That's what the providers advertise - it increases development speed a lot, it replaces devs, and so on.
Not really. There’s a difference between accelerating development in the hands of an experienced developer and having somebody just slop out code, hoping for the best.
Adopting AI doesn’t equal removing code review. Those were two separate choices that got combined.
Of course the fine print says to review, just like the ultimate control of "full self driving" rests with the human driver. But why is the fine print fine, and not as large as the large print? Maybe because you're not supposed to pay attention to it? Could this be?