AI learned this figure of speech from humans. Even the frequency with which it is used is copied from humans. So you can't really use it to determine whether something was written by an AI or not.
LLMs might follow the frequencies of the training data in their raw form, but nobody uses raw LLMs, they use models which have been RLHFed to hell and back to bias them towards specific patterns. Then newer models were trained on the output of those RLHFed models, and further RLHFed, and so on, and so on.
In practice, RLHF isn't a survey of every living human's personal style or preferences. Its purpose is to make the model more useful in the eyes of the vendor, mainly by getting cheap third-world labor to nudge the model according to the vendor's instructions. You don't get a subservient, sycophantic and "safe" chat interface out of unstructured data without putting your thumb on the scale, hard.
If you think that the article was written by a human, or that it is unclear, please go ahead. Others here on HN have also pointed out that the author shoots out such lengthy blog posts every day. And you can also see the typical emoji AI slop here: https://www.ivanturkovic.com/services/
But I have no issue with your argument whatsoever; it is just that I think there is more than sufficient evidence, and you think there is not.
> That is not an upgrade. That is a career identity crisis.
This is not X. It is Y.
> The trap is ...
> This gap matters ...
> This is not empowerment ...
> This is not a minor adjustment...
Your typical AI slop rhetorical phrasing.
Phrases like "identity crisis", "burnout machine", "supervision paradox", "acceleration trap", and "workload creep" sound analytical but are only lightly defined. They function as named concepts without rigorous definition or empirical grounding.
There might be some good arguments in the article, but AI slop remains AI slop.