The one-time cost of learning a good abstraction is strictly less than the ongoing cost of understanding and then continually reimplementing it by hand. The purpose of a high-level language is to make programs more concise and clear; a language that doesn't do this may somehow become popular, but that shouldn't be mistaken for success.
> The one-time cost of learning a good abstraction is strictly less than the ongoing cost of understanding and then continually reimplementing it by hand.
There is a danger of a "no true Scotsman" fallacy here - it's easy to define a "good" abstraction as one that is worth the one-time cost of learning it.
So, given that there are both "good" and "bad" abstractions in every language, is the net gain from learning them greater than the cognitive effort of having to learn them all?
I'd argue that no, it's not. Reducing the total number of abstractions in a language absolutely reduces the cognitive load of working in that language, even if it requires us to be more verbose.
I look on it as falling out of the https://en.wikipedia.org/wiki/Rule_of_least_power. When I see reuse of an abstraction I know not only what it does but what it doesn't do. If I skip that and start writing custom code, well, that code could do anything at all so everyone has to continually reread it carefully. I find poring over tedious boilerplate to be a waste of precious lifetime compared to learning better building blocks.
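To make that point concrete, here's a hypothetical sketch (the `orders` data is made up for illustration). A call to a well-known abstraction like `sum` tells the reader both what it does and what it doesn't do; the hand-rolled loop below it has to be read line by line, because a loop could do anything at all.

```python
orders = [12.5, 3.0, 7.25]

# Using the shared abstraction: a reader instantly knows sum() only adds.
# It cannot mutate `orders`, log, break early, or swallow exceptions.
total = sum(orders)

# The hand-rolled equivalent computes the same value, but nothing about
# its shape guarantees that; every line must be reread to confirm it.
total_manual = 0.0
for amount in orders:
    total_manual += amount

assert total == total_manual
```

The two versions are behaviorally identical here; the difference is how much careful rereading each one demands from the next person.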
There's more to the cost of using an abstraction than just the cost of learning it. Example: an abstraction that makes a piece of code easier to test may introduce indirections that make it harder to read. That trade-off may be worth it, but not acknowledging that there is a trade-off is the problem I'm getting at.
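A hypothetical sketch of that trade-off (the function names and the clock-injection pattern are illustrative, not from the original post): injecting a time source makes the code deterministic to test, but the reader now has to chase one extra indirection to learn where the time actually comes from.

```python
import time

# Direct version: trivially readable, but hard to test deterministically.
def is_expired_direct(deadline: float) -> bool:
    return time.time() > deadline

# Abstracted version: the `clock` parameter exists purely so tests can
# substitute a fake time source -- an indirection the reader must follow.
def is_expired(deadline: float, clock=time.time) -> bool:
    return clock() > deadline

# In tests, the indirection pays off: no sleeping, no flakiness.
assert is_expired(100.0, clock=lambda: 200.0)
assert not is_expired(100.0, clock=lambda: 50.0)
```

Whether that extra parameter is worth the reading cost is exactly the kind of trade-off being discussed.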
The religious adherence to "use all abstractions" is as dangerous as the religious adherence to "use no abstractions." My point is that we have to get better at quantifying this stuff; otherwise we'll forever be stuck in this cycle of arguing about which abstractions are appropriate and when, but never actually being able to tell when we're right.
> The one-time cost of learning a good abstraction is strictly less than the ongoing cost of understanding and then continually reimplementing it by hand.
In theory, I agree with you. However, the main thrust of my post is that the practice is not working out as that theory predicts.
I haven't fully worked out all the bits and pieces, but I do know that insisting the theory must be correct simply because it must be correct, even when it contradicts the evidence, is not the way forward. Theory bows to evidence, not the other way around.