There's more to the cost of using an abstraction than the cost of learning it. Example: an abstraction that makes a piece of code easier to test may introduce indirections that make it harder to read. That trade-off may be worth it, but failing to acknowledge that there is a trade-off at all is the problem I'm getting at.
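To make that concrete, here's a minimal sketch of the testability-vs-readability trade-off. All the names (`PriceSource`, `quote_total`, etc.) are hypothetical, invented purely for illustration:

```python
# Hypothetical example: the same logic, with and without an injected abstraction.
PRICES = {"apple": 1.25, "pear": 2.00}

# Direct version: trivially readable, but hard to test without the real table.
def quote_total_direct(items):
    return sum(PRICES[item] for item in items)

# Abstracted version: a price source is injected, so tests can substitute a
# fake -- but every reader now chases one extra indirection to find out
# where prices actually come from.
class PriceSource:
    def price(self, item):
        return PRICES[item]

def quote_total(items, source):
    return sum(source.price(item) for item in items)

# A test can stub the source without touching real data:
class FixedPrice(PriceSource):
    def price(self, item):
        return 1.0

assert quote_total(["apple", "pear"], FixedPrice()) == 2.0
assert quote_total_direct(["apple", "pear"]) == 3.25
```

Neither version is wrong; the point is that the second one buys testability at the price of indirection, and that price exists whether or not we name it.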
The religious adherence to "use all abstractions" is as dangerous as the religious adherence to "use no abstractions." My point is that we have to get better at quantifying this stuff; otherwise we'll forever be stuck in this cycle of arguing about which abstractions are appropriate and when, without ever being able to tell when we're right.