Docs and support also love (or should love) dedicated QA, for a couple of reasons: it's a last line of defense against product changes that engineers don't consider worth documenting.
In every org I've seen with no dedicated QA and dev ownership of testing, every single release ships something a dev changed in the product that the team deemed not worth documenting. And every single release, a user hits that thing, and support is left not even knowing the change happened, looking like anything but the product experts sales told the user they were.
But aside from the difference in viewpoint between a QA analyst and an engineer, it's also a different relationship with other teams. Whether QA is embedded in engineering teams or siloed, it's usually a little easier for a tech writer or support engineer to pitch process improvements and regular communication to QA than to developers. It's not about capacity, but value: adding a test that confirms a code example in the documentation is accurate helps both the product and the writer, so someone with a QA-first mindset doesn't have to think twice. But to a developer with a velocity-first mindset, it's busy work (shouldn't the tech writer own that check?) and maintenance load (isn't this another low-value test I have to manually update every release?).
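That kind of docs-accuracy test doesn't have to be elaborate. Here's a minimal sketch of one approach: pull every fenced Python example out of a markdown docs page and execute it, so CI fails when a documented example stops working. (The specifics here — the regex, the bare `exec`, treating any exception as a failure — are illustrative assumptions, not a reference to any particular team's tooling; tools like Python's doctest or Sphinx's doctest extension cover the same ground more thoroughly.)

```python
# Minimal sketch: extract fenced Python examples from a markdown docs
# page and execute each one, collecting failures so a test runner can
# fail the build when a documented example no longer works.
import re

# Matches ```python ... ``` fenced blocks and captures the code body.
FENCE = re.compile(r"```python\n(.*?)```", re.DOTALL)

def extract_examples(markdown: str) -> list[str]:
    """Return the body of every ```python fenced block on the page."""
    return FENCE.findall(markdown)

def check_examples(markdown: str) -> list[str]:
    """Exec each example in a fresh namespace; return failure messages.

    An empty list means every documented example still runs.
    """
    failures = []
    for i, snippet in enumerate(extract_examples(markdown), start=1):
        try:
            exec(compile(snippet, f"<example {i}>", "exec"), {})
        except Exception as exc:
            failures.append(f"example {i}: {type(exc).__name__}: {exc}")
    return failures
```

Dropping a check like this into the existing test suite is the whole point: the docs example gets exercised on every release alongside the product's own tests, instead of waiting for a user to discover it's stale.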
Tests should be a form of docs for what's supported in the product. If a feature isn't tested, documenting it for users is a risk. If the tests don't confirm what's documented, it's a risk. If it's a risk, it's not the devs who'll deal with it first when it breaks, it's support. Having that healthy relationship between documentation, support, and testing is a force multiplier, and it's harder to build that relationship when development priorities like velocity conflict with testing priorities like coverage and accuracy.