Just to make it clear: raid-5/6 mdadm arrays do the right thing when repairing/checking/scrubbing data. They write the correct data if one of the drives has a corrupted block.
How often does this happen? According to what I have been reading, without ECC RAM and without ZFS, your machines see roughly one corrupted bit per day. In other words, that could be a few corrupt files per week.
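For reference, the check/scrub mentioned above is driven through sysfs on Linux md arrays. A minimal sketch, assuming the array is /dev/md0 (these are system-administration commands, not something you'd run outside a machine with an actual md array):

```shell
# Trigger a full scrub of an md array (assuming it is /dev/md0).
# "check" reads every stripe and recomputes parity without rewriting;
# echoing "repair" instead would rewrite blocks where a mismatch is found.
echo check > /sys/block/md0/md/sync_action

# Watch progress, then inspect the mismatch counter once it finishes.
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt
```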
This is complete nonsense without more data to back it up.
> Just to make it clear. raid-5/6 mdadm arrays does the right thing when repairing/checking/scrubbing data.
This is inherent to RAID-5/6. It doesn't really have anything to do with mdadm, other than that mdadm implements RAID-5/6. And now you probably have a write hole.
Just to make it clear: on raid 5/6 parity isn't checked on reads, so to get your "right thing when repairing/checking/scrubbing data" you'd have to do a full parity rebuild. This isn't anything like what ZFS does.
https://raid.wiki.kernel.org/index.php/RAID_Administration
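The distinction above can be sketched in a few lines: RAID-5 parity is a plain XOR across the data blocks in a stripe, so a scrub can detect a silent mismatch but cannot tell which block is wrong, whereas ZFS keeps a per-block checksum and can. A minimal sketch in Python with made-up blocks (illustration only, not mdadm's actual code):

```python
def xor_parity(blocks):
    """XOR all data blocks together to form the parity block."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks in one stripe
parity = xor_parity(data)

# Silently flip one bit in one data block (the drive reports no error).
corrupted = [bytearray(b) for b in data]
corrupted[1][0] ^= 0x01

# A scrub recomputes parity and sees a mismatch...
assert xor_parity(corrupted) != parity
# ...but the XOR alone can't say WHICH of the four blocks
# (three data + one parity) is wrong, unlike a per-block checksum.
```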