I actually implemented a rule for my website: anytime I write anything and cite a link, I also include the Internet Archive URL, just in case. If it hasn't been archived yet, I submit it.
as an example:
"You don't have to trust me on this one, here's an article with [a bunch of data] | [*Archive link in case of link rot]"
It's not perfect, but it helps mitigate the problem.
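That check-then-submit step can be scripted against the Wayback Machine's public endpoints (the availability API and Save Page Now). A rough sketch — the helper names are mine, not an official client, and Save Page Now is rate-limited, so treat it as illustrative:

```python
# Sketch of the check-then-submit rule using the Wayback Machine's
# public endpoints; function names are my own, not an official API.
import json
import urllib.parse
import urllib.request

AVAILABILITY_API = "https://archive.org/wayback/available?url="
SAVE_PAGE_NOW = "https://web.archive.org/save/"

def availability_query(url: str) -> str:
    """Build the availability-API request URL for a cited page."""
    return AVAILABILITY_API + urllib.parse.quote(url, safe="")

def archive_link(url: str, timeout: float = 10.0):
    """Return an existing snapshot URL, or None after submitting the page."""
    with urllib.request.urlopen(availability_query(url), timeout=timeout) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        return closest["url"]
    # No snapshot yet: ask Save Page Now to capture it, then cite the
    # snapshot on a later editing pass once the crawl has finished.
    urllib.request.urlopen(SAVE_PAGE_NOW + url, timeout=timeout)
    return None
```

Running this at publish time means every cited link either gets an archive URL immediately or is queued for one.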
Other than that, solutions are incredibly hard to come by: you need institutions to preserve URLs through tech changes and the like, when they have very little incentive to do so. E.g. making sure they implement a redirect from http to https sounds simple enough, but not everyone did it; the same goes when they switch CMSs and the like.
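For the http-to-https case specifically, the fix is a blanket permanent redirect. A minimal stand-alone sketch with Python's stdlib — in practice sites would do this in their web server or CDN config, and the fallback hostname here is a placeholder:

```python
# Minimal sketch of a blanket http -> https permanent redirect,
# stdlib only; real deployments do this at the server/CDN layer.
from http.server import BaseHTTPRequestHandler, HTTPServer

def https_location(host: str, path: str) -> str:
    """Target of the permanent redirect for a plain-http request."""
    return f"https://{host}{path}"

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # "example.com" is a placeholder fallback, not a real config value.
        host = self.headers.get("Host", "example.com")
        self.send_response(301)  # permanent: old http links keep working
        self.send_header("Location", https_location(host, self.path))
        self.end_headers()

if __name__ == "__main__":
    # Port 8080 to avoid needing root; a real redirector listens on 80.
    HTTPServer(("", 8080), RedirectHandler).serve_forever()
```

A 301 (rather than 302) matters here: it tells crawlers and archives the move is permanent, so old links stay resolvable.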
Note that you should also have a rule to save the link content locally, to avoid a single point of failure in the unlikely-but-catastrophic case that archive.org itself goes down. (Cf. the attempts to attack them over their National Emergency Library programme last year.)
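Keeping that local copy can be folded into the same publishing rule. A sketch, where the folder name and the hash-based filename scheme are arbitrary choices of mine:

```python
# Sketch of the "also keep a local copy" rule: mirror each cited page
# into a local folder keyed by a hash of its URL. Folder name and
# naming scheme are illustrative assumptions, not a known convention.
import hashlib
import pathlib
import urllib.request

ARCHIVE_DIR = pathlib.Path("link-archive")  # hypothetical local folder

def local_path(url: str) -> pathlib.Path:
    """Stable on-disk filename derived from the URL."""
    digest = hashlib.sha256(url.encode()).hexdigest()[:16]
    return ARCHIVE_DIR / f"{digest}.html"

def save_local_copy(url: str, timeout: float = 10.0) -> pathlib.Path:
    """Download the page once and keep it next to the site's sources."""
    path = local_path(url)
    if not path.exists():
        ARCHIVE_DIR.mkdir(exist_ok=True)
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            path.write_bytes(resp.read())
    return path
```

Checking the copies into the same repository as the site means they survive anything that the site's own backups survive.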
(Source: https://kolemcrae.com/notebook/virtue.html)