What’s really interesting about k-fold cross-validation is that most textbooks claim that the larger the value of k, the more variance our error estimates tend to have, with Leave-One-Out Cross-Validation (LOOCV, the k = n extreme) exhibiting exceptionally high variance. The usual rationale is that a larger k makes the training sets overlap heavily, so the fitted models are nearly identical and their errors are strongly correlated. But this rationale does not always hold: whether variance actually grows with k depends significantly on the specific model and dataset in use!
https://mlu-explain.github.io/cross-validation/
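One way to check the claim empirically: repeatedly draw fresh datasets from the same distribution, compute the k-fold CV error estimate on each draw, and look at how much that estimate varies across draws for different k. The sketch below does this with plain least-squares regression on synthetic data; the function name `kfold_mse` and all constants are illustrative choices, not anything prescribed by the linked article.

```python
import numpy as np

rng = np.random.default_rng(0)

def kfold_mse(X, y, k):
    """k-fold CV estimate of test MSE for a least-squares fit."""
    n = len(y)
    idx = rng.permutation(n)
    fold_errors = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        # Fit ordinary least squares on the training folds only
        coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        fold_errors.append(np.mean((X[fold] @ coef - y[fold]) ** 2))
    return np.mean(fold_errors)

n, p, reps = 40, 3, 200
true_coef = np.array([1.0, -2.0, 0.5])
for k in (2, 5, 10, n):  # k = n is LOOCV
    estimates = []
    for _ in range(reps):
        # Fresh dataset from the same generating process each repetition
        X = rng.normal(size=(n, p))
        y = X @ true_coef + rng.normal(size=n)
        estimates.append(kfold_mse(X, y, k))
    print(f"k = {k:3d}: variance of CV estimate = {np.var(estimates):.4f}")
```

Running this for a few different models and sample sizes is a quick way to see that the "LOOCV always has the highest variance" story is not universal; for smooth, stable estimators like this one, the variance across k values can be quite close.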