For downscaling you have to use AREA interpolation. Bilinear only interpolates among the 4 nearest source pixels, so at large downscale factors it still ignores most of the source image.
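A toy 1D sketch of the difference (pure Python, hypothetical 8x downscale; "bilinear" here means point-sampling the 2-tap linear interpolant, as GPU-style resizers do):

```python
# 1D toy signal: alternating black/white stripes at the Nyquist limit.
src = [0, 255] * 32  # 64 samples
factor = 8           # downscale 64 -> 8

# "Bilinear"-style downscale: sample the continuous linear interpolant
# at mapped output positions; each output reads at most 2 source pixels.
def linear_sample(signal, x):
    i = int(x)
    t = x - i
    j = min(i + 1, len(signal) - 1)
    return (1 - t) * signal[i] + t * signal[j]

bilinear = [linear_sample(src, k * factor) for k in range(8)]

# Area downscale: average the whole footprint of each output pixel.
area = [sum(src[k * factor:(k + 1) * factor]) / factor for k in range(8)]

print(bilinear)  # all 0: the stripes alias to solid black
print(area)      # all 127.5: the stripes average out to gray
```

The point-sampled version lands on the same phase of the stripe pattern every time and produces a solid color; the area version sees every source pixel.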
This is in essence a special version of sampling artifacts, aliasing artifacts. Anyone writing image processing software should already know about aliasing, the Nyquist theorem etc. Or, well, perhaps not in the current hype, where everyone is a computer vision expert who took one Keras tutorial...
Resizing with nearest neighbor or bilinear (i.e. ignoring aliasing) also hurts ML accuracy, so it's worth fixing regardless of this specific "attack".
Bilinear could mean downscaling with a triangle kernel, but it might well be the standard bilinear interpolation that's native to most GPUs and OSs.
Also, area interpolation still has some pretty bad aliasing, since box kernels are poor at suppressing high frequencies.
And of course with downscaling you could still freely manipulate the downscaled image if you're allowed to use ridiculously high or low values, provided you knew the exact kernel used.
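For the simplest case the manipulation is easy to sketch. A toy example (hypothetical 4x nearest-neighbor downscale, assuming the resizer samples the top-left pixel of each block; real resizers differ in which pixel they pick, so this is illustrative only):

```python
# Toy "scaling attack": if you know exactly which source pixels the
# resizer samples, you can plant a payload visible only after downscaling.
factor = 4
w = 16
cover = [[200] * w for _ in range(w)]                       # innocuous-looking image
payload = [[10] * (w // factor) for _ in range(w // factor)]  # attacker-chosen target

# Nearest neighbor at an integer factor samples one fixed pixel per block;
# overwrite only those pixels -- 1/16 of the image -- with the payload.
for y in range(w // factor):
    for x in range(w // factor):
        cover[y * factor][x * factor] = payload[y][x]

# Simulate the downscale with the same sampling convention.
down = [[cover[y * factor][x * factor] for x in range(w // factor)]
        for y in range(w // factor)]
assert down == payload  # the downscaled result is entirely attacker-chosen
```

With an averaging kernel the same trick needs extreme pixel values to dominate the weighted sum, which is why clamping to a sane range helps.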
Bilinear uses the triangular kernel over the source image (with size corresponding to the input pixel size).
Area interp works very well in practice; it's more sophisticated than just a box filter on the input followed by sampling. It calculates the exact intersecting footprint of each output pixel and computes a weighted average. Do you have examples where this causes aliasing, and can you show a better alternative?
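The exact-footprint idea can be sketched in 1D like this (my own minimal version of the technique, not any library's actual code): partially covered source pixels are weighted by how much of them the output pixel's interval overlaps.

```python
def area_resample_1d(src, dst_len):
    """Area resampling with exact fractional footprints: each output
    pixel is the average of the source interval it covers, with
    partially covered source pixels weighted by their overlap."""
    scale = len(src) / dst_len  # may be non-integer
    out = []
    for k in range(dst_len):
        left, right = k * scale, (k + 1) * scale
        acc = 0.0
        i = int(left)
        while i < right and i < len(src):
            # overlap of source pixel [i, i+1) with footprint [left, right)
            w = min(i + 1, right) - max(i, left)
            acc += w * src[i]
            i += 1
        out.append(acc / scale)
    return out

# 6 -> 4: footprints of width 1.5 straddle pixel boundaries.
print(area_resample_1d([0, 0, 0, 90, 90, 90], 4))
```

The 6-to-4 case shows the fractional weighting: the middle output pixels each cover one full source pixel and half of a neighbor.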
Anything softer than area will help with those kinds of issues (which is why the original https://en.wikipedia.org/wiki/Aliasing#/media/File:Moire_pat... looks fine in most browsers even if you resize it). Bicubic tends to do better in this respect. It's a trade-off though.
Now you could use pre-smoothing with a kernel and then resampling, but then we are talking about something else.
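That pipeline is simple enough to sketch (1D, pure Python; the sigma-proportional-to-factor rule and the constants are common conventions, not anything prescribed):

```python
import math

def gaussian_kernel(sigma, radius):
    k = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]  # normalize to sum 1

def smooth_then_subsample(src, factor):
    # Blur with sigma proportional to the downscale factor (a common
    # rule of thumb; the exact constant is a tuning choice), then
    # simply pick every factor-th sample.
    sigma = factor / 2.0
    radius = int(3 * sigma)
    k = gaussian_kernel(sigma, radius)
    n = len(src)
    blurred = [
        sum(k[j + radius] * src[min(max(i + j, 0), n - 1)]  # clamp edges
            for j in range(-radius, radius + 1))
        for i in range(n)
    ]
    return blurred[::factor]

# Nyquist-limit stripes blur to near-uniform gray before subsampling.
out = smooth_then_subsample([0, 255] * 32, 8)
```

Because the high frequencies are removed before any samples are dropped, the subsampling step can no longer alias them.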
It's important to understand that interpolation operates on a small fixed neighborhood of source pixels, so it does not help when downscaling. Cubic tends to look nice, yes, but only when UPscaling.
Yeah, if you're going to use interpolation to downscale, it's obviously going to look worse than even the most basic form of downscaling. That's why downscaling uses the transpose of the interpolation kernel; not doing that and being surprised the result doesn't look good is just silly.
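Concretely, in the downscaling direction the triangle kernel gets widened to the output pixel spacing so every source pixel contributes. A 1D sketch (my own illustration of the widened-kernel form, not any particular library's implementation):

```python
import math

def triangle_downscale_1d(src, dst_len):
    # Downscale with a triangle (linear) kernel widened by the scale
    # factor -- the "transposed"/scaled form of bilinear -- instead of
    # point-sampling the 2-tap interpolant.
    scale = len(src) / dst_len  # assumes scale > 1 (downscaling)
    out = []
    for k in range(dst_len):
        center = (k + 0.5) * scale - 0.5  # output center in source coords
        acc = wsum = 0.0
        lo = int(math.floor(center - scale))
        hi = int(math.ceil(center + scale))
        for i in range(max(lo, 0), min(hi + 1, len(src))):
            w = max(0.0, 1.0 - abs(i - center) / scale)  # widened triangle
            acc += w * src[i]
            wsum += w
        out.append(acc / wsum)
    return out

# Alternating stripes come out as smooth gray, not an aliased pattern.
print(triangle_downscale_1d([0, 255] * 32, 8))
```

With the kernel widened like this, the support spans roughly 2x the scale factor in source pixels, so nothing is skipped over.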
Imagemagick should work. It also has quite an extensive documentation: https://legacy.imagemagick.org/Usage/resize/. Though it's a bit hard to know where to start. I'm fairly certain it'll tell you somewhere that interpolation and downscaling use their kernels differently, but I couldn't tell you where.