The attack here seems to be that Husky AI downscales the images it uses to train its model. If it were vulnerable to this attack, the downscaling step would expose a hidden image embedded in the upload, and the model would train on that instead of the image a user actually sees. I think this could be used to trick manual, or even automated, review of the input.
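A minimal sketch of the principle (not Husky AI's actual pipeline, whose resizing algorithm I don't know): if the downscaler is a naive stride-based nearest-neighbor resample, an attacker can plant the hidden image's pixels exactly at the positions the resampler will pick, so the full-size image looks almost unchanged while the downscaled result is entirely attacker-controlled.

```python
import numpy as np

def embed(cover, hidden, factor):
    # Place hidden pixels at exactly the positions a stride-based
    # nearest-neighbor downscale will sample.
    out = cover.copy()
    out[::factor, ::factor] = hidden
    return out

def downscale(img, factor):
    # Naive nearest-neighbor downscale: keep every factor-th pixel.
    return img[::factor, ::factor]

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # what a reviewer sees
hidden = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)    # what the model trains on

attack = embed(cover, hidden, 8)

# Only 1 in 64 pixels can differ, so the upload still looks like the cover...
assert (attack != cover).mean() < 0.02
# ...but the downscaled version is exactly the hidden image.
assert np.array_equal(downscale(attack, 8), hidden)
```

Real resamplers (bilinear, bicubic, area-averaging) sample more than one pixel per output pixel, so a real attack has to solve for pixel values that survive the interpolation weights, but the reviewer-sees-one-thing, model-sees-another property is the same.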