Developing Robust and Lightweight Adversarial Defenders by Enforcing Orthogonality on Attack-Agnostic Denoising Autoencoders

Title: Developing Robust and Lightweight Adversarial Defenders by Enforcing Orthogonality on Attack-Agnostic Denoising Autoencoders
Publication Type: Conference Proceedings
Year of Conference: 2023
Authors: Bifis, A., Psarakis, E., Kosmopoulos, D.
Conference Name: International Conference on Computer Vision, Workshop on Resource Efficient Deep Learning for Computer Vision
Abstract

Adversarial attacks have become a critical threat to the security and reliability of machine learning models. We propose a solution to the problem of defending against adversarial attacks using a deep Denoising Autoencoder (DAE). The proposed DAE is trained to enforce orthogonality between the noise and the range space of the output at each layer of the encoder chain. Furthermore, the DAE's pseudoinverse decoder is designed so that the reconstructed image and the null space of its intermediate representations at each layer of the chain maintain orthogonality as it progresses from the target space to the latent space. The denoising problem is formulated as an equality-constrained optimization problem, which is solved by finding the stationary points of the Lagrangian function. The noisy training data are generated by adding realizations of multiple random noise distributions to pristine data during DAE training, resulting in excellent denoising performance. We compare the performance of our full-weights and tied-weights DAEs, showing that the latter not only has half the complexity of the former but also outperforms it both in denoising and under strong adversarial attacks. To demonstrate the effectiveness of the proposed solution, we evaluate our networks against recent work in the literature on defending against adversarial attacks.
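The abstract names two concrete ingredients: a tied-weights DAE whose decoder reuses the (transposed) encoder weights as a pseudoinverse, and attack-agnostic training that corrupts pristine data with samples from multiple noise distributions. The PyTorch sketch below illustrates both under stated assumptions: the class name `TiedDAE`, the layer sizes, the noise models and scales, and the penalty weight are all illustrative, and the soft per-layer Frobenius penalty is only an approximation of the paper's exact equality constraints, which are solved via the stationary points of a Lagrangian.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn.functional as F

class TiedDAE(torch.nn.Module):
    """Tied-weights DAE: the decoder reuses the transposed encoder weights,
    so an (approximately) orthogonal W has W.t() as its pseudoinverse."""
    def __init__(self, dims=(784, 512, 256)):       # layer sizes are assumptions
        super().__init__()
        self.weights = torch.nn.ParameterList(
            torch.nn.Parameter(torch.empty(dims[i + 1], dims[i]))
            for i in range(len(dims) - 1))
        for W in self.weights:
            torch.nn.init.orthogonal_(W)

    def forward(self, x):
        h = x
        for W in self.weights:                      # encoder chain
            h = torch.tanh(F.linear(h, W))
        decoder = list(reversed(self.weights))      # tied (transposed) decoder
        for i, W in enumerate(decoder):
            h = F.linear(h, W.t())
            if i < len(decoder) - 1:                # keep the last layer linear
                h = torch.tanh(h)
        return h

    def orthogonality_penalty(self):
        # Soft surrogate for the paper's per-layer equality constraints:
        # drive each W toward W W^T = I so that W^T acts as a pseudoinverse.
        return sum(
            ((W @ W.t()) - torch.eye(W.shape[0], device=W.device)).pow(2).sum()
            for W in self.weights)

def add_random_noise(x):
    """Corrupt a batch with a noise model drawn at random, mimicking the
    attack-agnostic multi-distribution corruption described in the abstract."""
    kind = torch.randint(0, 3, (1,)).item()
    if kind == 0:                                   # additive Gaussian
        return x + 0.1 * torch.randn_like(x)
    if kind == 1:                                   # additive uniform
        return x + 0.2 * (torch.rand_like(x) - 0.5)
    mask = torch.rand_like(x) < 0.05                # salt-and-pepper
    return torch.where(mask, torch.rand_like(x).round(), x)

# One illustrative training step on random stand-in data.
model = TiedDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)                             # stand-in for pristine images
recon = model(add_random_noise(x))
loss = F.mse_loss(recon, x) + 1e-3 * model.orthogonality_penalty()
opt.zero_grad()
loss.backward()
opt.step()
```

Tying the decoder to the transposed encoder weights is what halves the parameter count relative to a full-weights DAE; the orthogonality term is what makes that transpose behave like a pseudoinverse, which is the property the paper's constrained formulation enforces exactly.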