Publication
ICASSP 2024
Conference paper

Adversarial Robustness of Convolutional Models Learned in the Frequency Domain

Abstract

This paper presents an extensive comparison of the noise robustness of standard Convolutional Neural Networks (CNNs) trained on image inputs and those trained in the frequency domain. We investigate the robustness of CNNs to small adversarial perturbations in the RGB input space and show that CNNs trained on Discrete Cosine Transform (DCT) inputs exhibit significantly better robustness to both adversarial noise and common spatial transformations than standard CNNs trained on RGB/grayscale inputs. Our results suggest that frequency-domain learning of convolutional models may disentangle the frequencies corresponding to semantic features from those corresponding to adversarial features, resulting in improved adversarial robustness. This research highlights the potential of frequency-domain learning to improve neural network robustness to test-time noise and warrants further investigation in this area.
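To illustrate the kind of preprocessing the abstract refers to, the sketch below converts a grayscale image to block-wise 2-D DCT coefficients, which a frequency-domain CNN would consume in place of raw pixels. This is a minimal, hypothetical example (8×8 blocks, as in JPEG); the paper's exact input pipeline may differ.

```python
import numpy as np
from scipy.fft import dctn


def blockwise_dct(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Map a grayscale image (H, W) to its block-wise 2-D DCT coefficients.

    A standard CNN would be fed `image` directly; a frequency-domain
    CNN would instead be fed the coefficient array returned here.
    """
    h, w = image.shape
    assert h % block == 0 and w % block == 0, "pad image to a block multiple"
    out = np.empty((h, w), dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            # Orthonormal 2-D DCT of each non-overlapping block
            out[i:i + block, j:j + block] = dctn(
                image[i:i + block, j:j + block], norm="ortho")
    return out


# Example: a 32x32 image maps to a same-shape DCT-domain representation
img = np.random.rand(32, 32)
coeffs = blockwise_dct(img)
print(coeffs.shape)  # (32, 32)
```

Because the orthonormal DCT is energy-preserving, the coefficient array carries the same information as the pixels, only reorganized by frequency, which is what allows semantic (typically low-frequency) and adversarial (typically high-frequency) content to separate across channels.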
