"Introducing Diversity In Feature Scatter Adversarial Training Via Synthesis" accepted at ICPR 2022

Congratulations to Saytya on the paper accepted at ICPR 2022!

Introducing Diversity In Feature Scatter Adversarial Training Via Synthesis

Abstract: In an attempt to understand how deep learning models interpret inputs, it has been found that they change their predictions when a carefully optimized, imperceptible noise, termed an adversarial perturbation, is added to the input data. Many researchers are developing methods to counter such effects, but these methods do not generalize well to adversarial test data. Recently, Feature-Scatter adversarial training was proposed to address this problem, but it builds on the traditional adversarial training framework, which cannot generate diverse perturbations.

In this paper, we propose an approach that combines Feature-Scatter adversarial training with a generator-based adversarial training framework to optimally explore the adversarial data manifold, achieving better robust generalization. We perform extensive experiments on a variety of datasets, including CIFAR-10, CIFAR-100, and SVHN. Our framework significantly outperforms state-of-the-art methods against both strong white-box and black-box attacks.
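To make the idea concrete, below is a minimal, illustrative PyTorch-style sketch of how a perturbation generator can be coupled with a feature-scatter-style objective. The generator architecture, the `model.features()` hook, the simplified pairwise feature distance (standing in for the optimal-transport distance used by Feature-Scatter), and all hyperparameters are assumptions made for this example, not the paper's exact method.

```python
# Illustrative sketch only: couples a perturbation generator with a
# feature-scatter-style objective. Module names, the model.features()
# hook, and the simplified feature distance are assumptions made for
# this example, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerturbationGenerator(nn.Module):
    """Hypothetical generator: maps an image plus a random code to a
    bounded perturbed image (the random code is what yields diversity)."""

    def __init__(self, channels: int = 3, eps: float = 8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels * 2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        z = torch.randn_like(x)  # random code -> diverse perturbations
        delta = torch.tanh(self.net(torch.cat([x, z], dim=1)))
        return torch.clamp(x + self.eps * delta, 0.0, 1.0)


def feature_scatter_loss(feat_adv, feat_nat):
    """Simplified stand-in for the feature-scatter distance: mean pairwise
    distance between perturbed and natural feature batches (the actual
    method uses an optimal-transport distance)."""
    return torch.cdist(feat_adv, feat_nat).mean()


def training_step(model, generator, x, y, opt_model, opt_gen):
    """One alternating update: the generator maximizes feature scatter,
    then the classifier is trained on the generated adversarial batch.
    Assumes model.features(x) returns flattened penultimate features."""
    # 1) Update the generator to increase scatter between clean and
    #    perturbed feature batches, producing diverse perturbations.
    feat_nat = model.features(x).detach()
    scatter = feature_scatter_loss(model.features(generator(x)), feat_nat)
    opt_gen.zero_grad()
    (-scatter).backward()
    opt_gen.step()

    # 2) Update the classifier on freshly generated adversarial examples.
    x_adv = generator(x).detach()
    loss = F.cross_entropy(model(x_adv), y)
    opt_model.zero_grad()
    loss.backward()
    opt_model.step()
    return loss.item()
```

The sketch is only meant to convey the alternating generator/classifier training idea; the actual generator and scatter objective in the paper differ from this simplification.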


IDSL Lab News


Highlights - 09 February 2022
The Banff International Research Station will host the *Climate Change Scenarios and Financial Risk* online workshop at the UBC Okanagan campus in Kelowna, BC, from May 1 to May 6, 2022.

Recruitment

Our group is recruiting year-round for postdocs, MASc and PhD students, visiting students and undergraduate students.

All admitted students will receive a stipend.

Please see our opportunities page for more information.