AGS: Attribution Guided Sharpening as a Defense Against Adversarial Attacks

Congratulations to Javier and Philip on getting their paper accepted at IDA 2022!

Javier Perez Tobia and Philip Braun


Abstract: Even though deep learning has allowed for significant advances in the last decade, it is still vulnerable to adversarial attacks: inputs that, despite looking similar to clean data, can force neural networks to make incorrect predictions. Moreover, deep learning models usually act as black boxes, or oracles that do not provide any explanation for their outputs. We propose Attribution Guided Sharpening (AGS), a defense against adversarial attacks that incorporates explainability techniques as a means of making neural networks robust. AGS uses saliency maps generated on a non-robust model to guide Choi and Hall's sharpening method, which denoises input images before passing them to a classifier. We show that AGS can outperform previous defenses on three benchmark datasets (MNIST, CIFAR-10 and CIFAR-100) and achieve state-of-the-art performance against AutoAttack.
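The core idea in the abstract — using per-pixel attributions to decide where to sharpen an input before classification — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the saliency map is assumed to be precomputed and normalized to [0, 1], the blur is a simple box filter, and plain unsharp masking stands in for Choi and Hall's sharpening method.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box blur via shifted sums; a stand-in for a proper smoothing kernel."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def attribution_guided_sharpen(img, saliency, amount=1.0):
    """
    Hypothetical AGS-style preprocessing sketch:
    - `saliency` in [0, 1] would come from an explainability method run on
      a non-robust model (assumed precomputed here);
    - unsharp masking is used in place of Choi and Hall's sharpening method.
    Pixels with high attribution keep sharpened detail; low-attribution
    pixels are smoothed, which is where adversarial noise gets suppressed.
    """
    blurred = box_blur(img)
    sharpened = img + amount * (img - blurred)  # unsharp mask
    weight = np.clip(saliency, 0.0, 1.0)
    return weight * sharpened + (1.0 - weight) * blurred
```

The denoised image would then be fed to the classifier in place of the raw input; the attribution map steers the trade-off between smoothing (removing perturbations) and sharpening (preserving class-relevant detail).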

IDSL Lab News

Highlights - 09 February 2022
The Banff International Research Station will host the *Climate Change Scenarios and Financial Risk* online workshop at the UBC Okanagan campus in Kelowna, BC, from May 1 to May 6, 2022.


Our group is recruiting year-round for postdocs, MASc and PhD students, visiting students and undergraduate students.

All admitted students will receive a stipend.

Please see our opportunities page for more information.