Introduction
Adversarial attacks on neural networks (NNs) have been widely explored, and most prior studies create perturbations bounded under a distance metric. In particular, such work has focused on \(\ell_{p}\)-norm perturbations (i.e., \(\ell_{1}\), \(\ell_{2}\), or \(\ell_{\infty}\)) and used gradient-based optimization to generate adversarial examples efficiently. Adversarial perturbations, however, can extend beyond \(\ell_{p}\)-norm bounds.
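For concreteness, a gradient-based \(\ell_\infty\) attack typically takes iterative sign-gradient steps and projects back into the \(\epsilon\)-ball. Below is a minimal PGD sketch in PyTorch; the budget eps, step size alpha, and step count are illustrative defaults, not the exact settings used in our experiments.

import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient ascent on the loss within an l_inf eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascend along the gradient sign
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)                     # keep a valid image range
    return x_adv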
We combine the \(\ell_\infty\)-norm with semantic perturbations (i.e., hue, saturation, rotation, brightness, and contrast) and propose a novel approach, the composite adversarial attack (CAA), capable of generating unified adversarial examples. The main differences between CAA and previously proposed perturbations are a) that CAA incorporates several threat models simultaneously, and b) that CAA's adversarial examples are semantically similar and/or natural-looking, yet register large differences under \(\ell_{p}\)-norm measures.
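Point b) is easy to verify: a hue shift leaves an image semantically intact yet moves it far outside a typical \(\ell_\infty\) ball. The following sketch uses Kornia (the same package our attacks build on); the random tensor is only a stand-in for a real image.

import torch
import kornia.enhance as E

x = torch.rand(1, 3, 224, 224)         # placeholder image batch in [0, 1]
x_hue = E.adjust_hue(x, 1.0)           # 1-radian hue shift; object identity unchanged
print((x_hue - x).abs().max().item())  # typically far above a budget like 8/255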

To further demonstrate the proposed idea, familiarize other researchers with the concept of composite adversarial robustness, and ultimately help create more trustworthy AI, we developed this browser-based composite perturbation generation demo along with an adversarial robustness leaderboard, CARBEN (Composite Adversarial Robustness Benchmark). CARBEN also features interactive sections that let users configure attack-level parameters and rapidly evaluate model predictions.
Composite Perturbations with Custom Order
Try changing the attack order by dragging the perturbation blocks with your mouse, then click Generate to see the perturbed images. We provide several samples and the inference results from an \(\ell_{\infty}\)-robust model. Note that each picture below a perturbation block contains all perturbations applied before it; a code sketch of this cumulative composition follows the list.
- Perturbation blocks (applied cumulatively): Original, Hue, Saturation, Rotation, Brightness, Contrast, \(\ell_{\infty}\)
- Predictions of the \(\ell_{\infty}\)-robust model on the perturbed samples: Warplane (82%), Warplane (79%), Warplane (54%), Wing (40%), Wing (40%), Wing (56%)
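As a rough sketch of what the demo does under the hood, the enabled perturbations can be applied cumulatively in any user-chosen order. The levels below are hypothetical fixed values; in the demo (and in the attack itself) each level is tunable.

import torch
import kornia.enhance as E
import kornia.geometry.transform as T

x = torch.rand(1, 3, 224, 224)  # placeholder batch of one image in [0, 1]

# Hypothetical fixed levels for illustration only.
PERTURB = {
    "hue":        lambda t: E.adjust_hue(t, 0.5),               # radians
    "saturation": lambda t: E.adjust_saturation(t, 1.5),
    "rotation":   lambda t: T.rotate(t, torch.tensor([10.0])),  # degrees
    "brightness": lambda t: E.adjust_brightness(t, 0.2),
    "contrast":   lambda t: E.adjust_contrast(t, 1.3),
}

def apply_in_order(t, order):
    """Apply the selected perturbations cumulatively, in the given order."""
    for name in order:
        t = PERTURB[name](t)
    return t

x_adv = apply_in_order(x, ["hue", "saturation", "rotation", "brightness", "contrast"])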
Craft Your Desired Composite Perturbations
1. Select a model
- Trained and tested on ImageNet
- Architecture: ResNet-50
- Clean Accuracy: 76.13%
- Robust Accuracy (AutoAttack, \(\ell_\infty\), \(\epsilon = 4/255\)): 0.0%
- Robust Accuracy (Composite Semantic Attack): 20.6%
2. Select an image
3. Specify Perturbation Level (Real-time Rendering)
Here, we use CSS and the JavaScript Canvas API to render the perturbed images in the browser. In the actual attacks, we use the Kornia package to implement the semantic perturbations.
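Kornia's transforms are differentiable, which is what makes gradient-based semantic attacks possible: the transform parameter itself can be optimized to maximize the classifier's loss. Below is a minimal sketch for a single hue parameter; it is not CARBEN's actual implementation (which lives in composite_adv), and model, x, y, the learning rate, and step count are placeholders.

import torch
import torch.nn.functional as F
import kornia.enhance as E

# model, x, y: placeholder classifier, image batch, and labels (not defined here)
theta = torch.zeros(1, requires_grad=True)  # hue angle in radians
opt = torch.optim.SGD([theta], lr=0.05)
for _ in range(10):
    # Minimizing the negative loss maximizes the classification loss.
    loss = -F.cross_entropy(model(E.adjust_hue(x, theta)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        theta.clamp_(-3.1416, 3.1416)       # keep the hue angle within [-pi, pi]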
The demo displays the Ground Truth and the Model Prediction for the rendered image.
4. Robustness Statistics
Evaluate model robustness over the full test set. The following chart shows the models' robust accuracy against semantic attacks (without \(\ell_\infty\)). Currently, we support two datasets: CIFAR-10 and ImageNet.
Use CAA to Evaluate Your Own Models
Quick start by running the following code!
# !pip install git+https://github.com/IBM/composite-adv.git
from composite_adv.utilities import make_dataloader, make_model, EvalModel, robustness_evaluate

# Load the dataset
data_loader = make_dataloader('./data/', 'cifar10', batch_size=256)

# Load a model and wrap it for evaluation
base_model = make_model('resnet50', 'cifar10', checkpoint_path="PATH_OF_CHECKPOINT")
model = EvalModel(base_model, input_normalized=False)

# Evaluate the composite robustness of the model using the full composite attack.
# Attack indices: 0=hue, 1=saturation, 2=rotation, 3=brightness, 4=contrast, 5=l_inf.
from composite_adv.attacks import CompositeAttack
composite_attack = CompositeAttack(model,
                                   enabled_attack=(0, 1, 2, 3, 4, 5),
                                   order_schedule="scheduled")
accuracy, attack_success_rate = robustness_evaluate(model, composite_attack, data_loader)
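For instance, to reproduce the semantic-only setting from the Robustness Statistics section above, you could drop the \(\ell_\infty\) component; this assumes the index mapping noted in the comment above.

# Semantic attacks only (hue, saturation, rotation, brightness, contrast)
semantic_attack = CompositeAttack(model,
                                  enabled_attack=(0, 1, 2, 3, 4),
                                  order_schedule="scheduled")
accuracy, attack_success_rate = robustness_evaluate(model, semantic_attack, data_loader)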
Benchmarks
In this benchmark, we evaluate robustness against \(\ell_\infty\) attacks (AutoAttack) and CAA (full attacks). The \(\ell_\infty\) budget \(\epsilon\) was set to 8/255 for CIFAR-10 and 4/255 for ImageNet.
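For the \(\ell_\infty\) numbers, a typical AutoAttack evaluation looks like the following sketch, using the standalone autoattack package; model and data_loader are the placeholders from the quick-start snippet above.

from autoattack import AutoAttack

# Standard AutoAttack at the CIFAR-10 budget; use eps=4/255 for ImageNet.
adversary = AutoAttack(model, norm='Linf', eps=8/255, version='standard')
x, y = next(iter(data_loader))  # evaluate one placeholder batch
x_adv = adversary.run_standard_evaluation(x, y, bs=256)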
CIFAR-10
ImageNet
Citations
If you find this webpage helpful for your research, please cite our papers as follows:
@inproceedings{hsiung2022caa,
  title={{Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations}},
  author={Lei Hsiung and Yun-Yun Tsai and Pin-Yu Chen and Tsung-Yi Ho},
  booktitle={{IEEE/CVF} Conference on Computer Vision and Pattern Recognition, {CVPR}},
  publisher={{IEEE}},
  year={2023},
  month={June}
}

@inproceedings{hsiung2022carben,
  title={{CARBEN: Composite Adversarial Robustness Benchmark}},
  author={Lei Hsiung and Yun-Yun Tsai and Pin-Yu Chen and Tsung-Yi Ho},
  booktitle={Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, {IJCAI-22}},
  publisher={International Joint Conferences on Artificial Intelligence Organization},
  year={2022},
  month={July}
}