This is a 3-minute summary of the paper "Adversarial Training and Robustness for Multiple Perturbations", which appears as a spotlight at NeurIPS 2019.

Defenses against adversarial examples, such as adversarial training, are typically tailored to a single perturbation type (e.g., small $\ell_\infty$-noise). For other perturbations, these defenses offer no guarantees and, at times, even increase the model's vulnerability. Despite the recent advances in adversarial-training-based defenses, deep neural networks thus remain vulnerable to adversarial attacks outside the perturbation type they are trained to be robust against, and robustness to multiple perturbations is still fairly under-studied. Moreover, using adversarial training to defend against multiple types of perturbation requires expensive adversarial examples from different perturbation types at each training step.

Recent works have proposed defenses to improve the robustness of a single model against the union of multiple perturbation types. "Adversarial Robustness Against the Union of Multiple Perturbation Models" argues that achieving robustness to multiple perturbations is an essential step towards the eventual objective of universal robustness, further motivates research in this area, and trains robust classifiers against multiple perturbations with negligible additional training cost over standard adversarial training. Composite adversarial training (CAT) has likewise been proposed: a training method that flexibly integrates and optimizes multiple adversarial losses, leading to significant robustness improvement with respect to individual perturbations as well as their "compositions".

As background, adversarial training improves model robustness by training on adversarial examples generated by FGSM and PGD (Goodfellow et al., 2015; Madry et al., 2018), and Tramèr et al. (2018) proposed ensemble adversarial training on adversarial examples generated from a number of pretrained models. Adversarial training techniques for single-modal tasks on images and text have been shown to make a model more robust and generalizable. This sort of training can be done by creating human-understandable adversarial examples (as in Szegedy et al.) or by adding adversarial perturbations to the embedding space (as in FreeLB); a small sketch of the embedding-space variant is given below.
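As a concrete illustration of the embedding-space variant mentioned above, here is a minimal sketch assuming PyTorch. It takes a single normalized ascent step on a perturbation added to the token embeddings and then trains on the perturbed batch; this is illustrative only and is not FreeLB's exact multi-step algorithm. The model class, dimensions, and step size are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleTextClassifier(nn.Module):
    """A tiny stand-in text classifier: embed, mean-pool, classify."""
    def __init__(self, vocab=1000, dim=64, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, classes)

    def forward(self, tokens, delta=None):
        e = self.emb(tokens)              # (batch, seq, dim)
        if delta is not None:
            e = e + delta                 # perturb the embeddings, not the discrete tokens
        return self.head(e.mean(dim=1))

model = SimpleTextClassifier()
tokens = torch.randint(0, 1000, (8, 16))
labels = torch.randint(0, 2, (8,))

# One ascent step on an embedding-space perturbation, then train on the perturbed batch.
delta = torch.zeros(8, 16, 64, requires_grad=True)
loss = F.cross_entropy(model(tokens, delta), labels)
grad, = torch.autograd.grad(loss, delta)
delta = (0.01 * grad / (grad.norm() + 1e-12)).detach()   # small normalized ascent step

adv_loss = F.cross_entropy(model(tokens, delta), labels)
adv_loss.backward()   # gradients w.r.t. model parameters, ready for an optimizer step
```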
Adversarial Training and Robustness for Multiple Perturbations
Florian Tramèr (Stanford University), Dan Boneh (Stanford University)

Abstract
Defenses against adversarial examples, such as adversarial training, are typically tailored to a single perturbation type (e.g., small $\ell_\infty$-noise). For other perturbations, these defenses offer no guarantees and, at times, even increase the model's vulnerability. Our aim is to understand the reasons underlying this robustness trade-off, and to train models that are simultaneously robust to multiple perturbation types. We prove that a trade-off in robustness to different types of $\ell_p$-bounded and spatial perturbations must exist in a natural and simple statistical setting. We corroborate our formal analysis by demonstrating similar robustness trade-offs on MNIST and CIFAR10. We propose new multi-perturbation adversarial training schemes, as well as an efficient attack for the $\ell_1$-norm, and use these to show that models trained against multiple attacks fail to achieve robustness competitive with that of models trained on each attack individually. In particular, we find that adversarial training with first-order $\ell_\infty$, $\ell_1$ and $\ell_2$ attacks on MNIST achieves merely $50\%$ robust accuracy, partly because of gradient-masking. Finally, we propose affine attacks that linearly interpolate between perturbation types and further degrade the accuracy of adversarially trained models.

We performed a fairly thorough evaluation of the models we trained using a wide range of attacks.

[Download notes as jupyter notebook](adversarial_training.tar.gz)

## From adversarial examples to training robust models

In the previous chapter, we focused on methods for solving the inner maximization problem over perturbations; that is, on finding the solution to the problem

$$ \DeclareMathOperator*{\maximize}{maximize} \maximize_{\|\delta\| \leq \epsilon} \ell(h_\theta(x + \delta), y). $$

Adversarial training (Szegedy et al., 2014; Madry et al., 2017) turns this inner maximization into a training procedure (a minimal code sketch follows the list):

1. Choose a set of perturbations, e.g., noise of small $\ell_\infty$ norm.
2. For each training input, find a worst-case perturbation from that set for the current model.
3. Train the model on the resulting adversarial examples.
4. Repeat until convergence.
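Below is a minimal sketch of this loop, assuming PyTorch, for an $\ell_\infty$ perturbation set: PGD approximates the inner maximization above, and the outer step trains on the resulting adversarial examples. Function names, the toy model, and all hyperparameters are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Approximate argmax_{||delta||_inf <= eps} loss(model(x + delta), y) with PGD."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        # Ascend along the gradient sign, then project back onto the ell_inf ball.
        # (A real implementation would also clamp x + delta to the valid input range.)
        delta.data = (delta.data + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_step(model, opt, x, y, eps=0.3):
    """One iteration of steps 2-3 above: attack the current model, then train on the result."""
    delta = pgd_linf(model, x, y, eps=eps)
    opt.zero_grad()                      # discard gradients accumulated while running the attack
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    # Tiny toy setup standing in for an MNIST classifier and a single training batch.
    model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
    print(adversarial_training_step(model, opt, x, y))
```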
Introduction.

Deep learning is progressing at an astounding rate, with a wide range of real-world applications such as computer vision, speech recognition and natural language processing. Despite these successful applications, the emergence of adversarial examples, images containing perturbations imperceptible to humans but misleading to DNNs, poses potential security threats. Although many notions of robustness and reliability exist, one particular topic in this area that has raised a great deal of interest in recent years is that of adversarial robustness: can we develop models that remain correct under adversarially chosen perturbations of their inputs? With the increasing usage of deep learning algorithms for complex and popular tasks such as object recognition and face recognition, researchers are also attempting to understand the limitations of these algorithms.

Defenses against a single perturbation type typically rely on bounding the Lipschitz constant [9, 20, 39] or on adversarial training [19, 26]. Nonetheless, min-max optimization beyond the purpose of adversarial training (AT) has not been rigorously explored in the research of adversarial attack and defense, and a single attack algorithm can be insufficient to explore the space of perturbations. Related work in this direction includes "Adversarial Interpolation Training: A Simple Approach for Improving Model Robustness" and "Adversarial Robustness Against the Union of Multiple Perturbation Models" (published 2019-12-20).

We prove that a trade-off in robustness to different types of $\ell_p$-bounded and spatial perturbations must exist in a natural and simple statistical setting. Building upon new multi-perturbation adversarial training schemes, and a novel efficient attack for finding $\ell_1$-bounded adversarial examples, we show that no model trained against multiple attacks achieves robustness competitive with that of models trained on each attack individually. A minimal sketch of one such multi-perturbation training scheme follows.
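Here is a minimal sketch, assuming PyTorch, of one natural multi-perturbation training scheme in the spirit of "train on the strongest attack": each attack callable (for instance the PGD sketch above, plus analogous $\ell_1$ or $\ell_2$ attacks) proposes a perturbation, and the model is trained on whichever one yields the highest loss. For simplicity this picks the worst attack per batch; a per-example variant would select the worst perturbation for each input individually. The function name and interface are illustrative, not the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def multi_perturbation_step(model, opt, x, y, attacks):
    """One training step against a union of perturbation types.

    attacks: list of callables (model, x, y) -> delta, one per perturbation type.
    """
    # Inner maximization over the union: evaluate every attack, keep the strongest.
    worst_loss, worst_delta = None, None
    for attack in attacks:
        delta = attack(model, x, y)
        with torch.no_grad():
            loss = F.cross_entropy(model(x + delta), y)
        if worst_loss is None or loss > worst_loss:
            worst_loss, worst_delta = loss, delta
    # Outer minimization: a standard training step on the worst-case examples.
    opt.zero_grad()
    loss = F.cross_entropy(model(x + worst_delta), y)
    loss.backward()
    opt.step()
    return loss.item()
```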
However, training on multiple perturbations simultaneously significantly increases the computational overhead during training.

On MNIST (a simple dataset, centered and scaled, where non-trivial robustness is achievable), adversarial training has produced models with "extreme" levels of robustness, e.g., robust to $\ell_1$ noise > 30 or $\ell_\infty$ noise > 0.3 (Jacobsen et al., "Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness"). [Figure: a natural MNIST digit $x \in [0, 1]^{784}$ alongside its $\ell_1$-perturbed and $\ell_\infty$-perturbed versions.]

From Reviewer 1 of "Adversarial Training and Robustness for Multiple Perturbations": "The theoretical contribution of this paper (Section 2) is solid and neatly extends the work of Tsipras et al."

3.2. Joint Robustness of Multiple Classifiers
Here, we take an orthogonal approach to the previous studies and seek to increase the lower bound of Equation 2 by exploring the joint robustness of multiple classifiers. Let $F$ be an ensemble of $k$ classifiers, $F = \{f_i\}_{i=0}^{k-1}$.

[14] formulates the defense of model robustness as a min-max optimization problem, in which the adversary is constructed to achieve high loss values. A complementary view is robust optimization, which guarantees performance under adversarial input perturbations: by considering a Lagrangian penalty formulation of perturbation of the underlying data distribution in a Wasserstein ball, one obtains a training procedure that augments model parameter updates with worst-case perturbations of training data. A minimal sketch of such a penalty-based step is given below.
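For concreteness, here is a minimal sketch, assuming PyTorch, of such a penalty-based step: the inner maximization trades the loss off against $\gamma \|\delta\|_2^2$ instead of projecting onto a fixed norm ball, and the outer step updates the model on the resulting perturbed batch. The function names, $\gamma$, step size and step count are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn.functional as F

def lagrangian_perturbation(model, x, y, gamma=1.0, alpha=0.1, steps=15):
    """Approximately maximize  loss(model(x + delta), y) - gamma * ||delta||_2^2  over delta."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        objective = F.cross_entropy(model(x + delta), y) - gamma * delta.pow(2).sum()
        grad, = torch.autograd.grad(objective, delta)
        delta = (delta + alpha * grad).detach().requires_grad_(True)  # unconstrained ascent step
    return delta.detach()

def penalty_robust_step(model, opt, x, y):
    """Update the model on the worst-case, penalty-regularized perturbation of the batch."""
    delta = lagrangian_perturbation(model, x, y)
    opt.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    opt.step()
    return loss.item()
```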
In [11], the model's robustness is enhanced by using adversarial training on large-scale models and datasets.

We study statistical properties of adversarial robustness in a natural statistical model introduced in [tsipras2019robustness], which exhibits many phenomena observed on real data, such as trade-offs between robustness and accuracy [tsipras2019robustness] or a higher sample complexity for robust generalization [schott2018towards]. This model also proves useful in analyzing and understanding …
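For reference, the statistical model of [tsipras2019robustness] can be sketched as follows (reproduced from memory, so treat the details as an approximation; in that paper $p \geq 0.5$ and $\eta$ is on the order of $1/\sqrt{d}$). A label $y$ is drawn uniformly, $x_1$ is a single feature strongly correlated with $y$, and the remaining $d$ features are each only weakly correlated with $y$:

$$
y \sim \mathrm{Uniform}\{-1, +1\}, \qquad
x_1 = \begin{cases} +y & \text{w.p. } p \\ -y & \text{w.p. } 1 - p \end{cases}, \qquad
x_2, \dots, x_{d+1} \overset{\text{i.i.d.}}{\sim} \mathcal{N}(\eta y, 1).
$$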
Our results question the viability and computational scalability of extending adversarial robustness, and adversarial training, to multiple perturbation types.

3 Adversarial Setting
The goal of an adversary is to "fool" the target model by adding human-imperceptible perturbations to its input.
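In this multi-perturbation setting, a natural success metric is accuracy against the union of perturbation types: an input only counts as robust if the model classifies it correctly under every attack. Below is a small illustrative sketch, assuming PyTorch; the attack callables (e.g., the PGD sketches earlier in this document) and the data loader are assumptions.

```python
import torch

@torch.no_grad()
def _correct(model, x, y):
    return model(x).argmax(dim=1) == y

def union_robust_accuracy(model, loader, attacks):
    """Fraction of inputs classified correctly under *all* given attacks (and unperturbed)."""
    robust, total = 0, 0
    for x, y in loader:
        ok = _correct(model, x, y)            # correct on the clean input...
        for attack in attacks:
            delta = attack(model, x, y)       # ...and under every perturbation type
            ok &= _correct(model, x + delta, y)
        robust += ok.sum().item()
        total += y.numel()
    return robust / total
```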