Effect of Boosting on Adversarial Robustness
In this paper we explore the relationship between boosting and neural networks. We show that our adaptation of ADABOOST.MM for neural networks yields a consistent increase in accuracy in the non-adversarial setting. This provides a way to increase the accuracy of any model without modifying the model itself, making the technique simple to apply. In addition, we attempt to use these techniques to improve adversarial robustness, that is, a model's performance while under adversarial attack. While adding weak learners to the ensemble does not produce a large increase in adversarial accuracy, it does increase accuracy on non-adversarial examples. Because an adversarially trained model's accuracy on non-adversarial examples matters greatly in real-world deployment, we present a simple way to increase that accuracy.
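The core idea of combining weak learners can be sketched as a weighted vote over their predictions. The function and variable names below are illustrative, not the paper's implementation; in particular, ADABOOST.MM additionally reweights training examples via a cost matrix when fitting each weak learner, which is omitted in this minimal sketch.

```python
import numpy as np

def ensemble_predict(weak_preds, alphas, n_classes):
    """Combine weak-learner predictions by weighted majority vote.

    weak_preds: (n_learners, n_samples) array of predicted class labels.
    alphas:     (n_learners,) array of per-learner weights.
    Returns the class with the highest total weighted vote per sample.
    """
    n_samples = weak_preds.shape[1]
    votes = np.zeros((n_samples, n_classes))
    for preds, alpha in zip(weak_preds, alphas):
        # Each learner adds its weight to the class it predicted.
        votes[np.arange(n_samples), preds] += alpha
    return votes.argmax(axis=1)

# Toy example: three weak learners, four samples, three classes.
preds = np.array([[0, 1, 2, 1],
                  [0, 1, 1, 1],
                  [2, 1, 2, 0]])
alphas = np.array([0.5, 0.3, 0.2])
print(ensemble_predict(preds, alphas, n_classes=3))  # → [0 1 2 1]
```

Because the combination happens purely at prediction time, the individual networks need no architectural changes, which is what makes the approach easy to apply to an existing model.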