Show simple item record

dc.contributor.advisor: Abernethy, Jacob
dc.contributor.author: Kareer, Simarpreet
dc.date.accessioned: 2021-06-30T17:37:47Z
dc.date.available: 2021-06-30T17:37:47Z
dc.date.created: 2021-05
dc.date.submitted: May 2021
dc.identifier.uri: http://hdl.handle.net/1853/64873
dc.description.abstract: In this paper we explore the relationship between boosting and neural networks. Our adaptation of ADABOOST.MM for neural networks yields a consistent increase in accuracy in the non-adversarial setting, providing a way to improve the accuracy of any model without modifying the model itself. In addition, we attempt to use these techniques to improve adversarial robustness, that is, a model's performance while under an adversarial attack. Although adding weak learners does not produce a large increase in the ensemble's adversarial accuracy, it does increase the ensemble's accuracy on non-adversarial examples. Since the accuracy of an adversarially robust model on non-adversarial examples is very important in real-world use, we present a simple way to increase that accuracy.
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Georgia Institute of Technology
dc.subject: Adversarial Robustness
dc.subject: Machine Learning
dc.subject: Deep Learning
dc.subject: Boosting
dc.subject: Ensemble
dc.subject: Computer Science
dc.title: Effect of Boosting on Adversarial Robustness
dc.type: Undergraduate Research Option Thesis
dc.description.degree: Undergraduate
dc.contributor.department: Computer Science
thesis.degree.level: Undergraduate
dc.contributor.committeeMember: Muthukumar, Vidya
dc.date.updated: 2021-06-30T17:37:48Z
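
The abstract describes building a boosted ensemble of neural-network weak learners. As a rough illustration only, and not the thesis's actual ADABOOST.MM adaptation, the sketch below runs a generic SAMME-style boosting loop over small scikit-learn MLP classifiers; the dataset (sklearn digits), the reweighting rule, the five boosting rounds, and the network size are all assumptions chosen just to make the sketch self-contained and runnable.

```python
# Illustrative sketch only: a SAMME-style boosted ensemble of small neural networks.
# This is NOT the thesis's ADABOOST.MM adaptation; dataset, hyperparameters, and the
# reweighting rule are hypothetical stand-ins for the general boosting idea.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
n, K = len(y_tr), len(np.unique(y_tr))

rng = np.random.default_rng(0)
w = np.full(n, 1.0 / n)            # per-example boosting weights
learners, alphas = [], []

for t in range(5):                 # five weak learners
    # MLPClassifier has no sample_weight argument, so emulate weighting by resampling.
    idx = rng.choice(n, size=n, p=w)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=200, random_state=t)
    clf.fit(X_tr[idx], y_tr[idx])

    pred = clf.predict(X_tr)
    err = np.clip(np.sum(w * (pred != y_tr)), 1e-10, 1 - 1e-10)
    alpha = np.log((1 - err) / err) + np.log(K - 1)   # SAMME learner weight
    if alpha <= 0:
        break
    w *= np.exp(alpha * (pred != y_tr))               # upweight misclassified examples
    w /= w.sum()
    learners.append(clf)
    alphas.append(alpha)

# Combine the weak learners by a weighted vote.
votes = np.zeros((len(X_te), K))
for clf, a in zip(learners, alphas):
    votes[np.arange(len(X_te)), clf.predict(X_te)] += a
print("ensemble accuracy:", np.mean(votes.argmax(axis=1) == y_te))
```

The same loop structure applies to any base model: only the weak-learner training step changes, which is what makes a boosted ensemble an easy add-on without modifying the underlying network.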

