Understanding, Fortifying and Democratizing AI Security
As we move steadily toward an AI-powered future that until recently could only be imagined in fiction, a formidable threat is emerging that endangers the widespread adoption of AI in our everyday lives. A growing body of adversarial machine learning research has revealed that deep neural networks, the workhorse of modern AI applications, are extremely vulnerable to adversarial examples: malicious inputs crafted by an attacker to fool a network into making incorrect predictions. For people to have confidence in AI applications, there is an urgent need not only to develop strong, practical defenses for real-world AI systems, but also to enable people to interpret AI vulnerabilities and understand how and why adversarial attacks and defenses work. It is equally critical that AI security technologies be brought to the masses, and that AI security research be as accessible and pervasive as AI itself; after all, AI impacts people from all walks of life.

This dissertation addresses these fundamental challenges by creating holistic interpretation techniques for understanding attacks and defenses, developing effective and principled defenses that protect AI across input modalities, and building tools that enable scalable, interactive experimentation with AI security and adversarial ML research. Its vision is to enhance trust in AI by making AI security more accessible and adversarial ML education more equitable, pursued through three complementary research thrusts:

(1) Exposing AI Vulnerabilities through Visualization & Interpretable Representations. We develop intuitive interpretation techniques for deciphering adversarial attacks.

(2) Mitigating Adversarial Examples Across Modalities & Tasks. We develop robust defenses that generalize across diverse AI tasks and input modalities.

(3) Democratizing AI Security Research & Pedagogy with Scalable Interactive Experimentation. We enable researchers, practitioners, and students to perform in-depth security testing of AI models through interactive experimentation.

Our work has made a significant impact on industry and society: our research has produced novel defenses that have been transferred to industry; our interactive visualization systems have expanded intuitive understanding of AI vulnerabilities; and our scalable AI security framework and research tools, now available to thousands of students, are transforming AI education at scale.
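To make the adversarial-example mechanism concrete, the following is a minimal illustrative sketch (not drawn from this dissertation's methods) of the widely known Fast Gradient Sign Method, applied to a toy linear logistic classifier in NumPy. The model, weights, and input here are invented for illustration: a tiny signed perturbation of the input, aligned with the gradient of the loss, is enough to flip the model's prediction.

```python
import numpy as np

def sigmoid(z):
    # Standard logistic function.
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, eps):
    """FGSM on a linear logistic model with loss L = log(1 + exp(-y * w.x)).

    The gradient of L with respect to the input x is -y * sigmoid(-y * w.x) * w;
    the attack steps the input by eps in the sign direction of that gradient.
    """
    grad_x = -y * sigmoid(-y * np.dot(w, x)) * w
    return x + eps * np.sign(grad_x)

# Hypothetical toy model and a correctly classified input with label y = +1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.6, -0.4, 0.2])   # w.x = 1.5 > 0, so the model predicts +1
y = 1.0

x_adv = fgsm_attack(x, y, w, eps=0.7)
# The perturbed input looks similar to x, but w.x_adv = -0.95 < 0:
# the prediction flips to -1 even though the true label is still +1.
print(np.dot(w, x), np.dot(w, x_adv))
```

Deep networks are far more complex than this toy classifier, but the same principle (small, gradient-aligned input perturbations causing large prediction changes) underlies the vulnerabilities studied in this work.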