
    Interactive Scalable Interfaces for Machine Learning Interpretability

    HOHMAN-DISSERTATION-2020.pdf (26.46Mb)
    Date
    2020-12-01
    Author
    Hohman, Frederick
    Abstract
    Data-driven paradigms now solve the world's hardest problems by automatically learning from data. Unfortunately, what is learned is often unknown both to the people who train the models and to the people they impact. This has led to a rallying cry for machine learning interpretability. But how do we enable interpretability? How do we scale up explanations for modern, complex models? And how can we best communicate them to people? Since machine learning now impacts people's daily lives, we answer these questions by taking a human-centered perspective: designing and developing interactive interfaces that enable interpretability at scale and for everyone. This thesis focuses on:

    (1) Enabling machine learning interpretability: User research with practitioners guides the creation of our novel operationalization for interpretability, which helps tool builders design interactive systems for model and prediction explanations. We develop two such visualization systems, Gamut and TeleGam, which we deploy at Microsoft Research as a design probe to investigate the emerging practice of interpreting models.

    (2) Scaling deep learning interpretability: Our first-of-its-kind Interrogative Survey reveals critical yet understudied areas of deep learning interpretability research, such as the lack of higher-level explanations for neural networks. Through Summit, an interactive visualization system, we present the first scalable graph representation that summarizes and visualizes what features deep learning models learn and how those features interact to make predictions (e.g., InceptionNet trained on ImageNet with 1.2M+ images).

    (3) Communicating interpretability with interactive articles: We use interactive articles, a new medium on the web, to teach people about machine learning's capabilities and limitations, while developing a new interactive publishing initiative called the Parametric Press.

    From our success publishing interactive content at scale, we generalize and detail the affordances of interactive articles by connecting techniques used in practice with the theories and empirical evaluations put forth by diverse research disciplines. This thesis contributes to information visualization, machine learning, and, more importantly, their intersection, including open-source interactive interfaces, scalable algorithms, and new, accessible communication paradigms. Our work is making significant impact in industry and society: our visualizations have been deployed and demoed at Microsoft and built into widely used interpretability toolkits, our interactive articles have been read by 250,000+ people, and our interpretability research is supported by NASA.
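    To give a flavor of the kind of summarization point (2) describes, the sketch below aggregates per-image channel activations for a single class into the set of channels that are frequently important for that class — the sort of nodes a class-level summary graph could be built from. This is a minimal, hypothetical NumPy sketch, not Summit's actual implementation; the array shapes, the top-3 cutoff, and the 75th-percentile threshold are all illustrative assumptions.

```python
import numpy as np

# Toy stand-in for per-image channel activations: rows are images
# (all belonging to one class), columns are channels in one layer.
rng = np.random.default_rng(0)
acts = rng.random((100, 16))  # hypothetical: 100 images, 16 channels

# Step 1: for each image, record which channels fire most strongly,
# then count how often each channel appears among the top activations.
top_channels = np.argsort(acts, axis=1)[:, -3:]  # top-3 channels per image
counts = np.bincount(top_channels.ravel(), minlength=acts.shape[1])

# Step 2: keep only channels that are frequently important across the
# class's images -- candidate nodes for a class-level summary graph.
keep = np.flatnonzero(counts >= np.quantile(counts, 0.75))
print(sorted(keep.tolist()))
```

    Scaling this per class and linking kept channels across adjacent layers (by how often they co-activate) would yield a graph summary in the spirit of the one the abstract describes.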
    URI
    http://hdl.handle.net/1853/64147
    Collections
    • College of Computing Theses and Dissertations [1191]
    • Georgia Tech Theses and Dissertations [23877]
    • School of Computational Science and Engineering Theses and Dissertations [100]

    Related items

    Showing items related by title, author, creator and subject.

    • USING A HANDS-ON ROBOTICS PROJECT TO AFFECT SKILL DEVELOPMENT IN A CONTROL ANALYSIS COURSE 

      Inghilleri, Niccolo (Georgia Institute of Technology, 2021-05-05)
      This study aims to assess the impact on skill development of a hands-on experimentation and learning device within the undergraduate aerospace control analysis curriculum at Georgia Institute of Technology. The Transportable ...
    • On sparse representations and new meta-learning paradigms for representation learning 

      Mehta, Nishant A. (Georgia Institute of Technology, 2013-05-15)
      Given the "right" representation, learning is easy. This thesis studies representation learning and meta-learning, with a special focus on sparse representations. Meta-learning is fundamental to machine learning, and it ...
    • New insights on the power of active learning 

      Berlind, Christopher (Georgia Institute of Technology, 2015-07-22)
      Traditional supervised machine learning algorithms are expected to have access to a large corpus of labeled examples, but the massive amount of data available in the modern world has made unlabeled data much easier to ...
