    Protecting Intellectual Property in Additive Manufacturing Systems Against Optical Side-Channel Attacks

Files
liang.mp4 (117.5 MB)
liang_videostream.html (1.096 KB)
transcript.txt (43.58 KB)
thumbnail.jpg (52.55 KB)
    Date
    2022-04-08
    Author
    Liang, Sizhuang
    Abstract
Additive Manufacturing (AM), also known as 3D printing, is gaining popularity in industry sectors such as aerospace, automotive, medicine, and construction. As the market value of the AM industry grows, so does the potential risk of cyberattacks on AM systems. One of the high-value assets in an AM system is its intellectual property, which is essentially the blueprint of a manufacturing process. In this lecture, we present an optical side-channel attack that uses deep learning to extract intellectual property from AM systems. We found that a deep neural network can successfully recover the print path of an arbitrary printing process. With data augmentation, the network can tolerate a certain level of variation in the position and angle of the camera as well as in the lighting conditions. The network can also interpolate intelligently, accurately recovering the coordinates of an image not seen in the training dataset. To defend against the optical side-channel attack, we propose using an optical projector to inject carefully crafted optical noise onto the printing area. We found that existing noise generation algorithms can effortlessly defeat a naive attacker who is unaware of the injected noise. However, an advanced attacker who knows about the noise and incorporates noisy images into the training dataset can defeat all existing noise generation algorithms. To address this problem, we propose three novel noise generation algorithms, one of which successfully defends against the advanced attacker.
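
The lecture's code is not published with this record, but the two ideas the abstract names can be sketched briefly: training-time data augmentation so the model tolerates camera pose and lighting variation, and a network that regresses print-head coordinates from camera frames. The PyTorch sketch below is a minimal, hypothetical illustration; CoordNet, every layer size, and every augmentation parameter are assumptions, not the authors' implementation.

# Hedged sketch, assuming a PyTorch/torchvision pipeline; not the
# lecture's code. (1) Augmentations approximating the variation the
# abstract mentions: small camera shifts/rotations and changing
# lighting. (2) A tiny CNN mapping one frame to a predicted (x, y)
# print-head coordinate; chaining predictions over frames would
# recover a print path.
import torch
import torch.nn as nn
from torchvision import transforms

# (1) Applied to training frames so the model tolerates camera variation.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=5, translate=(0.05, 0.05)),  # camera angle/position
    transforms.ColorJitter(brightness=0.3, contrast=0.3),        # lighting conditions
])

# (2) Hypothetical coordinate-regression network.
class CoordNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # regress (x, y)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

frame = torch.rand(1, 3, 224, 224)        # stand-in for one camera frame
pred_xy = CoordNet()(augment(frame))      # shape (1, 2): predicted coordinate

Under this framing, the defense the abstract describes amounts to perturbing the frames the attacker's camera captures, and the "advanced attacker" counters by training on perturbed frames, which is why defeating that attacker requires noise designed specifically to survive such adversarial retraining.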
    URI
    http://hdl.handle.net/1853/66376
    Collections
    • Institute for Information Security & Privacy Cybersecurity Lecture Series [149]
