Machine Learning at Georgia Institute of Technology (ML@GT) is an interdisciplinary research center that serves as a home for education and research in machine learning and related fields.

Recent Submissions

  • Generative models based on point processes for financial time series simulation 

    Wei, Qi (2021-04-07)
    In this seminar, I will talk about generative models based on point processes for financial time series simulation. Specifically, we focus on a recently developed state-dependent Hawkes (sdHawkes) process to model the limit ...
  • You can lead a horse to water...: Representing vs. Using Features in Neural NLP 

    Pavlick, Ellie (2021-03-24)
    A wave of recent work has sought to understand how pretrained language models work. Such analyses have resulted in two seemingly contradictory sets of results. On one hand, work based on "probing classifiers" generally ...
  • Compressed computation of good policies in large MDPs 

    Szepesvari, Csaba (2021-03-10)
    Markov decision processes (MDPs) are a minimalist framework for capturing that many tasks require long-term planning and feedback under noisy dynamics. Yet, as a result, MDPs lack structure, and as such planning and learning in ...
  • Learning Tree Models in Noise: Exact Asymptotics and Robust Algorithms 

    Tan, Vincent Y. F. (2021-02-10)
    We consider the classical problem of learning tree-structured graphical models, but with the twist that the observations are corrupted by independent noise. For the case in which the noise is identically distributed, we ...
  • Interpretable latent space and inverse problem in deep generative models 

    Zhou, Bolei (2021-01-27)
    Recent progress in deep generative models such as Generative Adversarial Networks (GANs) has enabled synthesizing photo-realistic images, such as faces and scenes. However, it remains much less explored what has been ...
  • ML@GT Lab presents LAB LIGHTNING TALKS 2020 

    AlRegib, Ghassan; Chau, Duen Horng (Polo); Chava, Sudheer; Cohen, Morris; Davenport, Mark A.; Desai, Deven; Dovrolis, Constantine; Essa, Irfan A.; Gupta, Swati; Huo, Xiaoming; Kira, Zsolt; Li, Jing; Maguluri, Siva Theja; Pananjady, Ashwin; Prakash, B. Aditya; Riedl, Mark; Romberg, Justin K.; Xie, Yao; Zhang, Xiuwei (2020-12-04)
    Labs affiliated with the Machine Learning Center at Georgia Tech (ML@GT) will have the opportunity to share their research interests, work, and unique aspects of their lab in three minutes or less to interested graduate ...
  • Bringing Visual Memories to Life 

    Huang, Jia-Bin (2020-12-02)
    Photography allows us to capture and share memorable moments of our lives. However, 2D images appear flat due to the lack of depth perception and may suffer from poor imaging conditions such as taking photos through ...
  • Let’s Talk about Bias and Diversity in Data, Software, and Institutions 

    Deng, Tiffany; Desai, Deven; Gontijo Lopes, Raphael; Isbell, Charles L. (2020-11-20)
    Bias and lack of diversity have long been deep-rooted problems across industries. We discuss how these issues impact data, software, and institutions, and how we can improve moving forward. The panel will feature thought ...
  • Towards High Precision Text Generation 

    Parikh, Ankur (2020-11-11)
    Despite large advances in neural text generation in terms of fluency, existing generation techniques are prone to hallucination and often produce output that is unfaithful or irrelevant to the source text. In this talk, ...
  • Applying Emerging Technologies In Service of Journalism at The New York Times 

    Boonyapanachoti, Woraya (Mint); Dellaert, Frank; Essa, Irfan A.; Fleisher, Or; Kanazawa, Angjoo; Lavallee, Marc; McKeague, Mark; Porter, Lana Z. (2020-10-30)
    Emerging technologies, particularly within computer vision, photogrammetry, and spatial computing, are unlocking new forms of storytelling for journalists to help people understand the world around them. In this talk, ...
  • Reasoning about Complex Media from Weak Multi-modal Supervision 

    Kovashka, Adriana (2020-10-28)
    In a world of abundant information targeting multiple senses, and increasingly powerful media, we need new mechanisms to model content. Techniques for representing individual channels, such as visual data or textual data, ...
  • Active Learning: From Linear Classifiers to Overparameterized Neural Networks 

    Nowak, Robert (2020-10-07)
    The field of Machine Learning (ML) has advanced considerably in recent years, but mostly in well-defined domains using huge amounts of human-labeled training data. Machines can recognize objects in images and translate ...
  • Using rationales and influential training examples to (attempt to) explain neural predictions in NLP 

    Wallace, Byron (2020-09-09)
    Modern deep learning models for natural language processing (NLP) achieve state-of-the-art predictive performance but are notoriously opaque. I will discuss recent work looking to address this limitation. I will focus ...
  • Global Optimality Guarantees for Policy Gradient Methods 

    Russo, Daniel (2020-03-11)
    Policy gradient methods are perhaps the most widely used class of reinforcement learning algorithms. These methods apply to complex, poorly understood, control problems by performing stochastic gradient descent over a ...
  • Solving the Flickering Problem in Modern Convolutional Neural Networks 

    Sundaramoorthi, Ganesh (2020-02-12)
    Deep Learning has revolutionized the AI field. Despite this, there is much progress needed to deploy deep learning in safety critical applications (such as autonomous aircraft). This is because current deep learning ...
  • Question Answering, Event Knowledge, and other NLP Stuff: Forays into Reuse, Decomposition, and Control in Neural NLP Models 

    Balasubramanian, Niranjan (2020-01-15)
    In this three-part talk, I will present some of our recent efforts that aim to control and adapt neural models to work more effectively in target applications. The first part will focus on how to repurpose a pre-trained ...
  • Learning to Optimize from Data: Faster, Better, and Guaranteed 

    Wang, Zhangyang (2019-11-20)
    Learning and optimization are closely related: state-of-the-art learning problems hinge on the sophisticated design of optimizers. On the other hand, optimization cannot be considered independent of data, since ...
  • The Data-Driven Analysis of Literature 

    Bamman, David (2019-11-15)
    Literary novels push the limits of natural language processing. While much work in NLP has been heavily optimized toward the narrow domains of news and Wikipedia, literary novels are an entirely different animal--the long, ...
  • A Discussion on Fairness in Machine Learning with Georgia Tech Faculty 

    Cummings, Rachel; Desai, Deven; Gupta, Swati; Hoffman, Judy (2019-11-06)
    Fairness in machine learning and artificial intelligence is a hot and important topic in tech today. Join Georgia Tech faculty members Judy Hoffman, Rachel Cummings, Deven Desai, and Swati Gupta for a panel discussion on ...
  • NLP Approaches to Campaign Classification 

    Ahmed, Muhammed (2019-10-17)
    Mailchimp is the world's largest marketing automation platform; over a billion emails are sent through it every day, which raises the question: what exactly are users sending? We'll do a deep dive into the natural language ...
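Several of the talks above build on self-exciting point processes; for instance, the first abstract's state-dependent Hawkes (sdHawkes) model extends the classical Hawkes process. As a rough illustration of that basic ingredient (not the sdHawkes model itself), here is a minimal simulation sketch using Ogata's thinning algorithm with an assumed exponential kernel; the function name and parameter values are illustrative, not drawn from the talk.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Simulate a univariate Hawkes process with intensity
    lambda(t) = mu + sum over past events t_i of alpha * exp(-beta * (t - t_i)),
    using Ogata's thinning algorithm. Returns sorted event times in [0, horizon).
    Requires alpha < beta for the process to be stable."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    while t < horizon:
        # Between events the exponential kernel only decays, so the
        # intensity at the current time upper-bounds it until the next event.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        # Propose the next candidate time from the bounding rate.
        t += rng.expovariate(lam_bar)
        if t >= horizon:
            break
        # Accept the candidate with probability lambda(t) / lam_bar.
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:
            events.append(t)
    return events
```

Each accepted event raises the intensity by `alpha`, making further events temporarily more likely — the clustering behavior that makes Hawkes processes a natural fit for order-flow and other financial event data.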
