Machine Learning at Georgia Institute of Technology (ML@GT) is an interdisciplinary research center that serves as a home for education and research in machine learning and related fields.

Recent Submissions

  • The Seeing Eye Robot: Developing a Human-Aware Artificial Collaborator 

    Mirsky, Reuth (2021-10-27)
    Automated care systems are becoming more tangible than ever: recent breakthroughs in robotics and machine learning can be used to address the need for automated care created by the increasing aging population. However, ...
  • Generalized Energy-Based Models 

    Gretton, Arthur (2021-10-13)
    Arthur Gretton will describe Generalized Energy Based Models (GEBM) for generative modeling. These models combine two trained components: a base distribution (generally an implicit model, as in a Generative Adversarial ... (A toy sampling sketch appears after this list.)
  • Structured Prediction - Beyond Support Vector Machine and Cross Entropy 

    Bach, Francis (2021-09-29)
    Many classification tasks in machine learning lie beyond the classical binary and multi-class classification settings. In those tasks, the output elements are structured objects made of interdependent parts, such as sequences ...
  • Towards a Theory of Representation Learning for Reinforcement Learning 

    Agarwal, Alekh (2021-09-15)
    Provably sample-efficient reinforcement learning from rich observational inputs remains a key open challenge in research. While impressive recent advances have allowed the use of linear modelling while carrying out ...
  • Learning Locomotion: From Simulation to Real World 

    Tan, Jie (2021-09-01)
    Deep Reinforcement Learning (DRL) holds the promise of designing complex robotic controllers automatically. In this talk, I will discuss two different approaches to apply deep reinforcement learning to learn locomotion ...
  • Generative models based on point processes for financial time series simulation 

    Wei, Qi (2021-04-07)
    In this seminar, I will talk about generative models based on point processes for financial time series simulation. Specifically, we focus on a recently developed state-dependent Hawkes (sdHawkes) process to model the limit ... (A toy Hawkes simulation sketch appears after this list.)
  • You can lead a horse to water...: Representing vs. Using Features in Neural NLP 

    Pavlick, Ellie (2021-03-24)
    A wave of recent work has sought to understand how pretrained language models work. Such analyses have resulted in two seemingly contradictory sets of results. On one hand, work based on "probing classifiers" generally ...
  • Compressed computation of good policies in large MDPs 

    Szepesvari, Csaba (2021-03-10)
    Markov decision processes (MDPs) are a minimalist framework for capturing the fact that many tasks require long-term planning and feedback due to noisy dynamics. Yet, as a result, MDPs lack structure, and as such planning and learning in ...
  • Learning Tree Models in Noise: Exact Asymptotics and Robust Algorithms 

    Tan, Vincent Y. F. (2021-02-10)
    We consider the classical problem of learning tree-structured graphical models, but with the twist that the observations are corrupted by independent noise. For the case in which the noise is identically distributed, we ... (A minimal Chow-Liu sketch for the classical, noise-free setting appears after this list.)
  • Interpretable latent space and inverse problem in deep generative models 

    Zhou, Bolei (2021-01-27)
    Recent progress in deep generative models such as Generative Adversarial Networks (GANs) has enabled synthesizing photo-realistic images, such as faces and scenes. However, it remains much less explored what has been ...
  • ML@GT Lab presents LAB LIGHTNING TALKS 2020 

    AlRegib, Ghassan; Chau, Duen Horng (Polo); Chava, Sudheer; Cohen, Morris; Davenport, Mark A.; Desai, Deven; Dovrolis, Constantine; Essa, Irfan A.; Gupta, Swati; Huo, Xiaoming; Kira, Zsolt; Li, Jing; Maguluri, Siva Theja; Pananjady, Ashwin; Prakash, B. Aditya; Riedl, Mark; Romberg, Justin K.; Xie, Yao; Zhang, Xiuwei (2020-12-04)
    Labs affiliated with the Machine Learning Center at Georgia Tech (ML@GT) will have the opportunity to share their research interests, work, and unique aspects of their lab in three minutes or less to interested graduate ...
  • Bringing Visual Memories to Life 

    Huang, Jia-Bin (2020-12-02)
    Photography allows us to capture and share memorable moments of our lives. However, 2D images appear flat due to the lack of depth perception and may suffer from poor imaging conditions such as taking photos through ...
  • Let’s Talk about Bias and Diversity in Data, Software, and Institutions 

    Deng, Tiffany; Desai, Deven; Gontijo Lopes, Raphael; Isbell, Charles L. (2020-11-20)
    Bias and lack of diversity have long been deep-rooted problems across industries. We discuss how these issues impact data, software, and institutions, and how we can improve moving forward. The panel will feature thought ...
  • Towards High Precision Text Generation 

    Parikh, Ankur (2020-11-11)
    Despite large advances in neural text generation in terms of fluency, existing generation techniques are prone to hallucination and often produce output that is unfaithful or irrelevant to the source text. In this talk, ...
  • Applying Emerging Technologies In Service of Journalism at The New York Times 

    Boonyapanachoti, Woraya (Mint); Dellaert, Frank; Essa, Irfan A.; Fleisher, Or; Kanazawa, Angjoo; Lavallee, Marc; McKeague, Mark; Porter, Lana Z. (2020-10-30)
    Emerging technologies, particularly within computer vision, photogrammetry, and spatial computing, are unlocking new forms of storytelling for journalists to help people understand the world around them. In this talk, ...
  • Reasoning about Complex Media from Weak Multi-modal Supervision 

    Kovashka, Adriana (2020-10-28)
    In a world of abundant information targeting multiple senses, and increasingly powerful media, we need new mechanisms to model content. Techniques for representing individual channels, such as visual data or textual data, ...
  • Active Learning: From Linear Classifiers to Overparameterized Neural Networks 

    Nowak, Robert (2020-10-07)
    The field of Machine Learning (ML) has advanced considerably in recent years, but mostly in well-defined domains using huge amounts of human-labeled training data. Machines can recognize objects in images and translate ...
  • Using rationales and influential training examples to (attempt to) explain neural predictions in NLP 

    Wallace, Byron (2020-09-09)
    Modern deep learning models for natural language processing (NLP) achieve state-of-the-art predictive performance but are notoriously opaque. I will discuss recent work looking to address this limitation. I will focus ...
  • Global Optimality Guarantees for Policy Gradient Methods 

    Russo, Daniel (2020-03-11)
    Policy gradient methods are perhaps the most widely used class of reinforcement learning algorithms. These methods apply to complex, poorly understood, control problems by performing stochastic gradient descent over a ... (A toy REINFORCE sketch appears after this list.)
  • Solving the Flickering Problem in Modern Convolutional Neural Networks 

    Sundaramoorthi, Ganesh (2020-02-12)
    Deep Learning has revolutionized the AI field. Despite this, much progress is needed to deploy deep learning in safety-critical applications (such as autonomous aircraft). This is because current deep learning ...
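
A minimal, illustrative sketch to accompany the Generalized Energy-Based Models talk above: a base sampler (a plain 2-D Gaussian standing in for an implicit generator such as a GAN) is reweighted by a toy energy function, and sampling-importance-resampling stands in for the more careful latent-space sampling used in the actual GEBM work. The distributions, the energy function, and all parameter values are assumptions for illustration, not material from the talk.

```python
# Toy sketch of sampling from an energy-reweighted base distribution,
# in the spirit of Generalized Energy-Based Models (GEBM).
# Assumptions (not from the talk): the base "generator" is a 2-D Gaussian,
# the energy is a fixed toy function, and sampling-importance-resampling
# stands in for the sampler used in the actual GEBM work.
import numpy as np

rng = np.random.default_rng(0)

def base_sampler(n):
    """Stand-in for an implicit generator (e.g. a GAN): draws n base samples."""
    return rng.normal(size=(n, 2))

def energy(x):
    """Toy energy E(x); the model density is proportional to exp(E(x)) * base(x)."""
    return -0.5 * np.sum((x - np.array([1.0, -1.0])) ** 2, axis=1)

def gebm_sample(n_out, n_base=10_000):
    """Approximate samples via self-normalized importance resampling."""
    xs = base_sampler(n_base)
    logw = energy(xs)                  # log importance weights
    w = np.exp(logw - logw.max())      # stabilize before normalizing
    w /= w.sum()
    idx = rng.choice(n_base, size=n_out, p=w)
    return xs[idx]

print(gebm_sample(5))
```

Importance resampling is only a crude stand-in here: it degrades when the energy-reweighted target is far from the base distribution, which is why more careful samplers are used in practice.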
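
A toy simulation to accompany the point-process talk above: it draws event times from a plain univariate Hawkes process with an exponential kernel using Ogata's thinning algorithm. The state-dependent (sdHawkes) extension for limit-order-book modelling discussed in the talk is not reproduced, and the parameter values are illustrative assumptions.

```python
# Toy simulation of a univariate, exponential-kernel Hawkes process via
# Ogata's thinning algorithm; parameters are illustrative assumptions.
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Return event times on [0, horizon] for intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        # The intensity only decays between events, so its current value
        # is a valid upper bound until the next accepted event.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)       # candidate next event time
        if t > horizon:
            return np.array(events)
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:      # accept with prob lambda(t)/bound
            events.append(t)

print(simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, horizon=50.0))
```

The self-exciting term makes accepted events raise the intensity, producing the clustered arrivals that motivate Hawkes-type generators for order-flow data.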
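
A minimal sketch to accompany the tree-model talk above, covering only the classical noise-free setting: estimate pairwise empirical mutual information and take a maximum-weight spanning tree (the Chow-Liu algorithm). The robust algorithms for noise-corrupted observations analyzed in the talk are not reproduced; the synthetic data below is an assumption for illustration.

```python
# Minimal Chow-Liu sketch: maximum-weight spanning tree over pairwise
# empirical mutual information, for discrete (here binary) variables.
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information between two discrete columns."""
    mi, n = 0.0, len(x)
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_tree(data):
    """Return the edge list of the maximum-MI spanning tree (Prim's algorithm)."""
    d = data.shape[1]
    mi = np.array([[mutual_information(data[:, i], data[:, j]) for j in range(d)]
                   for i in range(d)])
    in_tree, edges = {0}, []
    while len(in_tree) < d:
        i, j = max(((i, j) for i in in_tree for j in range(d) if j not in in_tree),
                   key=lambda e: mi[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges

# Example: a Markov chain X0 -> X1 -> X2 should be recovered as (0,1), (1,2).
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, size=5000)
x1 = x0 ^ (rng.random(5000) < 0.1)
x2 = x1 ^ (rng.random(5000) < 0.1)
print(chow_liu_tree(np.column_stack([x0, x1, x2]).astype(int)))
```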
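
A toy sketch to accompany the policy-gradient talk above: REINFORCE (the score-function policy-gradient estimator) applied to a two-armed bandit with a softmax policy. The bandit, reward values, and step size are illustrative assumptions and do not reflect the control problems discussed in the talk.

```python
# Toy REINFORCE sketch: stochastic gradient ascent on expected reward for a
# two-armed bandit with a softmax policy; all values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.8])      # unknown expected reward of each arm
theta = np.zeros(2)                    # softmax policy parameters

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)                 # sample an action from the policy
    r = rng.normal(true_means[a], 0.1)         # observe a noisy reward
    grad_logp = -probs                         # gradient of log pi(a | theta)
    grad_logp[a] += 1.0
    theta += 0.05 * r * grad_logp              # REINFORCE update

print("learned policy:", softmax(theta))       # should strongly favor arm 1
```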
