Selfie-Presentation in Everyday Life: A Large-scale Characterization of Selfie Contexts on Instagram
(Georgia Institute of Technology, 2017)
Carefully managing the presentation of self via technology is a core practice on all modern social media platforms. Recently, selfies have emerged as a new, pervasive genre of identity performance. In many ways unique, ...
What Are the Ants Doing? Vision-Based Tracking and Reconstruction of Control Programs
(Georgia Institute of Technology, 2005-04)
In this paper, we study the problem of going from a real-world, multi-agent system to the generation of control programs in an automatic fashion. In particular, a computer vision system is presented, capable of ...
Deep Segments: Comparisons between Scenes and their Constituent Fragments using Deep Learning
(Georgia Institute of Technology, 2014-09)
We examine the problem of visual scene understanding and abstraction from first person video. This is an important problem and successful approaches would enable complex scene characterization tasks that go beyond ...
EM, MCMC, and Chain Flipping for Structure from Motion with Unknown Correspondence
(Georgia Institute of Technology, 2003)
Learning spatial models from sensor data raises the challenging data association problem of relating model parameters to individual measurements. This paper proposes an EM-based algorithm, which solves the model learning ...
A Visualization Framework for Team Sports Captured using Multiple Static Cameras
(Georgia Institute of Technology, 2013)
We present a novel approach for robust localization of multiple people observed using a set of static cameras. We use this location information to generate a visualization of the virtual offside line in soccer games. To ...
Modeling structured activity to support human-robot collaboration in the presence of task and sensor uncertainty
(Georgia Institute of Technology, 2013-11)
A representation for structured activities is developed that allows a robot to probabilistically infer which task actions a human is currently performing and to predict which future actions will be executed and when they ...
Anticipating Human Actions for Collaboration in the Presence of Task and Sensor Uncertainty
(Georgia Institute of Technology, 2014-06)
A representation for structured activities is developed that allows a robot to probabilistically infer which task actions a human is currently performing and to predict which future actions will be executed and when they ...
Behind the Scenes: Decoding Intent from First Person Video
(Georgia Institute of Technology, 2017-02-01)
A first person video records not only what is out in the environment but also what is in our head (intention and attention) at the time via social and physical interactions. It is invisible but it can be revealed by ...
Learning from the Field: Physically-based Deep Learning to Advance Robot Vision in Natural Environments
(Georgia Institute of Technology, 2020-01-08)
Field robotics refers to the deployment of robots and autonomous systems in unstructured or dynamic environments across air, land, sea, and space. Robust sensing and perception can enable these systems to perform tasks ...
Weakly Supervised Learning from Images and Video
(2016-09-30)
Recent progress in visual recognition goes hand-in-hand with supervised learning and large-scale training data. While the amount of existing images and videos is huge, their detailed annotation is expensive and often ...