Now showing items 1-10 of 26
Motion Fields to Predict Play Evolution in Dynamic Sport Scenes
(Georgia Institute of Technology, 2010-06)
Videos of multi-player team sports provide a challenging domain for dynamic scene analysis. Player actions and interactions are complex as they are driven by many factors, such as the short-term goals of the individual ...
Automated Assessment of Surgical Skills Using Frequency Analysis
(Georgia Institute of Technology, 2015)
We present an automated framework for visual assessment of the expertise level of surgeons using the OSATS (Objective Structured Assessment of Technical Skills) criteria. Video analysis techniques for extracting motion ...
A Visualization Framework for Team Sports Captured using Multiple Static Cameras
(Georgia Institute of Technology, 2013)
We present a novel approach for robust localization of multiple people observed using a set of static cameras. We use this location information to generate a visualization of the virtual offside line in soccer games. To ...
Feasibility of Identifying Eating Moments from First-Person Images Leveraging Human Computation
(Georgia Institute of Technology, 2013-11)
There is widespread agreement in the medical research community that more effective mechanisms for dietary assessment and food journaling are needed to fight back against obesity and other nutrition-related diseases. ...
Decoding Children’s Social Behavior
(Georgia Institute of Technology, 2013-06)
We introduce a new problem domain for activity recognition: the analysis of children’s social and communicative behaviors based on video and audio data. We specifically target interactions between children aged 1–2 ...
Recognizing Water-Based Activities in the Home Through Infrastructure-Mediated Sensing
(Georgia Institute of Technology, 2012-09)
Activity recognition in the home has long been recognized as the foundation for many desirable applications in fields such as home automation, sustainability, and healthcare. However, building a practical home activity ...
Orientation-Aware Scene Understanding for Mobile Cameras
(Georgia Institute of Technology, 2012-09)
We present a novel approach that allows anyone to quickly teach their smartphone how to understand the visual world around them. We achieve this visual scene understanding by leveraging a camera-phone's inertial sensors ...
A Practical Approach for Recognizing Eating Moments With Wrist-Mounted Inertial Sensing
(Georgia Institute of Technology, 2015)
Recognizing when eating activities take place is one of the key challenges in automated food intake monitoring. Despite progress over the years, most proposed approaches have been largely impractical for everyday usage, ...
Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study
(Georgia Institute of Technology, 2015)
Dietary self-monitoring has been shown to be an effective method for weight-loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of ...
Predicting Daily Activities From Egocentric Images Using Deep Learning
(Georgia Institute of Technology, 2015)
We present a method to analyze images taken from a passive egocentric wearable camera along with the contextual information, such as time and day of week, to learn and predict everyday activities of an individual. We ...