
    • Weakly Supervised Learning for Musical Instrument Classification 

      Gururani, Siddharth Kumar (Georgia Institute of Technology, 2020-08-18)
      Automatically recognizing musical instruments in audio recordings is an important task in music information retrieval (MIR). With increasing complexity of modeling techniques, the focus of the Musical Instrument Classification ...
    • Directed Evolution in Live Coding Music Performance 

      Dasari, Sandeep; Freeman, Jason (Georgia Institute of Technology, 2020-10-24)
      Genetic algorithms are extensively used to understand, simulate, and create works of art and music. In this paper, a similar approach is taken to apply basic evolutionary algorithms to perform music live using code. Often ...
    • Promoting Intentions to Persist in Computing: An Examination of Six Years of the EarSketch Program 

      Wanzer, Dana Linnell; McKlin, Thomas (Tom); Freeman, Jason; Magerko, Brian; Lee, Taneisha (Georgia Institute of Technology; Taylor & Francis, 2020-01-21)
      Background and Context: EarSketch was developed as a program to foster persistence in computer science with diverse student populations. Objective: To test the effectiveness of EarSketch in promoting intentions to persist, ...
    • The sound within: Learning audio features from electroencephalogram recordings of music listening 

      Vinay, Ashvala (Georgia Institute of Technology, 2020-04-28)
      We look at the intersection of music, machine learning, and neuroscience. Specifically, we are interested in understanding how we can predict audio onset events by using the electroencephalogram response of subjects listening ...
    • Regressing dexterous finger flexions using machine learning and multi-channel single element ultrasound transducers 

      Hantrakul, Lamtharn (Georgia Institute of Technology, 2018-04-27)
      Human Machine Interfaces, or "HMIs", come in many shapes and sizes. The mouse and keyboard form a typical and familiar HMI. In applications such as Virtual Reality or music performance, a precise HMI for tracking finger ...
    • Addressing the data challenge in automatic drum transcription with labeled and unlabeled data 

      Wu, Chih-Wei (Georgia Institute of Technology, 2018-07-23)
      Automatic Drum Transcription (ADT) is a sub-task of automatic music transcription that involves the conversion of drum-related audio events into musical notations. While noticeable progress has been made in the past by ...
    • The algorithmic score language: Extending common western music notation for representing logical behaviors 

      Martinez Nieto, Juan Carlos (Georgia Institute of Technology, 2018-05-22)
      This work proposes extensions to Western Music Notation so it can play a dual role: first as a human-readable representation of the music performance information in the context of live-electronics, and second as a programming ...
    • Towards an embodied musical mind: Generative algorithms for robotic musicians 

      Bretan, Peter Mason (Georgia Institute of Technology, 2017-04-19)
      Embodied cognition is a theory stating that the processes and functions comprising the human mind are influenced by a person's physical body. The theory of embodied musical cognition holds that a person's body largely ...
    • Storage in Collaborative Networked Art 

      Freeman, Jason (Georgia Institute of Technology, 2009)
      This chapter outlines some of the challenges and opportunities associated with storage in networked art. Using comparative analyses of collaborative networked music as a starting point, this chapter explores how networked ...
    • Enhancing stroke generation and expressivity in robotic drummers - A generative physics model approach 

      Edakkattil Gopinath, Deepak (Georgia Institute of Technology, 2015-04-24)
      The goal of this master's thesis research is to enhance the stroke generation capabilities and musical expressivity in robotic drummers. The approach adopted is to understand the physics of human fingers-drumstick-drumhead ...
    • Supervised feature learning via sparse coding for music information retrieval 

      O'Brien, Cian John (Georgia Institute of Technology, 2015-04-24)
      This thesis explores the ideas of feature learning and sparse coding for Music Information Retrieval (MIR). Sparse coding is an algorithm which aims to learn new feature representations from data automatically. In contrast ...
    • Analog synthesizers in the classroom: How creative play, musical composition, and project-based learning can enhance STEM standard literacy and self-efficacy 

      Howe, Christopher David (Georgia Institute of Technology, 2015-04-24)
      The state of STEM education in America's high schools is currently in flux, with billions annually being poured into the NSF to increase national STEM literacy. Hands-on project-based learning interventions in the STEM ...
    • Lecture & Demonstration / Young Guru 

      Keaton, Gimel (Young Guru) (Georgia Institute of Technology, 2013-03-05)
      Grammy-winning hip-hop audio engineer Young Guru, who engineered 10 of Jay-Z’s 11 albums, returned to Georgia Tech on Tuesday, March 5 for a far-reaching discussion on hip-hop and its history, the art of audio engineering, ...
    • Young Guru and Nettrice Gaskins 

      Keaton, Gimel (Young Guru); Gaskins, Nettrice (Georgia Institute of Technology, 2013-03-05)
      Digital Media PhD student Nettrice Gaskins had the opportunity to interview Young Guru and moderate questions on March 5, 2013 from 12:00 – 1:00 pm in the Clough Commons 4th floor study area. Gaskins became interested ...
    • Audience participation using mobile phones as musical instruments 

      Lee, Sang Won (Georgia Institute of Technology, 2012-05-21)
      This research aims to create a music piece for audience participation using mobile phones as musical instruments in a music concert setting. Inspired by the ubiquity of smart phones, I attempted to accomplish audience engagement ...
    • A sonification of Kepler space telescope star data 

      Winton, Riley J.; Gable, Thomas M.; Schuett, Jonathan; Walker, Bruce N. (Georgia Institute of Technology, 2012-06)
      A performing artist group interested in including a sonification of star data from NASA’s Kepler space telescope in their next album release approached the Georgia Tech Sonification Lab for assistance in the process. The ...
    • Perceptual effects of auditory information about own and other movements 

      Schmitz, Gerd; Effenberg, Alfred O. (Georgia Institute of Technology, 2012-06)
      In sport, accurate predictions of other persons’ movements are essential. Previous studies have shown that predictions can be enhanced by mapping movements onto sound (sonification) and providing audiovisual feedback [1]. The ...
    • Exploring 3D audio for brain sonification 

      Schmele, Timothy; Gomez, Imanol (Georgia Institute of Technology, 2012-06)
      Brain activity data, measured by functional Magnetic Resonance Imaging (fMRI), produces extremely high-dimensional, sparse, and noisy signals which are difficult to visualize, monitor, and analyze. The use of spatial music ...
    • "Trained ears" and "correlation coefficients": A social science perspective on sonification 

      Supper, Alexandra (Georgia Institute of Technology, 2012-06)
      This paper presents a social science perspective on the field of sonification research. Adopting a perspective informed by constructivist science and technology studies (STS), the paper begins by arguing why sonification ...
    • Sonic Window #1 [2011] — A Real Time Sonification 

      Vigani, Andrea (Georgia Institute of Technology, 2012-06)
      This is a real time audio installation in Max/MSP. It is a sonification of an abstract process: the writing on Twitter about music listening experiences on the web by people around the world. My purpose is not to sonify ...