A close up of a robot with a clear, bright face with red digital eyes.

Research Projects

Music Technology students don't just make music. They examine it through the lens of the scientific method and research new, innovative ways to create it. Every semester they work in one of the labs in the Center for Music Technology to invent new technologies that redefine how we express ourselves through music.

Flash Music

A series of patches in a layout to demonstrate a new way to control music with computers.

Project by: Matthew Arnold, Seth Holland, and Dallas McCorkendale

Flash Music is an SSVEP-based music interface that allows a user to control a music composition by focusing their visual attention on different corners of a screen.
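An SSVEP (steady-state visually evoked potential) interface works by flickering each on-screen target at a distinct rate; the viewer's EEG then shows elevated power at the frequency of whichever target they attend to. A minimal sketch of that detection step, assuming a single EEG channel and simple FFT band-power comparison (function and variable names here are illustrative, not the project's actual code):

```python
import numpy as np

def detect_ssvep_target(eeg, fs, stim_freqs):
    """Pick the stimulus frequency with the most spectral power.

    eeg: 1-D array of EEG samples; fs: sample rate (Hz);
    stim_freqs: flicker rates of the on-screen targets (Hz).
    """
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Sum spectral power in a narrow band around each candidate flicker rate.
    powers = [spectrum[(freqs > f - 0.5) & (freqs < f + 0.5)].sum()
              for f in stim_freqs]
    return stim_freqs[int(np.argmax(powers))]

# Synthetic check: a 15 Hz oscillation buried in noise.
fs = 250
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 15 * t) + 0.5 * rng.standard_normal(t.size)
print(detect_ssvep_target(eeg, fs, [8.0, 10.0, 12.0, 15.0]))  # 15.0
```

A real system would filter the EEG, use multiple channels, and apply a method such as canonical correlation analysis, but the core idea is this frequency-matching step.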

Sky Above

Three BSMT students sing together while the middle student is wearing a device to help her control their voices.

Project by: Chad Bullard, Carson Myers, and Ally Stout

In this online collaborative performance with Chad Bullard and Carson Myers, Ally Stout uses electromyography to control the balance between voices.

Design and Build of a Robot for Emotional Interaction Study

A small black robot dancing with a digital smile on its face.

Project by: Rishikesh Daoo and Yilin Zhang

Rishikesh Daoo and Yilin Zhang collaborated to create a robot that combines emotional gestures, facial expressions, and music timbre synthesis, aiming to elicit emotional responses through human interaction.


Elevate

A look at the software used to create "Elevate" along with a map demonstrating changes in elevation.

Project by: Lauren McCall

"Elevate" is a map-based interactive soundscape. It utilizes latitude, longitude, and elevation data in order to dynamically shape and spatialize music. As part of a repertory of geographically based sonification projects, Elevate explores the connection and uniqueness of topographical data from various locations.

Melody Conditioned Lyrics Generation with SeqGAN

A screenshot of a PowerPoint presentation explaining the SeqGAN-based project.

Project by: Yihao Chen

In this project, Yihao Chen builds a system that helps musicians and singer-songwriters write lyrics. He proposes an end-to-end melody-conditioned lyrics generation system based on Sequence Generative Adversarial Networks (SeqGAN), which generates a line of lyrics from the corresponding melody given at the input.

Robotic Musician Plays Carnatic Music on the Violin

A violin attached to a robot that is controlling it and playing it.

Project by: Raghavasimhan Sankaranarayanan

This novel robotic violin player is designed to perform Carnatic music. Using sophisticated mechatronics and music information retrieval techniques, the robot can play nearly all of the style's articulations and ornamentations, including gamakas (glissandos), with accurate intonation.


Timbral Sonification Designer

A shot of the Timbral Sonification application, featuring numerous options to change sound expressions.

Project by: Takahiko Tsuchiya

The Timbral Sonification Designer is a web application for prototyping sonifications, the representation of data through various types of sound. It facilitates the creation of musical, changeable, and quantifiable sound expressions with data and arbitrary shapes mapped to multiple timbral dimensions.
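At its simplest, mapping data to multiple timbral dimensions means scaling each normalized data field into the range of some synthesis parameter. A minimal sketch of that idea, where the parameter names and ranges are hypothetical examples, not the Timbral Sonification Designer's actual API:

```python
def map_to_timbre(row, ranges):
    """Map one normalized data point (0-1 per field) onto timbral controls.

    `ranges` gives each synth parameter's (min, max); fields are paired
    with parameters in order.
    """
    return {name: lo + value * (hi - lo)
            for (name, (lo, hi)), value in zip(ranges.items(), row)}

# Three hypothetical timbral dimensions driven by a three-field data row.
ranges = {
    "brightness_hz": (500.0, 8000.0),   # low-pass filter cutoff
    "roughness": (0.0, 1.0),            # amplitude-modulation depth
    "vibrato_hz": (0.0, 8.0),           # pitch-modulation rate
}
print(map_to_timbre([0.5, 0.2, 1.0], ranges))
# {'brightness_hz': 4250.0, 'roughness': 0.2, 'vibrato_hz': 8.0}
```

Streaming successive data rows through such a mapping yields a continuously evolving timbre whose changes remain quantifiable, since each parameter can be read back from the data.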


