Trevor Cheitlin is a fifth-semester professional music major at Berklee. When he’s
not writing or singing, he’s pretending to be far cooler than he actually
is. Check him out on YouTube.

I had the pleasure of accompanying Panos Panay, the founder of Berklee’s new Institute of Creative Entrepreneurship, to the Boston Music Tech Fest a couple of weeks ago, where he sat down with MIT’s Ken Zolot to discuss the entrepreneurial connection between music and tech (Oh what’s this? A Berklee.edu post on the subject? You don’t say!). They weren’t the only presenters, however, which means I got to learn about many exciting emerging technologies in the music industry.

One presentation in particular really captured my imagination. Bryan Pardo, associate professor of electrical engineering and computer science at Northwestern University, stood up to present a couple of his research projects, most of which aim to simplify the creative process and/or enhance the musical experience of both consumers and creators. Pardo spent the majority of his time on one of his so-called “adaptive user interfaces,” a project titled SocialEQ.

In short, SocialEQ acts as an accessible way for musicians to find the sounds they are looking for. Pardo used the adjective “tinny” as an example. Say you’re a musician looking for an old-timey, “tinny” sound for the piano track in one of your songs. Using SocialEQ, you can enter “tinny” as the sound you are looking for, and the program will randomly generate a number of EQ curves for a single track (so far guitar, piano, and drums are the three instruments supported by the software, though there is also the option to import a file instead).

From there, you rate the sounds by how close they are to your concept of “tinny,” and after the program reaches 100% “confidence” in what you are looking for, it generates a standardized EQ curve that represents your definition of a “tinny” sound. The resulting plugin can then be exported into Pro Tools or other sequencing programs for use. SocialEQ can be used to sonically define any word, and that data can then be compiled to create a collection of crowdsourced EQ curves, or to compare how people perceive sounds differently.
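To make the idea concrete, here is a minimal sketch in Python of the rate-and-learn loop described above. This is not Pardo's actual implementation: the frequency bands, the rating scale, the single-batch averaging (the real system iterates until it reaches its confidence threshold), and the simulated "listener" are all assumptions for illustration.

```python
import random

# Hypothetical sketch of a SocialEQ-style loop. Each "curve" is a list of
# gain values (dB) for a fixed set of frequency bands.
BANDS = [100, 250, 500, 1000, 2000, 4000, 8000]  # assumed band centers, Hz

def random_curve(rng):
    """Generate a random EQ curve: one gain value per band, in dB."""
    return [rng.uniform(-12.0, 12.0) for _ in BANDS]

def learn_curve(rate, n_curves=100, seed=0):
    """Present random curves, collect ratings in (0, 1], and return a
    rating-weighted average as the learned definition of the word."""
    rng = random.Random(seed)
    curves = [random_curve(rng) for _ in range(n_curves)]
    ratings = [rate(c) for c in curves]
    total = sum(ratings)
    return [
        sum(r * c[i] for r, c in zip(ratings, curves)) / total
        for i in range(len(BANDS))
    ]

def tinny_rating(curve):
    """Stand-in listener who hears "tinny" as boosted highs and cut lows;
    rates a curve higher the closer it is to that assumed ideal shape."""
    target = [-6, -4, -2, 0, 3, 6, 8]
    dist = sum(abs(g - t) for g, t in zip(curve, target))
    return 1.0 / (1.0 + dist)

learned = learn_curve(tinny_rating)
print([round(g, 1) for g in learned])
```

In the real system the "rate" step is a human clicking through candidate sounds, and the learned curve for a word can be pooled across many listeners, which is where the crowdsourced comparison of how people perceive the same adjective comes from.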

I think it’s brilliant. As a musician who is consistently frustrated when trying to define the sound I’m looking for, I could definitely see myself using SocialEQ on a regular basis. It provides relevant and fascinating information about how music is perceived across cultures and languages, and could act as a gateway for artists to more accurately describe their vision.

SocialEQ isn’t the only cool project Pardo and his team at Northwestern are working on. Music Story is a program (now available for download) that listens to music and, using a database of images on the internet, creates an on-the-fly music video, “heightening, clarifying, and exposing the connections between words, ideas, and images that we often do not notice.” I highly recommend checking out these and other projects at music.cs.northwestern.edu.
