Making Music for Smart Computers

by Mary Martialay on December 10, 2010

In conventional terms, the music of Triple Point is hard to explain – the rhythm changes, and the melody (if there is a melody) shifts from phrase to phrase. You can’t call it jazz, you can’t call it classical, you can’t call it new age. It’s improvisational. It’s avant-garde. The word “genre” doesn’t really apply.

And that makes the music of Triple Point an ideal challenge for artificial intelligence.

This week, Rensselaer announced the launch of a new collaboration between musicians and cognitive scientists to build an artificial intelligence system (or “agent”) that can conduct the musicians of Triple Point. The project – formally called the Creative Artificially-Intuitive and Reasoning Agent – is supported by a three-year $650,000 NSF grant.

Professor Selmer Bringsjord, head of the cognitive science department and director of the Rensselaer Artificial Intelligence and Reasoning Laboratory, has joined with the musicians of Triple Point on the project: Pauline Oliveros, an accordionist and clinical professor of music; Jonas Braasch, an acoustician and assistant professor of architecture; and Doug Van Nort, an electronic musician and music technology researcher.

A conductor that could guide their performances must be capable of “high-level reasoning,” said Bringsjord.

Oliveros, founder of the Deep Listening Institute, put it this way:

“Most people understand music in terms of pitch, rhythm, and volume. We’re concerned with texture and density and timbre, as well. These parameters are more complicated for the system recognizer and more exciting for us.”

Above is a short video about the project, filmed during an early meeting of the group.