On Generative Music – “In C” performed by Max/MSP

“Generative music” can be loosely defined as music created by a system and evolved over time according to the rules or constraints of that system.  This can be a mechanical system, a piece of computer software, or even a set of live human performers following a specific set of instructions.  Wikipedia credits Brian Eno with coining the term (along with pioneering many other aspects of modern electronic music as we know it today).  The truth is, I’d been interested in the concept long before I ever heard the term (or even heard of Brian Eno), and I’ve done a lot of tinkering and prototyping in this area.  It turns out that generative music systems have the potential to play a huge role in gaming and interactive audio, topics I will certainly explore further in future posts.  But for today, I’d like to present one particular experiment in the area.

I’ve found that one of the best ways to learn is to copy, so I decided to build a system to perform the famous minimalist composition “In C”, by Terry Riley.  My tool of choice for constructing this system (as with most of my work) was Cycling ‘74’s “Max/MSP”, and my goal was to study the piece, understand what makes it work, and ultimately to gain some insight to apply to my future projects.

“In C” is the perfect composition to turn into a piece of software.  The score reads as an algorithm – literally a set of instructions for executing the performance – and it’s open to just enough interpretation to keep things interesting.  “In C” can be performed by any number of musicians, with any combination of instrumentation, and can last for any amount of time (although Riley suggests that a typical performance should run 45-90 minutes).  It consists of 53 short melodic phrases played in sequence, with each performer deciding how long to stick with one pattern before moving on to the next.  Players are encouraged to start at different times, switch patterns whenever they want, and even take short breaks if they feel so inclined.  The resulting music is almost hypnotic in its repetition, and as the players step through the phrases all kinds of interesting patterns emerge.  (You can view the full score, complete with two pages of “Performing Directions” by the composer, here.)
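Stripped down to its essentials, each player's path through the score really is a simple algorithm: start at phrase 1, repeat the current phrase for a self-chosen stretch, then move on until all 53 are done. Here is a minimal sketch of that traversal (the repeat-count range is my own illustrative choice, not anything Riley specifies):

```python
import random

NUM_PHRASES = 53  # "In C" consists of 53 short melodic phrases

def traverse_score(seed=None, min_repeats=4, max_repeats=16):
    """Yield (phrase_number, repeat_count) pairs for one player's pass
    through the score: phrases are taken strictly in order, but each
    player chooses its own number of repetitions per phrase."""
    rng = random.Random(seed)
    for phrase in range(1, NUM_PHRASES + 1):
        yield phrase, rng.randint(min_repeats, max_repeats)

# Two players visit the same phrases in the same order, but drift apart
# in time because each chooses its own repeat counts.
player_a = list(traverse_score(seed=1))
player_b = list(traverse_score(seed=2))
assert [p for p, _ in player_a] == [p for p, _ in player_b] == list(range(1, 54))
```

It's that combination of a fixed phrase order with independently chosen timing that produces the shifting overlaps the piece is known for.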

I wanted to build a system that could perform “In C” to Riley’s specifications, and remain flexible enough to generate a unique take each time.  The patch consists of one Master module, and a number of individual Performer modules.

The "Master" module

The Master module acts as the conductor, setting the tempo and broadcasting it to each of the performers.  It also controls the arrangement, including specifying the number of performers and how they should stagger their entrances.
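The conductor role boils down to a shared clock plus entrance scheduling. A rough sketch of that idea follows; the class shape, timing values, and stagger range are my own assumptions for illustration, not a transcription of the actual Max patch:

```python
import random

class Master:
    """Toy conductor: holds the tempo and assigns each performer a
    beat on which to enter, so that entrances are staggered."""

    def __init__(self, bpm=120, num_performers=8, max_stagger_beats=32, seed=None):
        self.beat_ms = 60000.0 / bpm  # duration of one beat in milliseconds
        rng = random.Random(seed)
        # Each performer waits a random number of beats before entering.
        self.entrances = sorted(rng.randrange(max_stagger_beats)
                                for _ in range(num_performers))

    def entrance_time_ms(self, performer_index):
        """Absolute time (ms after the downbeat) at which this performer enters."""
        return self.entrances[performer_index] * self.beat_ms

master = Master(bpm=120, num_performers=4, seed=7)
# At 120 BPM a beat lasts 500 ms, so every entrance lands on the beat grid.
assert all(master.entrance_time_ms(i) % 500 == 0 for i in range(4))
```

Keeping all timing derived from one clock is what keeps the performers locked to a common pulse even as their phrase choices diverge.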

The "Performer" module

Each of the Performer modules is identical, and they are instantiated at runtime according to the number set in the Master (using a bit of Max scripting to dynamically generate and connect bpatchers).  When a performer is spawned, it randomly chooses an instrument and a transposition value (for variety, I found that it helped to have some of the players go up or down an octave).  Once the clock starts, each performer acts as an individual, mimicking the behaviors specified in Riley’s instructions.  It chooses a random amount of time to stick with each pattern, and it can even decide to rest for a moment before moving on to the next phrase.  All of these decisions are made autonomously based on parameters set in my beautiful UI.  (Yes, I’m colorblind.  No, I’m not a UI artist.)
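The per-performer logic described above – pick an instrument and transposition at spawn time, then on each cycle decide whether to rest, repeat, or advance – might be sketched like this. The instrument list, probabilities, and repeat ranges are illustrative guesses, not the patch's actual parameters:

```python
import random

INSTRUMENTS = ["marimba", "vibraphone", "piano", "pluck"]  # placeholder palette

class Performer:
    def __init__(self, seed=None, rest_probability=0.1):
        self.rng = random.Random(seed)
        # Spawn-time choices: instrument and octave transposition.
        self.instrument = self.rng.choice(INSTRUMENTS)
        self.transpose = self.rng.choice([-12, 0, 12])  # semitones: down, as-is, up
        self.rest_probability = rest_probability
        self.phrase = 1  # all players start at phrase 1
        self.repeats_left = self.rng.randint(4, 16)

    def on_cycle(self):
        """Called once per pattern cycle: rest, repeat the current
        phrase, or move on to the next one."""
        if self.rng.random() < self.rest_probability:
            return ("rest", self.phrase)
        self.repeats_left -= 1
        if self.repeats_left <= 0 and self.phrase < 53:
            self.phrase += 1
            self.repeats_left = self.rng.randint(4, 16)
        return ("play", self.phrase)

p = Performer(seed=42)
events = [p.on_cycle() for _ in range(200)]
assert all(1 <= phrase <= 53 for _, phrase in events)
```

Because every decision flows from the performer's own random stream, each instance behaves like an independent musician while still following the same rule set.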

The results can be heard above, in excerpts grabbed from several different performances by different-sized ensembles.  There are a lot of stagnant moments in the recordings, but there are also times when truly interesting patterns emerge from the fog.  Every time the patch runs it generates a new take on the work, just as Terry Riley specified.

I have to admit that I’m not totally satisfied with the audio output.  For one thing, minimalist music is, well… an acquired taste.  I don’t think that I’m doing it any favors with my choice of sounds.  I purposely limited the palette to a set of fairly percussive instruments, and I think a wider variety of textures would help keep things interesting.

But as an exercise in learning and experimentation, I think the project was a big success.  I now have a solid foundation in Max/MSP for creating further pattern-based, pseudo-random systems.  With a bit more work, I can easily picture extending this system to contain a number of different performer types (drums, bass, lead, etc.), each with its own set of rules and patterns.  A slightly more sophisticated Master module could ensure not just beat sync, but also phrase sync.  Opening up the system to a bit of real-time control over performance parameters like tempo, timbre, and effects could allow a user to exert more direct influence over the music.  I’ve already got some new prototypes going based on this project, and I wouldn’t be surprised if some of them make it to this blog sooner or later.

As an aside, if anyone has any interest in the Max/MSP source, let me know in the comments and I’d be happy to share it.

    • Ben
    • June 20th, 2011

    Coldcut’s “Music 4 No Musicians” is relevant here, as might be applications of “Music as Continual Merophony.”

    • I’m a big fan of Coldcut but I’m not familiar with “Music 4 No Musicians”. Couldn’t find anything more on Google beyond a music video. Is it a generative work? Any more info you can share?


        • Ben
        • June 20th, 2011

        I think the specifics are in the liner notes of “Let Us Play!” I recall that the temporal placement of the different parts is machine-determined; more instrumentation details may be found at http://www.soundonsound.com/sos/1997_articles/oct97/coldcut.html

        As the title suggests, it may be more fitting to regard it as in the “gradual process” category than in the generative category per se, though I suppose that depends upon where one draws the line between composition and performance.

      • Thanks for the info, Ben! Very cool…

        And thanks too for pointing out that there is a wide range of processes and methodologies for producing “generative” music, varying widely in their relative autonomy. In the example from my blog post, I could add a way for the software to generate its own pseudo-random musical phrases based on some rule set. This would actually make the music *more* generative, taking additional control away from the composer. Or, I could add a method to alter the tempo and instrumentation in real-time, putting more control in the hands of a human performer for hands-on manipulation.

        There are plenty of options. I think that’s what makes the subject so fascinating…

  1. Hi Dan,

    This looks like an awesome patch. Could I see it? Thanks!


    • Thanks Aaron!

      I just posted an image of the Performer sub-patch here: https://onthedll.files.wordpress.com/2011/06/inc_unlocked_performer.jpg

      The Master sub-patch is pretty straightforward, mostly just a clock and a bunch of sends for the settings. The rest of the functionality is in the main patch itself, mostly a bunch of JavaScript to control the spawning of Performers and other performance parameters.

      I wouldn’t mind sharing the entire patch as-is, but it’s pretty deeply intertwined with my particular setup (hard-coded file paths, auto-loads my synth plug-ins, etc) so it won’t run on your machine without some tweaking. I may get around to “generalizing” it one of these days, but I don’t think that will be any time soon…

    • Ben
    • July 12th, 2011

    The final minute of this video may be of interest:

    • Very, very cool. Squarepusher is definitely on the cutting edge of the roll-your-own musical movement…

    • Robin
    • May 12th, 2012

    Hi Dan. I just came across this and would be very interested in taking a look at the max patch. Would you consider sharing it? Cheers!

    • If you can drop me an email and let me know what you’re working on and what your interest is I’m sure we can work something out – DLehrich at gmail dot com

