Introduction to Computer Music: Volume One

1. Overview and Early History

Max Mathews, who worked at Bell Labs alongside Claude Shannon and other computer music notables such as John Pierce and James Tenney (and even Charles Dodge in his student days), produced a program that allowed a mainframe computer to digitally synthesize and play back sound. Though it offered only one voice, one waveform, and little control over other parameters, it proved the concept that sound could be digitized, stored, and retrieved. Widely regarded as the father of computer music, Mathews eventually went on to develop Music V, a much more robust and widely used digital synthesis program and the precursor to many more.

Mathews joined us at the 1997 Indiana University Horizons in Computer Music festival, for which he sent me the following information for our program:

Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17-second composition on the Music I program, which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, CSound, Cmix. Many exciting pieces are now performed digitally.

The IBM 704 and its siblings were strictly studio machines; they were far too slow to synthesize music in real time. Chowning's FM algorithms and the advent of fast, inexpensive digital chips made real-time synthesis possible and, equally important, affordable.

Starting with the Groove program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the radio-baton, plus a program, the conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments.

Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the "C" language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using "C". Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.

In 1961, Mathews produced one of the earliest examples of computer speech synthesis with his arrangement of "Daisy," a feat honored in 2001: A Space Odyssey as HAL loses his mind. Mathews moved to Stanford in 1987, recruited by his former Bell Labs supervisor John Pierce (also CCRMA emeritus faculty until his death in 2002), and there worked on his Radio Baton and haptic (force-feedback) performance interfaces. On a personal note, I have not met many pioneers as generous with their time and contributions, and as dedicated to helping out others in their field, as Max is with his work; he is truly a remarkable gentleman.

I was trained to plug in patch cords and turn dials on modular synthesizers while recording the results on analog tape, which then had to be painstakingly spliced to create a composition (gee, this author must be pretty old!). While younger students sometimes long for "good old days" they never actually knew, I am more than happy to work with the incredible digital synthesis, sampling, signal-processing, and audio-editing software now available to me at a minute fraction of what an IBM mainframe and custom converters would have cost Bell Labs in 1957.


Jacobs School of Music | Center for Electronic and Computer Music | ©2017-18 Prof. Jeffrey Hass