It’s a bitterly cold January night in Williamsburg, Brooklyn, but the back room at MonkeyTown, a multimedia performance venue, is packed. The underground audio-magazine loud5 is holding a release party featuring an evening’s worth of experimental music and video, including performances by the laptop artists Zach Layton and R. Luke DuBois. The battery on DuBois’s instrument — a sleek Apple PowerBook — has just died in mid-performance, but a quick jiggle of the laptop’s power cord and all’s right again.
DuBois is a leading figure on the laptop music scene. In addition to composing and performing his own works, he teaches computer music at New York University and recently served as guest director of the Princeton Laptop Orchestra, or PLOrk. He is also the inventor of time-lapse phonography, a software-based system for transforming complex audio samples into static masses of sound.
DuBois’s method involves taking the average value of every frequency in a given sample and using that data to construct a single “average chord” representing the spectral profile of the entire recording. When he applied it to forty-two years’ worth of Billboard Hot 100 hits, the ensuing piece, a thirty-seven-minute-long sonic phantasmagoria called “Billboard,” neatly illustrated four decades of changes in both recording technology and popular taste. As the vocal-laden, hiss-heavy vinyl records of the 1960s gave way to 1970s-era stadium rock and, ultimately, to the digitally mastered drum and bass grooves of hip hop, DuBois’s average chords moved from haunting choral textures to noisy chunks of sound.
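DuBois’s own software isn’t reproduced here, but the core operation he describes, averaging the strength of every frequency across an entire recording, can be sketched in a few lines of Python. The function name, frame sizes, and the file in the commented-out call are illustrative assumptions, not details of his system.

```python
# A minimal sketch of an "average chord": the mean magnitude of every FFT bin
# across a whole recording. Illustrative only; not DuBois's implementation.
import numpy as np
from scipy.io import wavfile

def average_spectrum(path, frame_size=4096, hop=2048):
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                     # fold stereo down to mono
        samples = samples.mean(axis=1)
    samples = samples.astype(np.float64)
    window = np.hanning(frame_size)
    accum = np.zeros(frame_size // 2 + 1)
    frames = 0
    for start in range(0, len(samples) - frame_size, hop):
        frame = samples[start:start + frame_size] * window
        accum += np.abs(np.fft.rfft(frame))  # magnitude of each frequency bin
        frames += 1
    freqs = np.fft.rfftfreq(frame_size, 1.0 / rate)
    return freqs, accum / max(frames, 1)

# freqs, profile = average_spectrum("some_recording.wav")  # hypothetical file
```

Played back as a sustained mass of sound, a profile like this would behave the way DuBois describes: a single static chord carrying the spectral fingerprint of the whole source.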
For his performance at MonkeyTown, DuBois combined time-lapsed passages from Led Zeppelin’s Physical Graffiti with samples of voice and electric guitar from the original album, stretching and looping his material in real time to create a shifting mixture of heavily processed and relatively untouched sounds. The result sounded like a highly refined form of electronica, one that was almost entirely free of recognizable melodies or rhythms. But the methods that DuBois uses to create his music are at once related to those used by pop musicians and heir to their own lengthy intellectual pedigree. Composers and computers have been engaged in an intimate dance for the last half century, and the electronic music that we hear today on television commercials, in clubs, and on the Internet is only a recent manifestation.
In 1957, Max Mathews, a scientist at Bell Labs, unveiled one of the first computer programs designed to synthesize musical sounds from scratch. That same year, a University of Illinois professor named Lejaren Hiller used a computer called ILLIAC to generate his String Quartet No. 4, also known as the ILLIAC Suite. ILLIAC stood three metres high, weighed five tons, contained 2,800 vacuum tubes, and had roughly one-millionth the storage capacity of my 4GB iPod. By feeding it a series of algorithms encoded on punch cards, Hiller composed a work that moved from tonal counterpoint to atonal serialism.
Hiller and Mathews are now hailed as two of the founding fathers of computer music, a genre that has influenced much of what we hear every day. Still, few people know their names, and fewer still understand the role they and others like them have played in shaping contemporary music.
Electromechanical instruments such as Thaddeus Cahill’s 200-ton Telharmonium date to the turn of the twentieth century. By the 1920s and ’30s, composers like Edgard Varèse, Olivier Messiaen, and John Cage were writing original works for electronic instruments such as the theremin and the Ondes Martenot. They were inspired by the Italian futurists Luigi Russolo and Francesco Balilla Pratella, who advocated for technological innovation and the use of mechanical noise as the basis for a new music. They also had the same fetish for technological innovation that has driven the evolution of music for millennia, from the invention of the water organ by the Greek engineer Ktesibios in the third century BC to the development of the modern piano in the late 1800s — a development that made possible the piano music of Debussy and Ravel, music that could not have been accurately reproduced on the pianofortes of Mozart’s time.
In 1948, Pierre Schaeffer, a radio engineer for Radiodiffusion Française in Paris, broadcast his “Concert de Bruits,” a series of pieces composed entirely of recorded sounds from various musical and non-musical sources: locomotives, percussion, a toy top, a saucepan. Schaeffer’s musique concrète — a reference to his use of concrete sounds to generate an abstract piece of music — spawned an entire school of modern composition. It was also the first mature example of what a contemporary listener would recognize as sampling.
Soon afterwards, Herbert Eimert and Werner Meyer-Eppler founded an electronic music studio under the aegis of Nordwestdeutscher Rundfunk in Cologne. Unlike its counterpart in Paris, the Cologne studio, which eventually came under the directorship of Karlheinz Stockhausen, focused on the creation of music made from purely electronic source material. Sine-wave oscillators, ring modulators, filters, and noise generators were used to create new electronic sounds, which were recorded and painstakingly edited on magnetic tape.
Hundreds of electronic music studios modelled after the ones in Paris and Cologne sprang up across Europe and North America. In 1959, the University of Toronto founded the second electronic music studio on the continent, and others soon appeared at McGill, the University of British Columbia, and the Royal Conservatory of Music. Both the Toronto and McGill studios were equipped by Hugh Le Caine, director of the National Research Council’s electronic music laboratory and inventor of the Electronic Sackbut, one of the first analog synthesizers.
By the time computer music entered the scene in the late 1950s, the fundamental elements of modern electronic music were all in place. Composers were splicing and sampling bits of sound in the studio, creating novel timbres and sonorities by electronic means, and processing the results with various effects (echo, reverberation, delay). They had also begun using computers to compose musical scores and to synthesize new and unusual sounds. Their equipment — magnetic tape, patch cords, vacuum tubes, and analog synthesizers the size of railroad boxcars — may seem crude by contemporary standards, but their basic goals and techniques have endured in the digital era.
Many of the attractions of electronic music remain the same as well. Technology doesn’t just allow composers to create brand-new sounds or modify existing ones. It also allows them to realize works that no human interpreter could possibly execute. And it affords them absolute control over their materials, starting from the most fundamental parameters of frequency and amplitude.
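That control is easiest to see at the level of raw synthesis, where a composer specifies frequency and amplitude directly and the computer renders every sample. The sketch below is a generic illustration of that idea, not a reconstruction of any early system; the file name and the particular gesture are arbitrary.

```python
# Direct synthesis from first principles: each output sample is computed from a
# chosen frequency and amplitude. A generic illustration, not an early system.
import numpy as np
from scipy.io import wavfile

RATE = 44100  # samples per second

def sine_tone(freq_hz, amplitude, duration_s, rate=RATE):
    t = np.arange(int(duration_s * rate)) / rate
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# A gesture no performer is required for: two octaves of semitones, five a second.
tones = [sine_tone(220 * 2 ** (i / 12), 0.3, 0.2) for i in range(25)]
signal = np.concatenate(tones)
wavfile.write("gesture.wav", RATE, (signal * 32767).astype(np.int16))  # hypothetical output file
```

Everything about the result, down to the exact shape of each cycle, is determined by the numbers the composer types in.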
Many of the questions raised by those early experiments in electronic music also endure. When does sound cease to be music and become noise? If you teach a computer to write a piece of music by feeding it an algorithm, have you composed the resulting piece or has the computer? And who, exactly, is going to listen to it?
That last question is especially pertinent. By mid-century, serialism had already alienated many listeners who preferred their music tonal and melodic. “Serious” electronic music embraced the least accessible elements of serialism — atonality, extreme abstraction, an often impenetrable complexity — while discarding its most accessible ones, i.e., traditional acoustic sounds. Hiller’s ILLIAC Suite was written for a standard string quartet, but more often than not composers opted either for electronically generated sounds or for acoustic sounds that had been modified beyond all recognition.
They also tended to focus more on the creation of unusual timbres than on rhythm, melody, or harmony — a kind of radical extension of Arnold Schoenberg’s concept of Klangfarbenmelodie, or “sound-colour-melody,” whereby a single note is passed from instrument to instrument, acquiring different tone colours as it goes. Electronic music allowed for infinite manipulations of tone colour while freeing composers from the constraints imposed by acoustic instruments and discrete pitches.
Electronic music eventually found a mass audience by filtering out of the academy and into more commercial realms. Pop musicians and film composers, who were as interested in exploring new sounds as their academic and institutional brethren, quickly capitalized on the work of the early electronic pioneers. By the early 1970s, advances in synthesizer and computer technology had placed sophisticated electronic tools in the hands of many pop and rock bands. Today, virtually everything you hear on your stereo, your MP3 player, or your television has been electronically primped and massaged, and electronic sounds and effects that would have been considered bizarre a generation ago are now simply part of the musical landscape.
“This interdisciplinary combination of music and technology, in the 1960s, was completely unknown. It was very much hidden away in the research labs,” says Barry Truax, a composer at Simon Fraser University who developed a number of groundbreaking computer music applications in the 1970s and ’80s. “In relatively living memory, it’s gone from something very esoteric to something that’s part of popular culture.”
The story of electronic music is not only one of technical refinement and gradual acceptance, however. Recent developments in computer technology — the spread of powerful laptop computers and the growing sophistication and user-friendliness of software applications — have radically affected how electronic music is made and who gets to make it. These developments allow composers to perform staggering feats of electronic manipulation in real time before live audiences. And they have made the tools and techniques of the electronic music studio available to all.
Prior to DuBois’s performance at MonkeyTown, Zach Layton had demonstrated his own brand of computer-assisted music. Layton used an evolutionary algorithm, a technique that repeatedly scores a population of candidates for fitness and breeds new ones from the best of them, to trigger a series of samples on his MacBook Pro laptop. He then tweaked those samples in real time to create a constantly shifting wash of ambient noise, which he used as background for his own improvised electric guitar riffs — a process he later likened to “a sort of real-time musique concrète that’s being driven and interacted and changing over time.” The results were highly unpredictable, interesting to hear, and fun to watch.
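Layton’s patch isn’t documented here, but the general shape of an evolutionary loop driving sample playback can be sketched. The sample names, the toy fitness function (which simply rewards variety), and the breeding scheme below are all hypothetical stand-ins for whatever his software actually measures.

```python
# A toy evolutionary loop that decides which samples to trigger. Hypothetical
# throughout; the fitness function and sample names are placeholders.
import random

SAMPLES = ["drone.wav", "guitar_scrape.wav", "static.wav", "bell.wav"]  # placeholders

def random_pattern(length=8):
    return [random.randrange(len(SAMPLES)) for _ in range(length)]

def fitness(pattern):
    return len(set(pattern))  # stand-in criterion: reward timbral variety

def evolve(population, generations=50, mutation_rate=0.1):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]   # keep the fittest half
        children = []
        for _ in parents:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]                   # crossover
            if random.random() < mutation_rate:
                child[random.randrange(len(child))] = random.randrange(len(SAMPLES))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve([random_pattern() for _ in range(16)])
print([SAMPLES[i] for i in best])  # the trigger sequence a performer might fire off
```

In a live setting, each generation’s winning pattern would be what actually fires the samples, which is why the output keeps drifting rather than settling into a fixed loop.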
The audience was also fun to watch: overwhelmingly young, with a heavy sprinkling of students and experimental sound artists. According to Truax, that’s long been the case with electronic music. “There’s always been a young audience for it,” he says. “The younger the ears, the more open they are.”
The more accustomed to electronic sounds, as well. The Vancouver Island-based composer John Mills-Cockell, who helped introduce synthesizers to Canadian pop music in the 1960s and ’70s through his work with the bands Syrinx and Kensington Market, notes that electronic music has become so pervasive that contemporary listeners don’t even recognize “electronic music” and “computer music” as distinct categories.
“I believe electronic music has greatly influenced the sound of contemporary music in ways most people don’t quite realize,” Mills-Cockell wrote in an email. “There are obvious things like a nifty new synth sound in a pop song, the imaginative use of processing in vocals (for example), but the use of sampling instruments and synthesizers in the production of all kinds of music we hear every day in many different contexts is so prevalent that it’s simply taken for granted.”
Mills-Cockell adds that our perception of music in general is being subtly altered by the fact that much of what we hear in clubs and on film and television soundtracks is generated by computers, rather than by human beings playing acoustic instruments. “It is increasingly accepted as ‘music’ although it is very different from anything we heard say forty years ago and before,” Mills-Cockell writes. “Music is being reinvented.”
So, too, is the very definition of music. Conventional notions of tone and timbre, not to mention melody, rhythm, and harmony, do not apply to much computer-generated music. In the early days of the genre, this placed it within the realm of the avant-garde. Yet few contemporary listeners would be genuinely surprised by the work of the British electronica guru Aphex Twin or Canadian artists like Deadbeat and Plastikman. Now that computer music has infiltrated almost every nook and cranny of the pop world, the philosophical questions it raises, while still intellectually provocative — Is this music at all or a highly engineered form of noise? Are those who make it really composers or musicians or are they sound engineers and software developers? — have been rendered moot.
As computer music has moved from the fringes to the centre, it has also become far less exclusive. DuBois and Layton are both trained composers who accomplish their feats of musical prestidigitation using software they code themselves in a graphic programming environment called Max/MSP. (The Max part is a homage to Max Mathews.) Max/MSP is one of the most powerful music-oriented programming environments around, and it does require some effort to master. But anyone who can operate a mouse can use one of several menu-driven software tools, such as Reason, Reaktor, or Ableton Live, to construct a virtual electronic music studio with remarkably little effort. Gone are the days when an aspiring composer of electronic or computer-generated music needed access to a high-grade research facility stocked with custom-built hardware. “Now you can do everything that they could do better on your laptop,” says Barry Schrader, a composer at CalArts and founder of SEAMUS, the Society of Electro-Acoustic Music in the United States.
Of course, owning a hammer doesn’t necessarily mean you know how to build a house. This was painfully obvious at a laptop jam I attended one night in January at Reboot, a pizza joint on Manhattan’s Lower East Side. The jam was hosted by Share, an organization that regularly invites laptop musicians and video artists to strut their stuff over a local WiFi network. When I arrived, a handful of people were sitting around, fiddling with their laptops, while what I took to be canned electronica played softly in the background.
Turns out it wasn’t canned. Two guys who I had assumed were checking their email were, in fact, “jamming.” One was laying down repetitive computer-generated beats using a software interface he’d written himself back in the 1990s, before commercial music software was widely available. The other, seated directly behind him, was generating equally repetitive swathes of generic techno using Reason. The jammers never made eye contact, and I wouldn’t have known anyone was actually involved in making the utterly forgettable sounds being piped over the room’s speakers if I hadn’t thought to ask.
“A lot of these programs are basically word processors,” says Sever Tipei, the composer and former professor at the University of Illinois who first introduced me to the world of computer music. (Tipei built his own custom applications using high-performance machines at Argonne National Laboratory and the National Center for Supercomputing Applications, both in Illinois.) And apparently, not everyone has something interesting to say with them. This is hardly surprising: the tools of musical expression may now be available to all, but musical ability is not, and never has been. Moreover, as Truax points out, when people use standardized tools to make music, “everything risks sounding the same.”
Still, whatever its downside, the democratization of music through electronic means should in the long run be a good thing. It is already encouraging an exchange of ideas and information among musicians and composers who would not normally cross paths, let alone talk to one another. “You have programs like Max and SuperCollider, and there are a lot of academics using it, but there are also a lot of ordinary laptop performers, and they’re all on the same user groups,” says DuBois, who has seen tenured professors bumping elbows online with the likes of Radiohead’s Jonny Greenwood.
It’s hard to say where all of this may be leading, but we’re certain to see more of it. DuBois already feels like a Luddite compared with his students at NYU, for whom computers aren’t just devices for doing work but also leisure objects and social networking tools. Who’s to say future generations won’t wear their computers like personal jewellery and create music with programs that make today’s software look as primitive as ILLIAC? Ultimately, it may be wise to adopt Schrader’s attitude and hope for the best. “Good work can come from anywhere,” he says. “So the more access, the better.”