BY MARTIN ROBINSON

Why do we use computers? Because they make our work easier, or because they enable new forms of exploration and expression? No doubt most of us would admit that the burden of certain tasks is relieved by the application of computers; they are, after all, machines. But software engineers build systems to encode and communicate our ideas with these complex machines. How do we interact with computers, and how do we entrust them to store our ideas? Must we accept the designs and ideas of engineers in such a potentially rich and expressive medium?

We should at least ask these questions in an area where many of the technologies involved embody the ideas of others. (Issues of authorship, however, are beyond the scope of this article.)

Beginning with a comparison of some of the differences between using commercial music sequencing packages and employing a music programming language, this article gives an introduction to object-oriented programming and suggests some pointers to its expressivity for musical purposes (see Pope (1991) for more examples).

Can programming a computer ever be of equivalent musical value to a virtuoso performance?

Composer-computer dialogue
In theory, computer hardware and software can be built and programmed to receive any physical stimuli, through human intervention or otherwise. Similarly, they can be designed to synthesise or reproduce an infinite variety of responses (including sounds and images). Unlike traditional musical instruments there is no prescribed connection between a particular physical stimulus and a particular physical response. This is simultaneously, for want of better words, good and bad. On the one hand we have the ability to decide how the computer responds to stimuli, on the other, we are forced to decide.

Interactive music systems can be divided into three stages: sensing, processing and response. In many traditional instruments these are almost inseparable. An acoustic guitar, for example, senses via the strings (where they are stopped and how they are plucked), processes through the strings, body and neck, and responds via the strings and body. Changing almost any aspect of the physical construction of the guitar affects all three stages: sensing, processing and response (Rowe 1993). Imagine the difference in sound and playing style of a 1/50th-scale guitar. In traditional instruments, the sensing, processing and response stages are (with a few exceptions, notably the piano) interdependent. A keyboard synthesiser, however, may respond with a considerable variety of sounds while the input stimulus remains the same. Its sensing, processing and response stages are not interdependent.
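The decoupling of the three stages can be sketched in a few lines of code. This is a minimal illustration in Python (not SuperCollider, which appears later in the article); the `Synth` class and its behaviour are invented for this sketch, not taken from any real system.

```python
# A minimal sketch of the three-stage model: because the stages are
# decoupled, the same sensed stimulus can be mapped to different responses.
# All names here are illustrative, not from any real synthesiser.

class Synth:
    """Sensing, processing and response as independent, swappable stages."""

    def __init__(self, process):
        self.process = process  # the processing stage is a pluggable function

    def sense(self, key_number):
        # Sensing: a key press arrives as a plain number (cf. a MIDI note).
        return key_number

    def respond(self, key_number):
        # The response is determined by processing, not by the stimulus itself.
        return self.process(self.sense(key_number))

organ = Synth(lambda key: f"organ tone at key {key}")
bell = Synth(lambda key: f"bell tone at key {key}")

# Identical input stimulus, different responses:
print(organ.respond(60))  # organ tone at key 60
print(bell.respond(60))   # bell tone at key 60
```

In a traditional instrument the equivalent of `process` is fixed by physics; here it is simply a value we choose, which is exactly the freedom (and burden) described above.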

Figure 1 illustrates this difference between traditional and electronic instruments (after Pressing (1990), Mulder (1994) and Rowe (1993)). Although these ideas apply primarily to what one might class as instruments or real-time performance systems, they apply equally to an examination of computer-based music-making.

Representing musical ideas
Since the first computer-generated musical sounds of the late 1950s there have been a multitude of ways in which computers have been applied to music composition and production. A number of efforts have been made to classify these activities, and a glance through the contents pages of computer music texts will reveal many of the viewpoints. One broad distinction would be between employing 'off-the-shelf' commercial packages and writing one's own software in a programming language. Another might be between applying the computer to compositional processes and synthesising sounds with the computer. Classification and differences are, perhaps, interesting and useful areas, but what unites any interaction with a computer is that the means by which a dialogue is undertaken must be chosen rather than assumed. Similarly, the way in which ideas are represented and encoded must also be chosen. (This is not new, however: we have always been able to choose our methods for representing and encoding ideas, whether through text, speech, music or art.)

Buying an 'off-the-shelf' computer music package heavily prescribes the means of dialogue and representation. Although it is often reasonably easy to adopt the means of dialogue and manipulate ideas based on the way in which the sound and music are represented, the severity of the limitations imposed by the software's programmers quickly becomes apparent.

Most commercial music sequencing packages follow a similar model: data are represented as either events or signals (Pope 1993). MIDI notes and blocks of audio data are examples of events. Gradual amplitude changes over time and the data stored within a block of audio are examples of signals. Events and signals are arranged and manipulated through a number of means, for example:

_ input from a MIDI device
_ input from an audio source
_ mouse control
_ menu driven actions
_ keyboard data entry

These seem like very limited and mechanical tasks for the communication of musical ideas to a computer.
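The event/signal model itself is easy to state in code. The sketch below is an illustrative Python rendering of the distinction after Pope (1993); the `NoteEvent` class and field names are invented for this example, not part of any sequencer's actual format.

```python
# Illustrative sketch of the event/signal distinction (after Pope 1993).
# An event is a discrete datum stamped with a time; a signal is a sampled
# function of time. Class and field names are hypothetical.

from dataclasses import dataclass

@dataclass
class NoteEvent:       # e.g. a MIDI note
    time: float        # onset in seconds
    pitch: int         # MIDI note number
    velocity: int      # 0-127

# A signal: one amplitude value per tick, e.g. a gradual amplitude change.
envelope = [0.0, 0.5, 1.0, 0.5, 0.0]

# A sequencer's track is, at bottom, just an ordering of single events:
track = [NoteEvent(0.0, 60, 100), NoteEvent(0.5, 64, 90)]

for event in track:
    print(f"t={event.time}: play pitch {event.pitch}")
```

Everything a conventional sequencer offers is a way of editing these two data types; the article's argument is that this flat ordering is where the model stops.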

Music programming languages
Loy (1988) argues that '[...] music is considered to be open to the purest expression of order and proportion [...]'. Programming languages enable the encoding of music and sound in such terms, in addition to the features provided by commercial sequencing packages. The composer is freed from such a limited model.

At a low level, of course, the means of dialogue and representation are still prescribed by the particular programming language employed. The dialogue between composer and programming language, for example, may be text-based, graphical, or a combination of the two, but in any case the composer has a greater degree of freedom to choose how these methods are used. Similarly, music and sound may still be represented as signals and events within a programming language, but the composer can choose to combine these low-level components into high-level structures. This is not to say that low-level components are not combined within commercial sequencing packages to form higher-level structures. The difference is that in a programming language these higher-level structures are encoded as structures, rather than as mere orderings of single events (as is the case in commercial sequencing packages). This will be demonstrated later.

During the short history of computer music there have been numerous contrasting approaches to the design of computer music languages (see Roads (1996), pp. 783-818). The sheer number of completed and experimental projects exhibits composers' desire to choose how to communicate and represent their musical ideas, since many of those involved in developing the languages are also composers.

Object-oriented programming
A flavour of software engineering technology that pervades many of the systems in use today, from industrial applications to games consoles and the Internet, is object-oriented programming, or OOP. The technology of OOP has its history in the 1960s, but OOP systems experienced a significant rise in popularity in the 1980s (Pope 1991). It is not the goal here to give a tutorial on OOP, but rather to show that OOP is highly appropriate to the communication and representation of musical ideas. It is necessary, however, to introduce some terms and concepts.

OOP systems comprise a number (usually a great number) of classes. Each class describes how to make objects of that class and how they operate. An object is a combination of data and functions (called methods) which are said to be encapsulated within the object. In this way objects contain information and processes appropriate to that type of object, or rather to the class of the object.

To illustrate this, imagine a class 'Number' (an instance of class 'Number' might be 10). Each object of class 'Number' knows how to add itself to other objects of class 'Number', rather than there being an overall knowledge within the system about something called 'addition of numbers'. In order for anything to happen in an OOP language, an object must be sent a message, which causes one of the object's methods to be performed. A message may be sent with additional arguments (or parameters) which, perhaps confusingly, are also objects. The object receiving the message has a special status: it is known as the receiver.

To begin with an abstract numerical example, take our 'Number' 10. One of the methods 'Number' knows about is add, and we want to add 10 to 5.

Figure 2 illustrates an add message being sent to the object 10 (which is of class 'Number') with an additional argument 5 (which is also of class 'Number'). The receiver, 10, knows how to add itself to other numbers and returns (i.e. gives the answer) 15. These notions are intended to make the process of modelling real-world objects or ideas more straightforward.
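The 'Number' example can be written out directly. This is a Python sketch of the article's hypothetical class, not code from any real system:

```python
# The hypothetical 'Number' class: the receiver 10 is sent an 'add'
# message with the argument 5, and returns 15.

class Number:
    def __init__(self, value):
        self.value = value          # data encapsulated within the object

    def add(self, other):
        # The receiver knows how to add itself to another Number;
        # there is no system-wide routine for 'addition of numbers'.
        return Number(self.value + other.value)

ten = Number(10)
five = Number(5)
result = ten.add(five)   # send the message 'add' to the receiver 10
print(result.value)      # 15
```

Note that the argument `five` is itself an object of class 'Number', just as the text describes.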

Although this is still hypothetical, consider modelling a CD player in an OOP system. There may be methods insert and eject for inserting and ejecting discs, and play and stop for starting and stopping playback. There may also be other methods for skipping, searching and volume control. A MiniDisc player performs a very similar job to the CD player, with identical operations: insert, eject, play, stop, skip and search. Yet the way in which the CD and MiniDisc players perform these operations is very different (e.g. disc design, digital audio encoding scheme, and motor and servo configuration). Thus the CD and MiniDisc players have a similar interface but a different implementation. The user who issues a play message knows roughly what result to expect in response (since 'play' has inherent meaning) but does not need to know how the result is achieved (i.e. the implementation is not important). This feature of OOP languages is called polymorphism.
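Polymorphism is easiest to see in code. A minimal Python sketch of the two hypothetical players (the implementation strings are invented descriptions, not real device behaviour):

```python
# Same interface, different implementation: both players answer the
# 'play' message, each in its own way. The classes are the article's
# hypothetical devices, not models of real hardware.

class CDPlayer:
    def play(self):
        return "spinning disc, reading pits with a laser"

class MiniDiscPlayer:
    def play(self):
        return "reading magneto-optical disc, decoding ATRAC"

# The caller issues the same message without knowing which
# implementation will run:
for player in (CDPlayer(), MiniDiscPlayer()):
    print(player.play())
```

The loop is the point: it works identically for any object that understands `play`, which is precisely the polymorphism described above.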

Since the CD and MiniDisc players share many attributes, it is logical to group them as similar classes. Classes in OOP languages are arranged hierarchically, with very general objects at the top and progressively more specialised ones lower down. The CD and MiniDisc players, for example, might belong to a superclass DigitalMediaPlayer (the CD and MiniDisc players are subclasses of DigitalMediaPlayer), which may have some general knowledge about digital sampling. This is known as inheritance (see figure 3). A subclass inherits features from its superclass but can add new features (a DAT player knows about magnetic tape but not about lasers) or redefine features (i.e. change the way the object responds to a particular message).
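The inheritance relationship sketched in figure 3 might look like this in Python. The class names follow the article's example; the sample rates and method bodies are illustrative assumptions:

```python
# Inheritance sketch: DigitalMediaPlayer holds the general knowledge
# about digital sampling; subclasses inherit it, add new features, or
# redefine inherited ones.

class DigitalMediaPlayer:
    sample_rate = 44100                 # general digital-sampling knowledge

    def play(self):
        return f"playing at {self.sample_rate} Hz"

class CDPlayer(DigitalMediaPlayer):
    pass                                # inherits play unchanged

class DATPlayer(DigitalMediaPlayer):
    sample_rate = 48000                 # redefines an inherited feature

    def rewind(self):                   # adds a new feature: tape transport
        return "rewinding tape"

print(CDPlayer().play())    # playing at 44100 Hz
print(DATPlayer().play())   # playing at 48000 Hz
print(DATPlayer().rewind())
```

`CDPlayer` does nothing but inherit; `DATPlayer` both redefines (`sample_rate`) and extends (`rewind`), matching the two kinds of subclass change described above.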

Sound objects
The OOP paradigm seems entirely appropriate to musical applications. Consider two imaginary 'Pitched' and 'Noisy' classes of sound objects. Both may respond to vibrato and tremolo messages in an appropriate manner ('Noisy' may respond to vibrato in the same way as it does to tremolo, i.e. ignoring pitch; 'Pitched' would modulate pitch in response to vibrato and modulate amplitude in response to tremolo). 'Pitched' and 'Noisy' may belong to a superclass 'SoundTypes', and there may be another class 'Sequence' to order 'SoundTypes' and other 'Sequence' objects in time (see figure 4). A 'Sequence' object would also pass any tremolo or vibrato messages that it received to the 'SoundTypes' objects contained within its sequence. Even this very simple example shows the possibilities of manipulating and processing high-level musical structures with object-oriented techniques. Although it is clear that a programming language occupies the processing stage of an interactive music system, it is arguable that it is also a major part of its sensing stage.
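The sound-object hierarchy of figure 4 can be sketched directly. Again this is illustrative Python, not SuperCollider; the class names are the article's hypothetical ones, and the return strings stand in for actual synthesis:

```python
# Sketch of the sound-object hierarchy: 'Noisy' has no pitch to modulate,
# so it treats vibrato as tremolo; 'Pitched' responds to each message
# appropriately; a 'Sequence' forwards messages to its contents.

class SoundType:                  # the superclass ('SoundTypes' in the text)
    def vibrato(self): ...
    def tremolo(self): ...

class Pitched(SoundType):
    def vibrato(self):
        return "modulating pitch"
    def tremolo(self):
        return "modulating amplitude"

class Noisy(SoundType):
    def tremolo(self):
        return "modulating amplitude"
    def vibrato(self):
        return self.tremolo()     # no pitch: respond as if to tremolo

class Sequence(SoundType):
    def __init__(self, *items):   # items: SoundTypes or other Sequences
        self.items = items
    def vibrato(self):            # forward the message to the contents
        return [item.vibrato() for item in self.items]
    def tremolo(self):
        return [item.tremolo() for item in self.items]

seq = Sequence(Pitched(), Noisy())
print(seq.vibrato())   # ['modulating pitch', 'modulating amplitude']
```

Because a `Sequence` is itself a `SoundType`, sequences nest inside sequences, and one vibrato message ripples through an arbitrarily deep musical structure: a high-level structure encoded as a structure, not as an ordering of single events.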


The programming language interface
The programmer not only provides the sensing capabilities of a system: in the act of programming, the computer 'senses' logical thought, ideas and procedures through the mechanics of the language. At a microscopic level, sight, sound and touch are merely complex (although I do stress complex) codes and mechanics. I could not argue that using a programming language is as expressive, intimate or pleasurable as the phenomena of sight, sound and touch. But I believe that composing by programming is more about expression than data entry.

SuperCollider
SuperCollider (authored by James McCartney) is a fully object-oriented programming language for real-time composition and sound synthesis (and soon image synthesis) on the PowerMac. It is my chosen platform for designing composition and performance systems. Many of the reasons for this have been indicated above, but there are many more: its expressivity, its uniformity, and direct contact with the author and other experienced SuperCollider users via the 'sc-users' email list being some of the most important. SuperCollider does not currently support a plug-in architecture, but a new version (SuperCollider version 3) will support plug-ins in C++ (see McCartney (2000)). This finally addresses the major criticism of SuperCollider from the hardcore DSP fraternity: that it does not allow control at a low enough level.

A specific introduction to the language and syntax has been avoided, since a demo of SuperCollider (which includes many examples and a tutorial) is available from http://www.audiosynth.com. (It is suggested that you heed the warning regarding a PC version!)

Conclusion
The original subject for this article was to be a review of SuperCollider; this plan has changed somewhat. A review of this kind would inevitably have resulted in healthy portions of SuperCollider code that in this forum would waste, rather than utilise, space (a web site with examples seemed much more appropriate for this purpose).

Instead, a more general approach was taken, examining the differences between musicians interacting with traditional musical instruments and with computer-based systems. The problem of mapping input to response is vast in a computer system, not least because there is such an enormous choice. Traditional instruments must obey physical laws; as a result, the mapping of input stimuli to responses is limited. Communicating with a computer through an effective programming language for music composition enables the transmission of structural ideas in addition to physical gestural interaction.

It is not suggested that everyone using computers must become even a novice programmer, but that if computer packages offer restricted means of dialogue and inflexible systems of representation, they should at least be questioned.

References
Loy, G. (1988) 'Composing with Computers: a Survey of Some Compositional Formalisms and Music Programming Languages', in Mathews, M. V. and J. R. Pierce (1989) Current Directions in Computer Music Research, The MIT Press, Cambridge, Massachusetts, pp. 291-396.
McCartney, J. (1998) SuperCollider 2, programming language, Austin, Texas. http://www.audiosynth.com
McCartney, J. (2000) 'A New, Flexible Framework for Audio and Image Synthesis', Proceedings of the 2000 International Computer Music Conference, pp. 258-261.
Mulder, A. (1994) 'Virtual Musical Instruments: Accessing the sound synthesis universe as a performer', Proceedings of the First Brazilian Symposium on Computer Music, pp. 243-250.
Pope, S. T. (1991) The Well-Tempered Object: Musical Applications of Object-Oriented Software Technology, The MIT Press, Cambridge, Massachusetts.
Pope, S. T. (1993) 'Real-time Performance via User Interfaces to Musical Structures', Interface 22(3), pp. 195-212.
Pressing, J. (1990) 'Cybernetic issues in interactive performance systems', Computer Music Journal 14(1), pp. 12-25.
Roads, C. (1996) The Computer Music Tutorial, The MIT Press, Cambridge, Massachusetts.
Rowe, R. (1993) Interactive Music Systems, The MIT Press, Cambridge, Massachusetts.


Copyright 2004 Sonic Arts Network