Theory
"Musical instruments produce sounds. Composers produce music. Musical instruments reproduce music. Tape recorders, radios, disc players, etc., reproduce sound [...]." -John Oswald "Battered by the Borrower: The Ethics of Musical Debt"
What is a musical instrument? Well, a musical instrument is a tool for making sound. I would argue that an instrument is also a tool for mediating a sonic idea. As such, one can think of it as a medium [1] for sound production. When we treat instruments as media, they can also be viewed as extensions [2] of the capabilities of the person utilizing them, forming symbiotic combinations [3] with that person. Musical instruments allow a composer to realize a sonic expression, but they also inform the sonic ideas of the composer by giving access to previously unobtainable sounds. These sounds are not only used to realize preconceived musical ideas; they also expand the composer's horizon by behaving in unexpected ways and offering a space of new sounds to explore. I think it cannot be overstated how important the choice of media (and thus instruments) is for artistic expression.
Since the rise of recording technology, the definition of a musical instrument has broadened beyond traditional acoustic instruments to include recording technology and the recording studio [4]. Today, the step of including computer software doesn't seem too far-fetched. But new aesthetic problems arise with it. In his much-noted essay "The Studio as Compositional Tool", Brian Eno talks about the significance of tape and its "[putting] music in a spatial dimension, making it possible to squeeze the music, or expand it" [4]. The studio as Eno sees it, consisting mostly of analog technology such as tape, is still a real room in the physical world. As part of the physical world, it is bound by the physical laws that govern that world. Digital technology transports music further, into a 'virtual' space where the rules of the physical world don't necessarily apply.
Established instruments don't exist in a contextual vacuum. Whenever one chooses to use an instrument, its artistic and cultural context is imported into the work. As Curtis Roads put it: "To pick up a gold-tipped fountain pen and inscribe a treble clef and key signature on onionskin staff paper is to import a system of composition. It entails a selection from an enormous range of possible approaches to making music, a palette of sounds, their combinations, a mode of performance, and even the audience." [5] Whenever an instrument is used, it brings its own cultural context into the composition: a double bass or saxophone never really loses its jazz connotation, and a 909 kick drum is always associated with house music. When using electronic or software instruments, the composer is also influenced by what the creator of the instrument intended or assumed they would make: "Even general-purpose toolkits impose aesthetic, technical, and sonic biases. Anything may be possible in theory, but is it easy to design and interact with, and is the sonic result worth the development effort?" [6]
Whereas the sonic possibilities of traditional acoustic instruments are fairly well defined, the computer puts the burden of imposing sonic and creative limits on the user. This is especially true for composers who write their own tools or instruments (see also Robert Henke, "Give me Limits!": https://youtu.be/iwOaYxSJGqI). In the following paragraphs I will argue that instruments shape the boundaries of the work a composer can realize through their interface and sonic abilities. I therefore think it is important to be mindful of those boundaries as a composer, and that there is merit in knowing how to design your own instruments and define these boundaries yourself.
In this section I will lay out a model of my aesthetic practice as a process involving instrument design and composition.
What constitutes the process of instrument design? For me it is the process of defining a sonic space. The space is defined by setting up constraints for both the sonic capabilities and the user interaction with the instrument. These constraints set the borders of the sonic space the composer and listener find themselves in. They give context and allow the listener to draw conclusions about the position of objects in this space. The topology of the space also governs the interaction of the individual with the instrument.
Sonic constraints limit what sounds the instrument is able to produce. The sonic palette is largely defined by the synthesis method, such as additive synthesis, frequency modulation, sampling, physical modeling, or granular synthesis. Further constraints can be defined by imposing limits on the range of values of the instrument's parameters. These constraints can be thought of as hard boundaries: sounds that lie outside of them cannot (without impractical amounts of repurposing and hacks) be produced by the instrument and are thus not part of it. They lay the groundwork for the more intricate aspects that follow.
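To make the idea of a hard boundary concrete, here is a minimal sketch in C++. The parameter name and range are illustrative, not taken from GRNLR: a parameter whose setter clamps every request to a fixed range, so that sounds requiring values outside that range are simply not part of the instrument.

```cpp
#include <algorithm>

// A hypothetical instrument parameter with a hard range constraint.
// Values outside [min, max] simply cannot be set: sounds that would
// require them lie outside the instrument's sonic space.
struct BoundedParameter {
    float min, max;
    float value;

    void set(float requested) {
        // Hard boundary: requests outside the range are clamped.
        value = std::clamp(requested, min, max);
    }
};

int main() {
    // Illustrative constraint: grain durations between 5 ms and 500 ms.
    BoundedParameter grainDuration { 0.005f, 0.5f, 0.05f };
    grainDuration.set(2.0f);   // a 2 s request is clamped to 0.5 s
    return 0;
}
```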
Constraints on the user interaction define the terrain of the sonic space. Some places are easier to reach than others; some are hidden from plain sight. Every instrument has a suggested form of interaction. These constraints are put in place through the instrument's interface, which can be the physical form of the instrument or a GUI. The interaction with the interface has two main points where constraints can be defined: the behavior of parameters and the mapping of parameters to one another. The behavior of a parameter can be one or a combination of the following: linear, exponential, curved, random, or chaotic. This allows the designer to emphasize specific regions of a parameter's range to invite more user interaction, or to make regions deemed less interesting quicker to traverse. The designer can make values behave more or less unpredictably, and the interaction simple or complex. Mapping parameters to one another is a way to reduce an impractically large number of parameters. It also establishes relationships between parameters, and these relationships already encode a large number of musical gestures. An obvious example is dynamics in classical acoustic instruments: forte is not only loud but also suggests a certain timbre, which is the result of many parameters working in tandem to produce a certain sound. Because they reduce parameters and set up more complex interactions, parameter mappings are especially suited for live performance [7]. These are soft boundaries: they shape how a composer will interact with an instrument and favor certain sounds or gestures that the designer deemed more important.
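As a rough illustration of both kinds of soft boundary, the following sketch (all names, ranges, and curves are hypothetical, not GRNLR's) shows a skewed response curve that emphasizes one region of a parameter's range, and a single "dynamics" macro mapped to several synthesis parameters at once, analogous to the forte example above.

```cpp
#include <cmath>
#include <cstdio>

// Map a normalized control value in [0,1] onto [min,max] with an
// exponential skew: skew < 1 gives finer resolution near min, inviting
// more detailed interaction in that region of the range.
float skewedMap(float norm, float min, float max, float skew) {
    return min + (max - min) * std::pow(norm, 1.0f / skew);
}

// A single "dynamics" macro driving several synthesis parameters at once,
// analogous to forte on an acoustic instrument: louder and brighter together.
struct DynamicsMapping {
    float amplitude;       // linear gain
    float filterCutoffHz;  // brightness
    void set(float dynamics) {  // dynamics in [0, 1]
        amplitude      = dynamics;                                   // loudness
        filterCutoffHz = skewedMap(dynamics, 200.f, 12000.f, 0.4f);  // timbre
    }
};

int main() {
    DynamicsMapping d;
    d.set(0.9f);  // "forte": high gain and an open, bright spectrum
    std::printf("amp=%.2f cutoff=%.0f Hz\n", d.amplitude, d.filterCutoffHz);
    return 0;
}
```

Because the mapping bundles loudness and brightness into one control, the performer sweeps a musical gesture rather than juggling individual parameters; the designer's choice of curve and coupling is exactly the soft boundary described above.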
We can already see that the design of the instrument has a significant effect on the practice of composition. A composition can only be realized in the sonic space defined by the instruments used. Moreover, the composer is also subject to all the soft boundaries that influence her expression. Because of this, the composer must be aware of the influence the instrument has on her work and use it to her advantage. With this in mind, the design and choice of instruments becomes part of the compositional process. When a composer uses an instrument in her composition, she imports the sonic space that defines the instrument, as well as its cultural connotations, into the work.
A musical composition can be thought of as a framework of interconnected sonic spaces in which musical gestures are performed. For me, the organization of these spaces is an integral part of composition.
This instrument cannot be described sufficiently without its environment. On its own it seems quite limited in some respects. The interface is primitive, offering only mouse input, so only one parameter can be changed at a time. It lacks filters and reverberation. But the instrument is designed to be used inside a modern DAW. The sonic tools it lacks can easily be added; for this, one can use specialized effects of much better quality than I would have the time or skill to implement as part of the program. The interface can also be made vastly more usable through the augmentation of the DAW. Host automation lets the user write a detailed graphical score controlling all the parameters independently.
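The idea behind such a graphical score can be sketched generically (this illustrates the concept of host automation, not any DAW's actual implementation): a drawn curve is a list of breakpoints, and the host interpolates between them to produce a parameter value at every point in time.

```cpp
#include <vector>
#include <cstdio>

// A drawn automation curve as a list of (time, value) breakpoints.
struct Breakpoint { double time; float value; };

// Return the interpolated parameter value at time t (curve sorted by time).
float valueAt(const std::vector<Breakpoint>& curve, double t) {
    if (curve.empty()) return 0.0f;
    if (t <= curve.front().time) return curve.front().value;
    for (size_t i = 1; i < curve.size(); ++i) {
        if (t <= curve[i].time) {
            // Linear interpolation between the surrounding breakpoints.
            double span = curve[i].time - curve[i - 1].time;
            double frac = span > 0.0 ? (t - curve[i - 1].time) / span : 1.0;
            return curve[i - 1].value
                 + static_cast<float>(frac) * (curve[i].value - curve[i - 1].value);
        }
    }
    return curve.back().value;
}

int main() {
    // A "graphical score" for one hypothetical parameter:
    // grain density rises over four seconds, then falls away.
    std::vector<Breakpoint> density = { {0.0, 5.f}, {4.0, 80.f}, {8.0, 2.f} };
    for (double t = 0.0; t <= 8.0; t += 2.0)
        std::printf("t=%.0fs density=%.1f\n", t, valueAt(density, t));
    return 0;
}
```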
Certain DAWs also allow for the addition of modulators like LFOs or envelopes, giving access to higher-level abstractions of the parameters. Ableton Live also allows users to write their own interface augmentations in Max for Live. This makes more in-depth parameter mappings possible that are open and easy to change, without needing to be hardcoded in the instrument.
It is also possible to implement less conventional physical interfaces like gamepads or joysticks. All of this can also be done by loading the instrument directly into Max.
This was one of my main motivations for writing the plugin. I wanted the ability to control all the parameters of the instrument visually over time and to integrate it into my existing workflow, all while keeping it open enough to be reasonably easy to extend. Another important aspect for me was the ability to collaborate with other artists who might not be technically versed or interested enough to use a platform like SuperCollider for making music.
It is of course equally feasible to completely abandon traditional DAWs and make music with tools like SuperCollider or PD/Max alone, but this poses a few technical obstacles that can be detrimental to the creative flow, especially when you want to compose in a linear, time-based way.
Perhaps the most important design decision regarding the instrument is that it is sample-based. Rather than synthesizing completely new sounds, the instrument takes an existing digitized sound as its sonic material. The selection of the source material is one of the most important decisions made while composing. The material's timbre and (micro-)structure are the foundation of the sonic process and shape its direction early on. In fact, I would like to draw a parallel between the complex inner dynamics of a sample used in granular synthesis and the complex parameter mappings of user interface design. For example: if one uses a granular synthesis program controlling just the start position of new grains, modulating the position from the beginning to the end of the sample, the listener can observe many complex changes in timbre. In this way the sample functions like a very complex, interconnected web of parameters.
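A minimal sketch of this grain-position example might look as follows (illustrative only, and much simplified compared to a real granular engine such as GRNLR's): each grain copies a short, Hann-windowed slice of the source buffer, and the start position of successive grains is scanned from the beginning to the end of the sample.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

constexpr double kPi = 3.14159265358979323846;

// Render one grain of `grainLen` samples taken from `source` at `start`.
std::vector<float> renderGrain(const std::vector<float>& source,
                               size_t start, size_t grainLen) {
    std::vector<float> grain(grainLen, 0.0f);
    for (size_t i = 0; i < grainLen && start + i < source.size(); ++i) {
        // Hann window removes clicks at the grain boundaries.
        float w = 0.5f * (1.0f - std::cos(2.0 * kPi * i / (grainLen - 1)));
        grain[i] = source[start + i] * w;
    }
    return grain;
}

int main() {
    std::vector<float> sample(44100, 0.0f);  // stand-in for a loaded sound
    const size_t grainLen = 2048;            // roughly 46 ms at 44.1 kHz

    // Scan the grain start position from 0.0 (beginning) to 1.0 (end):
    // even with only this one parameter moving, the grains traverse the
    // sample's inner (micro-)structure and the timbre changes constantly.
    const int numGrains = 100;
    for (int n = 0; n < numGrains; ++n) {
        double position = double(n) / (numGrains - 1);  // 0.0 .. 1.0
        size_t start = size_t(position * (sample.size() - grainLen));
        auto grain = renderGrain(sample, start, grainLen);
        // ... grains would be summed into an output stream here ...
        (void)grain;
    }
    return 0;
}
```

The single "position" control behaves like the macro mappings discussed earlier: one parameter, but behind it the sample's own micro-structure acts as the web of coupled parameters.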
Depending on how strong the identity of the material is and how the parameters of the instrument are set, the material is more or less identifiable. The instrument acts like a mask put upon the source material. One can think of the mask as having more or fewer holes, giving the material a chance to shine through.
A strong obfuscation of the source material allows the composer to use material without it being recognizable. The material then loses much of its features and identity. Most importantly, it loses its structural and logical integrity. Notions of beginning, end, attack, release, or pitch lose their meaning: the material becomes "fluid". "Audio data become abstract medial artifacts that, with respect to fundamental physical laws, become 'fluid' [...]" [8]
The composer can, however, also decide to deliberately reveal the unaltered identity of the sample. In this case, effects of recontextualization become more pronounced.
This can itself become a musical parameter, with the composer revealing or concealing the nature of the source material.
1: Jonathan Sterne, "Media or Instruments? Yes.", http://offscreen.com/view/sterne_instruments
2: Marshall McLuhan, "Understanding Media: The Extensions of Man"
3: Philip Brey, "Theories of Technology as Extension of Human Faculties"
4: Brian Eno, "The Studio as Compositional Tool", in "Audio Culture: Readings in Modern Music"
5: Curtis Roads, "Microsound"
6: Curtis Roads, "Composing Electronic Music: A New Aesthetic"
7: Andy Hunt et al., "The Importance of Parameter Mapping in Electronic Instrument Design"
8: Michael Harenberg, "Virtuelle Instrumente im akustischen Cyberspace"
