Tuesday, March 11, 2008

Studying at Monash

I have just started a PhD (part-time) in the Faculty of Art and Design at Monash University, Melbourne. My PhD will be practice-based, i.e. largely by portfolio; I will be making generative artworks. Art and Design describe the desired outcome as a contribution that has "substantial cultural significance", which is a much better approach than trying to fit a practice-based research degree into the usual framework of a "substantial contribution to knowledge".

I knew that moving from a conservatorium of music to an art and design school would be a substantial cultural shift, but it is an even bigger change than I expected, and I'm not confident that I have really grasped the differences yet.

There are a huge number of higher degree students around in Art and Design, I think upwards of 150. There is a lot going on, and the Faculty are making an effort to overcome the isolation often felt by research students. Monash is a multi-campus university these days; I will largely be at the Caulfield campus, where most of the Art and Design people and facilities are.

So far I feel confused but exhilarated!

Tuesday, March 4, 2008

Music.Sound.Design Symposium at UTS

Warning: Long post!

Recently I attended the Music.Sound.Design symposium held on February 13-15 at the University of Technology, Sydney (UTS). To quote from the symposium booklet:
The Faculties of Design, Architecture and Building and Humanities and Social Sciences at UTS are together embarking on a project to develop a new undergraduate program emphasizing cross-disciplinary practice across the areas of music, sound and design and as part of that process are holding the UTS Music.Sound.Design Symposium 2008.

The main senior academic figure present was the Dean of the Faculty of Design, Architecture and Building, Theo van Leeuwen. The nitty-gritty organisation was done (I think) mostly by Ben Byrne. Apparently there was only a short lead time for organising the event, yet it ran pretty well.

There were four one-hour keynote addresses, twenty 20-minute presentations, two concerts, a showing of an installation, and three so-called workshops. All of these except the workshops were open to the general public; the workshops were about curriculum and pedagogy, and were restricted to educators and practitioners. In fact everyone who showed up at the Symposium fitted into these categories, so the workshops were also opened to everyone. Since I am no longer teaching, I only attended one of the workshops; I got to most of the other sessions.

Usual disclaimer: what follows is a personal view of a complex event. Apologies to anyone I have misrepresented!

The keynote addresses

The opening address was by Kees Tazelaar, head of the Institute of Sonology at the Hague. The Institute is a unique institution, devoted to sound, with a central focus on electronic and computer music; the name "sonology" was coined when the Institute was set up. Kees talked about several topics related to "classical" electro-acoustic music, including his own compositional methods, his work on the reconstruction of Varèse's Poème électronique, created for the Philips pavilion at the Brussels World Fair in 1958, and the compositional work of Gottfried-Michael Koenig, a computer music pioneer who was a predecessor of Kees's as head of the Institute. As I understand it, the approaches of both Kees and Koenig involve treating processes on an equal footing with source material; the output consists of many layers of processing, where the processes themselves are organised in a manner inspired by serialism.

The second keynote address was by Ernest Edmonds, who runs the Creativity and Cognition Studios at UTS. He is interested in what he called "Art Systems" or "Art Processes", and in particular in works where a single abstract underlying process gives rise to both sounds and visuals. He showed an example of an audio-visual work inspired by the colour-field paintings of the 1960s, presented at a celebration of these paintings. The visuals consisted of vertical bars of colour, accompanied at first by "chuffs" of sound passed through resonant filters. Initially there seemed to be a very simple correlation: each chuff caused a change in the image, but after a while, as the density of chuffs increased, it became clear that something more complicated was happening. I asked Ernest about this, and he said that the piece was in four sections, with a different process in each section. He also said that it went down very well with an audience of colour-field painting buffs.
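
To make the idea concrete, here is a minimal sketch of my own in Python; it has nothing to do with Ernest's actual software. A single abstract process emits events, and each event is rendered twice, once as the specification of a filtered "chuff" of sound and once as a change to a coloured bar, so the audio and the visuals share one underlying source.

```python
import random

def art_process(n_events=8, seed=1):
    """A single abstract process: each event carries one parameter
    that will drive BOTH the sound and the visual rendering."""
    random.seed(seed)
    t = 0.0
    for _ in range(n_events):
        t += random.expovariate(2.0)  # event times (Poisson-like)
        x = random.random()           # the shared abstract parameter
        yield t, x

def render_event(x):
    """Map the same parameter x to a sound spec and a visual spec."""
    sound = {"filter_hz": 200 + 1800 * x,  # resonant-filter centre frequency
             "duration_s": 0.1}
    visual = {"hue_deg": 360 * x,          # colour of a vertical bar
              "bar_width_px": 20}
    return sound, visual

for t, x in art_process():
    sound, visual = render_event(x)
    print(f"t={t:5.2f}s  chuff at {sound['filter_hz']:6.1f} Hz, "
          f"bar hue {visual['hue_deg']:5.1f} deg")
```

One could change the mapping functions from section to section, as in Ernest's four-part piece, without touching the generating process at all.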

The Japanese artist Yasunao Tone was the third keynote speaker. His presentation was difficult to follow because of language problems, but nonetheless very interesting. Yasunao was a member of the Fluxus group and has a long history of engagement with experimental art in many media. He talked about some early pieces, including one done as part of a presentation for Volkswagen, where a VW Beetle was wired up with proximity sensors and so on, so that it made various sounds when people approached it, opened the doors, etc. Some of the sounds were very short snippets of the German national anthem. According to Yasunao, the VW executives weren't impressed. Yasunao then talked about a number of works he has done based on text, and in particular on Chinese characters, which of course make up the main part of Japanese writing. He described one work, based on a Chinese translation of a text by Ezra Pound (I think), where each character was represented in several ways: in its modern form, in an older quasi-pictorial form, by an actual picture, by its sound as read by Yasunao (not an expert Chinese speaker, he says), and by the original English word. His aim was to supply the aural and visual elements missing from a normal written translation. He said that the older form of one of the characters was supposed to represent a baby being placed in a river as part of some sort of ritual, and he actually found a picture of a baby being placed in a river. Yasunao is well-known for his "wounded CDs", where he played CDs damaged in various ways, to upset the then much-hyped "purity of digital sound", but he didn't talk particularly about this work. It is clear that he has enormous energy, and he opened his talk by reciting/singing a sound-poem (no words) by (I think) Nam June Paik. He said it was to wake himself up, and it woke up the rest of us too!

I discuss the fourth keynote address, by Julian Knowles, at the end of this post.

The twenty-minute talks

The twenty-minute talks covered a very wide range of topics. The biggest single group was formed by artists talking about the way they use sound in their art, including audio-visual work, installations with an audio component, sound sculpture, virtual musical instruments (realised on a computer), and so on. Jim Denley talked about recordings he had made of his own improvising in some extraordinary natural spaces in the Buddawang Mountains. There were two talks by builders of physical (as opposed to virtual) instruments: Danielle Wilde described her "hipDisk", which requires the wearer to make dancer-like movements with the body in order to play tunes, and Donna Hewitt described her "eMic", a microphone stand with various controls attached to it, allowing singers to control the processing of their sound.

A couple of the twenty-minute talks were about pedagogy: John Bassett spoke on teaching sound engineering and Densil Cabrera on teaching acoustics. Damien Castaldi discussed the way that radio is mutating into podcasting and webcasting. Stephen Barrass talked about the work his group is doing in data representation, with an emphasis on sonification (representing data as sound; analogous to visualisation).
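
For anyone unfamiliar with the term, here is a toy illustration of sonification. It is entirely my own and has nothing to do with Stephen's systems: each value in a data series is mapped to a pitch, and the series is rendered as a sequence of sine tones in a WAV file.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def sonify(data, filename="sonification.wav", note_s=0.25):
    """Map each data value to a pitch (low value -> low pitch) and
    write the resulting sequence of sine tones to a WAV file."""
    lo, hi = min(data), max(data)
    frames = bytearray()
    for value in data:
        # Linear map of the value onto a 220-880 Hz pitch range.
        norm = (value - lo) / (hi - lo) if hi != lo else 0.5
        freq = 220.0 + norm * (880.0 - 220.0)
        for i in range(int(SAMPLE_RATE * note_s)):
            sample = 0.5 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(filename, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)  # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

# A week of made-up temperature readings becomes a rising-and-falling melody.
sonify([12.1, 14.3, 18.9, 21.5, 19.2, 15.0, 13.4])
```

Real sonification research is far more careful about perceptual scaling and about what the ear can actually discriminate, but the basic move, data in, sound out, is the same.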

Another group of talks could be described as historical and critical. There were three talks on sound from a cinematic point of view, in part devoted to historical changes in the way sound has been treated in film. Peter Blamey talked about La Monte Young's idea of listening "inside a sound", and the progressive changes in the sort of sounds that La Monte Young used. Caleb Kelly talked about "cracked media", a movement where a closed medium, for example an LP, was cracked open by melting part of the disc, sawing it up and gluing the pieces back together in a different arrangement, and so on. Computer technology has now made all media open, despite the Digital Rights Management bully-boys.

Finally (in logical order, if not in time order), Mitchell Whitelaw gave a very wide-ranging talk about new media, starting from the distinction made by Hans Ulrich Gumbrecht between "meaning culture" (trying to understand the world) and "presence culture" (body-centric living in the world), and leading on to a dialectic between the immaterial (abstract patterns, bits, data) and the material (the embodiment of these patterns as things we can hear, see, feel). These ideas seemed to be in danger of becoming a theory of everything, but this is work in progress, and it will be very interesting to see what it develops into.

The installation and the concerts - the power of orthodoxy

Robin Fox had an installation which represents a development of his work with oscilloscopes. In the earlier work Robin fed carefully calculated audio signals into an oscilloscope, generating amazing rapidly-changing shapes and patterns. Later Robin used a laser with a green beam in performance, flicking it rapidly all over the room and the audience, again under the control of sound. The installation at UTS was in a blacked-out room, with two of these sonically controlled green lasers, some mirrors, fog from a fog machine, and quite intense sound. The result was pretty impressive, but didn't quite have the impact for me of the laser performance I saw him do a year or two ago.
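
I don't know how Robin actually calculates his signals, but the underlying principle is easy to sketch: in X/Y mode an oscilloscope plots one audio channel against the other, so two sine waves in a simple frequency ratio trace a stable Lissajous figure, and changing the ratio or the phase makes the figure morph. Here is a minimal Python sketch of my own; the file name and the parameters are just placeholders.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def lissajous_wav(fx=440.0, fy=660.0, phase=math.pi / 4,
                  seconds=5.0, filename="lissajous.wav"):
    """Write a stereo WAV: the left channel drives the scope's X input,
    the right channel drives Y.  A 2:3 frequency ratio (440:660) traces
    a stable Lissajous figure; other ratios and phases give other shapes."""
    frames = bytearray()
    for i in range(int(SAMPLE_RATE * seconds)):
        t = i / SAMPLE_RATE
        x = math.sin(2 * math.pi * fx * t)          # X deflection
        y = math.sin(2 * math.pi * fy * t + phase)  # Y deflection
        frames += struct.pack("<hh", int(x * 32767), int(y * 32767))
    with wave.open(filename, "wb") as wav:
        wav.setnchannels(2)
        wav.setsampwidth(2)  # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

lissajous_wav()
```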

There were two concerts. Given the wide range of practices discussed during the Symposium, they had a surprisingly narrow scope. The organisers were not really to blame; rather it seems to be the power of orthodoxy. In fact two orthodoxies were represented at the concerts, where by "orthodoxy" I mean not just a genre, but a genre that becomes a normative force: things ought to be done this way; it becomes difficult to break away. The word "orthodoxy" was suggested to me in discussion.

The three pieces presented by Kees Tazelaar come from the (now) academic electronic/computer music tradition, which belongs squarely to art music. The first piece was the short Concret PH by Xenakis, created for the 1958 Philips pavilion along with Varèse's Poème électronique. Then we heard the reconstruction of the Poème électronique worked on by Kees, and a piece by Kees himself, whose title I unfortunately didn't catch, but which was inspired by the phenomenon, seen in very cold climates, of one's breath crystallising into a cloud of ice particles.

The remaining pieces in both concerts, with the partial exception of Yasunao Tone's, fell under the "laptop performance" orthodoxy. It is a part of this orthodoxy that the only information given to the audience is the names of the performers. The pieces, or sets, don't even have names, and there is nothing resembling program notes. (Kees's piece fell victim to this orthodoxy; there was no information about it in the program.) Of course there are different sub-practices within laptop performance: Donna Hewitt made visible gestures during her engaging performance with the eMic instrumented mike stand; Philip Samartzis played very quiet sounds after a raucous beginning, while Robin Fox's piece was uncomfortably loud; Peter Blamey didn't use a laptop at all, just a mis-wired mixing desk. Nonetheless the pieces did all belong to one relatively narrow practice.

Yasunao Tone's performance was the only one with a visual component. Yasunao had a drawing tablet connected to his laptop and drew a sequence of Chinese characters. I think these constituted a Chinese translation of a poem by Ezra Pound (again, there were no program notes). There was a base sound-track of fairly confused-sounding and harsh noises from many sources. The stylus of the drawing tablet acted to puncture through this base layer and release even louder and harsher sounds. Although the overt (and literary) structure of the piece set it apart from the usual laptop performance, in other respects, especially in the sound world used and the semi-improvised performance, it fitted in to the laptop performance orthodoxy very well.

The organisers invited various people to participate in the concerts, and some of those invited work across a range of genres and practices. But it seems that they all automatically went into laptop performance mode, succumbing indeed to the power of the orthodoxy.

The conservatoire model and its inversion

The final keynote address was by Julian Knowles from the Faculty of Creative Industries, Queensland University of Technology. Julian's intentionally provocative talk was about aspects of music education. Among other things, he put up a list of various composition appointments at conservatoria around Australia. The appointees all came from the Western classical tradition, most had studied in England, and the dominant influence was European modernism; indeed some of the people had studied with key figures in this movement. The only real exception was the now defunct Music Department at La Trobe University, which had strong links with the University of California at San Diego.

In this context, Julian put up quotations from conservative figures in Australian music asserting that the only sort of music education worthy of the name was education in the Western classical tradition.

Julian went on to list various features of what he called the "conservatoire model". I didn't catch them all (there were a lot), but the starting point was that composition and performance are distinct activities carried out by different people, and recording is a third distinct activity, carried out by technicians rather than creative people. Julian then systematically inverted all the features of the conservatoire model, so after this inversion the same person is both composer and performer, recording is a creative activity carried out as often as not by the performer/composer, and so on. Julian argued that this inverted model is the reality of today's practitioners.

When Julian was at the University of Western Sydney he was involved in the Electronic Arts program there (now closed, as with much of the rest of the art program at UWS). The program attempted to address this new reality in its course structure. For example, traditional music notation was not a prerequisite. Julian put up a collage of alternative notations, such as waveform displays, track layouts in ProTools, a Max patch, and so on. Julian argued that if a student needs traditional notation, it should be available, but not everyone needs it.

Julian also made the point that thanks to the wide spread of music-making technology, the institutions are no longer the gate-keepers for innovation in electronic or computer music. The institutions can certainly act as creative centres, but they are no longer the only source.

I did have the feeling that there was a certain amount of stigmatisation of conservatoria during the symposium, and after Julian's talk I was tempted to leap up and say that my MMus portfolio at the Sydney Conservatorium exemplifies several of the practices discussed at the symposium, and contains no traditionally notated pieces. But of course Julian is largely correct. The core mission of the Sydney Conservatorium is to train the next generation of classical music performers, and the Conservatorium has close industry links with the Sydney Symphony Orchestra and other such organisations. The other multifarious activities of the Conservatorium—composition, musicology, music education, music technology, research into music pedagogy and performance—are all seen as ancillary. Of course the Conservatorium orchestra must play Tchaikovsky, as the students must be able to perform Tchaikovsky as part of their professional training, to summarise a conversation I overheard. It doesn't matter that access to an orchestra is essentially impossible for a student composer. Playing Tchaikovsky is the reality of the industry.

Julian's talk was the last activity in the pedagogical strand of the conference. I didn't really engage with this, as I am not now involved in teaching, but I was aware of some of the undercurrents. Remarkably, although three of the four keynote speakers describe themselves as composers, there was not very much discussion of music (and I include jazz, pop, rock, world music, electronica, hip-hop,...). Also I heard no overt discussion of design at all, and I don't know what the word means in this context. It was suggested to me that design in some sense underlies all of the symposium topics, though surely any discipline worthy of the name has a systematic methodology.

Thus there was an impression that the symposium was really about sound, and that music and design were secondary. Another topic was whether computer programming should be taught, and if so in what form. Tom Ellard (rock muso, electronic music pioneer, audio-visual artist) put up a page from Schoenberg's harmony textbook and said that this should be taught before programming. But then Tom made an ambit claim for "music" to include all art forms, including painting and architecture.

Finally the question arose as to whether the proposed course will just be a collection of unrelated units, or whether there is a coherent disciplinary core. It was suggested to me that historical and critical studies might provide such a core. The course is at a very early stage of construction, and the symposium was not expected to provide final answers to such questions. It will be interesting to see what is taught in 2010, when the course is planned to start.

For me the value of the symposium was what I hoped it would be: encountering a wide range of views from a collection of very interesting people!