SONIC STATUES
THE TECHNOLOGICAL CULTURE OF OPEN-AIR SOUND SYSTEMS

Ties van de Werff
i023760 |
[email protected]

Thesis MPhil Cultures of Art, Science & Technology
Supervisor: prof. dr. Karin Bijsterveld
Thesis committee: prof. dr. Wiebe Bijker, dr. Jo Wachelder
Faculty of Arts & Social Sciences
Maastricht University, August 2008

Table of contents

Acknowledgements
1. Introduction
   Developments in sound technology and the rise of the popfestival
   The technological culture of sound systems: an STS-approach in sound studies
2. Hearing the empirical: methodologies
   Mapping the network of the sound system
   Getting backstage
   Making jottings on stage: ethnography at the festival-site
3. The rise of the banana: conventional PA versus line arrays
   The state of the art in speaker-technology at the MusikMesse
   Synco: the organisation of predictability
   Stack and fly: the construction of different sound systems
4. No input, no output: sound engineers, mixers, and the choice between analogue and digital consoles
   The touring industry: a traveling circus
   Mixers: sound engineers in the wild
   Continuity, reliability, and rock 'n roll
5. Live performances at the popfestival: seeing and hearing in the digital age
   The co-evolution of visual and auditory technologies
   Liveness: come closer!
6. Conclusion
References
List of figures and illustrations
Appendix
   List of interviewees
   The line array at Pinkpop
   Example of a patchlist

Acknowledgements

"The most important thing about writing is the answer to the question: how good are you at confining yourself, and how long can you stand it?" – Menno Wigman

Although these are the words of a suffering poet, they do sound applicable to a suffering thesis-writer. "Writing," as Menno Wigman continues, "actually doesn't have anything to do with living." He could be right. Luckily, I had to do empirical research in addition to the solitary process of writing, and during those encounters, I saw a whole lot of living going on. This thesis is the result of a wonderful experience.
Not only did the research bring me to the MusikMesse and two popfestivals, where I strolled around for days in the backstage areas, observing the change-over of bands such as Bad Religion and Incubus, it also made me discover the beauty (and horrors) of empirical research. The world out there is interesting; it forces you to think, to analyse, to find emerging patterns in numerous interviews. A few times, I really felt the thrill as if I actually had 'discovered' something. I also found out that criticising social constructivism is easier than practicing it. Ethnography made me realise that scientific research (and social constructivism) really can mean something, other than a sophisticated story made out of a handful of philosophical books.

This thesis could not have been written without the help of many people. I am especially grateful to Gerrit Kuster, Benno Rottink, and Rene Scholing of The Production Factory. Not only did they provide me with access to Pinkpop, their friendliness and humour also supported me in every way during the fieldwork. Next to Carmalita, Jules & Bob Willekens, I also want to thank all the interviewees. I thank my supervisor Karin for her criticism and advice, without which I never would have made it.

With this thesis, some eight years of studying come to an end. I enjoyed it a lot. It seems that CAST finally gave me the tools to enter society. I have learnt so much this year, not in the least from (the drinks with) my fellow CASTees. Let them all be praised. I want to thank my friends, who supported me and probably have missed me these past months (I surely did miss them). And finally, I want to thank my parents, for supporting me through all those years of studying, and for hearing me whine about it from time to time.
The blonde in the bleachers, she flips her hair for you
Above the loudspeakers, you start to fall
She follows you home, but you miss living alone
You can still hear sweet mysteries calling you
The bands and the roadies, lovin' 'em and leavin' 'em
It's a pleasure to try 'em, it's trouble to keep 'em
Cause it seems like you gotta give up such a piece of your soul
when you give up the chase
Feeling it hot and cold
You're in rock'n'roll, it's the nature of the race
It's the unknown child, so sweet and wild
It's youth, it's too good to waste

Joni Mitchell, "The Blonde in the Bleachers", from the album 'For the Roses' (1972)
(though it's Okkervil River's interpretation (2007) that travelled with me these months)

Chapter 1
Introduction: open-air sound systems

"Pinkpop, thank you! Hopefully see you next time!" The drummer finishes with a big roll on his cymbals, and the last chords played by the guitarist are shown on gigantic video screens. People scream, and to an overwhelming applause, the musicians wave and slowly disappear into a black hole at the back of the stage. People keep cheering and screaming for more, but the screens die out, and immediately, another group of people appears on stage – a signal for the crowds to turn their backs and walk away. The people on stage start rolling up cables, putting the amps in flight cases on wheels, and carrying the instruments away from the same spot where, only five minutes ago, a band was cheered on by 80,000 people. A guy with in-ear monitors directs the stage-hands to roll the flight cases directly behind the stage, where huge trailer-trucks are already standing by at the adjacent loading dock. A mixer rolls his huge console across the stage, and two engineers are checking the huge hanging banana-shaped speakers with little devices in their hands. When the stage floor is cleared, another group of stage-hands appears with new instruments, amps and microphone cables. The new mixer starts a sound-check.
Within half an hour, another band starts its show, and the packed trailers are already on their way to another gig.

Every year, hundreds of thousands of people visit popfestivals. In a few days' time, they watch and dance to many bands, eat and drink from a wide variety on offer, and sleep in crowded camping sites on the festival terrain. A popfestival resembles a small village, with the one big difference that for a few days, life seems to revolve around one thing only: music. People at popfestivals willingly subject themselves to massive walls of sound. However, the sounds produced are not solely created by their favorite bands. They are made possible by massive loudspeakers, accompanied by trucks full of microphones, cables, mixing equipment and hordes of technicians, sound engineers and stage-hands. Although many people look eagerly at the heavily secured fences of the backstage area, few of them will imagine hard-working engineers or wonder how this huge artistic and logistic event came into being in the first place. The sheer size and volume of the sound systems used alone makes one wonder: What does it take to address thousands and thousands of people with sound? What kind of sound technologies do you need? How many different groups of technicians are involved? And what role is left for the artists?

The day before show night, I find myself on stage watching huge speakers being hung. On the stage floor, two engineers are connecting speaker-elements to a hoist hanging in the top of the stage-roof. The engineers are discussing two types of sound systems, and one engineer says: "I don't like these new line arrays, they don't sound rock 'n roll!" The other replies: "That's because you don't have any experience with them, you only work with conventional stacks!" Surrounding the squabbling men, several light-technicians are busy attaching electronic devices to large trusses. Little hoists with metal chains are hanging everywhere.
As I walk off stage, I see five people dragging a huge mixing console to a little tent, some 75 meters away from the stage. They pull the console inside the tent, where a mixer is already tinkering with a console. "Another Digico. Well, let's put him behind this analogue one," he says to the pulling site-crew. The different mixing consoles are surrounded by racks full of electronics. On top of the racks, two laptops show moving frequency-plots and the decibel levels produced. At another console, a light-technician is controlling some of the electronic lights on stage. "Your people better hurry up," the mixer says to the light-technician, "the band arrives at 16h and I want to do a proper line-check first." The light-technician mumbles something, and walks away.

In the meantime, two trailers arrive at the loading dock behind the stage. A man is watching the unloading site-crew, while talking fiercely into his telephone. It appears that there are not enough speakers. "I need extra Synco-speakers. Isn't there a network-partner in the Netherlands who can spare some ten extra boxes? I don't care, as long as they are here tomorrow morning!" the man says. When ending the call, he rushes into a container just behind the stage, which functions as a mobile office. Inside, five people are working on their laptops. Phones are ringing constantly. A production-manager discusses his band's technical specs with the stage-manager. The manager of the band doesn't like the equipment on stage: "Our monitor-mixer brings his own digital console, but we need more power. Didn't you see our rider in the mail?" The production-manager sighs and starts calling again. In ten hours, the festival will start and some 80,000 people will be standing at the gates.

Developments in sound technology and the rise of the popfestival

This short impression of the backstage life of a popfestival does not fit the magical and decadent atmosphere that people often imagine at 'the backstage-area'.
It not only shows the logistical complexities of organizing such a huge event, but it also emphasizes the centrality of technology for the people working backstage. In this thesis, I will investigate this 'hidden' world of the sound system, which I will call the technological culture of the sound system. The concept encompasses exactly what lies at the heart of this thesis: the interactions between people and the technologies they work with. The concept of "technological culture" allows one "to investigate how strategically, the lines between culture and technology are constantly being redrawn, in the technical, social or political practice" (Bijker, 1995). The concept of technological culture is based on a social-constructivist notion of technological development, as I will explain below. The key question of this thesis is: How has the changing technological culture of open-air sound systems affected the usage of sound technologies and the roles of different social actors at open-air music festivals?

The use of sound systems to address audiences – more specifically: sound-reinforcement systems (SR-systems) or public address systems (PA-systems) – dates back to the 1920s. Mainly due to a lack of amplifier power at the time, sound systems were used for speech only. In the 1950s and 1960s, domestic speakers became affordable and available to the consumer market. The industry grew in the post-war years and the field of audio engineering developed rapidly. In the 1960s and 1970s, during the cultural revolution of the music-oriented hippie era, the first large open-air concerts and popfestivals came into being. One of the first festivals in Europe was the Holland Popfestival Kralingen, held in Rotterdam in 1970 as the European answer to the well-known Woodstock festival, which was held in the USA a year before.1 Since then, popfestivals have become common and form a large part of the entertainment and music industry.
The festival described above, Pinkpop, is the oldest festival in Europe. This year, the 39th edition took place in June. Other famous big festivals in Europe include Roskilde in Denmark, Glastonbury in Britain, Lowlands in the Netherlands, Rock Werchter in Belgium, Rock am Ring in Germany, and Sziget in Hungary. An average festival will have more than thirty bands from all over the world lined up, to attract as many as 100,000 people.

The rise of the popfestival would not have been possible without the rise of new sound technologies. The sound system industry has professionalized enormously since the first festivals were held in the 1960s. The power of the equipment has risen, as larger audiences demanded louder speaker systems. Sound systems have evolved from wooden boxes to large, flying speakers, controlled by specific amplifiers and software programs.

1 However, the first open-air festival was Monterey in California, held between 16 and 18 June 1967, which attracted about 50,000 visitors (Source: Oor's Speciaal Jubileumboek 25 jaar Pinkpop, 1995).

Microphones, mixing consoles and other processing equipment have been integrated into digitally steered systems. Bands are now called productions, since their touring circus consists of trailers full of sound and lighting equipment. Local audio rental retailers and manufacturers have mobilized themselves in large, global networks, where they protect and share their technologies. All these developments give rise to a highly sophisticated technological culture of the sound system.

Over the last ten years in particular, two significant developments in sound technology have been taken up by the live music industry: the line array sound system, and digital technologies such as the digital mixing console. Line arrays are the most popular speakers in sound system technology nowadays. They compete with conventional PA-systems, which matured in the 1970s.
Conventional PA-systems consist of large speaker-boxes, often stacked at the side of the stage. Line arrays are smaller, and often look like curved bananas, flying in the roof of the stage. This speaker-system is based on new developments in acoustical science, and is said to give higher control over the sound. The second dominant development concerns the rise of digital technologies, notably the digital mixing console. The digital mixing console, a multi-channel system to control the sounds of the band on stage, can now be fully programmed to the wishes of the mixer, and his presets can easily be stored. Questions immediately arise: How do engineers and mixers work with these technologies? Why are they so popular, and with whom exactly? How have these technologies changed the work of the engineers and mixers? What affects the choice of using a particular sound system or particular mixing console?

The technological culture of sound systems: an STS-approach in sound studies

This thesis focuses on studying the technological culture of the sound system at two popfestivals in the Netherlands. Observing and participating backstage at popfestivals is a good way of studying the workings of the sound system and the networks of people surrounding and supporting it. In answering the questions posed above, I will mainly focus on the rise of the two sound technologies mentioned: line arrays and digital technologies such as mixing consoles. However, the technological culture of the sound system is embedded in a wider context. The sound systems used for these huge events are produced by manufacturers and retailers, which are often part of (transnational) networks of industry. How did the technology of the sound system develop, and what technological culture developed around it? These questions regarding technology and its relationships to science and society are central to the field of Science & Technology Studies (STS).
Scholars in this field study the relationships between and the co-production of science, technology and society. Since the 1980s, the main tenet of STS has been the social shaping of science and technology (Bowden, 1995). Three major approaches can be distinguished: the social construction of technology model (SCOT), the large technological systems approach (LTS), and actor-network theory (ANT). These three approaches all raise different questions regarding the technological culture of the sound system. I will draw upon different concepts derived from these theories, to create an adequate theoretical framework for studying the technological culture of open-air sound systems.

First of all, I will distinguish three dimensions of the technological culture: a material, a social and a symbolic dimension. The material dimension consists of all the technologies, techniques and artifacts that are being used, such as loudspeakers, mixing consoles, rigging equipment (i.e. hoists), racks of electronics, flight cases, etc. The social dimension refers to the different social groups working with the technologies, their interactions, statuses and hierarchies. The general term 'sound engineer' in practice covers several specializations, each with its own roles. The organization and coordination of the sound system belongs to this dimension as well. The symbolic dimension addresses the different meanings people attribute to the technological culture: meanings about sound, about the workings of the sound system, and about the work it entails and the technological culture itself (and even its history). The three dimensions interact with each other. Sub-questions that can be asked are: How did the technology of the sound system develop? How are specific sound systems tailored to the demands of the artist, audience, mixers, local governments and other involved social groups? How do the people around the sound system interact and relate to each other?
What is the perceived difference between analogue and digital? How has the concept of loudness changed?

To answer these questions, and to study the relationships between the three dimensions, I will combine several concepts from different theoretical approaches. The concept of 'technological culture', to begin with, is derived from the social-constructivist theories of Bijker (1995), and, as already explained, can be used to study the changing lines between 'culture' and 'technology'. Technologies are all around us, and frame our behavior, though we are rarely aware of it. By strategically labeling something 'culture' or 'technology', certain perspectives or sets of behavior can be put to the fore. The popfestival can be considered as a part or specific instance of this technological culture. From Pinch & Bijker's social construction of technology theory (SCOT), I will also use the notion that technologies can entail different meanings for different people and/or social groups. By looking at technologies 'through the eyes' of the different social groups surrounding the technology, these different meanings and interactions can be described.

My theoretical framework is also based on another social-constructivist theory of technology: large systems theory. I will use (parts of) the large systems theory (LTS) of Hughes (1987) to understand the mechanisms involved in the development of the sound system as a large technological system, comparable to networks of electricity or telecommunications. Hughes' concepts of 'reverse salient' and 'momentum' especially help to understand how sound systems have evolved into large networks of industry. The systems approach analyses the evolution of large technological systems, which contain 'messy, complex, problem-solving components', ranging from physical artifacts to organizations and regulations (Hughes, 1987).
In such a system, all the components, physical and non-physical, are interconnected; if one component is removed or changes, the other components will alter as well (Hughes, 1987). A typical example that Hughes studied extensively is the electric power system. The power of a generator may be increased in an electrical power system, but then other components, such as motors, have to be adjusted as well, requiring different voltage or amperage. Regulations, protocols or other non-physical components may have to be altered as well. As long as a motor or regulation hampers the aimed-for power increase, it constitutes a 'reverse salient'. The concept of 'reverse salient' refers to those components in the system that 'have fallen behind or are out of phase with the others', thereby limiting the system's growth (Hughes, 1987). When a large technological system has grown and its development seems to be autonomous, the system has 'acquired momentum'. A system acquires momentum when different organizations and/or people with different interests have committed themselves to the system (Hughes, 1987). I will use LTS to study the mechanisms in the development of the line array system, and how it evolved into a global industry.

Thirdly, I will use notions from actor-network theory (ANT), as used by the scholars Latour, Callon and Law. ANT defines the connectivity between various 'components' not as a system, but as a network of human and non-human actors. ANT focuses on the transformation of the actors in a network: how do actors transform the positions of other actors in a network, thereby translating the meaning of these actors? I will use ANT to study the effects of a changing actor (technology or social group) in the network of the sound system on the rest of the network. In particular, the concepts of 'delegation' and 'prescription' are useful to describe the effects of new sound technologies (such as the digital console) backstage.
Human actions and behavior can be delegated to technologies. A famous example of Latour's (1992) is the seatbelt. Modern cars have sensors that can detect whether the driver wears his or her seatbelt or not. If not, an alarm will ring or the car won't even start. Apart from the frustration that this can cause (as Latour describes), the behavior that should protect oneself in the case of an accident is delegated to the car (Latour, 1992). The human actor (driver) and the non-human actors (the sensor and seatbelt) thus cooperate to attain the goal of safe driving. I will use concepts of ANT to analyse the network of the actors involved in the organization, set-up and workings of the sound system at the festival (i.e. booking agents, festival organizers, stage-construction companies, audio rental retailers, lighting companies, etc.). By placing ANT on this 'practice level' of the sound system, I hope to overcome the criticism of combining two theories that depart from different philosophical standpoints.2

Where to position this thesis? This thesis is best situated in the recently emerging field of Sound Studies. Sound Studies differs from more traditional fields such as musicology and ethnomusicology. Musicology is the study of music in all its forms, while ethnomusicology studies cultures of music, often conducted in foreign, non-Western parts of the world. Recent studies in ethnomusicology, however, do pay attention to the role technology plays in (Western) musical cultures.3 Still, my approach differs from that of a pure ethnomusicologist, since 'music' is not my unit of analysis; 'technological culture' is. In the field of sound studies, the material production and consumption of music, sound and noise is the central theme (Pinch & Bijsterveld, 2004).
Sound studies is somewhat broader than the fields of ethnomusicology, or history and sociology of music, since it draws on STS:

What S&TS can contribute is a focus on the materiality of sound, its embeddedness not only in history, society, and culture, but also in science and technology and its machines and ways of knowing and interacting. (Pinch & Bijsterveld, 2004; p. 636)

Scholars in the field of sound studies examine various practices of sound production and consumption, from the tacit knowledge of studio engineers to musical instruments as technological artifacts.4 Thompson (2002), for example, investigates the development of architectural acoustics and how the developing electroacoustics of the 1920s led to the construction of PA-systems for theatres and motion picture cinemas. Sterne (2003) gives a comprehensive account of histories of sound technologies, and how these sound technologies and machines have changed our way of listening. Pinch & Trocco (2002) studied the development of the Moog synthesizer and how analogue technology remained popular over digital. Auslander (1999) and Théberge (1997) study the relationship between 'live' and 'recorded' sound, and how digital technologies mediate between these two meanings and categories of sound. Several studies in the field of Sound Studies focus on sound reproduction.

2 One could argue that ANT doesn't match the systems approach, as they rely on different philosophical assumptions. ANT focuses on the material-semiotic relationships between the actants in a network: Callon & Latour do not distinguish between humans and non-humans in the network; in principle, they have an equal status in the network. LTS (and SCOT, to this point) is based on a symbolic-interactionist point of view, and gives more weight to the people, their interactions and the processes in which they give meaning to the world surrounding them.
3 See for example: Greene & Porcello (2005) or Auslander (1999).
Especially sound machines such as the phonograph, the telegraph, and the mixing console are often discussed. However, most of these studies center on recording only; the development of speakers and sound reinforcement systems is somewhat neglected. Moreover, many studies focus on studio engineers and mixers, whereas the role of mixers in live settings, such as festivals, is seldom discussed. By studying the technological culture of the sound system at popfestivals, I hope to help fill this empirical gap in the dynamic and young field of sound studies.

This thesis has several other aims. Sound is often seen as the forgotten dimension of industrialization and modernization. First of all, I want to show how processes of professionalization, specialization and standardization have led to the modern music festival as we know it. Secondly, this thesis aims to give an insight into the fairly 'hidden' world of the backstage of the festival, a subculture completely dominated by technology and rarely accessible to outsiders. Thirdly, and lastly, by studying the technological culture of sound systems in an interdisciplinary way, I hope to make a connection between the developments in the world of the sound industry, and the practices and meanings of people in the micro-world of the sound system at the festival itself. The key question phrased at the beginning of the chapter can now be refined: How has the changing technological culture of open-air sound systems affected the position of conventional sound reinforcement systems versus line array systems, analogue versus digital technologies, and the roles of different kinds of mixers and engineers at open-air music festivals?
In other words: how does the changing network of technologies, musical cultures, health and environmental issues, and social actors (such as audio rental retailers, booking agents, event production companies, festival organizations, musicians, engineers, mixers, roadies, stage-hands, etc.) influence the eventual material set-up and workings of open-air sound systems at festivals?

The next chapter will explain the methodologies used in this research. Qualitative interviewing and participant observation, the methods of ethnography, are good ways of studying an unknown culture such as the backstage culture at a festival. I will discuss the way in which I conducted ethnographic research at two festivals, how I acquired access to these festivals, and how the actual observations and interviews took place. In the third chapter, I will show how the line array gained momentum. After giving a brief and general overview of developments in speaker technology, I will describe the state of the art of the contemporary sound industry at the MusikMesse in Frankfurt (Germany). Subsequently, I will describe the technical workings of sound systems and the differences between conventional PA-systems and line arrays. I will show how sound system retailers in a European network standardize their equipment, by which they increase the predictability of the equipment available at the festival. Finally, I will look at the practice of organizing a festival, the setting-up and rigging of a sound system, the choices to be made, and what constitutes the choice of a particular sound system. Chapter four describes the roles and tasks of groups of sound engineers, notably the system engineer and the monitor engineer, the front-of-house mixer (FOH-mixer) and the monitor mixer. All have specific roles, and work with different technologies to facilitate the bands on stage.

4 See for example: Bull & Back (2003), Braun (2000), Greene & Porcello (2005), Sterne (2003), Théberge (1997), Taylor (2001).
I will show how continuity and reliability are attained on stage, by using different technologies and ways of working. In contrast, I will argue that in this highly specialized and technological musical subculture, improvisation, communication skills, social finesse and tacit knowledge are highly significant for making this network function. The immediate context of the organization of the festival, the touring industry, will be described as well. The last chapter, chapter five, describes the role digital technologies play at the modern music festival. I will explain the incorporation of video screens at live events, and show how their development relates to our contemporary music culture. By drawing on sound studies literature, I will describe how visual and auditory technologies relate to each other, and which questions this subsequently raises for the concept of 'liveness' at the popfestival.

Chapter 2
Hearing the empirical: the methodologies

How to study the technological culture of open-air sound systems? The technological culture of open-air sound systems consists of large and highly sophisticated technologies, various social groups, manufacturers, retailers, and networks of organizations and people embedded in a world-wide industry. To investigate this culture, one has to go backstage – literally. In this chapter I will describe how I acquired access, which methods I used and how I handled my material.

Mapping the network of the sound system

The sound system is part of the total logistical adventure of constructing a big festival, which in turn is part of a national (or even global) entertainment industry. Many companies are involved in the transformation of the grasslands into a temporary village for 60,000 people. The sound is but a part of the total festival organization. One needs toilets, fences, food, drinks, permits, festival organizers, bands, security, etc.
The network of the sound system thus consists of various organizations and social groups, all involved in the coordination, choice and material set-up of the sound system: the festival organization, the audio and lighting companies, the stage-construction company, the power supplier, the booking agency, and the artists and their crews. Health and environmental issues, materialized in the permits of the local municipality, musical cultures (of certain music styles, and the festival culture in general), and the technologies themselves are also part of the network of the sound system. The sound system itself – consisting not only of speakers but also of microphones, cables, amplifiers, processing equipment, mixing consoles, and much more – is provided by the audio rental retailer. As we will see, this audio company is linked to certain manufacturers and is embedded in a European network of other rental retailers. It is this conglomeration of organizations and technologies that can be seen as the large technological system described in the previous chapter.

To investigate the technological culture of the sound system, the groups mentioned above are especially relevant. The best way to observe the activities of these actors is to witness an actual set-up and the workings of the sound system in practice, at the festival site itself. I deliberately chose to attend both a smaller and a bigger festival, from different market segments, since my goal is to study the technological culture of sound systems in its diversity. I visited two festivals: Neterpop and Pinkpop. Neterpop is a small local festival in the little village of Netersel, in the south of the Netherlands. As a typical local festival, it is run and organized by volunteers, and attracts some 6,000 people every year in a big circus tent. For two days, I witnessed the construction and the workings of the sound system and its crew in practice.
This relatively small festival was a good introduction to the world of the sound mixer and the sound system. The second festival is quite different. Pinkpop, as mentioned, is the oldest festival of the Netherlands and of Europe.5 This annual festival attracts up to 90,000 visitors, and has many well-known bands lined up. The organization and set-up of this festival is of course far more complex than that of the smaller Neterpop. For that reason, and because of the bigger network of organizations involved, my thesis will focus on Pinkpop. Wherever necessary, I’ll point out the distinctive differences between the two festivals.6 Large-scale events such as Pinkpop often hire one company to coordinate all the technical and logistical processes of the festival. At Pinkpop, the Dutch technical event-company The Production Factory, based in Arnhem, is the key player in all the festival’s technicalities. The Production Factory thus works closely with the festival organizer, the stage-construction company, the audio and lighting companies, the video companies, the electrical power supplier and the booking agency of the bands. This complex logistical process starts almost a year before the festival, and it entails many streams of information to and from the organizations involved. Eventually, this results in highly detailed plans and drawings that map every inch of the terrain: the location of all the food and drink stands, the mobile toilets and water suppliers, the backstage areas, the barriers and fences, the stages and speaker towers – all is taken into account. The Production Factory has many years of experience in organizing Pinkpop, but also works for smaller music festivals, stadium events and music gatherings where technical and coordinating experience is necessary.
At Pinkpop, The Production Factory works especially closely with Pinkpop’s festival organizer Jan Smeets, the audio and lighting company Ampco Flashlight from Utrecht, the Dutch booking agency Mojo Concerts, and the stage-construction company StageCo from Belgium. Mojo and the festival organizer are responsible for the bookings of the bands. The collaboration of The Production Factory with these companies is very tight; they all have years of experience together.
5 In 1990, Pinkpop was listed in the Guinness Book of Records as the oldest annually held popfestival in the world. (Source: Oor Speciaal Jubileumboek 25 jaar Pinkpop, 1995.)
6 Another remark should be made here. Both popfestivals have mainly pop and rock bands on their line-up. At dance gatherings or other festivals that focus on electronic music, sound reproduction technologies such as line arrays can have different meanings (line arrays, for instance, are sometimes set up differently at dance gatherings so as to create a more intimate atmosphere).
Getting backstage
The people working in the technological culture of the sound system all have their own roles, practices, merits and views on the culture they reside in. These range from the freelance rigger to the CEO who controls certain networks of sound reinforcement retailers. To do justice to this complex empirical reality, and to fully understand the diversity within this technological culture, a combination of methods is suitable. As the theoretical interdisciplinarity of the STS approach invites methodological interdisciplinarity, I use an ethnographic approach – qualitative interviewing and participant observation – combined with (historical) document analysis. First, one has to get access to the world of the sound system. Access to the backstage world of the festival is restricted, not only during the festival itself, but also in general.
Little is known about the backstage life of the people working there; the companies aren’t too eager to lift the curtain for the general audience to see what actually goes on backstage. I acquired access to the festivals after some interviews with the companies involved. After interviewing Harry Zinken, director of the audio rental company Purple Group, I received permission to tag along with the sound team at the festival Neterpop. At first sight, one might expect it to be more difficult to get access at Pinkpop at Landgraaf. However, after a pleasant afternoon of interviewing the friendly people of The Production Factory, they offered me the unique opportunity to join their team in the construction phase of the festival and at the festival itself. For eight days in total, I witnessed the hectic technological culture behind the festival: the organization and coordination, the construction and building, and the actual engineering and mixing of the sound system.7 Through The Production Factory, I learnt which companies were involved, and which ones would be relevant to contact. Before the festival took place, I arranged interviews with several people of the audio rental company Ampco Flashlight (an account manager, and the CMO). Notwithstanding the use of such scheduled interviews, most of the interviews and interesting talks took place during the fieldwork at the festivals themselves.
7 Pinkpop is a three-day festival held annually during Pentecost. For the past two years, a pre-festival has been organised some weeks before the actual Pinkpop: Pinkpop Classic, a mini-Pinkpop with many former, ‘old’ Pinkpop bands. I attended both Pinkpop festivals (hence the eight days).
To understand the sound system itself, I delved into several handbooks of sound reinforcement, acoustics, loudspeaker design and audio engineering, and familiarized myself with the theoretical disciplines involved.
Since acoustics and audio engineering rely heavily on physics and electronics, this background proved far from superfluous. However, books can only tell half of the story. To get an impression of the massive industry of sound systems, I visited the Frankfurter MusikMesse 2008 in Germany, to investigate the state of the art in sound system technology. At the MusikMesse, the biggest fair of the music industry, companies from all over the world present their latest products, ranging from all kinds of musical instruments to stage and event technology. Besides observing what happened there, I conducted several interviews with important manufacturers of sound systems. In addition, I collected brochures, leaflets and product releases to analyse how these companies framed and profiled their products. Furthermore, to complement the study of the contemporary sound system, and to put the descriptions of the micro-world of the popfestival into a larger context, I analyzed the historical development of sound reinforcement systems, and of the loudspeaker in general. Developments in audio engineering are important for understanding the development of the sound system. Finally, in conceptualizing my findings, I used theoretical sources on the histories of sound reproduction, the sound industry and popfestivals as well.
Making jottings on stage: ethnography at the festival-site
I will now discuss the methods used and their role in this research more thoroughly. I conducted in-depth, semi-structured qualitative interviews with several key persons, ranging from production managers of sound reinforcement companies to sound mixers and stage-builders. I tried to keep the number of interviews per social group as balanced as possible. During the interviews, people frequently tipped me off about others in the business worth interviewing, which was very helpful. It soon became quite clear that the world behind the festival in the Netherlands is quite small, especially at this high level: many people know each other.
To avoid interviewing solely people who were part of the same social group, I deliberately varied the interviewees. For every social group, a specific topic guide was constructed on top of a general one. All interviews have been recorded, transcribed and coded (see the appendix for a list of interviewees).8 In addition to these interviews, I had numerous talks with people I met during my fieldwork. These informal talks are included in the elaborated field notes. This leads me to the second important method of this research: participant observation. Ethnographic methods such as observing and participating are the most suitable tools to study a highly specialized culture such as the ‘hidden’ technological culture of the sound system. As a layperson regarding the specific sound technologies, this observational (and participatory) method was a good way of obtaining first-hand information. Moreover, ethnography allows one to observe people working in their own context, which gives additional understanding of the behavior, status, and hierarchies among the people involved. Although ethnographic research can feel quite natural, a certain tension while observing and participating always exists: that of the insider versus the outsider. To what extent should you adapt to the culture you’re studying? Or do you want to explicitly position yourself as an outsider? Both can be the case, as the role of the ethnographer shifts in different situations, depending on the people and the relationship one has established with them. During my fieldwork, for example, I often switched between the observer stance and the observer-as-participant role (Seale, 2004). In general, mostly because of my own experiences and shared interests with the people I met, I felt at ease in the spontaneous and friendly musical culture of the people working at the festival-sites.
However, my lack of specific technological knowledge maintained a certain distance between me and the engineers at the very same time. Occasionally, this was made explicit when I had to explain my presence and why I was jotting things down. This didn’t obstruct the research, but rather gave rise to interesting conversations, as it invited people to interact. The people I met were very friendly, and many showed genuine interest in my research (although it was often hard to explain what exactly I was doing). All the jottings have been transcribed into extensive field notes, which I later coded and analyzed. A benefit of the ethnographic approach is that the researcher becomes quite conscious and reflexive about his own position, his way of observing, and his pre-existing knowledge of (and prejudices about) the culture studied. But like any other method, ethnography has its limits and constraints. First of all, the researcher has to establish some kind of relationship with the people he or she studies. Since I felt rather at ease, and the people were quite spontaneous, this proved not to be a problem. The biggest constraint concerns the external and internal validity of the results: to what extent can the thick descriptions of the micro-culture be generalized to other cultures or larger populations? A good ethnography does not only result in a compelling story, but is also applicable to other festivals. I secured the external validity of my research by interviewing the top league of festival organizers in the Netherlands.
8 None of the interviewees demanded to be anonymous.
9 My personal background certainly was helpful; as an experienced festival-visitor, music-lover and (amateur) musician, there were enough shared interests to communicate easily with the people in this technological culture.
The Production Factory, for instance, coordinates not only Pinkpop but also many other festivals and events, and Ampco, the biggest audio rental retailer of the Netherlands, provides sound systems at many festivals, television shows, theaters, clubs, venues, and huge events such as stadium concerts. The description of their work is thus representative of other festivals as well. To improve the external validity further, and to relate the descriptions to a wider context, I used other methods – such as (historical) document analysis – to complement the ethnographic accounts. The internal validity of studying a technological culture depends on whether all the relevant social groups and organizations are included in the research. Where does the network of the sound system stop? The fieldwork showed me which social groups and organizations were relevant, as I stumbled upon them frequently. However, because of time constraints, I had to limit the number of interviews (and did not, for instance, interview a local civil servant about noise pollution); yet, in general, most of the relevant social groups and organizations are included. More practically, at the festival-site itself, I observed at several locations and on several occasions. I witnessed the material set-up of the speakers and lights in the days before the festival, and since I was seated in the production office of the organization, I got a glimpse of the problems that occurred during these hectic days before the start of the festival. During the festival, because of my unlimited access, I could easily watch the arrival and the change-over of bands on stage, and their mixers in action. Witnessing the front-of-house mixing (the mix produced for the audience, in a little tent in front of the stage) was somewhat more difficult, as access to that area was sometimes restricted by security, audiences or the (guest) mixers themselves.
I spent a lot of time ‘hunting down’ these mixers, as they are very busy and stay only for a limited time at the festival-site. Since there were a lot of bands playing at these festivals, I could interview various mixers, as well as people from different social groups (roadies, stage-construction workers, etcetera). By verifying and checking my initial observations with different interviewees, I improved the reliability of my results.
Chapter 3
The rise of the banana: conventional sound systems and line arrays

Sound system gonna bring me back up
One thing that I can depend on
Try to describe to the limit of my ability:
It’s there for a second
Then it’s given up what it used to be
Contained in my music somehow more than just sound
This inspiration coming and twisting things around
Sound system gonna bring me back up
One thing that I can depend on

Operation Ivy, “Sound System”, from the album “Energy” (1990)

Two days before Pinkpop starts, I find myself walking in the sunshine on the festival grass, still fresh and unspoiled. Looking out over the huge festival terrain, a former horse race track, I see a lot of activity going on. Small forklift trucks drive around the terrain, transporting fences and mobile toilets. Bars and stands are being finished at the sides of the grasslands. Bare-chested construction workers, wearing nothing but their specialized safety equipment, walk towards a small tent, some 40m in front of the stage. At both sides of the tent, small towers made of construction tubes stand in the sunshine. At the foot of one tower, I see five people pulling up speaker elements with a mechanical hoist. I watch them create some kind of string of speakers, and I count twelve speaker elements on the little cart at the foot of the tower. While walking towards the stage, I almost trip over large bundles of cables, which come crawling from underneath the stage to the little tent in front of it.
I climb over a barrier, and walk through a small corridor underneath the stage. In this web of construction pipes, large speaker boxes are standing in rows of 3 by 9. It feels as if I’m being swallowed by the huge stage, not least because of the impressive size of the large speakers: taller than my own height of 1.65m. As I walk alongside racks full of amplifiers and processors with flickering lights, carefully stacked at the sides of the small corridor, I hear hammering sounds coming from above me. At the back of the stage, I jump on the sloping ramp which connects the loading dock with the back of the stage. I hear guys shouting to each other with an American accent, while rolling flight cases up the ramp to the covered back of the stage. As I climb onto the stage from the back, I see some forty flight cases, rolled together in groups. Little hoists with metal chains are hanging everywhere. Two people are busy connecting the same speaker elements I saw before to the chains of the hoist. One by one, the speaker elements are being hauled up. Around the men, several light technicians are busy attaching electronic devices to large trusses, lowered to 1m above the stage floor. Next to the large PA speakers hangs a smaller string of speakers, aimed at the side of the festival terrain. Another little string of speakers hangs facing the spot where the artists will be standing tomorrow. Smaller speakers are lying on the ground. The people around me work in a hasty way: time is running out. As I dodge a speeding mixing console on wheels, I decide to walk to the other stage. To address thousands and thousands of people with sound, one obviously needs many powerful speakers. As this little impression of the set-up of a festival shows, a sound system consists of many different speakers, all with seemingly different functions.
Moreover, it appears that there are two types of sound systems that can be used at open-air festivals: a conventional stack of large speaker boxes, and a relatively new system called the line array. In simple terms, the conventional system consists of stacks of large speaker boxes, often piled up at the sides of the stage. Line arrays are somewhat smaller, and consist of speaker elements hanging one below the other like a string. The line array is one of the most important developments in the sound system industry of the past ten years. The two sound systems differ in their material set-up, their usage and even in the experienced sound itself, as we will see. What determines the choice of a particular system for a particular place? How has the line array become so popular? In order to understand the two different systems, we first turn to the industry of sound reinforcement, before we delve into the workings of the two systems in practice.
The state of the art in speaker technology at the MusikMesse
Sound systems for festivals are usually provided by (local) audio rental companies. As there are many small festivals in the Netherlands, so there are many small audio rental companies. Only a few bigger audio companies dominate the Dutch market, especially when it comes to providing sound for big festivals such as Pinkpop, Lowlands or the North Sea Jazz Festival. At Pinkpop, Ampco Flashlight from Utrecht is solely responsible for all the sound equipment of the three stages. Other examples of big players at this level are Purple Group from Schijndel, and StagePro Rental from Etten-Leur. While Ampco dominates the top market, Purple Group aims at a lower segment, and focuses mainly on national bands and local festivals (such as Neterpop).
These kinds of retailers and rental companies often have various types and brands of sound system equipment in stock, and some are affiliated with particular manufacturers or are part of certain networks of manufacturers or retailers, as we will see later on. Some forty years ago, audio rental companies were scarce – indeed, professional sound systems were scarce. Bands would perform with only a few speakers for reinforcement, often self-made boom-boxes with wooden enclosures. The first open-air festivals started in the 1960s, and somewhat later, bands bought their own sound sets. The Dutch band Golden Earring, for instance, was one of the first bands in the Netherlands to travel with its own PA system, after experiencing its benefits on their US tour in 1969. During that time, the first rental companies emerged, as the CMO of the Dutch company Ampco recalls: If I look back at thirty years ago… we were all managers of bands. Back then, every band had its own equipment; rental companies didn’t exist yet. At a certain moment, the technology expanded so rapidly that it wasn’t affordable for a band anymore to have its own PA system. And it literally went like this: when a band fell apart, the manager bought their equipment and started to hire it out to other bands. That is actually how rental companies came into being. (Fred Heuves, CMO Ampco, p. 2) As speaker technology improved, so did the sound systems. Thirty years later, the sound industry had developed from self-built boom-boxes and improvised stages into a highly professionalized global industry. Still, the development of the sound industry is closely related to the bands and their managers, and the touring industry still plays an important role in it. Every year, manufacturers, retailers, audio rental companies and the touring industry meet each other at the Frankfurter MusikMesse.
At this annual fair in Germany, the latest developments in musical instruments and stage and event technologies are presented. For five days in a row, the fairgrounds of Frankfurt are turned into a cacophonous Valhalla for the all-round music lover. Strolling around the MusikMesse, one can find all kinds of instruments, ranging from digital pianos and electric guitars to drum kits and violins. At the adjacent fair ProLight & Sound, the focus is on ‘stage and event technologies’: sound systems, speakers, mixers, lighting, trusses, LED and video screens, stage decoration and other equipment for the professional sound and event industry. In big halls, hundreds of manufacturers and retailers have their latest products on display. Presentations are given on important developments, and salespersons talk to retailers to discuss future trends and potential orders. The Messe of 2008 attracted more than 112,000 visitors.10 For people in the industry, it is the ideal place for networking and doing business. Sound system manufacturers sell their systems mainly to clubs, venues, audio rental companies, and touring productions. Bands and their agencies in the touring industry can be important clients of manufacturers. A touring band often takes its own sound system – varying from 2 to 5 trailers full of equipment – on tour, depending on how successful the band is. Bands in the top league of the entertainment industry – such as Metallica, Madonna, the Red Hot Chili Peppers and Celine Dion, to name a few – all have their own sound systems. At this level, the global touring industry is very capital-intensive, with budgets in the millions. After a tour, the used sound systems are often sold to a sound broker, who in turn sells them to cheaper markets (mostly in Eastern Europe or other non-Western countries). For the touring industry, this is an important way of making profits, especially in the US and the UK.
Manufacturers in turn, such as Meyer at the MusikMesse, put the bands that use their equipment to the fore in their marketing campaigns. However, most sound systems are not sold to touring bands, but to venues, clubs, and audio rental companies such as Ampco Flashlight or Purple Group. Looking at the stands, studying the product catalogues and talking to many salespersons and engineers at the Messe reveals that the contemporary sound system industry is dominated by the development of one particular type of sound system: the line array. Every company has these characteristic speakers hanging in its stand, while conventional speakers are often nowhere to be seen. The big players on the global market, such as Martin Audio from England, L’Acoustics from France, and JBL and Meyer from the United States, all present their latest line arrays in their new product catalogs. The dominance of these small speakers, resembling half a banana, makes one wonder. Where are the big, conventional speakers? How did the line array become so popular? What is the difference between the two?
10 Source: www.musikmesse.de.
In order to understand the workings of the line array, we first have to understand more about the workings of a speaker, and about developments in speaker technology. The speakers are the end of the sound system chain: the output where the electrical energy is transformed into acoustical energy, or sound waves. Although there are different kinds of speakers, most speakers for sound reinforcement use an electromagnetic driver, which resembles – in its workings – a general electric motor (Davis & Jones, 1989). Inside the enclosure of the speaker, a coil of wire is surrounded by a permanent magnet. The coil is attached to a diaphragm, which in turn is attached to a cone or horn. When an electric signal enters the coil, a magnetic field is created, which causes the coil to move back and forth.
The moving coil and diaphragm push air through a gap, directing the sound waves into the cone or horn; as the air is forced through the gap, it is dispersed through the cone or horn-mouth, creating a sound wave. This transducer device is called a driver. This is of course a rather simplified explanation; there are many facets and characteristics to it, as well as different elements and materials used under different circumstances.11 For this research, however, it is not important to know every detailed technicality of the speaker; it suffices to know that a speaker transforms an electrical signal, through a moving magnetic coil, into a movement of air, which we perceive as a sound wave. Still, a few characteristics of the speaker are important for the understanding of the modern sound system: the frequency response (and bandwidth), the directionality, and the sound pressure level. Sound is nothing but vibrating air. Air vibrates and moves in waves. The number of cycles these traveling waves make per second is the sound’s frequency. A short wavelength makes more cycles and thus has a higher frequency, which we perceive as a higher tone. Low frequencies in turn have longer wavelengths. The human ear is capable of hearing frequencies between approximately 20 Hz and 20,000 Hz. Speakers also have a particular range, the frequency response: the range of signal frequencies that a sound system can handle, from input to output (Davis & Jones, 1989). Speakers also have filters that cut off certain frequencies (usually the ones outside the range of the frequency response, as they can blow out the speaker). The bandwidth is the difference between the upper and lower cut-off frequencies, and is often used as a measure of the quality of the speaker. The frequency of a sound signal also influences its directionality, the way sound waves travel from the speaker to the audience. High frequencies, with shorter wavelengths, travel less far than low frequencies. In open-air settings, high frequencies are more vulnerable; wind, for instance, can alter their direction. Long wavelengths are less influenced by external factors such as wind or obstacles; they can ‘fold’ themselves around them (the reason why bass tones are often heard at great distances). Another characteristic of speakers (and sound in general) is the sound pressure level. The sound pressure level (SPL) is the level of sound per unit area at a particular location relative to the sound source (the speaker) (Ballou, 2005; Davis & Jones, 1989). The SPL is often measured in decibels (dB). The SPL is not to be confused with the power of speakers, expressed in watts. A decibel expresses a ratio between audio levels; a decibel is thus relative to a reference point. The decibel scale is logarithmic, which in practice means that it is easier to express large numbers.12 The difference between a power of 2 watts and a power of 1 watt is 3 dB; thus whenever power is doubled, you have an increase of 3 dB SPL. I will return to loudness and the measurement of decibels later in this thesis. Regarding SPL and the directionality of sound, one final physical phenomenon is important to mention here: the inverse square law. The inverse square law states that for each doubling of the distance from the source (the speaker), the sound pressure level will drop by 6 dB (Davis & Jones, 1989). This law assumes so-called omni-directionality, by which sound is radiated spherically in all directions (without reflective boundaries). As the radius is doubled, the power has to spread over four times the surface area, so the sound intensity falls off with the square of the distance from the source, and the SPL loses 6 dB per doubling (Ballou, 2005; Davis & Jones, 1989). The construction of speakers for sound systems has long been focused on the improvement of these (and other) characteristics.
11 An important debate, for instance, is about the cone- or horn-loading of the drivers. Usually, for sound reinforcement, horns are used.
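The decibel arithmetic above can be illustrated with a minimal numeric sketch (the function names are my own, introduced only for illustration): doubling electrical power adds about 3 dB, while, under the inverse square law, doubling the distance from an omni-directional source costs about 6 dB.

```python
import math

def power_gain_db(p_out, p_in):
    """Gain in dB for a ratio of two power levels: 10 * log10(ratio)."""
    return 10 * math.log10(p_out / p_in)

def spherical_spl_drop_db(d_near, d_far):
    """SPL drop in dB between two distances from an omni-directional
    source, following the inverse square law: 20 * log10(d_far / d_near)."""
    return 20 * math.log10(d_far / d_near)

# Doubling power (2 W vs 1 W) gives roughly +3 dB:
print(round(power_gain_db(2, 1), 1))           # 3.0
# Doubling the distance (10 m vs 5 m) costs roughly 6 dB:
print(round(spherical_spl_drop_db(5, 10), 1))  # 6.0
```

The factor 10 applies to power ratios and the factor 20 to distance (pressure) ratios, which is why both doublings come out as the familiar 3 dB and 6 dB figures.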
The first speakers to address the public were mainly used for speech. On Christmas Eve 1915, however, E.S. Pridham, the founder of the American Magnavox company, is said to have played Christmas carols for an audience of 50,000, using rocking armature transducers connected to phonograph horns (Eargle, 2004). The company Western Electric experimented with PA systems in those years, and theatres and cinemas in particular showed interest in the new sound systems. As Thompson (2002) claims, the sound systems were first used to instruct large crowds during the production of silent films, but cinema directors were soon eager to “let the movies themselves talk” (Thompson, 2002). Cinema eventually paved the way for the development of larger sound reinforcement systems, as people were very enthusiastic about the combination of motion picture and sound (Eargle, 2004; Thompson, 2002). However, sound reinforcement of music outdoors remained difficult, notably because of the lack of power and bandwidth. Improving power and frequency response would be the main issues for the sound reinforcement industry up until the sixties. The commercial growth of domestic loudspeakers took off in the fifties, as their size was reduced, which caused an ‘explosion of interest in domestic sound reproduction’ (Cooke, 1978). More funds for research became available, and this led to an increase in amplifier power (Eargle, 2004). Many sound systems used by bands covered only a bandwidth of mid frequencies. In the mid 1960s, these systems could no longer compete with ‘the screams of pop audiences’, which required an increase in level (Webb & Baird, 2003; Eargle, 2004).
12 The most important reason for this is that our ears’ sensitivity is also logarithmic (Davis & Jones, 1989). An example: instead of saying the range is 32,000 to 1, we say it is 90 dB (as 90 equals 20 log x/y, where x and y are the different signal levels; source: Rane Professional Audio Reference: www.rane.com).
In the early 1970s, sound systems appeared with the low, mid and high frequencies in separate speaker boxes. Speaker boxes could now be stacked and coupled; bass bins, for example, could be clustered together, by which power was increased (Webb & Baird, 2003). By clever stacking and clustering, engineers could adjust the power of the sound, adding more low cabinets or high cabinets to increase the coupling in those frequencies. In the 1980s and 1990s, 3-way speakers were introduced that housed the low, mid, and high frequencies in one enclosure. These more packaged systems could be flown above the stage, instead of stacked at the side (Webb & Baird, 2003). These systems were called ‘point-and-shoot’ systems, since each speaker directed sound to its own little field in the audience. However, interference was a problem; sound waves caused big variations in frequency response over the audience area. Interference means that sound waves cross each other, by which some waves reinforce each other, while others cancel each other out (Webb & Baird, 2003). The interference of the point-and-shoot systems changes with both frequency and listener position, which the audience perceives as noise (Heil, 2001). Over the years, recurring problems characterized the development of speakers for sound reinforcement. Issues such as power, bandwidth and efficiency, the reduction of distortion (unwanted noise that alters the input signal) and feedback, finding more efficient materials for cones, and discussions about measurement techniques often recur in the pages of the Journal of the Audio Engineering Society. By the end of the 1970s, however, the sound system had matured, as the biggest problems of power and bandwidth had been overcome (Eargle, 2004). In the 1980s, the focus and the subjects changed somewhat, mainly due to new electronic and digital techniques. New theories were developed as well.
One of these theories is the so-called Wavefront Sculpture Theory (WST), which forms the theoretical basis of the line array system. The key word is no longer power or extended bandwidth, but directionality. In 1984, the French physicists Marcel Urban and Christian Heil (the founder of the French company L-Acoustics) introduced their Wavefront Sculpture Theory. Heil and Urban used theories from optics – more specifically, the Fresnel-zone theory – to study complex interferences of sound waves. Their goal was to couple different sound sources vertically to achieve a more coherent coverage of sound for the audience. When sources (or speakers) are coupled in a certain way, the interference will reinforce the sound waves, thus creating a higher sound pressure level. The WST gave criteria under which such vertical coupling should be made, by placing a kind of lens (a ‘waveguide’) in the enclosure that splits the sound waves. Heil and Urban’s application of the theory, the line array, is able to produce one coherent wavefront (Heil, 2000). The behavior of the sound waves of a line array differs from that of the sound waves coming from a conventional PA. Sound waves from conventional speakers travel in a spherical way, comparable to the waves created when throwing a stone into water. Because of this omni-directionality, the sound is radiated all around the source. The inverse square law states that spherical waves lose 6 dB each time the distance from the speaker is doubled: if you are 10 m away from a speaker instead of 5 m, you will hear 6 dB less. The line array of Heil and Urban creates two types of sound fields: the so-called near-field (Fresnel region) and far-field (Fraunhofer region). In the near-field, the sound travels cylindrically: because of the vertical coupling, the sound waves only travel in two directions (instead of radiating all around).
This results in a loss of only 3 dB when the distance is doubled, instead of the 6 dB for conventional PA, which means that line arrays can maintain a higher sound pressure level than conventional PA systems. The far-field starts where the cylindrical sound radiation loses its pressure and becomes spherical; this transition depends on the length of the line array and the frequency of the sound (Heil, 2000). In addition to this 3 dB efficiency advantage, the line array in general has a longer throw of sound and a more even coverage over the sound field, in comparison with a conventional system.

Fig. 1. Radiation of conventional horns (left) and a line array (right).

The name of Christian Heil frequently pops up in conversations and articles about line arrays. Though the French physicist of L-Acoustics is widely seen as the one responsible for the popularity of the line array, the idea of the line array as a vertical speaker column was not new. The vertical speaker column had been used since the 1920s for reverberant places such as churches, railway stations, and cinemas. The benefit of the directionality of these ‘sound columns’ was widely known. For instance, the acoustics of vertical directionality were already described by Olson in his famous work ‘Elements of Acoustical Engineering’ in 1946, and the audio engineer Hilliard described the benefits of vertical speaker columns for motion picture sound in 1970 (Eargle, Scheiman & Ureda, 2003). Bands in the 1950s and early 1960s also used vertical column speakers, as they were more suitable for smaller stages such as those in pubs (Mellor, 2006). However, the sound columns were only used for vocals (though the band The Grateful Dead did experiment with vertical column speakers for all of the instruments, resulting in their famous and intriguing ‘Wall of Sound’). 13 The reason that vertical speaker columns were only suited for vocal sound is that their bandwidth was very limited; music needs a broader range of frequencies than the human voice.
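The two decay laws compared above – 6 dB per doubling of distance for spherical waves versus 3 dB in the cylindrical near-field of a line array – can be checked numerically (a sketch of the standard formulas; the function names are my own):

```python
import math

def spherical_loss_db(d1_m, d2_m):
    """SPL drop of a point (spherical) source between distances d1 and d2,
    following the inverse square law."""
    return 20 * math.log10(d2_m / d1_m)

def cylindrical_loss_db(d1_m, d2_m):
    """SPL drop of a line source in its cylindrical near-field."""
    return 10 * math.log10(d2_m / d1_m)

# Doubling the distance from 5 m to 10 m:
print(round(spherical_loss_db(5, 10), 1))    # 6.0 dB: conventional PA
print(round(cylindrical_loss_db(5, 10), 1))  # 3.0 dB: line array near-field
```

Over a long festival field the doublings add up quickly, which is why the halved loss per doubling translates into the line array’s much longer throw.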
The problem of the sound columns lay in the vertical coupling of sound sources. To coherently couple two sound sources vertically, the distance between the two sources matters, in order to prevent complex interferences of the sound waves. Physics dictates that the distance between two line sources cannot be bigger than half a wavelength if a coherent coupling is to be created. This is no problem for the lower frequencies, but the higher the frequency, the smaller the distance between two speakers needs to be. For instance, to reach 17 kHz in a vertical column, the two speakers should be within 1 cm of each other – which in practice is quite impossible (Willekens, 2007). Heil and Urban found a solution for this problem. In simple terms, they used theories derived from optics to create a kind of lens, the waveguide, and placed it inside the speaker enclosure. The sound waves of one driver (high, mid or low) have to pass through the lens, which splits the sound wave into smaller sources. In this way, at least 80% of the total surface of the line array could be coupled, which, combined with other conditions, proved to be enough to reach a coherent wavefront, as Heil calculated (Heil, 2001). This is of course a somewhat simplified explanation, though it suffices here to know that Heil and Urban invented a solution for the problem of high frequencies in vertical speaker columns. 13 The American Grateful Dead were eager to test and experiment with new equipment, and their engineer Owsley Stanley constructed a unique sound system in the 1970s that became widely known as the Wall of Sound. Each instrument (and sometimes each part of an instrument, as in the case of the bass guitar) had its own big column of speakers; the sound system in fact consisted of 11 independent systems. The goal was to create a ‘natural’ sounding stereo effect, suitable both for the audience and as a monitor system for the band (Source: http://dozin.com/wallofsound).
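The half-wavelength criterion makes the old high-frequency limit of the speaker column concrete. A short calculation (my own sketch, assuming a speed of sound of 343 m/s; the function name is hypothetical):

```python
SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees Celsius

def max_coherent_spacing_cm(frequency_hz):
    """Half-wavelength upper bound on the spacing of two vertically coupled
    sources for coherent summation at the given frequency, in centimetres."""
    wavelength_m = SPEED_OF_SOUND / frequency_hz
    return wavelength_m / 2 * 100

print(round(max_coherent_spacing_cm(100), 1))     # ~171.5 cm at 100 Hz: trivial
print(round(max_coherent_spacing_cm(17_000), 1))  # ~1.0 cm at 17 kHz: impractical
```

At bass frequencies the bound is generous, but at 17 kHz it shrinks to about a centimetre – far smaller than any physical driver – which is exactly the obstacle the waveguide was designed to sidestep.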
The line array is an innovation of the early vertical speaker column, and the WST proved to be a solution for the old high-frequency problem. This high-frequency problem can be called a reverse salient in the vertical sound system. A reverse salient, as Hughes describes it, is the part or component of a system that falls behind and thereby hampers the development of the system (Hughes, 1987). The problem of vertically coupling high frequencies prevented engineers over the years from using these sound systems for music purposes. If we look at Hughes’s definition of inventions, we could say that the WST is a conservative invention, based on already known theories and systems. However, the application of the WST, the line array, can be perceived as a radical invention. Radical inventions, according to Hughes, inaugurate a new technological system, and:

…are often improvements over earlier, similar inventions that failed to develop into innovations (Hughes, 1987, p. 58)

The first line array, the Vdosc of L-Acoustics, was introduced on the market in 1994. It took some years of training and experimenting before the system became accepted in the audio industry (Heil, 2002). Some 15 years after the introduction of the Vdosc, almost all manufacturers at the Messe sell line arrays. Manufacturers differ mostly in their ‘waveguides’ (the lenses) and the enclosures of the speaker elements. Discussions arise over whether some systems are ‘real’ line arrays or mere look-alikes. Conceptions of ‘science’ play an important role in this process of competition. The science of acoustics culminates in the coherent coverage and predictability of the sound of the line arrays, as the many brochures and product catalogues of the various sound system manufacturers try to substantiate. The booklets stress the controllability, precision and ‘scientific rigor’ of the speakers.
Line arrays are said to be ‘highly intelligible’ (RCF), to have ‘unrivalled clarity and precision’ (L-Acoustics), ‘a high efficiency’ (Renkus-Heinz), ‘smooth predictable coverage’ (Martin Audio) and ‘accurate directivity control’ (Duran Audio). Some manufacturers even claim that the science of the sound waves is the only thing left:

The science behind the sound industry is extremely difficult. Sound is nothing more than mathematics and physics; there are no preferences, only technical solutions. There is almost no difference anymore between the companies here; the differences are only technical, and very small. (Stephane Roche, APG Sound Systems)

Or, as engineer David Brooks of L-Acoustics puts it at the MusikMesse:

The WST is a step to a better audience experience. It’s just plain physics, but it is the standard now. We really caused a paradigm shift in the industry (David Brooks, L-Acoustics)

Thus far, I have described the development of speaker technology for sound systems mainly from the perspective of the engineers, the manufacturers and their marketers. In general, the dominance of line arrays is seen by many manufacturers and their representatives at the MusikMesse either as the next ‘logical step’, following from developing insights in the disciplines of acoustics and physics, or as the ‘natural reaction’ to the market: ‘sound engineers demand it, so we built it’. Although this provides a general interpretation of a quite sophisticated industry, necessary to understand the complex workings of modern speakers, it tends to present the technological development of the line array as a rather autonomous process. Despite the imaginative power of Christian Heil as the heroic inventor of the line array, such a story implies a deterministic view of technology, as if the line array itself dictates its own development.
Moreover, it doesn’t help to answer the question why line arrays are so easily picked up and demanded by sound engineers, and what role the other actors in the network of the sound system play. Which choices do they make in practice? To what problems is the line array seen as a solution? As Hughes explains, a technological development seems ‘autonomous’ when it has gained a high level of momentum (Hughes, 1987). As will become clear, it’s not only the application of new insights from acoustics to sound systems that gave line arrays this momentum. One sound engineer, for instance, draws a parallel between the emerging music styles of the 1990s, such as hiphop, grunge and dance, and the success of the array:

In my days with the Urban Dance Squad [hiphop-crossover band from the early 1990s], especially in the beginning, the whole idea of sound became different. The norm in the world was changing, regarding the timbre and the quality. At a certain moment you get the more modern music styles such as hiphop, where they wanted far more bass than the people had in the rock ‘n roll days. The Beatles played with six boxes at the sides in a stadium. People will have heard some screaming, but the lows certainly weren’t there. (Hugo Scholten, system engineer Ampco, p. 6)

Instead of explaining the popularity of the line array system by referring solely to the technology itself, we quickly return to the festival. First, I will give an account of how certain audio rental retailers have joined forces in a European network, in order to increase the compatibility and effectiveness of their expensive sound system equipment. Secondly, I will turn to the set-up of the festival itself, to see how people work with the sound system in action.

Synco: the organization of predictability

The global festival and touring industry is big business. Getting well-known and popular bands such as Metallica and Rage Against the Machine to Pinkpop takes some heavy negotiations.
For the organization of their line-up, festivals depend on booking agencies and coordinating bodies. Most festivals, like Pinkpop, delegate their programming to booking agencies or concert promoters that license touring bands. In the Netherlands, Mojo Concerts is the biggest booking agency, providing bands for most big festivals and venues. Mojo is part of the global agency Live Nation. The American concert promoter Live Nation, originally specialized in the television market, owns many national promoters, booking agencies (such as Mojo), and music festivals all over the world. The majority of established pop bands are licensed under Live Nation (such as U2, Metallica, The Rolling Stones, Madonna), which means that Live Nation controls and manages their touring business, but often also their recordings, merchandise, broadcasting, media and digital rights, and ticketing. Live Nation also invests in real estate, and owns and constructs many venues, stages and concert halls all over the world, by which it can promote its own bands, at its own festivals, in its own venues. Live Nation is by far the biggest player in the global touring and concert-promoting industry, and its business is still expanding. For Pinkpop, the negotiations with the bands and their agencies are done by Mojo Concerts. As there are many festivals across Europe in the summer, American bands have to choose which ones to attend. In the period from June to August, there are two European blocks of festivals: Pinkpop and Rock am Ring at the beginning of June, and Werchter, Glastonbury, and Roskilde in July. Exclusivity of bands is important for a festival to attract as many fans as possible. Rob Trommelen, director of Mojo Concerts, reveals how touring agencies cleverly exploit this in negotiations:

It took us five months to get that band. They continuously put you under heavy pressure. Rage is busy with their reunion, so it’s actually all about getting as much money as possible.
[…] Their agent just played the two blocks [of European festivals] off against each other. We negotiated fiercely. In the end, we had to make a choice; they just wanted more money. But halfway through June, they will return to the States. (Rob Trommelen, Mojo Concerts. In: BN de Stem, May 18, 2008)

In a later phase of these negotiations, when the band is already booked, technical wishes and demands enter the negotiation process. Before the festival takes place, a lot of communication, coordination, and planning has taken place between the tour managements of the bands, the booking agency, and the local audio retailer about the equipment the bands will bring and the equipment available at the festival site. Compromises have to be made with the local audio company, as it will provide a standardized set of equipment for all bands. Problems can occur in this process, as bands can make strong demands regarding available equipment (or backstage treatment). Occasionally, a touring band demands a particular sound system in the process of negotiations. Especially the bands in the top league, such as Rage Against the Machine or Metallica, have the status and power to demand such a thing. Although the audio company tries to facilitate these wishes, they are often not met:

There are bands that specify this [a specific set of line arrays], and still won’t get it from us. Metallica, for instance, specifies very clearly that they want Meyer line arrays. And they have the status to ask for such a thing; if they demand this at Mojo, we’ll go into a discussion. But for me it is like this: this is the system we offer, this is what you get. If there’s a problem, then this is the starting point. Last year, for instance, we had the Red Hot Chili Peppers, and they demanded Vdosc. For a while, it looked as if we would hang two different PA’s. Eventually, we didn’t do that, for obvious reasons: weight, costs. In the end, they were happy.
In principle, if bands come to a festival in the Netherlands where Ampco provides the sound, then they know that things will work out. In general, they know our quality norms, which are quite high. And it is especially important that these norms are known. (Hayo den Boeft, production manager Ampco, p. 3)

In general, though, bands don’t push their technical demands to the fore at a festival:

Most of the time, bands acknowledge the fact that they are playing at a festival, and that it is different from a single show of their own. They have to adjust to general tendencies. If you make sure, as a PA company, that your gear has a certain level of quality, then you practically never have any problems. (Natasja Geerdink, sound engineer / plugger Ampco, p. 3)

What’s important to notice here is that the quality of the sound systems used, and of the local engineers, has to be known to the traveling crews and bands. In the technical riders (technical wishlists) of most bands, almost exclusively known and established brands of sound systems are requested. As we will see later in more detail, all the actors involved are quite dependent on each other; traveling band crews want to know what to expect of the quality of the equipment and the engineers at the site. As the quote of the sound engineer above shows, a known level of quality of the equipment and the engineers is necessary to reassure (and sometimes convince) bands of the equipment used by the local audio retailer – especially when the brand of the equipment is not known to the bands. From this perspective, it can be argued that the distinctive sound of the line array, compared to the diversification in sound of conventional equipment, also influenced its popularity: the predictability of the sound of line arrays is higher than that of conventional PA systems (the distinctive sound of the digitally steered line array is described in more detail in the next section).
Predictability is another word for reduced uncertainty, which, according to Hughes, is the key goal of every large technological system (Hughes, 1987). The quotes above show that the audio retailer Ampco has built a reputation for quality that – in most cases – can reassure bands that they will have a proper and good sound system to work with. In order to show how this predictability in quality is attained by the sound industry in this phase of festival organization, we turn to a network of organized audio rental retailers called Synco. Synco was founded in 1997 by Ampco, the audio rental retailer responsible for the sound system at Pinkpop (Purple Group, which provided the sound system at Neterpop, has recently been added to the network). It is a European network of audio rental companies, with partners in Belgium, France, Portugal, Spain, Romania, and the UK. The main goal of the network is to share sound systems in order to increase the load factor and the turnover rate of the equipment used. The load factor is an indication of the return on investment, as it is the ratio of average output to the maximum output during a certain period (Hughes, 1987). As Fred Heuves, founder of the network and CMO of Ampco, explains:

We’re not manufacturers; we’re not here to sell boxes. We only work with companies that strive for the same quality norms, and who are willing to share technologies with each other. Which gives a great benefit: if you want to do something but you don’t have the materials at that moment, instead of having to make an investment of 200,000 euro, you can check the network to see if that material is available. If it is available somewhere, then the partner that rents it to you is equally happy, since their equipment that was in stock is now being used. (Fred Heuves, CMO Ampco / Synco, p. 7)

By joining, every new partner in the network commits itself to spending a percentage of its investment budget every year on Synco material, ‘because we want to keep growing’ (Fred Heuves, p. 4).
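Hughes’ load factor reduces to a trivial ratio, which can be illustrated as follows (my own sketch with purely hypothetical rental figures, only to make the concept concrete):

```python
def load_factor(average_output, maximum_output):
    """Hughes' load factor: the ratio of average output to the maximum
    possible output over a certain period."""
    return average_output / maximum_output

# Hypothetical example: a sound system rented out on average 90 days
# per year, out of a maximum of 300 rentable days
print(round(load_factor(90, 300), 2))  # 0.3
```

Sharing equipment across the Synco network raises the numerator – idle stock at one partner becomes rented stock at another – and so raises the load factor on the same investment.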
There are many similar networks of sound system retailers or manufacturers in the sound industry, such as the network of L-Acoustics (manufacturer of the Vdosc line array). Most of them try to maintain a sort of exclusiveness, and offer training and education programmes to protect the quality of their product. Synco is no exception. Partners in the network are obliged to use only the standardized Synco equipment, to ensure compatibility and exclusiveness. Furthermore, partners can only buy the Synco systems from Synco, and they are obliged to sell everything back to Synco should they leave the network. However, Synco distinguishes itself from other networks by the degree of standardization of its equipment and the ‘soft determinism’ it exerts on its partners and the involved engineers. This type of standardization can be described as what Schmidt and Werle call coordinative standardization (Schmidt & Werle, 1998). In studying the process of institutional standard-setting in telecom and information technologies, Schmidt & Werle (1998) show how standards are being set. The term they use to describe this is coordinative standardization, which, compared to regulative standardization, is aimed at the compatibility of goods and services in a network, allowing the components to work together (Werle, 2000). Coordinative standardizations are not imposed by a legal or political body, as regulative standards are, but rather result from actors that have an interest in coordinating their activities with one another (Schmidt & Werle, 1998, p. 120). Coordinative standards are thus relational, and are aimed at reducing transaction costs between the actors. The basic idea behind the Synco network is precisely this goal of lowering transaction costs by striving for compatibility. The exchange of sound equipment between the retailers can only function if all the sound system components in the network are compatible.
Synco contracted the manufacturer Martin Audio (UK) to make an OEM system that would be branded as Synco. Then they standardized every part of the sound system, to create one identical system: loudspeakers, stage monitors, amplifiers, controllers, racks, and even cables. Power distribution sets, and entire PA sets based on capacity, are also standardized. This has many benefits, as Fred Heuves, a genuine system-builder in Hughes’ terms, explains:

Most companies have three to four sound systems in stock. That means that your logistics in the warehouse is quite complicated: every sound system has its own amplifiers, its own rigging and often its own wiring. Many mistakes can be made there. In the Synco system, everything is identical, so you don’t have to think about what to take from the warehouse. All flight cases are standardized, cable cases, microphone cases… At Ampco, we even fully standardized the first 12 sound systems we have, and numbered them set 1, set 2, etc. Whereas people in the warehouse used to prepare racks of effects on the basis of the specifications of the band… what effects do the Red Hot Chili Peppers need again? Now everything is standardized. Maybe there’s more equipment in one set than you actually need for that production, but that’s already compensated by the time you’ve won, and the fact that you don’t have to screw off all the racks to get the effects out. And then you see other Synco partners wanting to do this as well, so we tell them: if you are standardizing, do it in this mode, in this form, so that it becomes a product number that we can swap again. (Fred Heuves, CMO Ampco / Synco, p. 11)

Synco thus not only standardizes its equipment, it also routinizes the work of the people. By reordering the material world of the sound system and its engineers, to rephrase Hughes, into Synco-compatible components, Synco reduces uncertainty and creates predictability in the planning.
Coordinative standards do not only prescribe how equipment should be standardized; they can also specify the relational properties of the technical artifacts in the sound system. This is especially relevant for digital technologies, such as digital mixing consoles. Digital technologies pose new challenges for standardization in the Synco network. As we will see in the next chapter in more detail, the development of analogue mixers has more or less resulted in one console being the norm at festivals. As chips and DSPs (Digital Signal Processors) continue to evolve in capacity and resolution, general (implicit) norms for digital consoles have not been set yet. Different digital consoles have different software philosophies, which makes it difficult to switch easily from one digital mixing console to another. Furthermore, two types of consoles can now be connected to the sound system: analogue and digital ones. These problems can be overcome, as Schmidt & Werle (1998) describe in the case of digital telecommunications, by specifying and standardizing the way connections can be made to the components. Such interface specification is precisely what Synco is working on:

What are the most costly components in stock? The mixing consoles. A digital Digico D5 or SD7 costs a fortune. If you could standardize the multi-cable [the cable which transfers the signals from the console to the amplifiers], then you could easily swap the consoles. So you increase the effectiveness again. More and more we are looking into that, into how we can improve our system in such a way that those components become exchangeable as well. (Fred Heuves, CMO Ampco / Synco, p. 5)

Another practical example of a coordinative standardization of interfaces is the patchlist, which standardizes all the mixing channels used by the bands (to be explained in the next chapter). These standardizations reduce transaction costs (communication and time are costly during the festival).
In the coordination phase of Pinkpop, some bands demand other well-established sound systems such as Meyer or Vdosc (the ‘Rolls Royce of the sound industry’, according to director Harry Zinken of Purple Group). It can be argued that it is because of coordinative standardization that the audio company can create and assure ‘calmness in the basic system’, as production manager Hayo den Boeft calls it, by which traveling crews are able to work well with the local equipment and engineers. Furthermore, the quality norms of Ampco’s equipment and engineers are known to the traveling crews and bands, by which they are able to reassure touring bands of the quality of the equipment and the engineers in the organization phase of the festival. The coordinative standardization of Synco thus also contributes to the reputation of the audio company. The Synco partners in other countries help create this reputation, as Fred Heuves states:

I think we provide sound systems for everything that Mojo does in the Netherlands. And that’s just because Mojo wants to deliver a certain quality, and they know they get it from us. […] Live Nation also promotes in Romania, for instance. We facilitate Kylie Minogue and Metallica there, with our Romanian partner. So Live Nation will know: hey, that group also works over there. And they will know what quality to expect. (Fred Heuves, CMO Ampco / Synco, p. 13)

By coordinatively standardizing their equipment, Synco and Ampco manage to make their quality norms known – important in the organization and negotiation of the popfestival.

Stack and fly: the rigging and construction of sound systems

“I think I will add some extra outfills there,” system engineer Ebs explains, “…to make sure that we have a balanced sound image there at the sides. I’ll drop down the mids and highs, we only need some extra power.” On a beautiful sunny morning, I’m standing on stage with a system engineer of Ampco and coordinator Gerrit of the Production Factory.
Hugo is responsible for the entire sound system of the main stage of Pinkpop. As we stand in front of the stage, Gerrit says to Hugo: “Those guys of Metallica worked all night. And they’re still busy with the lights. That will be one hell of a show tonight!” On stage, I see many people in black crew shirts busy connecting various electronic devices to the lowered trusses, and checking gigantic LED screens at the back of the stage. Last night, I saw five trailers full of equipment arriving at the festival site. The crew of Metallica has just come from their tour in Spain, and they are now working together on stage with the crew of Ampco, unloading little flight cases and rolling speaker elements. Ampco’s riggers are waiting to start with the construction of the main PA, since the trusses with the light devices are still lowered in the middle of the stage. Finally, the trusses are hauled up by several hoists at the corners, and the engineers of Ampco can start with the speakers. First, I see a rigger climbing into the top of the stage roof, carrying a rope. At 12 m high, tightly secured to the truss, the rigger attaches the rope to the rooftop and uses it to pull up heavy metal chains. The chains disappear into half-opened flight cases, carrying three little hoists. Once the metal chains are connected to the truss in the roof, each motorized hoist can pull itself up, by slowly rotating the chain. When the three hoists hang in the top of the roof, a metal frame is connected to the metal chains with big bolts. In the meantime, several stage hands are rolling trapezoid cabinets from the loading dock ramp to the side of the stage. They rip the backs off the cabinets, uncovering the front of a speaker element. The hoist pulls the frame up until it is 1 m high. Two engineers connect the back of one speaker element to the metal frame, with special bolts. Then, they both lift the front of the speaker element, weighing about 130 kg, to connect that side to the frame as well.
Before the hoist pulls the speaker element upwards, another element is connected to the element on the frame. Every time the engineers connect the front, pins are put in different holes at the sides of the element to secure it to the element above it. One of the engineers occasionally looks at a paper sheet lying in front of him. One by one, the elements are connected to each other. Every time an element is connected, the hoist pulls the string upwards so that the engineers can attach the next element more easily. In less than half an hour, a string of 16 elements is flying in the stage roof, the three hoists carrying a weight of approximately 1,200 kg. One engineer pulls a small laser device from his pocket, and lays it underneath the lowest element.

Fig. 2. The ‘flying’ of a line array.

As the stage crew starts with the second string on stage-right, I walk along with Benno, the technical coordinator of the Production Factory, in the direction of the Front of House position: a small tent at a distance of 40 m from the stage; the domain of the mixer. Engineers are already unloading huge mixing consoles in the little tent. We bump into Hayo, the production manager of Ampco. Benno says to Hayo: “The weather is good, it’s nice and warm and there’s not much wind. When all the people are here tonight, the warmth will stay, so the sound will be great!” We approach a huge tower, some 75 meters away from the stage. This is the light tower where the follow-spots will be hung. At both sides of the light tower, two large poles have been erected. As we walk to the poles, we see five engineers performing the same procedure of hauling up line array elements as on the main stage. They are standing on a small platform, constructed on the secured foot of the pole. While occasionally checking a detailed AutoCAD drawing, the engineers pull up the elements one by one. The line arrays on the poles are a bit smaller, consisting of 12 elements.
Hayo tells me that in total, for all three stages at Pinkpop, Ampco used some 5 trailers full of sound equipment. Later that day, I witness the construction of a conventional sound system at another stage, in a circus tent. As I enter the tent, I smell fresh grass mixed with tent canvas, and memories of past festivals immediately enter my mind. While my eyes slowly adjust to the darkness in the tent, I see five people on stage, pulling and pushing a big speaker box on top of another one. I notice that the speakers of the conventional sound system are bigger than those of the line array I just saw on the main stage. The engineers have to pull the speakers over some construction pipes, to place each speaker inside a small tower of trusses, which in turn carries the lighting trusses. After some heavy pulling, and some cursing, a stack of four by four speakers is created at the side of the stage. As I approach them on stage, I see that the speakers are labeled ‘HF’, ‘MF’, and ‘LF’. The three different speakers are stacked in little towers. Thick cables from each speaker are bundled and curl towards a large mixing console at the side of the stage. With a little device in his hand, a guy measures the position of the speakers. He shouts something to his colleague, standing on top of the tower of speakers. The guy on top pushes the speaker a little to the side, and the engineer starts measuring again. Finally, after some measuring, pushing and pulling, two little walls of sound have been created on both sides of the stage. Constructing a sound system for a festival – in practice called rigging – is not a stand-alone practice. As the description above shows, the set-up of the sound system can be quite chaotic, with many different engineers walking around and depending on each other. Three actors are especially important in the practice of rigging: the stage-construction company, the lighting company and the sound system retailer.
As the construction of the sound system of a festival has to be done in one or two days – time is costly – many different crews of different companies have to work together on one stage – including crews of touring bands. To make their work go smoothly, almost everything has been planned in advance.

Fig 3 & 4: Left, a Synco line array at Pinkpop. Right: a Synco conventional stack at Neterpop.

The coordinative standardization of the sound equipment, as described in the previous paragraph, helps to smooth this process by creating a reputation of quality that reassures the bands of proper equipment. The next step is planning the practical construction, in which other actors are involved. Before a sound system can be constructed by the engineers, a lot of communication and consultation has already taken place between various actors. The audio company starts by making an estimate of the number of visitors, based on the experience of past years (of the sound company and the festival organization). They look at the type of stage that will be constructed; every stage implies a necessary capacity of the sound system. Standardized sets of sound systems are specified in output-power, for crowds up to 60,000 people, as we have seen in the previous paragraph. The account manager of the audio retailer then chooses a basic system, one which is capable and has the power to address the expected number of people with sound. When a basic sound system has been specified by the account manager of the audio company, several calculations are made by the system engineer or sound designer. With special software programs, the engineer can predict exactly how many speaker-elements are necessary given the acoustics of the venue or the size of the terrain, and how they should be 'flown' or stacked. The program makes it easy to experiment with various configurations, numbers of speakers, or different sound system set-ups.
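The kind of prediction such software makes rests on elementary attenuation physics: in free-field conditions, a conventional point-source stack loses roughly 6 dB per doubling of distance, while an idealized line source loses only about 3 dB. The sketch below illustrates this textbook rule of thumb in Python; it is of course nothing like the full acoustic modeling of EASE or Viewpoint, and the 130 dB figure is an invented example level.

```python
import math

def spl_point_source(spl_at_1m, distance_m):
    """Free-field SPL of a point source: about -6 dB per doubling of distance."""
    return spl_at_1m - 20 * math.log10(distance_m)

def spl_line_source(spl_at_1m, distance_m):
    """Idealized (infinitely long) line source: about -3 dB per doubling."""
    return spl_at_1m - 10 * math.log10(distance_m)

# Hypothetical system producing 130 dB SPL at 1 m, heard at growing distances:
for d in (10, 20, 40, 80):
    print(d, round(spl_point_source(130, d), 1), round(spl_line_source(130, d), 1))
```

At 80 m the idealized line source is still almost 20 dB louder than the point source, which suggests why the prediction software favors line arrays for large terrains.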
By toggling different settings in the software program, a sound-plan is finally constructed in which all the speakers of the sound system are presented in heavily detailed AutoCAD drawings. The audio company then communicates this with the stage-construction and lighting companies. The riggers of the stage-construction company make calculations regarding the maximum weight and position of the equipment that will be flown in the stage-roof. In addition to the speakers, they also have to include the many trusses and bars of lights – including the light-equipment bands take with them. The riggers are responsible for the weight in the stage, and in their calculations, they already pinpoint the positions where all the mechanical hoists will be hanging, the so-called centers of gravity (see appendix for a drawing of the line array). For Pinkpop, The Production Factory has the responsibility to make the sound-plan, the light-plan and the stage-plan in correspondence with each other. They receive the specifications and demands of the bands. Practical considerations are also taken into account in this phase of coordination. For instance, regarding the size and number of speakers:

Nowadays, the speakers are very small, they've made them in such a way that they fit in a trailer. Four by two, that's it. So if I need 80 speakers, how many trailers do I need? And then they say: well, we should take only 60, Benno, because otherwise we'll need a second trailer (Benno Rottink, p. 15).

Another problem can be the total weight: when the constructed stage cannot hold the weight, either the lights or the number or the size of the speakers has to be changed.
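The riggers' weight calculation can be caricatured as a simple budget check: speakers, lights and trusses together must stay under the roof's capacity. A minimal sketch; all figures (roof capacity, element masses) are invented for illustration, not real stage specifications.

```python
# Hypothetical stage-roof weight budget; all figures are invented.
ROOF_CAPACITY_KG = 10_000

def total_flown_weight(n_elements, kg_per_element, lighting_kg, trussing_kg):
    """Everything hanging in the stage-roof, in kg."""
    return n_elements * kg_per_element + lighting_kg + trussing_kg

def max_elements(kg_per_element, lighting_kg, trussing_kg, capacity=ROOF_CAPACITY_KG):
    """How many speaker elements still fit once lights and trusses are budgeted."""
    remaining = capacity - lighting_kg - trussing_kg
    return max(0, remaining // kg_per_element)

weight = total_flown_weight(32, 75, 4000, 2500)   # 32 elements of 75 kg each
print(weight, weight <= ROOF_CAPACITY_KG)         # 8900 True
print(max_elements(75, 4000, 2500))               # 46
```

When the budget check fails, the trade-off described above begins: drop lights, drop elements, or pick smaller boxes.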
Occasionally, this can lead to clashing interests, especially between the lighting- and audio-companies, which calls for some strategic coordination, as technical coordinator Benno Rottink explains:

Sometimes, I put a sound-tower in the drawing in advance, just to claim the spot before the lighting-guys want to hang something there (Benno Rottink, p. 22).

Since both the lighting and the audio equipment for Pinkpop are provided by Ampco Flashlight, problems of cooperation in this coordination-phase are rare. The information flows to and from the Production Factory, Ampco, the bands and the stage-construction company have to result in a compromise: the technical equipment has to be safe, and it has to be capable of providing sound over the whole terrain. As we have seen from the examples, practical considerations regarding the logistics of speaker-transport, strategic modeling of drawings, and reassuring bands of the quality and standards of the chosen equipment and its engineers (by referring to the reputation of the company), are strategies to translate the different interests of the actors into one compromise for all. In this phase of the organization, line arrays have important benefits. Their elements are smaller, which makes them easier to transport, giving significant logistical benefits. In addition, and in contrast with conventional PA-systems, the sound fields of the line array can be predicted and modeled using software-programs. The system engineer or sound designer can thus experiment with which set-up is most suitable for the terrain, before the actual rigging takes place. This results in heavily detailed AutoCAD drawings of the sound system, and a book listing the available equipment. In sum, the predictability of the line array gives more control to the groups involved in the organization process. It is this same sense of control that makes the line array popular in practice with different social groups.
However, as we will see, the sense of control of the line array is limited, and does not appeal to every social group.

Fig. 5. A screenshot of the software-program 'Viewpoint' of Martin Audio, with which engineers can toggle different parameters and calculate the behavior of the line array's sound waves.

The actual rigging and constructing of the different speakers in the sound systems is done on the basis of these plans and drawings, under supervision of the system engineer. A sound system consists of many different speakers, as the description above shows. Each has its own function in the sound system. Engineers talk of the different speakers in terms of their function. They speak of the main PA, outfills, infills, sidefills, subs and delay-stacks. I will briefly describe these parts of the sound system. The main PA is the biggest set, which delivers the sound to the largest part of the crowd. On both sides of the stage – in jargon: stage-left and stage-right, viewed from the stage, facing the audience – speakers are set up. Especially for big fields covering larger crowds, some extra speakers are often put at the side, the so-called outfills. The outfills create sound for the people standing at the outer sides of the field. Besides outfills, there are infills and sidefills. Infills are sometimes put on the ground in front of the stage, since sound waves coming from the high line arrays sometimes fly over the people standing in the center up front. Sidefills are mainly used for the artists on stage: they are hung on stage-left and stage-right, directed towards the middle of the stage. For the lower frequencies, sound designers use subs: large speakers, often placed under the stage, that are especially designed for the lower sounds. Finally, an important group of speakers are the so-called delay-stacks.
They have the same function as the main PA: since sound waves at a certain dB level cannot travel much further than 75m, a delay-stack is constructed to provide the back of the field (which is often much larger than 75m) with sound. With all the different speakers, the system engineer is able to provide every corner of a large terrain with sound. All these speakers together form the sound system.14

As the description in the beginning of this section shows, the rigging of conventional systems and that of line arrays differ. The rigging of a conventional PA consists of cleverly stacking and clustering the boxes, while the rigging of line arrays requires great precision and accuracy. The line array can only function properly if all the calculated angles, heights and positions are exactly the same as on paper. The software program that the system engineer of Ampco uses for these calculations is EASE (similar to Viewpoint).15 With EASE, you can make 3D acoustic models of a sound field (either in a room or on an open terrain), and calculate the acoustic absorption, frequency response and length of the sound fields under a variety of configurations.

14 The exact specifications of the sound systems at Pinkpop: 16 Synco W8L Longbows with 12 Synco WS318X subs on each side, and 16 Synco W8L Longbows with 12 Synco STS SUB18RR subs on each side as outfill. At the videoscreen at the back of the mixing-tower, 9 Synco W8LMs were hung on each side, with 10 Synco W8L Longbows on each side as delay-stacks. On stage lay a number of Synco CW152A wedges, a Synco STS LO215 drumfill and 6 Synco W8LC sidefills with 4 Synco STS SUB18RRs (source: www.apr.nl).
15 Other well-known programs include MAPP (Meyer) and Viewpoint (Martin Audio).

This software cannot account for everything, as system engineer Hugo explains:

When you're rigging, you find out that some things in the program do not fit in practice.
The angles of the speakers, for instance; if your top element is 0°, and the lowest element 30°… on paper it is correct, but when you're flying the array, the top element will be 0°, but the lowest happens to be 25° or 22°. Because when you're going to fly the elements, there's always a bit of leeway between them, and then the weight presses them together, by which the angle decreases. From experience I always take into account that the array will stretch a bit, so I have to give the elements at the start of the curve a bit more angle, then it will work out fine. That's one of those things you'll learn in practice. (Hugo Scholten, p. 2)

The smallest deviation in angles or height can lead to a shifting sound image, or result in the phenomenon of 'lobing'. Lobing occurs when sound waves interfere with each other, thus creating 'lobes': peaks or holes in the sound field. Interference of sound waves has to be prevented at all times, since it influences the sound image, which the audience perceives as noise. The engineer can prevent lobing to some extent by tuning, patching and tweaking his sound system. This is mostly done by a digital technique called 'delaying'. In order to create a coherent sound field, the system engineer has to toggle the phases of the sound waves, using laptops and software-programs that tweak the output of the speakers. The goal is to get the phases of all the different speakers (front-fills, side-fills, etc.) aligned. As one system engineer explains:

You have speakers at different places, and if you'd just switch on the system, it would sound terrible, because one speaker is up close while the other is at the other end of the field. So you have to get these sound arrival-times aligned. We do that with phasing or timing; I prefer to talk in terms of timing. By toggling the timing, you can adjust the sound a bit. […] You can direct the beam a bit to the left or to the right. You can thus aim the sound with it.
There are limits… that's physics, but you can toggle a bit. (Hugo Scholten, system-engineer Ampco, p. 3)

By delaying the output of every speaker, the system engineer can create one coherent sound field, without shifting sound images. Although much can be calculated, patched and controlled by software programs, experience and knowledge of the system are indispensable for this delaying of sound waves. The rigging of conventional speakers is somewhat easier than that of line arrays, as it requires fewer calculations. However, since the speakers of a conventional PA are generally bigger, it takes more time and more engineers to construct it. The rigging of a line array in practice takes less time and fewer engineers, although it demands more calculation, supervision and improvisation from the system engineer than conventional PA does. In sum, with the usage of line arrays instead of conventional PA, the work of the system engineer has become more significant for the functioning of the sound system. In practice the system engineer must rely on his experience in order to meet the calculated requirements. In other words, one can say that the line array gives more control to the system engineer, while at the same time making the system more vulnerable. This vulnerability is further increased by the drawbacks line arrays can have in their sound fields. The projected sound field of the line array has borders. Certain lobes, the so-called side-lobes, occur at the borders, and are always part of the radiation of the line array. Furthermore, once you step outside of these borders, the sound immediately dissolves or 'flies away'. So, as the critique goes, 'it's either too loud or it's not there':

If I'm in a big hall or at a festival, then everywhere I walk, at the centre or when I'm getting a beer… everywhere, it must sound pleasant.
With a line array, if I go for a beer at the corner… they all try their best to provide coverage everywhere, but why can't I just stand at the corner, peacefully drinking a beer while enjoying the music at the same time? That happens all the time, doesn't it? (Benno Rottink, The Production Factory, p. 3)

Another often-heard side-effect of the line array's sound is its easy shifting. If there is some wind on the terrain, the high (and sometimes mid) frequencies tend to fly away quite easily. This happens with conventional PA as well, of course, but because of its 'beam' of sound, the line array is much more sensitive to this image shifting. Many engineers and mixers acknowledge these side-effects of the line array, and try to compensate for them, for example by delaying certain speakers or adding extra outfills (as the description in the beginning showed). Still, it seems that these side-effects of the line array are taken for granted.

The drawback of the sound falling off at the borders of the sound field also has benefits, especially for the festival organizer. Festival organizers, at least in the Netherlands, have to apply to local governments for permits to organize a three-day festival. These permits are almost always given, on the basis of certain conditions – the most important of which is the control of nuisance and noise pollution. At certain measuring-points, the sound level in dB is limited. Local civil servants will visit the festival, and measure the sound level produced at that very moment. If the festival frequently crosses these dB-limits, it loses its permit. Some festivals hire specialized sound-measurement companies to provide extensive measurements:

We advise festival organizers to use line arrays, because you can control the sound better with them, and their radiation falls off more easily (Erik van der Veek, dB-control, p.
3)

Line arrays are better suited than conventional systems for preventing noise pollution and living up to sound restrictions. As the sound falls off more drastically outside the sound field, sounds travel less far into the environment of the festival terrain, thus preventing complaints about noise pollution. All the companies involved in the organization of the festival are dependent on meeting these restrictions, as the director of Ampco says:

In principle, noise is pollution. That's very important for the industry nowadays; if you are able to control the sound, you can live up to the sound restrictions and festival licenses. And there will be European norms, so they'll be more stringent. We experience this already in Portugal, and England is also already much more strict. If my boss [the festival organizer] loses his permit, then we all have a problem (Fred Heuves, p. 14)

When it comes to the quality of sound, the opinions of the mixers of course differ. Because of its relatively large sound pressure, the loudness of the line array is often perceived as a form of 'nearness'; mixers especially attribute the words 'direct', 'in your face', 'loud', 'fat', 'detailed', and 'aimed' to the line array. Many mixers prefer the line array because of its accuracy and control:

I like the line arrays. Depending on which one it is, it's just a little more accurate. (Bruce Jones, FOH-mixer of The Counting Crows, p. 1)

I prefer line array over conventional. It's better working with them, it's better controllable, because you can direct the sound. Some say a big conventional system outdoors is better. I don't agree, I think line arrays are always better, because you can steer it. (Hugo Scholten, system engineer Ampco / FOH-mixer, p. 4)

The conventional PA, in turn, is frequently described by its proponents as 'warm' and as having more 'power' and 'balls'; compared to conventional systems, these engineers call line arrays 'cold', 'sterile', and lacking 'punch'.
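The 'timing' or 'delaying' that the system engineers describe above comes down to simple arithmetic: each nearer speaker is delayed by the extra time the farthest speaker's sound needs to travel, so that all wavefronts arrive together. A minimal sketch, assuming a speed of sound of 343 m/s and hypothetical speaker distances; the real work happens on the system processor's DSP, not in Python.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def align_delays(distances_m):
    """Delay (ms) per speaker so its sound arrives at the listener together
    with the sound of the farthest speaker."""
    farthest = max(distances_m)
    return [round((farthest - d) / SPEED_OF_SOUND * 1000, 2) for d in distances_m]

# Hypothetical layout, seen from one listener deep in the field:
# main PA at 40 m, an outfill at 38 m, a delay-stack at 5 m.
print(align_delays([40.0, 38.0, 5.0]))  # [0.0, 5.83, 102.04]
```

The delay-stack near the listener has to wait over 100 ms; without that delay its sound would arrive long before the main PA's, and the sound image would shift towards the nearest speakers.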
The lack of lower sounds is a frequently heard criticism:

Yesterday, I worked on a V-DOSC, one of the first line arrays. I really hate it. I can't get any drums out of it. It doesn't have any low. It does have very deep low, the sub-low, that's there. But that's not what you always want, you also want to have just a bass-drum. I can't get it out. (Arian van Egmond, FOH-mixer Golden Earring)

The choice between a conventional PA or a line array remains a matter of taste and style for mixers. In general, the sound of the line array appeals to the mixers, who claim to have a more 'accurate' sound (the direct sound of the line array can also be seen as an answer to an interesting symbolic shift in our music culture, as we will explain in the last chapter). However, the sound of the line array does not appeal to all social groups: certain festival-visitors who want to enjoy a band from a distance, at the bar for example, often find themselves standing outside of the sound-beam. To sum up, it can be said that line arrays improve the sense of control and predictability for all the actors involved, despite the perceived drawbacks. For system engineers, the line array increases their sense of control over the acoustics and the steering of the sound, along with many tweaking possibilities. For mixers, although opinions and tastes differ, the line array gives more accuracy in their sound, and gives their band a kind of 'nearness', which is said to contribute to the experience of the audience. For the riggers, the line array entails a specialized yet routinized way of working, and easier handling because of the small boxes. Besides this perceived logistical benefit, the ease of predicting suitable sound systems beforehand is seen by the audio rental company as an organizational benefit.
Finally, for the organization of the festival (and the coordinating parties), the line array is a way of controlling and preventing nuisance and noise pollution, which are of increasing importance for permits and festival licenses. The line array thus appeals to different interests, and serves as a solution for different problems. Especially the practical conveniences of the line array in the organization and construction of sound systems (the compact boxes and their controllability) add to the momentum of the line array.

Chapter 4

No input, no output: sound engineers, mixers, and the choice between analogue and digital consoles

Hey Mr. Soundman, could you turn me up?
I don't know your name
But you don't know me either
You lie to your friends
And I'll lie to mine
Let's not lie to each other
Hey Mr. Soundman, could you turn me up?

Guided by Voices, "Hey Mr Soundman" from untitled 7" split Guided by Voices/Grifters (1994)

"Three minutes left!" I see the stage-manager mouthing and signaling with his hands to the artists on stage. I am standing with some ten people at the side of the stage, watching the punkrockers of Bad Religion start their last song. Behind me, a mixer bends over his huge mixing console, and I see him adjusting the faders and pads while talking to another engineer who is looking at a sheet of paper. In front of them, another mixer is working on his own console on a small platform 20 cm high, while keeping his eyes continuously on the artists. Across the stage, some crew-members line up, ready for action. Suddenly, the loud punk music stops, and all is silent. The crowd starts to cheer, and the artists quickly leave the center of the stage. As they walk towards the back, they hand their guitars over to the rushing roadies and towels are given in return. Other crew-members quickly walk up on the stage, and start disconnecting cables and taking away the microphones.
I see the stage-manager pushing people gently to the side, so the crew-members can pass. Amplifiers are carried away, and the drum kit, constructed on a riser of 2m by 2m, is rolled to the back of the stage. I see the mixer pulling cables out of his console, and disconnecting the racks of effects. When he is finished, a crew of three drags the huge console to the back of the stage. In ten minutes, the stage is completely empty; all the equipment is piled up in a little tent behind the stage, out of sight of the audience.

Then, new crew-members walk up on the stage and start building up new equipment. Microphone-stands are placed, and amplifiers and a drum kit on a riser are rolled to the middle. Together with two engineers, a mixer pushes his console forward to the spot where the other console just stood. I see an engineer connecting cables to the microphones, and rolling the cables to a black toadstool-looking device where they all seem to come together. From the black box, big thick cables go to the mixing console, while another bundle of cables disappears into a hole in the bottom of the stage. One engineer hastily starts plugging the thick cables into the back of the mixing console, while the mixer keeps adjusting his faders. In the meantime, the band-crew has filled the stage with a piano, amplifiers, and a thick carpet. Engineers are sitting on their knees, connecting cables to little speakers, placed all over the stage in front of the microphone-stands. The stage-manager shouts "Ten minutes left!" One guy climbs on the riser, and starts drumming. Then I hear a voice coming out of the little speakers on stage: "Line 1, line 2, line 3…" The voice then starts to talk and the mixer on stage replies with a microphone in his hands. The mixer shouts something to an engineer standing near the drum kit. The engineer starts adjusting a little microphone that is placed in front of the bass-drum.
Other engineers are placing microphones in front of the guitar- and bass-amplifiers. The guy behind the drum kit leaves while another engineer starts playing the bass. After fifteen seconds, the mixer waves to the bass-player and he stops, leaving the bass to lean against the amplifier. All of a sudden, the crew-members are gone and the stage is empty. The mixer is still pushing buttons and turning knobs on his console when the artists of another band walk onto the stage, where they are welcomed by loud cheering from the audience.

In this chapter, I will focus on the rise of digital technologies at the cost of analogue ones at the festival, notably the choice between the analogue and the digital console and its consequences for the role and status of sound engineers and mixers. I will show how the profession of the 'live sound engineer' has gradually differentiated into several mixers and engineers, supported by stage-hands and roadies. Live sound engineers work on the sound distribution, both for the audience and for the artists themselves. The best-known sound engineers in the live-setting of a music festival are the mixers: the front-of-house mixer (FOH-mixer) and the monitor-mixer. Although they are responsible for the artistic reinforcement of the artists' sound, they rely heavily on the technical facilitation of engineers, notably the system engineer, the monitor-engineer, and the plugger. These engineers work together with the mixers to create a technical and social environment that enables artists to perform. In sound studies literature, the craft and art of mixing is mostly dealt with in the context of the studio. The live-setting at a music-festival differs enormously from the regular live concert, and even more from the 'quiet' and peaceful environment of the studio-setting. First, I will describe the touring industry behind the festival.
Second, I will explain the roles of the different sound engineers, after which I will zoom in on the lively and hectic backstage culture on stage. What constitutes the choice for an analogue or digital console? How does it influence the work of the mixers in practice? How does it relate to other technologies used backstage?

The touring industry: a traveling circus

The global festival and touring industry is big business. To get an established band such as Rage Against the Machine or Metallica to Pinkpop takes some heavy negotiations. Festivals depend for the organization of their line-up on booking agencies and coordinating bodies. Most festivals, like Pinkpop, delegate their programming to booking agencies or concert promoters, which promote and license touring bands. In the Netherlands, Mojo Concerts is the biggest booking agency, providing bands for most big festivals and venues. Mojo is part of the global agency Live Nation. The American concert promoter Live Nation, originally specialized in the television-market, owns many local promoters, booking agencies (such as Mojo), and music festivals all over the world. The majority of established pop-bands are licensed under Live Nation, such as U2, Metallica, The Rolling Stones, Shakira, etc., which means that Live Nation controls and manages their touring business, but often also their recordings, merchandise, broadcasting, media and digital rights, and ticketing. Live Nation also invests in real estate, and owns and constructs many venues, stages and concert halls all over the world, by which it can promote its own bands, at its own festivals, in its own venues. Live Nation is by far the biggest player in the global touring and concert-promoting industry, and its business is still expanding. The negotiations with the bands and their agencies for Pinkpop are done by Mojo Concerts. As there are many festivals across Europe in the summer, American bands have to choose which ones to attend.
In the period of June to August, there are two European blocks of festivals: Pinkpop and Rock am Ring in the beginning of June, and Werchter, Glastonbury, and Roskilde in July. Exclusivity of bands is important for a festival to attract as many fans as possible. Rob Trommelen, director of Mojo Concerts, reveals how touring agencies cleverly exploit this in negotiations:

It took us five months to get that band. They continuously put you under heavy pressure. Rage is busy with their reunion, so it's actually all about getting as much money as possible. […] Their agent just played the two blocks [of European festivals] off against each other. We negotiated fiercely. In the end, we had to make a choice; they just wanted more money. But halfway through June, they will return to the States. (Rob Trommelen, Mojo Concerts. In: BN de Stem, May 18, 2008)

Once the bands are booked, a lot of communication, coordination, and planning takes place between the tour-managements of the band, the booking agency, and the local audio retailer, about the equipment the band will provide and the equipment available at the festival-site. Touring bands, especially from overseas, usually have to follow a tight schedule. On back-to-back shows, they travel for weeks in touring buses from venue to venue, in the summer-season often from festival to festival, arriving only a few hours before show time and often leaving immediately after the show. Bands travel with a crew, consisting of their own mixers and stage-hands (roadies). Most established bands have their own FOH-mixer on tour with them. These traveling mixers often take their own mixing consoles and racks of effects on tour, although they can also use the mixing console provided by the local audio rental company.
The local company can also provide a front of house-mixer for the band, though bands usually prefer to have their own mixers with them, if they or their booking agency can and will afford it, since these mixers are important for the bands in numerous ways, as we will see. Before the traveling crew arrives at the festival-site, information has been exchanged between the tour-management of the band, the booking agency and the audio retailer, which results in a thick scenario-book. In addition to the technical riders, which describe what kind of mixing console, instruments, and other equipment (wedges, microphones, cables) the band will bring, the book contains stage plots (drawings which show where artists and instruments are placed on stage) and patchlists specifying which instruments have to be patched to which mixing channel. These lists, which can be seen as another example of coordinative standardization, are important for the people on stage to work together, as we will see. A distinction should be made between the smaller, local festivals and the big events such as Pinkpop. Generally, the smaller the festival, the fewer people there are to do the work. Engineers of the local audio company that provides and facilitates the sound system are often also mixers. At smaller festivals, as big crews are costly, they are often involved in the set-up and breaking down of the sound system as well. Sound engineers, especially at a local level, usually work for long periods of time, often until the early morning, which makes many engineers describe their job as 'idiotic': 'you have to be very passionate to be able to keep up with this'. Although new courses and studies for system engineering and mixing have emerged in the past years, many engineers have no specialized education other than training courses provided by the audio rental retailer.
Most engineers learn by doing, and especially the engineers at big events have years of experience, as many of them have toured across the world as the mixer or crew-member of a band. Furthermore, many engineers have the same skills, or at least know what the others are doing, though they are often specialized in one area.

Mixers: sound engineers in the wild

The front of house-mixer (FOH-mixer) is the engineer who is often referred to as 'the sound guy', as he or she is the most visible representative of the sound system to the audience. The FOH-mixer is usually located in a small tent or fenced area, some 25 to 40 meters in front of the stage. He or she is responsible for delivering the band's sound to the audience. Basically, the craft of the FOH-mixer consists of equalizing and balancing the sound sources coming from the stage into a particular, coherent sound. The mixer operates on his board, or mixing console. A general console has 48 (or 62) channels, meaning that the console has 48 slots for input sources, either microphones or instruments. In addition, the mixer often uses a rack of effects which contains compressors, gates, and other specialized effects or filters.16 With this equipment the mixer can adjust, filter, or suppress certain frequencies on specified channels. The connecting of microphones, instruments or effects to certain channels on the console is called patching. Usually, especially at live shows or club-shows, the mixer will have a sound-check with the band. At the festival, however, the mixer only has time to perform a quick line-check: a procedure to check that the incoming signals are not distorted and are connected to the right channels on the board. The guest FOH-mixer is supported by a local engineer: the system engineer.

16 Compressors are used to level the sound waves; they flatten the amplitude of the frequencies to avoid big volume changes in the output.
Gates (noise gates) are used to prevent unwanted noise from open microphones or other environmental sounds; only sound above a certain volume passes through the gate to the console. Patching consists of addressing and connecting microphones or effects to certain channels on the mixing console.

The system engineer functions as the general technical host of the traveling mixer. The system engineer calibrates the basic sound system in such a way that it is suitable for all the guest mixers that have to use it:

What I try to do, especially at festivals, is to create a PA-set as if it is one giant studio-monitor. Everyone [guest engineers and mixers] who plugs into my system must feel: yeah, this sounds exactly like my own mixing console. So in principle, it should be possible to mix every kind of music on this system. If a PA sounds good, you can have either a classical orchestra, or dance, or rock on it. And of course I can adjust my system for particular styles of music. (Hugo Scholten, system engineer of Ampco, p. 3)

In general, the system engineer makes sure that the basic preconditions of the sound system are suitable for the guest mixers to plug in their own mixing consoles and racks of effects, by adjusting the basic parameters of the speakers and processing equipment. In practice, the system engineer helps the FOH-mixer with patching compressors, gates, and effects to the channels on the board. System engineers have technical knowledge of acoustics and electronics, and often conduct calculations to 'tweak' the system's equipment and get the most balanced output possible. However, many system engineers have mixing experience as well, and are able to do both the technical part and the more artistic sound mixing. Often, when a band doesn't bring its own FOH-mixer, the system engineer takes over.

For a perceptive festival-visitor, it is difficult to grasp what that 'sound guy' actually is doing during the show.
Though one can see the mixer continuously turning knobs and pushing faders, a difference in sound is difficult to hear. Mixers have trained ears, developed by years of experience, and they often have a specific sound-image in their head. Some speak in terms of 'coloring', as one mixer explains:

If the acoustics are right, then you can really color things. […] To let the bass nicely work with the kick [of the drum kit], so that the lows turn into a unity, you know. The more you can get these sounds working together, the more beautiful the whole sound will be. You can have a balanced sound, but it has to become a unity. That's a continuous battle. The guitar, for instance, sometimes has the same frequency-range as the vocals. That cutting sound in the middle-range… sometimes it just doesn't work out together. […] And if your input is good, then you don't need to equalize that much, then you only use it to color it a bit: a little bit more freshness, a little bit more low. (Bob Willekens, mixer & system engineer Purple Sound, p. 3)

Horning (2004) describes this as aural thinking. Aural thinking is the 'ability to detect sounds embedded within a dense matrix, a knowledge of what to listen for, what to tune out, and the ability to know when your ears need a rest' (Horning, 2004). Aural thinking is about positioning instruments and voices in the mix, which can be considered an art. Horning (2004), and also Kealy (1979), show how the technology of multi-track recording changed the 'mental architecture' of the sound engineer in the studio. Multi-tracking gave engineers the possibility of individually mixing and manipulating instruments, giving them a say in the artistic product of the band. Kealy (1979) described how this changed the status of studio-mixing from a utilitarian craft to an artistic identity, as sound mixers became more and more embedded in the commercial and cultural world of the artists (Kealy, 1979).
It is precisely this aural thinking that defines the artistic dimension of the live mixer. Still, there remains a major difference between the studio-setting and the live-setting. As we learn from the quote above, the mixer can only start 'coloring' the sound once the acoustics have been dealt with. Whereas the studio is a controlled environment, constructed to record and to prevent unwanted noise or acoustical reflections, the acoustics remain a challenge for the live mixer, as each show has different surroundings, a different stage, and a different sound system. In addition, the live mixer does not have the time and the ability to do a proper sound-check, as time is always short at the festival. One FOH-mixer describes the process of a change-over at a festival:

I have half an hour for the change-over. But I don't really have half an hour; Di-Rect [Dutch band] is off stage in 10 minutes, we'll be set on stage in 5 minutes. Then everything gets patched, and I'll go listening to each channel to check if everything's right. Then I have a backliner who sits for a minute behind the drum kit, and checks the bass and everything. Just when I'm thinking that everything should work, the band suddenly walks up on stage and then it's on. The first seconds are crucial, because at that moment you realize: shit, it sounds different than I'd expected – that's always a bit exciting. At the end of the first song, everything should be there. (CJ Otten, FOH-mixer VanVelzen, p. 8)

Fig. 6. The FOH-position at Pinkpop, with the mixing console (analogue) and racks of effects.

In approximately three minutes (the length of an average song), the mixer should have a reasonable total-image of the band's sound; if the mixer makes big changes in volume or equalization, the audience will notice. And generally, mixers want to avoid drawing attention to their work: "We are service-men, we only hear people if something's wrong" is an often-expressed credo.
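The 'coloring' that the mixer above describes, carving overlapping frequency ranges so that, say, guitar and vocals do not fight for the same space, can be pictured schematically. The following toy Python sketch is not any console's software; the sources, frequency bands, and level numbers are all invented for illustration:

```python
# Toy sketch of frequency 'carving' (invented numbers, not a real EQ):
# each source has rough energy per frequency band; the mixer cuts the
# guitar's midrange so the vocals come through more clearly.

def mix(sources, eq):
    """Sum the per-band energy of all sources after applying EQ gains."""
    total = {}
    for name, bands in sources.items():
        gains = eq.get(name, {})
        for band, level in bands.items():
            total[band] = total.get(band, 0.0) + level * gains.get(band, 1.0)
    return total

sources = {
    "vocals": {"low": 0.1, "mid": 0.8, "high": 0.4},
    "guitar": {"low": 0.3, "mid": 0.9, "high": 0.5},
}

flat = mix(sources, eq={})                          # both pile up in the mids
carved = mix(sources, eq={"guitar": {"mid": 0.4}})  # cut the guitar's mids

assert carved["mid"] < flat["mid"]   # the midrange is less crowded
assert carved["low"] == flat["low"]  # the lows are left untouched
```

In practice, of course, mixers do this by ear on a parametric equalizer rather than with numbers; the sketch only shows the logic of making room in a shared frequency band.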
Artists are very dependent on their FOH-mixer for delivering their sound to the audience, which requires the band to trust the mixer's ability to do the job effectively: "The difficulty of being a FOH-mixer is that you can screw it up entirely, and the band won't even notice," as one mixer puts it (CJ Otten, FOH-mixer VanVelzen, p. 2). It is not only the change-over and the first songs that demand great speed and improvisational skills of the mixer. Mixers are kept busy during the show, especially when a band consists of many artists:

We're going for clarity. But the thing is, there's a whole lot going on... there's three guitar-players that change instruments every song, it keeps you really busy. You have to make sure to close some mics if they play certain songs. They have two versions of almost every song, an acoustic and an electric version. Everybody sings, so there are 7 guys singing. I'm up in 62 channels, it's a lot to keep up with. We just try to keep a balance and keep it all goin' with that many people and that many instruments and stuff. (Bruce, monitor-mixer Counting Crows, p. 1-2)

Though this mixer has an 'artistic' idea of the sound-image in his head, that is, 'clarity', he is most of the time busy just following the band as they change instruments and songs. The general working conditions of mixers at the festival are hard, especially compared to those in a studio. As the quotes and descriptions suggest, live mixing differs from studio-mixing not only because of the acoustics, but especially because the time-pressure at the festival requires great speed and improvisational skills. Mixers are very busy capturing and reinforcing the performance as accurately as possible, under the given constraints of acoustics and time. Live mixers do not have the luxury to re-do things, or to toggle heavily with equalization during the performance.
In that sense, live mixing resembles the work of the early studio-mixer in the pre-multitrack age, who had only one take to record a session. The work of the early studio-mixer in this 'entrepreneur-mode', in the words of Kealy (1979), was more utilitarian than aesthetic, meaning that the mixer was more occupied with capturing the sound in one take than with coloring and aural thinking (Kealy, 1979). This does not mean, however, that live mixing is not about coloring the sound, though it can be argued that aural thinking in the live-setting is subordinated to the utilitarian work of 'getting the sound right'. Live mixers thus need to be very experienced with the sound of the band they are mixing. Moreover, as with studio-mixing, live mixing can also be considered a 'social art'. The critic and audiophile Canby (1956), as Horning describes, used this term to describe how studio mixers in the mid-1950s were not solely responsible for the outcome of the process; mixing involves teamwork (Horning, 2004). At today's festivals, there are indeed two teams that closely collaborate: the guest mixers and their crews, and the local engineers and their crews.

So far, we've discussed the work of the FOH-mixer and how it differs from studio-work. But there's actually another, separate and somewhat 'hidden' sound system at the festival, which is equally important: the monitor-system on stage. Artists on stage cannot hear their own sound as reinforced by the PA-system, since it is directed at the audience. Amplifiers (such as guitar-amps) alone are not sufficient to cover the entire stage, so artists use a separate sound system, controlled by the monitor-mixer and the monitor-engineer. The monitor-mixer creates the individual mixes for the band-members; each musician can get his or her own mix, either on monitor-wedges that are placed in front of them, or on earphones.
Often, depending on the size of the stage, additional speakers or line arrays are hung on stage as well: the so-called side-fills. The sound system at a festival thus consists of two separate sound systems, the main PA and the monitor-system on stage, each with its own engineers and mixers.

Fig. 7. Monitor-mixer Phil Wilkey (Incubus) in action at Pinkpop.

The work of the monitor-mixer at the festival shows the interdependencies of artists, mixers, and engineers, and why live mixing can be seen as a 'social art'. The monitor-mixer generally must have the same skills as the FOH-mixer, regarding technical knowledge, the placement of microphones, and how to color the mix. However, as the monitor-mixer works purely for the artists, and is located right beside them on stage, communication and social skills are central to his work. As one monitor-mixer describes:

Well, the monitor-man is very important for every band. You have to keep the band happy on stage before anything, the happier they are, the better they play. Part of the job is not technical, but talking to them, understand them, always watch them… that's very important: when they look, they want to see you looking at them. So you have to make yourself seen. Once you get to know the band well, it's easier. (Phil 'SidePhil' Wilkey, monitor-mixer Incubus, p. 1)

Other monitor-mixers even go so far as to say that the job consists of '80% psychology or communication, and 20% mixing'. Keeping the peace and making the artists feel that they can rely on the monitor-mixer is important for the artists to feel in control and to deal with the pressure of performing for large crowds. In general, FOH-mixers have a higher status, mainly because the live mixer also has to cope with the pressure and responsibility of delivering sound in real time to literally thousands and thousands of people.
Some mixers are known for their sound ('Big Mick' of Metallica, for example), and many contemporary music-magazines covering stage technology and sound (such as Sound On Sound) regularly feature interviews with famous FOH-mixers. However, many mixers and engineers do acknowledge the importance of the monitor-mixer for the eventual sound. As one FOH-mixer puts it: "If the monitor-mixer does a good job, the band will play better, and I get better input". Live engineering and mixing is a team effort, as many engineers also acknowledge:

You're responsible together for the same thing. If someone doesn't do his work well, everybody suffers. So you have to cooperate well together. That's actually the most important thing, that you know what the other is doing and that you can rely on each other. (Natasja Geerdink, plugger/engineer Ampco, p. 4-5)

Getting the sound from the artists to the audience involves many people, all working in the shadow of the artists. Mixers and engineers, both local and traveling, have to collaborate with each other in order for the sound system to work: the system engineer and the front-of-house-mixer work at the main PA, while the monitor-engineer and the monitor-mixer work together on stage. The actual mixing requires speed, improvisational skills, and, especially for the monitor-mixer, social skills, all acquired over years of experience. Artists themselves rely heavily on mixers and engineers, both for getting the sound to the audience and for hearing their own music on stage. The dependence of artists on their crew should not be taken lightly. Every stage is different, with different sound systems and different engineers. Many things can go wrong, and in the end, it's the artist who is looked at when engineers make mistakes, or when equipment refuses to work. At big events such as Pinkpop, artists play for more than 80,000 people, and often their sound is broadcast on television or radio at the same time as well.
To live up to this pressure and to deal with this responsibility, the continuity and reliability of the sound, the people, and the equipment are of the utmost importance for the artists' creativity to flourish. How do bands and engineers accomplish this? And what role does the mixing console play in this?

Continuity, reliability and rock 'n roll

On stage, the monitor-mixer and monitor-engineer are supported by all kinds of stage-hands and roadies, some of them being 'local' (working for the local audio company), while others are part of the traveling band-crew. All people on stage are coordinated by either the stage-manager or the plugger. The stage-manager is responsible for the general logistics, time-management, and access-issues on stage. He or she directs engineers, mixers, and roadies where to go, and keeps an eye on the artists in case they exceed their time-limit during their performance. The plugger, mostly working for the local audio company, deals with the more technical coordination on stage. He or she directs the guest crew-members and engineers where to put amplifiers and other equipment, collaborates with the visiting crews on the miking (the placing of the microphones), and is responsible for the power supplies on stage, as well as for the wiring of all the microphones into a unit called the stage-box, from where the signals travel to the FOH-mixer and monitor-mixer. The latter is often called the infrastructure, and its wiring can get very complex, as mixers use 24 to 48 (or even 62) microphones. To trace how continuity and reliability are attained on stage, I will now zoom in on the backstage-culture. The people and the technologies on stage can be seen as belonging to a network of humans and artifacts, in Latour's sense of the word.
A network, according to Latour, is not so much characterised by its close connections as by its enabling and constraining associations and interactions: "it [the network-pole of actor-network] refers to something entirely different which is the summing up of interactions through all kinds of devices, inscriptions, forms, and formulae, into a very local, very practical, very tiny locus" (Latour, in Law & Hassard, 1999, p. 17). The social, according to Latour (1987, 1992), is not something that should be described either at the micro or the macro-level of analysis, but should rather be seen as a circulating entity: it works through the interactions and associations between people and the technologies they use, the so-called actors or actants. Actors on stage (engineers and mixers, but also consoles, microphones, cables, etc.) have to become aligned for the network to function. The actions and behaviour of the actors working backstage are aimed at reducing uncertainty and creating continuity and reliability, of people, equipment, and sound. The competences needed to achieve this are distributed in the network, between humans (engineers) and nonhumans (mixers, lists, etc.). Human actions can be delegated to nonhuman technologies, and technologies in turn can prescribe certain behaviour or alignments. A famous example of this Latourian delegation is the seatbelt in cars. In order for people not to get killed in accidents, engineers have put seatbelts in cars. In modern cars, these seatbelts are often accompanied by sensors which register whether or not the driver actually wears his seatbelt. These cars will set off an alarm, or even won't start, if the seatbelt isn't strapped tight. The human has no choice but to wear the seatbelt; his or her action is thus delegated to the sensor.
By using the concepts of prescription and delegation from Latour's actor-network theory, I will show which social and technological strategies the actors on stage use, both during the change-over and during the performance of the artists, to limit errors and cope with the time-pressure. It also shows what underlies a mixer's choice between an analogue and a digital console. In recent years, new technologies have come to the fore on stage that aim to reduce the difficulty and uncertainties of live engineering and mixing. Two of them are especially important in the context of the festival, and relevant here. The first is the digital mixing console (as opposed to the analogue one), which has become increasingly popular among mixers, especially travelling mixers. The second, closely connected with the first, is the in-ear system for artists, which replaces the monitor wedges on stage. Both technologies are frequently used nowadays by professional bands in the touring industry. How do these actors fit into the network of mixers, engineers, and other equipment, and what effect do they have on the other actors?

Until a few years ago, mixers only used analogue mixing consoles. Some brands have managed to become the standard of what mixers and engineers expect on tour. In the technical riders of the bands, a console such as the Midas Heritage 2000 frequently pops up. However, digital consoles are gaining ground. The digital console works with a chipset, which makes digital signal processing (DSP) possible. As the general computer-chip market has grown rapidly over the last ten years, so has the DSP for mixing consoles. A digital console transforms the analogue signal of a sound source into digital bits, processes it, and then transforms it back into an analogue signal again. The more capacity the DSP has, the faster and more accurately the console can transform the signals.
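The conversion path just described, from analogue signal to digital samples and back, can be sketched in a few lines. This is a schematic illustration under invented parameters (a pure 440 Hz tone standing in for the source, a simple gain change standing in for the 'processing'), not the DSP chain of any actual console:

```python
import math

# Schematic sketch of a digital console's signal path (invented
# parameters, not a real DSP chain): sample an 'analogue' signal,
# process the digital values, and hand them back for reconstruction.

def sample(signal, rate_hz, duration_s):
    """Slice a continuous signal into discrete samples."""
    n = int(rate_hz * duration_s)
    return [signal(i / rate_hz) for i in range(n)]

def process(samples, gain=0.5):
    """The DSP step: here simply a volume change on the digital values."""
    return [s * gain for s in samples]

def tone(t):
    """A 440 Hz sine wave standing in for the analogue source."""
    return math.sin(2 * math.pi * 440 * t)

cd_rate = sample(tone, rate_hz=44_100, duration_s=0.01)        # 441 slices
console_rate = sample(tone, rate_hz=192_000, duration_s=0.01)  # 1920 slices

out = process(console_rate)
# A higher sampling-rate means more slices of the same stretch of
# signal, hence the finer 'resolution' discussed in the text.
```

The comparison of the two sample counts is the point of the sketch: at 192 kHz the same hundredth of a second of signal is cut into more than four times as many slices as at CD quality.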
The accuracy is addressed in terms of resolution, a measure of the sampling-rate: the slicing of an analogue signal into chunks of digital bits. The higher the resolution, the smaller the chunks of zeros and ones, and the higher the quality of the sound. The first digital console came on the market some 13 years ago. New digital consoles have now reached sampling-rates as high as 192 kHz (in comparison: a CD has a sampling-rate of 44.1 kHz). As we already know, analogue consoles are accompanied by rack-mounted effects. Thanks to the increasing DSP capacity, the new digital consoles have these effects incorporated as digital effects, which results in less equipment to take on tour. This logistical benefit is worth mentioning (think of the logistical benefits of the smaller line array-boxes), as one mixer describes:

I used to have a Yamaha 2500, a huge console… I had six racks of effects, gates, reverbs, compressors… The console I use now is about 1,5 m, and I have just one little rack. It's so damn useful. Dragging around that huge console didn't work, and all those effects… Now I can just stand everywhere. And for my back it's really handy too; I mean, you can carry that thing with only two people. (Arjan van Egmond, FOH-mixer Golden Earring, p. 3)

However important these logistics (and the increased resolution) may be, there is a more far-reaching reason for mixers to use a digital console rather than an analogue one: the digital console is capable of saving the mixer's settings. This can, of course, be a gigantic benefit for the mixer:

Before the show starts, I have already set up the console and labelled everything. I'm able to really set up the console just how I want it. So the basis is already there. If it's for a tour, then you just save it, and that's that. The next time you may make some minor changes. But I continuously have the same… and I tune that compressor only once. (CJ Otten, FOH-mixer VanVelzen, p.
7)

I just put my stick in, and then you have six years of fine-tuning ready. Everything is set, I'll just load the next song and that's that. […] You have the basis. The small details I still have to do by hand. They won't play the same each time. […] But my effects and reverbs are set, I just have to turn them on. (JW Stekelenburg, FOH-mixer Ilse de Lange, p. 6)

By using a preset, configured at home or at other events, mixers are able to test and fine-tune their settings each time they have a show. They just bring a USB-stick with their preset saved on it, and they are ready to mix. In other words, much of the balancing and equalizing of the mixer at the festival is delegated to pre-configured sets on the digital console (to be precise: on the USB-stick). The delegation of some of the mixer's actions frees the mixer from a lot of work that previously (with the analogue console) had to be done on the spot, under the time-pressure of the change-over. The digital console gives the mixer more certainty, which reduces much stress, as most settings of the mix are already there.

Fig. 8. The Digico D5 digital console.

However handy the digital console may be, it does require a different way of working. Many digital consoles have software-like menu-structures, and some (such as the Digico D5) have touch-screens. It requires quite some experience for mixers to actually work with them:

You must have the feeling that if you turn some knobs, it matches in your head with what's happening. […] And subsequently, it's important that you can keep a certain speed. The norm is still how fast you can work on an analogue console, where you have everything just within your reach. […] It all has to do with experience. If you go on tour with a digital desk, and you work with it day in and day out, then things can go fast at a certain moment. I know people who have one hand at the touch screen, while the other is racing over the knobs.
(Hayo den Boeft, system-engineer / FOH-mixer Ampco, p. 6)

In order to attain the necessary speed for live mixing, one needs a lot of experience with digital consoles. For many mixers, it's the feel and the lack of direct control that makes them prefer the analogue desk:

One uses a DigiDesign, while there are Digicos at the site; both have a completely different software-philosophy. You can't combine them. Furthermore, you just have less speed on a digital console. That's my experience. I prefer working with an analogue desk, because I can work at it with both eyes closed. With a digital desk, you're always tied to the screen at which you have to look. And at an analogue console, because I've worked with it for years, my fingers just know where to go, so you can keep your focus on the band. (Peter Schmitz, accountmanager Ampco / monitor-mixer, p. 3)

The visual control of the digital console can thus distract mixers from looking at the artists, which increases the importance of careful listening. Some mixers like the need for speed and direct control, and the adrenaline that comes with it:

It's more challenging with analogue consoles. Especially when you're not familiar with it. Then you have to get to know the machine, explore its limits. I think the best live-sound comes from these experiences, because you are forced to make quick choices, to pinpoint the sound at an earlier stage than with more familiar or digital consoles. (Jeroen 'Ebs' Ebskamp, system-engineer / FOH-mixer Ampco)

The digital console gives mixers certainty, as some of the aural thinking is delegated to the console, which reduces the anxiety and adrenaline-rush the mixer above described. In addition, the digital technology has some specific prescriptions.
As the mixer above describes, the touch screens demand the visual attention of the mixer, drawing it away from the visual contact with the artists (in the case of monitor-mixers), while at the analogue desk the mixer has an overview of all faders and knobs. In addition, digital experience is needed, as mixing at a digital console involves another way of thinking. For some, strolling through menu-structures doesn't feel "natural" compared with pushing real sliders and turning real physical knobs. While every mixer can work at any analogue desk, mixers can only work with the specific model of digital console for which they have made their settings beforehand. The set-up of a different digital console is too difficult and takes too much time, which mixers don't have at the festival. The developments in digital consoles haven't yet resulted in specific models becoming the norm or the standard (though a few digital consoles are quite popular: the Digico D5, the Digidesign, and the digital Yamaha M7CL). For this reason, analogue consoles still remain part of the standard equipment of local audio companies. But travelling mixers increasingly use digital consoles, when their booking agency or touring promoter can afford it. Even though some of them prefer the analogue desk for the reasons I have shown, the reassurance of having 'their own sound' at every location often prevails.

The second technology that aims to increase the reliability and continuity on stage is the so-called in-ear monitor-system. In-ears are mostly custom-made earphones, connected to wireless transmitters, which in turn are connected to the monitor-mixer. The in-ears replace, to some extent, the wedges on stage. As already noted, bands use monitor-wedges, placed in front of each artist's spot on stage, to hear their own performance. The monitor-mixer can provide a customized mix for each band-member.
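The idea of a customized mix per band-member can be pictured as a simple table of per-source levels. The following Python sketch uses hypothetical musicians, sources, and levels, purely to illustrate the monitor-mixer's task, not any real monitoring software:

```python
# Hypothetical sketch of individual monitor-mixes: every band-member
# gets his or her own balance of the stage sources. Source names and
# level numbers are invented for illustration.

STAGE_SOURCES = ["vocals", "guitar", "bass", "kick"]

def monitor_mix(levels):
    """Build one musician's mix: per-source levels from 0 (off) to 1 (full)."""
    return {src: levels.get(src, 0.0) for src in STAGE_SOURCES}

# The singer mainly wants her own voice; the bassist locks onto the kick.
singer = monitor_mix({"vocals": 1.0, "guitar": 0.3})
bassist = monitor_mix({"bass": 0.9, "kick": 1.0, "vocals": 0.4})
```

On a digital console such mixes can be stored as presets, which, as described below, is one reason why in-ear systems and digital consoles became popular together.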
Artists still have their own amplifiers, in the case of guitarists, bassists, and keyboard-players. In addition, some sidefills, usually small line arrays, are used as well, directed at the middle of the stage. The sound on stage can be overwhelming and easily turn into a cacophony. Feedback, for example, arises when microphones pick up sounds that they shouldn't reinforce, for instance when the microphones of the vocalists pick up the sound of the guitar-amplifiers. The sound on stage also depends on the type of stage (the materials used), the equipment used, and the spill of the main PA, which hangs at the sides of the stage (though sounds are directed forward, some sound gets back to the stage). The digital consoles have made the use of the in-ear system easier, as individual mixes can be saved as well. To receive the sound more directly, and thus hear it more clearly, artists increasingly use in-ear systems. The popularity of the in-ears is connected to that of the digital console, as it's easier to make (and save) customized settings. In-ears have some benefits, but also constraints:

Like I said, no input no output… if I don't have input [distortion from microphones or no clear incoming signals], I can give more on those ears than on some wedges. The feedback-story for instance: artists want more and more and more sound… Now you have it straight into your ears. It's another approach, and some people can't work with it. […] Some artists have the feeling that they're closed-off, that they lack the feeling of playing together. And what is the most important thing of a band? Playing together. You have to be able to feel each other. But others are really happy with it. The benefit is: you can walk to the left or to the right, and the sound remains the same. With wedges, that's just not possible. (Koen Benschop, monitor-engineer / mixer Ampco, p.
4)

With in-ears, artists have the same sound at every show, without being dependent on changing variables such as the stage-floor or the wedges of the local audio company. Although it can give them more freedom to walk around the stage, as the mixer states, it can also change the way artists play together; it is known that some singers wearing in-ears become more introverted, as they hear themselves so directly and closely. To overcome the problem of feeling cut off from the group, some artists use only one ear-plug, and combine it with regular wedges or sidefills. Other technologies are available as well. Drummers, for instance, can use monitors that make the drum-seat vibrate as they hit the kick or bass drum (also known as ass-kickers). The in-ear system replaces the monitor-wedges on stage, which affects the monitor-mixer as well, as he now creates individual mixes for the artists' earplugs:

You have to do your job well, because they hear everything. Before, you could cheat a bit but now you have to do it properly. Because with in-ears you hear absolutely everything, you have to mix good. […] When you first start with it, in the beginning, you have to set it all up. Normally then it stays the same. When you have ears, it helps when you are in a bad arena or room or something. (Phil 'SidePhil' Wilkey, monitor-mixer Incubus, p. 2)

The in-ear system thus gives more responsibility to the mixer, as he is now the only one responsible for the sound on stage. A well-known problem, described by some as the 'loudness wars', is that some artists keep cranking up their amplifiers in order to hear themselves better, either in the heat of the moment or when the sound of the monitor-wedges doesn't suffice according to them. The in-ear system prevents this to some extent, since artists now hear their sound directly in their ears, giving them a more direct and close hearing of their own sound.
The in-ear system thus delegates the problems that can arise with wedges (feedback, a different sound each show because of different wedges) to the mixer. But it also translates the problem of controlling the behaviour of artists (not cranking up their amplifiers) into a problem of balancing and equalizing for the mixer. In general, the in-ear system provides an expected sound for the artists, which is beneficial for the show, as it increases the reliability and continuity on stage:

If the monitors aren't good, the show won't be good – simple as that. Not everybody has complicated wishes, but the sound on stage has to be good. The sound should be familiar. If you're on tour, in essence it's more important to have the same sound every day than having perfect sound every day. (Hayo den Boeft, system-engineer / FOH-mixer Ampco, p. 6)

Both the digital console and the in-ear system require specific behaviour, especially from the mixers and the artists. But the technologies also require what Latour calls an 'alignment of set-up': the mobilisation of well-aligned resources to create predictable behaviour among the actors (Latour, 1992). In simpler terms: all the engineers must be equipped to reach the goal of continuity. In this case, the set-up consists first and foremost of engineers and stage-hands constructing the stage during the change-over. These actors should be aligned in order for these technologies to function. More specifically: in order to get that same sound every show, in-ear systems and digital presets of sound-images and individual monitor-mixes are not enough. As one mixer states: no input, no output. The input at the festival depends not only on the artists' performance, but also on the placement of the microphones by the plugger, and the patching of the microphones to the mixing console.
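The patching mentioned above can be thought of as a lookup table from console channel to source. The following sketch is a hypothetical miniature patchlist; the channel numbers, microphone types, and inserted effects are invented for illustration (see the appendix of this thesis for a real example):

```python
# Hypothetical miniature patchlist as a data structure: which source
# goes to which console channel, with which microphone, and which
# inserted effects it needs. All entries are invented for the sketch.

patchlist = {
    1: {"source": "kick", "mic": "dynamic", "inserts": ["gate"]},
    2: {"source": "snare", "mic": "dynamic", "inserts": ["gate", "compressor"]},
    11: {"source": "bass", "mic": "DI", "inserts": ["compressor"]},
    24: {"source": "lead vocals", "mic": "condenser", "inserts": ["compressor"]},
}

def channel_for(source):
    """The engineer's lookup during the ten-minute change-over."""
    for channel, entry in patchlist.items():
        if entry["source"] == source:
            return channel
    return None  # not on the list: improvisation required
```

The point of such a list is exactly what the text describes: when the kick drum is always on channel 1, the lookup is routinized and no discussion on stage is needed.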
As already stated in the beginning of this chapter, each band provides its own patchlist, which shows the mixing channels of the mixing console, how the sources (either microphones or instruments) have to be patched to them by the system or monitor-engineer, and whether a source needs certain effects, such as compressors or gates (see appendix for an example). The patchlist tells the plugger how to set up the wiring of the cables to the stage-box, and the mixers use it to connect the cables from the stage-box to the specific channels on their consoles. On the basis of all the patchlists of the bands, the local plugger creates a general list, standardizing as many mixing channels as possible. For instance, the drum kit, almost always consisting of the same parts, will have specific channels that remain the same during the festival. Before the festival starts, these lists, as the result of a negotiation process, have the status of materialized consent for all the involved engineers. Engineers know what to expect, as these lists are distributed to all beforehand. In the practice of a set-up, their function changes, as these standardized lists become central to the alignment of the actions of the actors on stage: they coordinate their actions. This seems rather self-evident, though it is essential for the system and monitor-engineer to know which channel each of the band's sources (the 24 to 48 or even 62 microphones) should be assigned to, as the whole procedure often cannot take longer than ten minutes. The lists thus limit the time that engineers have to spend on communication, and by standardizing as many channels as possible, some patching can be routinized, which limits the room for errors as well. The actions of the plugger and the mixers are, in other words, to some extent inscribed in these lists. However, the lists do not speak for themselves that easily; working with them also means improvisation and interpretation:
At a certain moment, you know what is important and what you should take into account, and which things you can think of: well, it'll work out. […] If a band has a list with many specific microphones, all of one brand, then you can see that it's probably a sponsor-deal. And that they don't need those microphones from us, and if so, we have good alternatives (Natasja Geerdink, plugger Ampco, p. 2) Bands can be sponsored by all kinds of corporations, from clothing to music equipment and microphones. However, as the quote shows, this does not mean that the artists actually use the equipment, and if they do, then it's their own responsibility, as the plugger describes. Furthermore, on the festival day, when the engineers finally meet each other, it appears that the lists are seldom completely accurate. Many things can change when a band is on tour, regarding their set-up and usage of certain equipment, so "the reliability of the technical riders differs greatly" (Natasja Geerdink, p. 3). Though the lists align the actors and reduce certain uncertainties, the experience of the engineers, and their social skills to communicate and cooperate, remain important for the network to function. A good example of the tacit knowledge used in practice on stage is so-called miking. Miking, or microphoning, consists of selecting and placing specific microphones to capture the sound coming from amplifiers, drum kits, and other instruments. In her article 'Engineering the performance', Horning (2004) shows how the tacit knowledge of miking is part of studio recording. Microphoning can be seen as the successor to the cultivated technique, from earlier recording days, of placing performers before the acoustical recording horn (Horning, 2004). The technology of equalizers and the development of new microphones gave way to 'aural thinking' and the balancing of the separate instruments. The placing of the microphones is a tricky business.
Each instrument and amplifier needs specific microphones to capture the sound for reinforcement, and as the sound of each instrument differs (a kick on the drum kit produces different sound waves than a guitar amplifier), the placement of the microphones differs as well. As Horning (2004) described, the art of placing microphones is a skill learnt mostly by experience. In the practice of a change-over at a festival, it is the plugger who is responsible for the miking, often together with the guest-crew of the band. As the quote above already shows, the plugger must have knowledge of many specific microphones to be able to interpret the technical riders of the bands. In addition, she needs to have experience in how to place these microphones. Microphoning a live band does not differ that much from miking in the studio, though, again, it is the dynamics (and the speed) of the change-over that makes this work out differently in a live setting. If an instrument or amplifier is badly miked, the mixer faces distorted sounds, no incoming signal at all, or, the most familiar problem, feedback. And the plugger must keep an eye on the microphones during the show as well: I can't do anything as monitor-engineer without a good plugger on stage, who fixes my problems if some signal doesn't come through. […] A plugger is actually very important on stage; a monitor-mixer or FOH-mixer cannot leave his console during the act. So if something happens, a microphone falls over…the plugger gets into action. Without a good plugger, you can't do anything. (Koen Benschop, monitor-engineer Ampco, p. 1) Though many patchlists of bands also state which kind of microphone-stand should be used for their microphones, indicating how the microphone should be hung, the placing of them remains a skill of an experienced engineer, upon which a monitor-mixer is greatly dependent.
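The plugger's standardization of the bands' patchlists, described above, can be sketched as a merge that assigns each distinct source a fixed console channel for the whole festival day. This is only an illustrative model; band names, sources and channel numbers are invented, and the real general list is the outcome of negotiation, not computation:

```python
# Hypothetical sketch: deriving a festival "general list" from the bands'
# individual patchlists. Shared sources (here: the drum kit) keep one
# channel all day, so that patching can be routinized.

# Each band's patchlist maps a source to a desired channel and an
# optional inserted effect (gate, compressor).
band_a = {"kick": (1, "gate"), "snare": (2, "gate"), "vocal": (10, "compressor")}
band_b = {"kick": (1, "gate"), "snare": (2, None), "keys L": (11, None)}

def merge_patchlists(patchlists):
    """Assign each distinct source one fixed channel for the festival."""
    master = {}
    next_channel = 1
    for plist in patchlists:
        for source, (_, effect) in plist.items():
            if source not in master:          # first band to name it wins
                master[source] = (next_channel, effect)
                next_channel += 1
    return master

general_list = merge_patchlists([band_a, band_b])
```

The point of the model is the invariant, not the code: once the general list exists, the kick is on the same channel for every act, and only the band-specific leftovers need repatching in the ten-minute change-over.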
In order for the network to function, digital consoles, in-ear systems, and standardized patchlists alone are not enough. Bands thus remain dependent on the experience and tacit knowledge of local engineers and, moreover, on the quality of local technologies such as microphones. As touring bands increasingly use digital consoles and the in-ear system, the dependence on the local team of engineers and their equipment can easily become a problem, as it influences their own (expected) sound. To attain even greater continuity, traveling bands have in recent years adopted yet another (technological) strategy to limit the errors that can be made during miking, and during the whole change-over in general. In order to reduce this dependency (and the difficulties arising from communication problems), professional bands nowadays increasingly bring their entire own infrastructure to the festival site, especially at smaller festivals, where they don't have any experience with (or trust in) the quality of the local audio company. This infrastructure includes amplifiers, wedges (for backup), mixing consoles, microphones and even cables. As one mixer states: There was also a phase in between… that you partly used your own stuff, and partly that of the venue. And then it was always your stuff that was blamed for causing problems. So we said: let's not do that anymore. If you are a professional band, you bring everything yourself. Your own mixing console, the whole cabling, and I just provide 32 XLR's [microphone-plugs] to the front, and that's it. (CJ Otten, FOH-mixer VanVelzen, p. 3) By having all the equipment the same at each show, bands and crews know what to expect, and are largely in charge of the change-over, though some helping hands of local crew-members are often still needed to get the job done in the short period of time. The consequence, however, of this standardization of equipment for the band is (again) an increased responsibility for the engineers.
As the same mixer states: I now have the console I always wanted, and always the same rack of effects. So if I screw up now, it's only me that's responsible. That was different back then. Then you could blame everything, from the power-hub to the audience. (CJ Otten, FOH-mixer VanVelzen, p. 4) Bringing the whole infrastructure is not always possible, as the logistics or budgets of booking agencies or touring productions do not allow it. To sum up, the digital consoles paved the way for touring bands to use in-ear systems as well as their own infrastructure. The digital console, the standardized lists and the band's own infrastructure increase the engineers' reliance on their own equipment, and reduce the risks of unexpected surprises, or of problems due to communication or cooperation with local engineers. In the end, the continuity on stage should benefit the artist's performance. But bands still rely on the communication skills, tacit knowledge, and experience of engineers to make the change-over succeed. In other words: the actions and necessary behavior of the human engineers cannot all be delegated to the nonhuman actors. People, it appears, are not so easily replaceable. This technological culture does raise some questions. If artists can more easily rely on their equipment and people to get the same sound every show, how does that affect the output, the performance itself? With all these digital technologies and presets, what is left of the artistic performance? Why does a band have to sound the same each show anyway? And what role do other digital technologies, such as video-screens, play at the event of the festival? In other words: what does 'liveness' mean today? These questions are addressed in the following chapter.

Chapter 5
Live performances at the pop-festival: seeing and hearing in the digital age

A faint bass tone is buzzing in the distance.
I am standing among some 80,000 people who eagerly await the moment that the American band Rage Against the Machine enters the stage. Fans are anxiously cheering and shouting as a loud resonating alarm, resembling a police siren, starts ringing. Suddenly the musicians appear; they slowly walk onto the stage and form a line. The artists, hands behind their backs, wear orange American inmate suits and have black sacks over their heads. Video-screens at the sides of the stage show the artists one by one, in close-up. The sound of the alarms gets louder, and after four minutes of standing still, thereby creating a sinister though promising atmosphere, the four band-members grab their instruments and start the set with one of their hit songs, 'Bombtrack'. The song, energetic funk-metal with politically activist lyrics, immediately turns the audience into one massive jumping crowd, singing and shouting at the band and each other when familiar sentences come along. The sound is loud and it feels as if it is just in front of me. The festival terrain is crowded, and I am standing in front of a large LED-screen, positioned behind the FOH-tent. The stage is some 75 meters away, and I see many people looking at the video-screens instead. The screen is very bright, and it shows the rhythmic licks of the guitarist, the heavy hammering of the drummer, and the expressive singing of front-man Zack de la Rocha in great detail. I see many people taking pictures of the video-screens. As I am pondering this observation, my brother asks: "Are they using some kind of filter on the screen? It looks just like YouTube." Getting into an interesting conversation was not possible, as the band started their hit song 'Killing in the Name', often described as the anthem of alternative rock music in the 1990s.
While I feel the contagious energy of the music and of the dancing people around me welling up, I recall an earlier show of RATM at Pinkpop in 1994, where the massive, jumping crowds created a tremor measuring 1 on the Richter scale. As I exchange smiles with the people surrounding me, I willingly drown myself in the music. It is often stated by scholars who historicize the development of the loudspeaker that there are two historical periods: the mechanical age and the electroacoustical age. These observations characterize the developments roughly from 1880 until 1970, from the first magnetic coils and horns to the ported 3-way system. These generalizing labels invite one to say that since the 1970s, we have entered the digital age. But what does that mean? Musicologists and historians of sound reproduction technology have shown that new sound technologies, such as the phonograph or the multi-track tape-recorder, have evoked new music practices, changed the work and status of engineers, and even given way to new music styles. In turn, the new technologies only flourished once they became embedded in certain cultural, commercial or industrial contexts, and often, consumers needed to be 'trained' in new ways of listening. In other words, changes in the fields of production and consumption are relevant for the development of sound reproduction technologies. In the past chapters I have argued that the production of sound at the festival is partly made possible by a process of standardization in the technological system. Sound technologies such as the line array and the digital mixing console increased, to some extent, the control engineers have over the set-up of the sound system and during the music performances themselves. I have furthermore described what role the touring industry plays in choosing a particular sound system, and how these sound technologies are connected to networks of manufacturers.
In this chapter, I will focus on the role digital technologies play at the music festival as a mediatized event, how their development can be understood in the context of other sound technologies, and what questions they subsequently raise. In addition to the line arrays and the digital console, artists increasingly incorporate digital video or LED-screens in their performance at the festival. I will address the questions these developments raise and situate them in a historical context: What role does the visual play in experiencing live music? How does this affect our conception of 'liveness'? How do other sound technologies affect our way of listening and our expectations regarding live performances? In the past, new sound technologies have raised similar questions. The tensions between the dichotomies of live and mediated events, copy and original, repetition and uniqueness are the subject of recurring debates, as we will see.

The co-evolution of visual and auditory technologies

As the description of the concert of Rage Against the Machine shows, the live show at the festival is not about sound alone. Live concerts increasingly feature large video-screens. At large-scale events such as Pinkpop, the back of the stage consists of a huge video-screen. During the show, animations or clips are shown, sometimes the same ones that are used on TV. In addition, big screens are placed at the sides of the stage, and to address the people at the back of the field, another big screen is located behind the FOH-position. During the show, these video-screens capture and show the movements of the artists on stage in great detail. Cameras are continuously filming on stage, in front of the stage and sometimes from the more distant FOH-position as well. Some artists, such as those of Metallica, even have little cameras mounted on their guitars and instruments, giving the audience an up-close view of sliding fingers and hammering drum-sticks.
Backstage, in a trailer that is packed with technology, a 'mixer' (the term originally comes from the film industry) chooses from the various viewpoints and camera positions what is seen on the screens. The question arises: how do these visuals affect our experience of live music? What role do they play in our concept of 'liveness'? Katz (2004) describes a history of sound technologies along the lines of recordings versus live musical performances in specific time periods. Katz's analysis distinguishes several differences between recordings and live performances that, as he claims, shaped music and musical life 'in the age of recording' (Katz, 2004). Katz distinguishes seven 'distinctive traits of sound recording', and a few of those are especially relevant regarding people's experiences and expectations of sound: the portability, repeatability, and (in)visibility of sound. Recordings take the sound from its original context (live performances), thereby losing the visuality of the performance. Moreover, recordings can be repeated over and over again. These characteristics have changed the way we experience music. Because of these characteristics of reproduction, many authors have argued that sound reproduction technologies, especially recordings, have diminished the live experience. A famous and recurring example in debates about liveness is the critique of Walter Benjamin ('The Work of Art in the Age of Mechanical Reproduction', 1936). Benjamin states that reproductions lack an 'aura' compared to live performances, as they are no longer bound to a unique space or time (Frith & Goodwin, 1990; Katz, 2004). Reproductions thus lose their authenticity, according to Benjamin. A similar critique is that of Schafer, who coined the term 'schizophonia' to refer to 'the split between an original sound and its electroacoustical transmission or reproduction' (Schafer, 1994 [1977]).
What this view neglects, according to Katz (2004), Sterne (2003) and others, is that technologies of reproduction (such as sound systems, studio recording, MP3-players) also shape new or different kinds of music practices, performances and ways of listening. In the 1950s and 1960s, for example, large sound systems became the centre of a vibrant music culture in the ghettos of Kingston, Jamaica, constituting the rise of ska and reggae, as well as sound battles that led to 'dubbing' records (Stolzoff, 2000). The devaluation of sound reproduction in favor of live performances furthermore assumes a hierarchy of the senses. This "audiovisual litany", as Sterne (2003) claims: …renders the history of the senses as a zero-sum game, where the dominance of one sense by necessity leads to the decline of another sense. (Sterne, 2003, p. 16) The visual is often seen as the more 'distanced' sense, and it is also the most dominant one. Our language is the simplest example of this audiovisual litany: there are far more visual metaphors than auditory ones (how to describe the sound of ska, for instance?). To overcome this hierarchy between the senses, and between sound reproduction and live performances, several authors describe ways in which visual and auditory technologies relate to, construct, reinforce or 'remediate' each other. The development of loudspeakers, for example, is closely connected to the emerging film industry in the 1920s and 1930s. Chanan (2003) describes how "the talkies", the talking pictures, advanced the development of sound technology, especially of microphones and loudspeakers. In a process of trial-and-error, several sound companies, such as Western Electric and General Electric, competed with each other in the 1920s to create a system that could synchronize sound on film.
A standard loudspeaker design, created by Kellogg at General Electric and developed for RCA, emerged from this process, and was eventually used not only for sound-on-film, but also for radio, gramophone, and television sets (Chanan, 2003). The sound-on-film in turn raised another problem: audiences didn't want to see all the microphone set-ups 'breaking the illusion' of the talking picture (which especially applied to Hollywood musicals) (Chanan, 2003). This led to the development of the sensitivity and directionality of microphones, which in turn benefited the recording industry, at that time suffering from the Depression (Chanan, 2003). Chanan shows how the sound companies (and the film industry, radio broadcasting, and the recording industry as well) competed with each other for control of different sectors of the cultural industry, thereby advancing the development of sound reproduction in the process. Eventually, the development of and competition between the record industry and radio resulted in the widespread use of studio recordings. In the early days of the phonograph, people were stunned that they could hear performances without even seeing the performers (Katz, 2004). Before that time, sound could only be perceived live, in a given setting at a given time. Recording engineers in the days of the gramophone tried to emulate the live setting of performances, such as the concert hall (Chanan, 2003; Katz, 2004). However, performers soon adjusted their music to the reproduction; jazz musicians, for instance, had to limit their solos to make their songs fit on a disc that only lasted a few minutes (Katz, 2004). In these years, many (such as Adorno) criticized recordings as 'atomized, fragmented', and the repeatability as 'fetishizing' the music (Chanan, 2003). As the quality of equipment grew, so did the quality of the recordings.
When stereo was introduced, and later the multi-channel mixing consoles, sound recordings were given a spatial effect (Chanan, 2003; Frith, 1996). Soon, sound recordings became widespread. Now, we enjoy recording-quality sound with an 'analytic clarity' and 'tactile proximity' that people 50 years ago could never hear (Chanan, 2003). Live performances nowadays are said to resemble their recordings. A good example is rock music, such as that of Rage Against the Machine (RATM). The show of RATM resembled their studio work very closely; only occasionally could a distinctive chord or note be heard. Live performances of rock bands in general sound just like the record. For that reason, live performances of rock music are closely related to their recordings: It makes little sense to speak of live performance of rock apart from recording, since rock music is made to be recorded: it is constructed along principles derived from recording practices, inspired by earlier music heard primarily on recordings, etc. Even if a group is unlucky enough not to have recorded, epistemologically their music is still recorded music (Auslander, 1998, p. 84) As rock music is born in the studio, as a style, the live performance of rock resembles its recordings. Though Auslander makes a hierarchical distinction between the recording and the live performance, he acknowledges that both are dependent on each other. The live performance reinforces the recordings, or as Auslander (1998) states, live rock performance is precisely about 'establishing the authenticity of the recorded sound'. The live and the recorded thus redefine each other, in the case of rock music. However, being able to reproduce the studio album on stage is not enough: rock needs 'visual evidence' that the artists are the legitimate makers (Auslander, 1998, p. 79). Audiences need to see the pop musicians do something (Goodwin, 1990, p. 269).
In this perspective, the video-screens at festivals function as 'visual evidence', as a legitimization showing that the music heard (which resembles the album) really is made by the artists. However, because of the refinement of the digital sampler in 1981 and the debut of MTV in the same year, which led to a 'disjunction of the musician, in both the studio and the live concert', as Auslander quotes Goodwin (1993), Auslander goes on to assume that "video is the primary experience of music in a mediatized culture". Auslander claims that music videos have taken the place of studio recordings as the primary referred text; live performances no longer resemble studio recordings, but music videos (Auslander, 1998). If we apply this to the video-screens at Pinkpop, then the shots of the artists should mimic MTV video clips. The early phonograph recordings mimicked live performances, and the rock performance in turn idealizes its recordings, or, according to Auslander, its music videos. Just as the example of the development of sound technologies in the 1920s showed that different forms of media competed for a place in the cultural industry, here too different forms of media compete, though now in a more ontological way, for a dominant definition of liveness. Does the recording define the live performance? Or does the live performance define the recording? Or is it the video that defines the live performance? Where to place the primary referred text? Regarding the constitutive relationship between live performances, studio recordings, and music videos, I claim that neither the studio recordings nor the video clips are the primary referred text for the live performance (of rock music) at the festival. Instead of seeking which form of media is the most 'authentic', I will look at the everyday practices of sound consumers in our contemporary music culture to explain the liveness at the pop-festival.

Liveness: come closer!
Digital technologies play a significant role at the popfestival. The digital sound technologies are said to produce different sounds than their conventional or analogue alternatives. Mixers in particular attribute the words 'direct', 'loud', 'detailed', and 'aimed' to the line array. In contrast, the conventional PA is frequently described by its proponents as being 'warm', 'fat', and having more 'power', and when compared to line arrays, these engineers call line arrays 'cold', 'sterile', and lacking 'punch'. Similar terms are attributed to the digital mixing consoles by mixers and engineers. Digital mixers are said to sound 'clinical', 'sterile', 'metallic', and many mixers and engineers agree that the analogue console sounds more 'warm'. In addition to these attributes of digital-quality sound, the line array's loudness is a typical one. Because of the relatively larger sound pressure of the line array, its loudness is often perceived as a form of 'nearness': sound that is 'in your face', as some mixers describe it. This 'nearness' contributes to a 'new music experience', as many brochures of line array manufacturers tell us. The digital sound of the mixer, the perceived 'nearness' of the line arrays, and the video-screens at the festivals all point to an interesting symbolic change in the experience of sound in our music culture. As Katz (2004) describes, the portability of sound is an important trait of the age of recording. Over the years, the portability of sound increased through the successive development from LPs, cassette tapes and CDs, via the walkman, to the extensive use of MP3-players nowadays. We clearly live in a headphone culture. However, this individual listening is not self-evident.
Just as mixers had to learn 'aural thinking' when the technology of stereo and multitracking came into being, as we saw with the live mixer, consumers too had to learn how to listen individually to recorded sound in the early years of the gramophone. Before the technology of recording, enjoying music was always a communal activity (Sterne, 2003; Katz, 2004). Jonathan Sterne, in this respect, shows nicely how individualized skills of listening were acquired. In his comprehensive work The Audible Past (2003), Sterne traces the cultural origins of sound reproduction technologies, and how new ways of listening, "audile techniques", came into being. Sterne (2003) shows that the new skills of listening that the phonograph or the telegraph gave rise to, without the visual performance of a live band, could be found a century earlier, in the practices of doctors who used the stethoscope. Through mediate auscultation, the act of listening to a patient's body, which emerged in the early 19th century, doctors learned to restructure their auditory space (Sterne, 2003). The use of the stethoscope resulted in a way of listening by which doctors could distinguish which sounds were "interior", and had diagnostic meaning, and which ones were "exterior", and were to be ignored (Sterne, 2003). Sterne shows that this 'audile technique' constructed a 'private acoustic space': the doctor had to separate his hearing from the other senses, so as to intensify his listening, in order to distinguish between the "interior" and the "exterior". The private acoustic space, as Sterne argues, is a precondition for individualized listening, a way of listening that a century later was reinforced by the use of the gramophone, the radio, and the headset. The individual listening to portable sound reproduction technologies, such as the earlier walkman and the now common MP3-player, requires such a private acoustic space, as Sterne calls it.
Unlike the doctor in the 19th century, people nowadays are seemingly accustomed to having a private acoustic space. The sound of the modern mobilized walkman or MP3-user, as Michael 'Professor iPod' Bull (2004) describes, reorganizes the user's relation to space and place in another way: These technologies of accompanied solitude appear successfully to deliver a desirable and intoxicating mixture of noise, proximity, and privacy for users on the move. […] Mobile privatization is about the desire for proximity, for a mediated presence that shrinks space into something manageable and habitable. Sound, more than any other sense, appears to perform a largely utopian function in this desire for proximity and connectedness. (Bull, 2004, p. 177) For modern users, the MP3-player is a way of dealing with space and place while on the move. By listening to their favorite tunes, people want to 'make the public spaces mimic their desires' (Bull, 2004, p. 177). It is precisely this utopian function of sound that characterizes the listening experience at the pop-festival. The way of listening to MP3-players requires a private acoustic space, as Sterne describes. In other words: Sterne (2003) shows how hearing without the visual required a new, individualized way of listening. It can be argued that his argument can be reversed as well: the very incorporation of the visual at the festival also enacts a new way of listening, a way of listening that had to be learned through the successive development of other sound technologies in our contemporary music culture, in particular the digital sound of the MP3-player. Music nowadays is everywhere. Not only do people hear music in elevators, buildings, and from radios blasting in the open air, but people also surround themselves constantly with sound, either in their private space at home, or in public space.
In the streets, trains and buses (and even in shops), one can see many people wearing headphones or earplugs, listening to their favorite music on their portable MP3-players. At home, people have surround-sound cinema sets, with many speakers placed around the TV, which creates the feeling that one sits 'in' the sound. In other words, these digital sound technologies give people the opportunity to surround themselves constantly with direct sound, often of digital quality. Users are active consumers of technology, but they are culturally defined as well. The 'new' way of listening that the video-screens in combination with the line arrays at the festival create is consistent with the sonic expectations and desires of proximity and connectedness that digital sound technologies such as the MP3-player create. This corresponds with what Porcello (2005) calls "techoustemology": the implication of forms of technological mediation on individuals' knowledge and interpretations of, sensations in, and consequent actions upon their acoustic environments as grounded in the specific times and places of the production and reception of sound. (Porcello, 2005, p. 370). If someone interprets the sound of a real gun as more 'fake' than the guns in the movies, this "techoustemology" is revealed in the sonic conventions of film sound mixes (Porcello, 2005, p. 370). It refers to the sonic expectations we have, created by all kinds of sound conventions. The remark of my brother about the video-screen resembling YouTube can be seen as a form of techoustemology, as it refers to the digital quality used on the internet. The video-screens give a feeling of presence and nearness. This closeness, this nearness of presence, consists not only of a visual representation, but is also relational to space and place. One rock promoter describes that audiences want to 'share a space with the artist' (Goodwin, 1990, p. 269).
To speak of a community would be to exaggerate the desired sense of connectedness, but one can argue that people share the space not only with the artist, but also with the other festival-visitors (festivals differ from the more ephemeral live concerts in that visitors often stay for three days in a row). Liveness at the pop-festival does not so much appeal to the authenticity of recordings or video clips as to the more general desires of proximity and connectedness, functions of sound that can also be found in the way people listen to their MP3-players. The distinctive sound of the line array, the increased sound pressure level that results in sound that is ‘in your face’, clearly reinforces this. Both the video-screen and the sound technologies of the line array, the seeing and the hearing, constitute the way of listening and the liveness at the pop-festival in the digital age.

Chapter 6
Conclusion

“Wow! Have you been backstage at Pinkpop?!” “How did you get that all-area pass?!” “What does it look like backstage? Did you have a drink with the artists?!” These questions were often fired at me when I explained my fieldwork at Pinkpop to friends and others. Their astonishment nicely reveals the romantic idea people have of ‘the backstage area’, where artists supposedly have decadent orgies of drinking, partying, and wild sex with the fans. Unfortunately, I did not encounter any of these activities (thereby losing a fascinating dimension of participant observation). I did, however, observe far more interesting things in this backstage area. In this thesis I have shown what actually happens backstage. I have given a look behind the curtain, a glimpse of a hidden world. What did I find? The backstage world of the festival proved to be a world dominated by technology, where many people work hard in a race against time, in the shadow of the praised artists.
As one roadie strikingly put it: “artists are actually not part of what happens here backstage”. The work backstage is devoted to one goal: to serve thousands and thousands of people with their favorite music, and to keep the show going.

Doing ethnographic research at the festival was not only a unique and pleasant experience, it also proved to be a good way to witness “science in action”, to quote Latour. To ‘trace the engineer’ became a very real practice as I spent many hours hunting down mixers, who hid themselves in the backstage area or in their tour buses. Artists were often surprised to see that their mixers, rather than they themselves, were being interviewed (Barry Hay of Golden Earring, for example, was obviously jealous). Ethnography shows the richness and diversity of the settings in which sound technologies are used. The way people work with the equipment, and how they perceive these sound technologies and the sounds produced by them, can only be described by observing and questioning on the spot. Theories, however, can provide a framework by which emerging patterns in field descriptions can be explained and put in a broader context.

In this thesis I have focused on the development of two sound technologies: PA-systems and mixing consoles. By attending the MusikMesse in Frankfurt, it became clear that the sound reinforcement industry is dominated by one sound system: the line array. Interviews and documents showed how science plays a role in the propagation of this new sound technology, as brochures stressed the ‘scientific rigor’ of these new speakers. Many people at the Messe conceived of the rise of the line array as the mere application of new acoustical insights, or as the result of the demands of a growing touring industry. By describing the sound system as a technological system, I have shown how the line array gained momentum, and how it can be seen as a reinvention of the vertical column speaker, which had been in use since the 1920s.
In practice, the line array gained momentum by giving a sense of control and predictability to different social groups. The smaller boxes were easier for the rigging engineers to handle and brought logistical benefits for the touring industry. The controllability of the sound beam gave more control to the mixers, though the vulnerability of the sound field increased as well. For the festival organization, the controllability of the line array proved to be a good way of preventing noise pollution, which is important as regulations become more stringent.

In addition to the theory of Hughes, standardization theories can further explain how important standardization of the sound system is for the sound system industry and the festival, as the Synco network nicely illustrates. Not only have sound system retailers joined forces to increase the effectiveness and load factor of the equipment in stock, they have also standardized their equipment in such a way that quality can be assured in the negotiation phase of the festival, which is necessary to convince bands of the quality of the local equipment and engineers. This coordinative standardization also structures the actions of the actors on stage, as the example of the patchlist shows. Ten years after its introduction, the line array has become the norm for festivals and touring bands, as its standardized sound quality also contributed to its popularity in the organizational phase of the festival.

On stage, several new sound technologies have come to the fore in recent years, of which the digital console is the most important. The time pressure on stage at festivals has led to the use of all kinds of technologies to reduce uncertainty and improve reliability in this technological culture. By viewing the engineers and technologies on stage as a Latourian network, I could describe the differences between the analogue and the digital console that constitute the choice mixers make between the two.
To achieve greater continuity, time-consuming actions are delegated to the digital console, which gives the mixer the reassurance of getting the same sound at every show, as the settings can be saved. Although analogue consoles remain important at festivals, having been the norm for years, the use of the digital console is rising. The digital console has also led to the use of other technologies, such as the in-ear system, which assures the artists of hearing the same sound in their individual mixes. In addition, traveling crews increasingly bring their whole infrastructure with them on tour, so as to reduce their dependence on the local equipment and crew and to stay in charge of the change-over. Though the digital console has paved the way for this integrated system on stage, traveling crews remain dependent on the tacit knowledge and experience of engineers, as the example of miking showed.

Though the STS concepts and theories described above proved adequate to describe the process of gaining control of sound, equipment, and people, they lack a proper vocabulary for understanding the sound of the modern festival itself. Why do modern festivals look the way they do? And why do the technologies used sound the way they do? Both the line array and the digital console are frequently said to sound ‘clinical’, ‘cold’, or ‘sterile’, while proponents of the analogue alternatives describe the analogue as ‘warmer’. In addition, the loudness of the line array is often described as ‘nearness’. By using literature from the emerging field of sound studies, I was able to connect these attributes to our modern music culture.
In the last chapter, I have shown that the ‘nearness’ of the line array, and the increasing incorporation of video-screens at large-scale events such as Pinkpop, can be seen as an answer to the successive development of several sound technologies, especially headphones and MP3-players, which constitute the expectations people have of ‘liveness’. It shows that seeing and hearing sound at the modern festival are intertwined, as other histories of sound technologies also suggest.

At the beginning of this thesis, I posed the following research question: How has the changing technological culture of open-air sound systems affected the position of conventional sound reinforcement systems versus line array systems, analogue versus digital technologies, and the roles of different kinds of mixers and engineers at open-air music festivals? I distinguished between the material, the social, and the symbolic dimension of this technological culture. As I have shown in the previous chapters, the most important change in the technological culture at the festival is the need for control, resulting in an increasing standardization of equipment, people, and sound. I have shown in this thesis how the festival has become a highly professionalized event. As sound is often seen as the forgotten dimension of modernization, the festival can surely be seen as the apex of this modern sonification.

The three dimensions cannot be separated, but how they relate to each other can be described. The standardization of equipment in general routinizes the work of the engineers involved. Though new sound technologies, such as the digital console and the in-ear system, aim to reduce the actions and behavior of the engineers, thereby limiting errors made by those ‘irrational’ humans, at the same time they increase the need for social skills. As is the case with many complex technological systems, their increasing complexity results in increased vulnerability as well.
The rigging of the line array is a good example. System engineers increasingly have to use specialized computer-controlled equipment to calculate and tweak the optimum set-up. For their system to function, they rely heavily on the rigging engineers, who hang the line array in the stage roof. Though detailed AutoCAD drawings align their actions, both have to cooperate to make sure the angles are right.

That the material standardization of the technological system is powerful can be shown by looking at the meanings engineers attribute to their technologies. Many engineers and mixers claim that they prefer the sound of the conventional PA or of analogue consoles, although they actually work with line arrays and digital desks. Though this attribution of symbolic meaning can partly be seen as a form of what Pinch calls “technostalgia”, it also shows that meanings of sound are subordinated to the practical conveniences these new sound technologies entail; the fact that a digital console is smaller and works with preconfigured sets outweighs the perceived ‘sterile’ sound for many mixers. In a broader sense, however, as the last chapter has shown, the digital sound can also be seen as an answer to a symbolic change in our expectations of sound: as people surround themselves constantly with direct, digital-quality sound, it becomes the ‘referring text’ at the festival as well. As the case of the digital mixing console shows, bands can have more or less the same sound at every show, as its presets are saved. From this perspective, the standardization of sound technologies has also led to a standardized sound.

Since the first festivals were held in the 1960s, the festival and sound industries have grown and professionalized immensely. The organizations involved have professionalized themselves in global networks, the equipment has been standardized, and the profession of the sound engineer has become specialized, especially in the last ten years.
Sound systems at pop-festivals have developed from self-built boom boxes into integrated systems, linked together via digital technologies. This thesis has tried to show how the rise of sound technologies at the festival can be understood, and how these technologies are linked to other contexts such as the touring industry, the sound industry, and our conceptions and expectations of sound in general. In the STS and sound studies literatures, descriptions of the work of engineers in a live setting, and of the sound technologies they use, are generally absent. The world behind the backstage fences is certainly an interesting one, and with this thesis I hope to have contributed to a better understanding of sound technologies in such vibrant live settings as the pop-festival.

References

Adorno, Theodor (1990) [1941]. On Popular Music. In: Simon Frith & Andrew Goodwin (Eds.). On Record: Rock, Pop, and The Written Word (pp. 301-314). New York: Pantheon Books.

Auslander, P. (1998). Liveness: Performance in a Mediatized Culture. London: Routledge.

Ballou, Glen M. (Ed.) (2005). Handbook for Sound Engineers. Third Edition. Oxford: Elsevier.

Berg, Peter van de (2008). Geen popfestival zonder Mojo [No pop festival without Mojo]. Interview with Rob Trommelen, Mojo Concerts. In: BN de Stem, May 18, 2008.

Bijker, Wiebe, Thomas Hughes & Trevor Pinch (Eds.) (1987). The Social Construction of Technological Systems: New Directions in the Sociology of Technology. Cambridge, MA: MIT Press.

Bijker, Wiebe (1995). Democratisering van de Technologische Cultuur [Democratization of Technological Culture] (inaugural lecture). Maastricht: Rijksuniversiteit Limburg.

Bowden, G. (1995). Coming of age in STS: Some methodological musings. In: Markle, Petersen, Jasanoff & Pinch (Eds.). The handbook of science and technology studies (pp. 64-79). Beverly Hills, CA: Sage Publications.

Bull, M. (2004). Thinking about Sound, Proximity, and Distance in Western Experience: The Case of Odysseus’s Walkman. In: Veit Erlmann (Ed.).
Hearing Cultures: Essays on Sound, Listening and Modernity (pp. 173-191). Oxford: Berg.

Chanan, Michael (1995). Repeated Takes: A Short History of Recording and its Effects on Music. London/New York: Verso.

Cooke, Raymond E. (1978). Loudspeakers: An anthology of articles on loudspeakers from the pages of the Journal of the Audio Engineering Society Vol. 1 – Vol. 25 (1953-1977). Kent: AES.

Cooke, Raymond E. (1984). Loudspeakers Volume 2: An anthology of articles on loudspeakers from the pages of the Journal of the Audio Engineering Society Vol. 26 – Vol. 31 (1978-1983). Kent: AES.

Davis, R. & Jones, R. (1989). Sound Reinforcement Handbook. Milwaukee: Hal Leonard Publishing Corporation.

Eargle, John (2004). Historical Perspectives and Technology Overview of Loudspeakers for Sound Reinforcement. Journal of the Audio Engineering Society 52, 4, 412-432.

Eargle, John, David Scheiman & Mark Ureda (2003). JBL’s Vertical Technology: Achieving Optimum Line Array Performance Through Predictive Analysis, Unique Acoustic Elements and a Dedicated Loudspeaker System. White Paper, AES Convention, October 2003.

Evers, Paul (1995). Oor’s Speciaal Jubileumboek 25 Jaar Pinkpop [Oor’s Special Jubilee Book: 25 Years of Pinkpop]. Amsterdam: Uitgeversmaatschappij Bonaventura.

Erlmann, Veit (Ed.) (2004). Hearing Cultures: Essays on Sound, Listening and Modernity. Oxford/New York: Berg.

Feld, Steven (1994). From Schizophonia to Schismogenesis. In: Charles Keil & Steven Feld (Eds.). Music Grooves (pp. 257-289). Chicago: The University of Chicago Press.

Frith, Simon (1998). Performing Rites: On the Value of Popular Music. Cambridge, MA: Harvard University Press.

Goodwin, Andrew (1990). Sample and Hold: Pop Music in the Digital Age of Reproduction. In: Simon Frith & Andrew Goodwin (Eds.). On Record: Rock, Pop, and The Written Word (pp. 258-274). New York: Pantheon Books.

Heil, Christian (2001). Principles of Verticality. In: Live Sound Magazine, January 2001.

Heil, Christian, Marcel Urban & Paul Bauman (2002). Wavefront Sculpture Theory.
Audio Engineering Society (AES) Convention Paper, 111th Convention, September 21-24 (reprint).

Horning, Susan Schmidt (2004). Engineering the Performance: Recording Engineers, Tacit Knowledge and the Art of Controlling Sound. Social Studies of Science 34, 5, 703-731.

Hughes, T.P. (1987). The Evolution of Large Technological Systems. In: Wiebe Bijker, Thomas Hughes & Trevor Pinch (Eds.). The Social Construction of Technological Systems: New Directions in the Sociology of Technology (pp. 51-82). Cambridge, MA: MIT Press.

Katz, Mark (2004). Capturing Sound: How Technology has Changed Music. Berkeley and Los Angeles: University of California Press.

Kealy, Edward R. (1979). From Craft to Art: The Case of Sound Mixers and Popular Music. Work and Occupations, 6, 1, 3-29.

Latour, Bruno (1992). Where are the missing masses? The sociology of a few mundane artifacts. In: Wiebe Bijker & John Law (Eds.). Shaping Technology / Building Society: Studies in Sociotechnical Change (pp. 225-259). Cambridge, MA: MIT Press.

Latour, Bruno (1987). Science in Action: How to Follow Scientists and Engineers through Society. Cambridge: Harvard University Press.

Mellor, David (2006). Line Arrays Explained: The Science and the Magic. Sound on Sound, March 2006.

Porcello, Thomas (2005). Afterword. In: Paul D. Greene & Thomas Porcello (Eds.). Wired for Sound: Engineering and Technologies in Sonic Cultures (pp. 369-381). Middletown: Wesleyan University Press.

Pinch, Trevor, & Karin Bijsterveld (2004). Sound Studies: New Technologies and Music. Social Studies of Science 34, 5, 635-648.

Pinch, Trevor, & Wiebe Bijker (1987). The Social Construction of Facts and Artefacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other. In: Wiebe Bijker, Thomas Hughes & Trevor Pinch (Eds.). The Social Construction of Technological Systems: New Directions in the Sociology of Technology (pp. 18-50). Cambridge, MA: MIT Press.

Schafer, Murray R. (1994).
The Soundscape: Our Sonic Environment and the Tuning of the World. Rochester: Destiny Books.

Schmidt, Susanne K. & Raymund Werle (1998). Coordinating Technology: Studies in the International Standardization of Telecommunications. Cambridge, MA: MIT Press.

Seale, Clive (Ed.) (2004). Researching Society and Culture. London: SAGE.

Sterne, Jonathan (2003). The Audible Past: Cultural Origins of Sound Reproduction. Durham: Duke University Press.

Stolzoff, Norman C. (2000). Wake the Town and Tell the People: Dancehall Culture in Jamaica. London: Duke University Press.

Thompson, Emily (2002). The Soundscape of Modernity: Architectural Acoustics 1900-1933. Cambridge, MA: MIT Press.

Webb, Bill & Jason Baird (2003). Advances in Line Array Technology for Live Sound. High Wycombe: Martin Audio Limited.

Werle, R. (2003). Institutional Aspects of Standardization: Jurisdictional Conflicts and the Choice of Standardization Organizations. Köln: Max-Planck-Institut für Gesellschaftsforschung.

Willekens, Bob (2007). Line Array’s: Bob Willekens. Tilburg: RockAcademie.

Websites
www.musikmesse.de
www.rane.com
www.apr.nl
www.synco-network.com
www.dozin.com/wallofsound
www.martin-audio.com
www.lacoustics.com
www.purplegroup.nl

List of figures and illustrations

Cover: The V-DOSC line array at a concert of Ben Harper. Photo by Evil Vince (used with permission).

Fig. 1. Radiation of conventional horns (left) and a line array (right). Source: Willekens, Bob (2007). Line Array’s: Bob Willekens. Tilburg: RockAcademie.

Fig. 2. The ‘flying’ of a line array. Source: W8L Manual, Synco Network.

Fig. 3 & 4. Left: a Synco line array at Pinkpop. Right: a Synco conventional stack at Neterpop. Source: photographs taken during fieldwork.

Fig. 5. A screenshot of the software program ‘Viewpoint’ by Martin Audio, with which engineers can adjust different parameters. Source: Willekens, Bob (2007). Line Array’s: Bob Willekens. Tilburg: RockAcademie.

Fig. 6.
The FOH position at Pinkpop, with the (analogue) mixing console and racks of effects. Source: photographs taken during fieldwork.

Fig. 7. Monitor-mixer Phil Wilkey (Incubus) in action at Pinkpop. Source: photographs taken during fieldwork.

Fig. 8. The DiGiCo D5 digital console. Source: retrieved from www.digico.org, July 30, 2008.

Appendix I
List of interviewees

Gerrit Kuster (director) & Benno Rottink - The Production Factory
Harry Zinken - Director, Purple Group
Fred Heuves - CMO, Ampco / Synco
Peter Schmitz - Account manager / monitor-mixer, Ampco
Hayo den Boeft - Production manager / system engineer, Ampco
Bob Willekens - System engineer, Purple Group
Hugo Scholten - System engineer / FOH-mixer, Ampco
Koen Benschop - Monitor-engineer / monitor-mixer, Ampco
JW Stekelenburg - FOH-mixer, Ilse de Lange
CJ Otten - FOH-mixer, VanVelzen
Arjan van Egmond - FOH-mixer, Golden Earring
Phil ‘Sidephil’ Wilkey - Monitor-mixer, Incubus
Erik van der Veek - Engineer, dB-Control
Natasja Geerdink - Engineer / plugger, Ampco
Matt Dufty - FOH-mixer, Pete Murray
Bruce Jones - FOH-mixer, Counting Crows
Big Mick - FOH-mixer, Metallica (email)

Appendix II
The line array at Pinkpop

Appendix III
Example of a patchlist