When R. Murray Schafer coined and defined the term ‘soundscape’ in the late ’60s, ambient sounds were recognized as one of its three compositional elements. These ‘keynote sounds’ are so ubiquitous that they often don’t consciously register, and because of this ubiquity they are an honest representation of the character of an environment and the people who inhabit it. In most parts of modern cities, for example, traffic is the dominant keynote sound. A small beach may be defined by crashing waves and the cries of seagulls even though these sounds come and go; to be identified as a keynote, it’s more important that a sound is consistent in a given place than constant. Despite the prominence of ambient sounds in an environment and the fundamental characterization they provide, they aren’t as privileged as the other two pillars of Schafer’s soundscape: sound signals and soundmarks. Existing in the foreground, sound signals are consciously listened to (e.g. an ambulance siren, a doorbell), and it is the soundmark (derived from ‘landmark’; a unique, punctuating sound such as ringing church bells) that “deserves to be protected.”1
This soundscape model turns sound into narrative. Field recordings capture a period in time and space that can never be reproduced, yet is easily reduced to keynote, signal, and soundmark. An understanding of the underlying system that produced the soundscape must be worked out backwards, from a multitude of recordings that will only ever hint at the infinite soundscapes that might emerge from a given acoustic environment. In Unit Operations: An Approach to Videogame Criticism, Ian Bogost writes “Any medium can be read as a configurative system, an arrangement of discrete, interlocking units of expressive meaning.”2 The acoustic environment is such a system, and the soundscape its expression.
Videogames with 3D virtual worlds are especially suited for furthering our understanding of soundscapes as a collection of unit operations—Bogost’s term for the aforementioned “interlocking units of expressive meaning”—that emerge from the underlying system. Looking to vastly simplified simulations of the complex acoustic environments that exist in the real world might seem like a step backwards from the richness of field recordings, but it’s a necessary step if we’re to begin thinking of sound as a procedural system and not narrative. I do not, however, mean to dismiss the value of narrative or soundwalk studies, or to place the sometimes sterile virtual acoustic environment on a pedestal above the messy beauty of the real world. “Unit operations,” Bogost writes, “can help us expose and interrogate the ways we engage the world in general, not just the ways that computational systems structure or limit that experience.” Videogames can be used to develop approaches to studying and understanding the world in all its overwhelming complexity.
The following section will examine Half-Life 2’s virtual acoustic environment, and specifically its ambient sounds in all their varieties. Half-Life 2 is a first-person shooter in which players navigate through various near-future urban, coastal, and alien environments. I have recorded a short soundwalk through a portion of City 17, one of the first areas the player is able to explore. It is no substitute for actually playing the game, but can give a sense of the fidelity of the simulation and types of sounds used.
Half-Life 2’s Source engine uses ‘Soundscape’ as a technical term for a script that takes several .wav files and mixes them together on top of a looping base track, to be played when the player is inside a certain space. Sound effects are divided into logical groups (distant trucks, alarms, aircraft, etc.), and every few seconds (within varying random ranges) a sound from each group is played at a random location with a random volume. The resulting ‘soundscape’ is a varying drone of city activity on the edge of hearing that seems to originate from beyond the level’s façade, outside of the geometry. All the other sounds that make up the game’s actual soundscape at runtime are layered on top of these.
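The scheduling logic described above can be sketched in a few lines of Python. This is a simplified illustration, not Valve’s actual implementation: the class and field names here are invented, and real Soundscape scripts carry further details (DSP settings, positioning rules, transitions between spaces) that are omitted.

```python
import random
from dataclasses import dataclass

@dataclass
class SoundGroup:
    """A logical group of one-shot sounds (e.g. distant trucks, alarms)."""
    waves: list           # candidate .wav file names
    interval: tuple       # (min, max) seconds between plays
    volume: tuple         # (min, max) playback volume
    _next: float = 0.0    # absolute time of the next scheduled play

class Soundscape:
    """A looping base track plus randomized one-shot sound groups."""
    def __init__(self, base_wave, groups, seed=None):
        self.base_wave = base_wave      # understood to loop continuously
        self.groups = groups
        self.rng = random.Random(seed)
        self.time = 0.0
        for g in self.groups:           # stagger each group's first play
            g._next = self.rng.uniform(*g.interval)

    def step(self, dt):
        """Advance the clock by dt seconds; return any (wave, volume,
        position) events that came due, then reschedule their groups."""
        self.time += dt
        events = []
        for g in self.groups:
            while g._next <= self.time:
                wave = self.rng.choice(g.waves)
                vol = round(self.rng.uniform(*g.volume), 2)
                pos = tuple(round(self.rng.uniform(-1.0, 1.0), 2)
                            for _ in range(3))  # random point around listener
                events.append((wave, vol, pos))
                g._next += self.rng.uniform(*g.interval)
        return events
```

Stepping a Soundscape with a single ‘distant trucks’ group through a minute of simulated time yields a handful of truck sounds at irregular times, volumes, and positions, which is the ‘varying drone on the edge of hearing’ the engine produces at a larger scale.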
This background noise is an illusion designed to make players believe they are in a place with depth, and not an enclosed, artificial environment. It’s background noise not just in an acoustic sense, but in a spatial one as well. These sounds suggest an environment full of activity larger than the one actually constructed for the game. Geographer Yi-Fu Tuan names this type of space outside of direct observation (ambient space, if you will) the unperceived field: “This unperceived field is every man’s irreducible mythical space, the fuzzy ambience of the known which gives man confidence in the known.”3 I take this to mean that our senses of sound, direction, and place, and everything other than direct sight and experience, create an ambience that reinforces and gives reality to space we can’t see but that does in fact exist. Games like Half-Life 2 take advantage of our innate ability to sense space in our unperceived field by using ambient sound to provide the illusion of massive environments without those environments actually needing to exist. The expressive meaning of certain ambient sounds in Half-Life 2—to give reality to unperceived space—can also be applied to the real world. The sound of traffic in an actual city isn’t just atmosphere, but subconsciously processed evidence of radiating streets forming blocks and neighbourhoods, giving us confidence in our unperceived reality.
Filmmakers are well acquainted with the ability of off-screen sounds to suggest the existence of a world outside the frame. However, unlike players of Half-Life 2, film audiences have no control over the frame and what falls outside it. The control a filmmaker has over what is seen and what is heard at any given moment is not available to the game designer attempting to simulate a somewhat realistic acoustic environment. As Ben Abraham suggests in a blog post on a particular soundscape in Half-Life 2, the aesthetic equivalent to the off-screen sound of film can be found in virtual sounds that lack a physical source. One area of Half-Life 2 contains the sound of wind chimes, but without an actual object serving as a source.4 The sound source exists in space—it can be found in the editor, and approached in the game—but the sound is emitted from thin air. Even in a virtual environment with complete freedom to explore, sounds can be made perpetually off-screen by eliminating a visible source. “Factual errors abound in the unperceived field,” writes Tuan, as the sheer size and complexity of the world requires our mental construction of surrounding space to be heavily informed by sounds that will always be “off-screen.”
When Yi-Fu Tuan says “sound dramatizes space” he seems to mean all sound, not just those that speak to the conscious mind. That sounds in any of Schafer’s soundscape categories can contribute to this dramatization is one of the reasons I prefer framing sound as individual units of expressive meaning, and not as belonging to tiered groups based on a measurable but ultimately subjective concept of frequency and pervasiveness. In Half-Life 2’s City 17, the sound of flying surveillance cameras serves a very different purpose from the distant drone of traffic, despite both essentially being keynote sounds in the recorded soundscape.
City 17 is a police state, where the player and other citizens are under constant surveillance by the seemingly omnipresent City Scanners—flying cameras that emit a steady stream of humming, warbling, and clicking as they float around the streets following people and snapping photos. Their freedom of movement allows the sounds of being watched to pervade the environment. There are no distinct zones of surveillance and privacy, as one might find in a place with wall-mounted security cameras—City Scanners can’t be everywhere at once, but they can go anywhere. Even when a scanner cannot be seen, the sound of one above or behind the player is usually there, a subconscious reminder of total oppression. If ambience is defined as the character and atmosphere of a place, then it seems to follow that ambient sound can be used to measure the extent of a police state’s vision and control over a populace. I encountered four or five City Scanners in that short soundwalk through City 17, yet I’m certain I could find twice as many silent CCTV cameras in a similarly short walk down any downtown street, a walk that normally provokes none of the unease of surveillance. The ambient sound of the City Scanners isn’t the only factor contributing to the dramatization of City 17 as a police state, but it’s certainly a powerful one. Noticeably absent from City 17 are any propaganda posters or indoctrination along the lines of “Big Brother is watching you.” When surveillance is an ambient reality of daily life, such reminders are redundant.
My understanding and application of Ian Bogost’s critical framework outlined in Unit Operations has been superficial at best, but I believe there is value to be found in approaching an acoustic environment as a procedural system that creates a possibility space of potential soundscapes. Videogames, by simulating complex systems as a matter of course, are ideally suited to teaching the kind of procedural literacy needed to view the world through the lens of systems thinking. The ways that ambient sounds affect our experience of the world are too many and varied to be neatly categorized as keynote sounds. Ambient sounds only become such as a result of our interaction with an acoustic environment, creating an experience that feeds back into the system. The soundwalk I recorded in Half-Life 2 is not nearly as elaborate as the soundwalks recorded in the real world, but what the game has provided is insight into the invisible, ambient processes that all soundscapes emerge from.
(1) R. Murray Schafer, “The Tuning of the World,” 1977.
(2) Ian Bogost, “Unit Operations: An Approach to Videogame Criticism,” 2006.
(3) Yi-Fu Tuan, “Space and Place: The Perspective of Experience,” 1977.