Sound / Image Talks 2015

Talks 1 – Saturday Morning (11-13)

 

The image as trigger of imagined sounds

Victoria Karlsson – University of the Arts

Abstract:

As well as a visual imagination, do we also have an auditory one? This paper explores the idea of inner, imagined sounds, and how images and photographs can be used to access and trigger these sound worlds. Discussing research and practical experiments undertaken to understand the idea of an inner sound world, the paper asks if we can hear images, as well as see them.

Biography:

Victoria Karlsson is a sound artist interested in the emotional and subjective aspects of sound and art. Investigating sound as both an inner and outer experience, she explores how we think about, remember, and dream about sounds, and how this influences our experiences of sound in our everyday lives. She is currently undertaking a PhD Research Degree at University of the Arts London. Her research investigates sounds in thoughts, asking if we hear sounds in our minds, what they mean to us and where they come from.

 

The Orphic Turn

Daniel H. Foster – University of East Anglia

Abstract:

Twenty years ago, W. J. T. Mitchell recognized a “pictorial turn” in academia. Ten years ago, Jim Drobnick heard academics taking a “sonic turn.” I would like to offer a new twist to these turns, one that gives a broader perspective of the world than those offered by academics about academia. Call it an “Orphic turn.” I begin with a myth:

After his wife Eurydice’s death, Orpheus descends to the underworld and uses music to persuade Hades to return her to life. Hades relents but on one condition. While making the return journey, Orpheus must not look back at Eurydice. And yet, tragically, inevitably, mere steps away from the mouth of hell, Orpheus makes his fatal error. On the verge of success, he turns around to see if his wife is still following him. By turning to see what he has previously only been able to hear—the voice and footsteps of Eurydice as she follows him out of the underworld—Orpheus betrays his most important sense, his sense of hearing. Orpheus, the world’s greatest minstrel, cannot in the end rely upon his sense of hearing to guide his wife safely back to life. Having just conquered hell, he cannot conquer himself. And if Orpheus cannot rely only on his sense of hearing, what hope have the rest of us? This is what this myth seems to be telling us. Even the most sonically gifted among us find acousmatic listening an almost impossible task. Why?

Through the figure of Orpheus this paper explores how and why we sometimes feel the need to see in order to hear.

Biography:

My PhD is in Comparative Literature from the University of Chicago. As an associate professor in theater at Duke University, I widened the department’s ideas about theater to include audio performance, new media, and performance studies. Now, as a senior lecturer in theater in the UK, I’ve been developing the Sound Studies Project at the University of East Anglia’s School of Literature, Drama, and Creative Writing. I’ve held various fellowships at Cambridge, the University of Pennsylvania, and the American Academy of Arts and Sciences. I’ve recently finished The Minstrel’s Progress: British Bards and American Blackface, 1750 to 1850. This book asks: what is a minstrel? And, more particularly, what did the bardic figures created by Cambridge scholars like Thomas Gray have to do with poor, white immigrants jumping “Jim Crow” in American minstrel shows? My previous book, Wagner’s Ring Cycle and the Greeks (Cambridge 2010), addressed the relationship between cultural politics and performance while explaining the larger aesthetic stakes in recycling old stories and ideas for new audiences. In my other writing and research, I raise questions about the intersection of politics, technology, and aesthetics in Romantic poetry, German art song, blackface, radio theatre, and tragedy, comedy, and epic in nineteenth-century opera.

 

Snap-stick (Slapstick), Crack and Rustle: locating the sonic-signifier

Kevin Logan – CRiSAP, UAL

Abstract:

This is a proposal for a performative presentation, originally conceived and presented at the In This Neck of the Woods symposium that took place at CSM, King’s Cross, on 4 June 2015.
Examining particular sonic-signifiers employed in narrative, it looks toward the practice of field recording and phonography for its methodology.
It uses a trope that is common within cinema, in particular the genres of horror and thriller, as a device to examine the sonic event. This ‘device’ is the dual sonic-signifiers of ‘a twig crack’ and the ‘rustle of dried leaves’ underfoot. These simple noises are complex indicators of location, mood, and plot in filmic language. I will briefly look at how the sonic can re-site the imagination, playing ideas of urban and rural sonorities off against each other.
This playful performance-lecture considers the language of cinematic sound and its relationship to site. It examines the ‘obstinate-object’, a specific term within my recent research, whilst addressing the traditional rift between ‘performing’ and ‘knowing’. As the straightforward lecture format is antithetical to my research premise, my presentation will take this pedagogical model as an incitement. Touching upon theoretical areas that are currently under close scrutiny by such disciplines as Sound Studies, Performance Studies, and Philosophy, I intend to add to them through forms of embodied knowledge sharing.

Biography:

Kevin’s cross-disciplinary practice spans over two decades, comprising performance, installation, digital media and sound composition/design. He has exhibited and performed internationally, has had sound works released on compilation CDs and audio-visual works screened at festivals worldwide, and has had theoretical and experimental texts published in print and online.

He is currently a PhD candidate at the CRiSAP (Creative Research into Sound Arts Practice) research centre at LCC, University of the Arts London, where his research explores the sonic through gesture, mediation and performance.
He is also a founder member of the collective thickear, formed in London in 2011.

 


Talks 2 – Saturday Afternoon (14-15:30)

KEYNOTE:

“L’audio-logo-visuel; la re-division sensorielle” (The audio-logo-visual: the sensory re-division)

Michel Chion – l’Université Paris III 

Abstract:

At the beginning of the Eighties, things seemed simple: there appeared to be space for a dual division of cinema into sound and image. Of course, I had already sought to break up their false symmetry, by showing (in La voix au cinéma, 1982) that ‘there is no sound track’ and that it’s more accurate to define cinema as a space for images, on the one hand (a space defined by a frame), and for sounds, on the other, sounds that don’t have a frame of their own. But I still considered the sound/image division as pertinent.
As early as 1990, however, I believed that a third element came into play: quite simply the ‘text’, present in 99% of all films in a dual oral/written form, and not addressed by most cinematographic analyses.
In L’Audio-vision (1990), my analysis led me to the concept of an audio-logo-vision, because language cannot be associated solely with sound, when it involves voices, or solely with the image, when it involves written characters.
So what about a division into three: text, image and sound? No, because it would be arbitrary to study a text pronounced or read independently of the rest. We can also divide the cinema into the ‘shown’ and the ‘said’. The ‘shown’ refers to what is shown in concrete form as sounds and images. The ‘said’ is what is formulated and verbalized by characters, in voiceovers, in songs, etc.
Can we therefore reduce cinema to two components, the ‘said’ and the ‘shown’? It’s not that simple.
It’s worth noting that sound and image share some aspects, including that of rhythm. There is not a special ‘sound rhythm’ or a special ‘visual rhythm’. Even if the ear can pick up rhythms faster than the eye, the aspects of slowness/speed, regularity/irregularity, steadiness/acceleration/deceleration that may characterize a given rhythm are the same for the eye and the ear. Rhythm is a ‘trans-sensorial’ notion, i.e., one for which the conventional division into the five senses does not apply.
Is the text, then, a non-divisible element? Of course not, because a text read and a text heard (sound film uses both) are fundamentally different, and this difference is not comparable to the difference between sight and sound.
Lastly, there is what I call the audio-division, by which I mean the division created 1) inside what we see on the screen, by what we hear from the loudspeakers, and 2) inside what we hear, by what we see on the screen. Depending on whether I can see the source of the sounds I hear, the sounds are divided into ‘onscreen sound’ or ‘acousmatic’, and take on a different meaning. Depending on whether I can hear the sounds corresponding to the actions, objects and characters that I see on the screen, these sounds seem different to me. The sounds re-divide the image in the film; the image re-divides the sounds in the film.

Biography:

Michel Chion, born in 1947 in Creil (France), lives in Paris. He is a composer of musique concrète, a writer and a director of films and videos. He has worked as a sound designer collaborating with many different film and video artists, and also composes his own audio-visual films.

As a writer and researcher he has authored over thirty books on sound, music and film, including the seminal text AUDIO-VISION, translated into a dozen languages, including English.

He teaches at the University of Paris III and is frequently invited to present talks and seminars in many countries; he is a member of the advisory board of the journal THE NEW SOUNDTRACK (Edinburgh University Press).

He was a member of the Groupe de Recherches Musicales from 1971 to 1976, and an editor at the monthly Cahiers du Cinéma from 1981 to 1986.

Chion posts many texts, free to download, on his website Michelchion.com, alongside his blog. The site also contains details of his discography, his bibliography, his music catalog, a bilingual (French and English) glossary of his concepts, and a more detailed biography.

He has received many prizes and awards, including: the 1978 Grand Prix du Disque for his musique concrète work Requiem (1973); the 1984 Jean Vigo Award and the Grand Prix at Clermont-Ferrand and Montreal for his short film Eponine (1984); the Best Cinema Book Award for his essay La Musique au cinéma (1995); and a 2014 ‘coup de cœur’ from the Académie Charles Cros for his video liturgy La Messe de Terre (1996-2013).

 

Mapping the materiality of off-screen sound

Lucy Fife Donaldson – University of St Andrews

Abstract:

There are many ways that sound and image inform one another in film and television, working in concert and in contrast, and through varying combinations. More particularly, sound plays a vital role in the phenomenal transformation we experience in depictions of a world on-screen, translating (or rendering, to use Michel Chion’s term), extending and filling out its dimensionality. Material environment and dramatic feel can be located in even the smallest details provided by sound. Using detailed examples from Barton Fink (Joel and Ethan Coen, 1991) and Mad Men (AMC, 2007-2015), I will explore the contribution of off-screen sound to our comprehension and material apprehension of diegetic space. Off-screen sound performs an important function, describing the qualities of a fictional world in order to sustain credibility and enhance the physical scope of an unseen environment. Moreover, off-screen sound has an important expressive and affective contribution to make, ‘fleshing out’ the world beyond the limits of what we do see. In such instances, sound enlarges the scope of the image and affectively illustrates what isn’t visible, extending the materiality of the fictional world beyond the frame. The qualities of the sound itself bring us closer to the experiences and interactions of people and things in the diegesis, defining their material qualities through sonic elements such as force, rhythm and reverberation, and thereby encouraging a sensuous engagement with them.

Biography:

Lucy Fife Donaldson is Lecturer in Film Studies at the University of St. Andrews. She is the author of Texture in Film (Palgrave Macmillan, 2014) and her research focuses on the materiality of film style and the body in popular film and television. She is a member of the Editorial Board of Movie: A Journal of Film Criticism.

 

Lis Rhodes: Light Music

Dr. Aimee Mollaghan – Edge Hill University

Abstract:

In the 1970s British experimental filmmaker Lis Rhodes produced a body of work exploring the corporeal correspondence between sound and image. Describing her abstract direct animation Dresden Dynamo as a documentary, Rhodes attempts to explore the connection between what we see and what we hear through a transposition of the optical soundtrack into the visual images presented on screen. Further to this, Rhodes explores the audiovisual relationship within an expanded context in Light Music (1975). In Light Music, two projectors located within a smoky room face two opposing screens. Rhodes presents the abstract graphic forms of the optical soundtrack on screen so that the viewer is seeing what they are hearing. The intermediary space between screens turns the beams of light into immersive animated sculptures. The audience plays an active performative role in the creation of this work, affecting what is presented on the screens and introducing chance operations into the performance. Rhodes’ optical sound experiments interrogate not only the relationship between sound and image, but also the essence and materiality of film itself. Bearing this in mind, this paper intends to explore how Rhodes both confronts and subverts conventional notions of sound and synchronisation within her work.

Biography:

Aimee Mollaghan is a lecturer in Media, Film and Television at Edge Hill University. She recently completed a book on visual music for Palgrave, and her current research is centred on psychogeography and soundscape in cinema.


 

Talks 3 – Sunday Morning

 

Ventriloquial Acts: Critical Reflections on the Art of Foley

Matt Lewis – Call & Response

Abstract:

Foley, the art of producing additional sound effects in synchrony with actions on screen, forms only a part of the use of sound effects in a film. It is akin to other ‘dark arts’ such as ventriloquism in that it relies on the same manipulations of context and space. Because Foley perceptually links us to human gesture, it offers a particularly useful model with which to understand not only cinematic reception but also our relationship with everyday sounds and music making.

Although within the artificial environment of the cinema our perception is often played with and distorted, the controlled context of cinema is a useful laboratory in which to examine our responses to sound. This is a context from which we can develop a deeper understanding of our reception of sound outside of the cinematic realm.

This paper uses our responses to the production of sound effects, and in particular Foley, to critically examine different, and sometimes converging, strands of thinking around perception, listening practice, audiovisual theory and music.

Biography:

Dr Matt Lewis is a musician and sound-artist based in London. Key areas of interest include the politics of sound, Foley, urbanism, notation and alternative methods of media distribution. His work is most often focused on particular physical sites, or around particular social issues, such as regeneration and street vending. Matt is a co-founder of the group Call & Response who specialise in the production and curation of multi-channel sound exhibitions and performances. He has performed and exhibited nationally and internationally in countries including Austria, Brazil, Portugal, Serbia and the USA, in festivals and venues such as The Whitechapel Gallery, Café Oto, The Roundhouse, Diapason NYC and Centro Cultural São Paulo. Matt is an Associate Lecturer at London College of Communication.

 

Acousmatic Foley

Sara Pinheiro

Abstract:

Acousmatic Foley researches the connection between two sonic worlds mostly treated as parallel. The combination of “acousmatic” and “foley” appears to be an oxymoron. On the one hand, the main principle of acousmatic music is to disengage sound from its visual source. On the other hand, foley is the covering of an action with sounds that are visually justifiable, although they may not naturally belong there. Nevertheless, what links an acousmatic composer to a foley artist is that the latter makes use of objects for a sonic construction that matches what we expect to hear from the image, rather than the concrete visual matter. The argument is that every foley artist is an acousmatic composer. Likewise, the acousmatic composer is focused on the use of sound effects to present their concepts. In short, both fields deal with the same conception of sound objects, in the tradition of musique concrète (Schaeffer, 1966) and soundscape theory (Schafer, 1977). The paper investigates the circularity of these subjects. It addresses sound as a dramaturgic practice, involving the notion of stage (with a mise-en-scène) and its characters. It requires a model of composition based on presentational strategies. Thus, it takes into consideration three elements of sound performativity: the loudspeakers as stage advocates, the sounds as actors and, consequently, the listener as the final extremity of these articulations.

Biography:

Sara Pinheiro (1985) is a sound-maker. For film and video art, she does sound recording, editing, foley and mixing. In her own work, she makes acousmatic pieces, usually for multichannel performances, radio broadcasts or installations. She graduated in Cinema (Lisbon, 2008) and holds a Master’s in Sonology (The Hague, 2012). She is a guest lecturer both at the Institute of Sonology and at CAS, FAMU (Czech Republic). Currently, her project “Acousmatic Foley” is in progress with the support of the Calouste Gulbenkian Foundation (Lisbon). She is also a member of the live-coding group KOLEKTIV and collaborates with the Barrandov Film Studios, both in Prague.

 

Determining the appropriateness of sound/image relationships in parallel sets of music videos for Bon Iver’s deluxe edition

Alex Jeffery – City University, London

Abstract:

Music video scholarship to date has largely focussed on high-profile single videos, promoted via traditional channels. While Korsgaard (2013) provides an invaluable overview of diversification in form and content in the YouTube era, further exploration of these avenues is required, particularly of the video album. This practice has a trajectory going back to at least 1972, and is undergoing a significant revival in the digital age, often with an increased focus on experimentalism and a diminished focus on performance. This is exemplified by Bon Iver’s Bon Iver (deluxe edition), produced in 2011, whose videos (termed ‘visual accompaniments’) eschew not only performance but, almost entirely, the presence of humans. This disrupts a sense of narrative progression, encouraging focus on the interplay of musical and visual details.
In addition to the 10 videos that form the deluxe edition, more traditional music videos were created for the album’s 4 singles. Comparison of these offers opportunities for ontological enquiry into the nature of music video and the appropriateness of sound/image relationships. The existence of YouTube comment trails also evidences multiple readings of these relationships, with fans themselves articulating many issues surrounding multimedia. The analysis I present in this paper selects three main points of comparison to separate the two video styles: cutting speed, lyric/image synchronisation, and music/image synchronisation at both macro and micro levels.

Biography:

Alex Jeffery is a British music writer, lecturer and musician who has spent four years researching Gorillaz’ Plastic Beach as a major case study. His particular research interest is narrative and the audiovisual in popular music, and he theorizes about the role of popular music in transmedia.
He also writes reviews and is an Associate Editor for the long-running music site Music OMH.

 

The Sound of Visual Art

Sandra Kazlauskaitė – Goldsmiths, University of London

Abstract:

Art has always been sounding, just not necessarily sonically attended to. From the boisterous formations of the earth and the echoing Paleolithic cupules to the ever-enduring presence of audiovisual technology in today’s gallery settings – visual art has persistently accelerated in volume. When entering an exhibition setting today, a multitude of echoes, noises and harmonies instantaneously invades the participants’ eyes, ears and bodies, affecting one’s perceptual involvement in the surrounding confined field. Art can no longer be restricted to an isolated experience of visuality. Considering that acoustic and visual architectures are mediated by a myriad of mechanically and electronically produced sounds – interactive electronic media, mobile phones, headphones, speakers – all of which contribute to the formation of a noisy space, one that includes both visual and sonic surfaces, the presence of sound in relation to contemporary art can no longer be disregarded. Instead, its aesthetic and experiential extents must be conceptually reconsidered.

Whilst the white cube continues to hush the increasing volume within its confines, noises spread and “silences” become un-silenced; thus the dimension of sound in contemporary art spaces, and specifically in video installation settings, becomes augmented. Whether through TV monitors, computers, or projections, the gallery space is not mute, but continuously sounding. Taking the convoluted conjunctions between images and sounds into account, in this audiovisual presentation I ask: how does the sonic cacophony of a contemporary gallery ambience affect the formation and experience of the aesthetic object? Can the aesthetic object be silenced? Using an experiential approach drawn from sound and video installation practice, I seek to challenge the ingrained hierarchical structure of the arts, in which vision and sight continuously act as the main source of knowledge. The presented artworks and case studies scrutinise Douglas Kahn’s notion of ‘all sound’, including background noise and silence, which persistently endures alongside the mediated screen. The discussed audiovisual artworks seek to unleash the phenomenon of sound in relation to contemporary screen-based installation art spaces in its aesthetic and phenomenological entirety.

 

Biography:

Sandra Kazlauskaitė is a composer, artist, and curator working across the disciplines of sound performance, sound and video installation, and audiovisual curatorial projects. Her creative practice ranges from acousmatic compositions, radio dramas and audiovisual installations to works for non-musical objects. Currently, Sandra is undertaking a practice-based PhD (CHASE/AHRC funded) at Goldsmiths, University of London. Using sound and video installation art, she is developing in-depth conceptual research into the embodiment of sound in contemporary screen-based art gallery spaces, questioning how aurality, in its techno-phenomenological ubiquity, affects our aesthetic experience of art.

 


 

Talks 4 – Sunday Afternoon

 

KEYNOTE:

The Audiovisual Contract: Towards a Phenomenological Approach to Sound/Image Relationships

Jo Hyde – Bath Spa University

Abstract:

This paper takes as its starting point Michel Chion’s idea of the audiovisual contract, a constructed relationship which is forged between sonic and visual elements when we experience them at the same time.

It will explore the theoretical history of equivalence that has been sought, postulated or constructed between the two media, found in such places as the long history of colour music, based on a supposed equivalence between the frequencies of light (colour) and sound (pitch) vibrations.

It will contrast such approaches with those based on more abstract thinking, which tend to favour more indirect mappings, often built around higher-level constructs independent of and separable from either medium. Examples include Paul Klee’s ideas around counterpoint, Richter and Eggeling’s approaches to rhythm in film, and John Whitney’s exploration of harmonic movement.

Finally, the beginnings of a personal take on the ‘audiovisual contract’ will be outlined. In contrast to ‘hard’ mappings such as those found in colour music, or the abstractions outlined in the paragraph above, it is built around how we see and hear, and the similarities and (more importantly) differences between these modalities. These ideas link to earlier attempts by the author to adapt ideas derived from Pierre Schaeffer to a multimedia context. Tracing their derivation in turn from the writings of Edmund Husserl, a phenomenological basis for this system is outlined and discussed.

Biography:

Joseph Hyde’s background is as a musician and composer working in various areas, but in the late 90s – after a period working with BEAST in Birmingham – he settled on electroacoustic music, with or without live instruments. Whilst music and sound remain at the core of his practice, collaboration has since become a key concern, particularly in the field of dance, where he works both as a composer and with video, interactive systems and telepresence. His solo work has broadened in scope to incorporate these elements; he has made several audiovisual Visual Music works and has written about the field, recently undertaking a two-year study of the work of Oskar Fischinger, funded by the Arts and Humanities Research Council.

Hyde also works as a lecturer and academic, as Professor of Music at Bath Spa University (UK), where he teaches on the BA Creative Music Technology, runs the MMus in Creative Sound and Media Technology and supervises a number of PhD students. Since 2009 he has run Seeing Sound, a symposium on Visual Music, at the university.

Cinema = Music and the other way round in Jim Jarmusch’s films

Céline Murillo – Université Paris 13 / Sorbonne Paris Cité

Abstract:

Jim Jarmusch includes music in his films in deep and manifold ways. Trained in the late seventies among the artists of the New York punk scene, he adheres to an egalitarian DIY ethos and thus considers music and cinema as one creative substance.

The paper endeavors to show that, in Jarmusch’s works, music and cinema must be considered as a whole. It aims to demonstrate how music sometimes includes the moving image in Jarmusch’s films, how he uses music in a subversive way for identification purposes, and eventually how the non-referentiality of music attacks the referentiality and narrativity of the image.

 

Biography:

Céline Murillo defended a PhD dissertation in 2008 on reference and repetition in Jim Jarmusch’s films. She now works as a senior lecturer at the University of Paris 13 (Sorbonne Paris Cité). She is the co-editor of issue n°136 of the French Review of American Studies, “What about Independent Cinema?”. She has published several papers on Jim Jarmusch and also on underground American cinema from the 1960s onwards.

 


 
