- Mobile Performance
- Electric Speed
- Schematic as Score
- (Re)purposed Clothes
- Collaborative Spaces
- Device Art
- Digital Dub
- Rise of the VJ
- Sample Culture
I have a fascination with all things emotional, embodied, felt, sensed, visceral, physical, relational, and participatory, working with video, communication devices and biofeedback. I have been on a continuous quest to work with new technologies and expressive methods, via art and performance, in order to find new ways to connect people with each other over distance, in more embodied, emotional ways. I am an artist-performer/researcher/curator working across various art forms: interactive and performance installation, music composition and performance, video art, web animation, and experience design.
My mobile media performance project, MINDtouch: Ephemeral Transference, a PhD art research work, was completed in 2010 and published in 2011. It proposed that the mobile videophone could become a new way to communicate non-verbally, in real time, across different physical and technological environments and locations. It was intended to uncover any new understandings of the sensations of ‘liveness’ and ‘presence’ that may emerge in participatory networked performance using mobile phones and wearable physiological devices. The goal was to explore and expand more embodied and meaningful exchanges between remote groups of people. To investigate these concepts in practice, a mobile media performance series was created.
Users ‘VJ-ed’, or mixed, video from a database live, and using their body data from wireless sensors they had abstract visual conversations with other mobile users, creating a collaborative, telematic collage of externalised body sensations. MINDtouch participatory events involved creating a mobile, networked performance that utilised a database of streamed and/or archived video clips created with video-enabled mobile phones, to be retrieved, streamed and remixed during live visuals performances. The live, performative, collaborative, non-linear narrative montage or “remix” was then streamed back out to anyone’s phone and the Internet, using real-time video mixing and streaming and accessing a remote server and mobile network. It was about transmitting the sense of liveness and presence through visual manifestations of embodied experiences across the mobile network.
MINDtouch was a participatory media project that used biofeedback sensors and mobile media phones, during live streaming networked events. MINDtouch explored notions of ephemeral transference, distance collaboration and participant as performer to study ‘presence’ and ‘liveness’ emerging from the use of wireless mobile technologies within real-time, mobile performance contexts. The intent was to create a paradox in the notions of liveness and presence, or the feeling of ‘being there’.
To identify how or whether liveness and presence could be sensed during these mobile social events, video was the mode of expression, engagement and embodiment used to externalise the internal within live, social and mediated environments. Participants were guided through exercises to create intimate video expressions and to explore ways to communicate visually, gesturally and non-verbally, repurposing the mobile device's means of communication and connection. These clips were streamed live and archived in a database for future use.
Mobile media phones acted (1) as a conduit for non-literal or abstracted, non-verbal expression of experience and as an extension of the body/mind; and (2) as a vehicle to express inner sensations between participants or with one’s self. It is my contention that the lo-fi aesthetics of pixelated images added to the intimacy, authenticity and 'realness' of the mobile video medium, as well as making it more accessible to users. Delays in the transmission render the work more 'everyman' in nature and easier to relate to, whereas professional-quality work creates a distance or disconnect from the common person and their everyday experience or ability.
Critical to the MINDtouch investigation was the facilitation of individual and collective perceptions and embodied sensations within the context of the virtual, invisible space of mobile networks. Thus, ways to simulate, emulate or even facilitate connections and the sensing of feltness, presence and/or liveness, co-presence and collaboration were explored with participants within mobile performance events. The project tried to exemplify liveness and presence within the context of a series of live, ‘staged’, iterative, ‘scratch’ mobile media social events. These ‘scratch’ performance experiments involved improvisation and experimentation through generative, participatory, collaboratively mixed mobile video visualisations, triggered by biosensor data from participants’ bodies.
Through the participation of in-person and remote interactors creating mobile video-streamed mixes, the events involved a daisy-chain of technologies through the network space. The performance series resulted in live, collaborative visual collages, made with mobile and wearable devices during these in-person and remotely accessed networked events. The events focused on creating a very personal, intimate and private performance, without an actual spectator, only interactors.
The body data that controlled the video effects and mixing was displayed or projected for the live participants and streamed for remote participants. The custom software took the body data from the sensors to mix and stream the video from the database of clips, added visual effects, then streamed the mixed video back out through the server, ideally in real time. The resulting video collage became a collaborative, narrative and global mobile-cast, converging distinct technologies and practices, bringing remote audience members together to share and interact with the generative visual collage of other people’s video.
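The data flow described above, in which body data from sensors drives the mixing and effects applied to video clips, can be sketched in rough outline. This is purely an illustrative sketch, not the project's actual software; the sensor names, value ranges and parameters are all hypothetical assumptions:

```python
# Illustrative sketch (hypothetical, not the MINDtouch software):
# mapping incoming biosensor readings to video-mix parameters
# before the mixed stream is sent back out through the server.

def normalise(value, lo, hi):
    """Clamp a raw sensor reading into the 0.0-1.0 range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def sensor_to_mix_params(heart_rate, skin_conductance):
    """Translate body data into two assumed mix parameters:
    a crossfade position between two clips, and an effect intensity."""
    crossfade = normalise(heart_rate, 50, 120)          # calm -> clip A, excited -> clip B
    effect_amount = normalise(skin_conductance, 1, 20)  # arousal drives visual effects
    return {"crossfade": crossfade, "effect": effect_amount}

print(sensor_to_mix_params(heart_rate=85, skin_conductance=10.5))
```

In a real-time system a mapping like this would run continuously on each incoming sensor frame, with the resulting parameters fed to the video mixer and the mixed output streamed onward.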
For the real-time telematic online streaming event Low Lives 3 in 2011, a version of the MINDtouch performance was created using the video expressions made by users of the original media art project, who had recorded short video pieces that translated their inner body experiences and perceptions into visual 'utterances' for a global non-verbal conversation. For Low Lives 3, a live dancer, Kate Sicchio, wearing the MINDtouch biofeedback sensor garment, performed a live 'mix' of the previously created video clips, using her body responses to do so; these mixes were then sent through the network for viewers. The video collages triggered by her dance and gestures were created by sending the biofeedback signals from her body to the laptop mixing software, and those mixes were live streamed to the online Low Lives 3 audience in real time, followed by another live video stream showing the dancer 'performing' the video mixing, also in real time, to show the connection between the dancer and the video collages.
My new work in non-verbal mobile expression and performance, Tactile Video Love Letters, repurposes the mobile phone from a textual and voice device into a more multi-modal, synaesthetic, tactile, expressive, and gestural device. This new project seeks new modes for individuals to express themselves intimately, visually and non-verbally, akin to remote ‘touch’, immediately and intuitively understood in a pre-conscious sense: a direct and tactile route to interpersonal communication. Mobile video cameras can be repurposed to visually convey emotions and sensations, rather than serving merely as devices for documenting events or as entertainment gadgets. As such, this new project aims to continue to bridge the emotional and physical divide between users.
Portable devices and increasingly ubiquitous forms of mobile media and communication have transformed the modalities in which we can communicate our affective and emotional “states”; however, they are still predominantly textual and language-based. Mobile images are often experienced as personal, intimate, and private expressions. Once sent, the immediacy of these media images feels like receiving a virtual kiss blown to you, or invisible pieces of the sender, through the network. Therefore, if one thinks of a text message as a thought transfer, then an image or video is a sensory or sight transfer, sending visual experiences and feelings, as well as one’s unique expression and perspective. Thus, people might wish to send their internal perceptions, instead of relying solely on words, when communicating with the mobile phone.
Tactile Video Love Letters involves developing a novel method to repurpose the mobile videophone using wearable technologies and smart textiles. This fresh approach hopes to replicate physical experience as much as possible. The system will combine the mobile connection with a wearable interface for tactile interaction. This will create a reciprocal exchange, giving the recipient a way to reply to a mobile message sender in a non-verbal, embodied, multi-sensory, two-way dialogue. Thus, this media project studies ways to relay ‘felt’ experience or touch sensation through a sensitive interface, a skin-like membrane or bio-material, that responds with a reply sent directly to the recipient's mobile phone application. Tactile Video Love Letters asks users to draw upon the visual material in their environment and any imagery they think relevant or essential to their expression. The ‘tactile textile to video’ exploration will develop a customised symbolic video vocabulary or lexicon to express and construct visual ‘sentences’ or ‘utterances’, to then be translated into distance touch or embrace. The intention is to create a structured semiotic system for expression, especially for people with physical, verbal or linguistic challenges, to use for personal interaction, giving them an embodied, tactile and visual means of messaging. The project will develop a method to use videophones and mobile imagery together as an alternative to current textual or voice uses.
In MINDtouch, participants were asked to use videophone imagery to speak for them, drawing upon visual material from their immediate environment, as well as from their own sensations, perceptions, thoughts, and emotions, to share with others without words. Through gestures and movement it was evident they could represent their internal experiences visually via the external world and could achieve meaningful non-verbal expression. This modality falls outside the usual physical or body language and is more akin to the cinematic language of video art, abstract cinema, and audio/visual performance. This method of non-verbal expression does not require in-person presence, but can be transmitted visually over a mobile connection. This digital mode of remote non-verbal expression transmits a different layer of emotional presence, beyond the tone of voice or turn of phrase currently used. The embodied interaction through technology replaces in-person physicality.
Tactile Video Love Letters instead investigates how a receiver of video messages might experience and interpret these messages, especially if the messages are abstract, emotional and visceral. A direct, literal approach to constructing the visual expression may (or may not) be the key to meaningful exchange. Thus, one option is to construct a semantic language or symbolic representation for video messaging. While it can be argued that, as with a dream or an artwork, the receiver/viewer/dreamer should be free to interpret the message any way they like, in some cases the representative imagery is clear enough in content that the message is evident and obvious. Yet the way the communicator constructs the message can also help in its interpretation and how it is decoded, especially if they use recognisable signifiers and symbolism. The project will design activities to build a visual lexicon, drawing on past research, that can provide tools and a structure to guide users in creating a video language of their own. The intention is to develop a coding system for direct, personal, embodied, sensual, and meaningful visual messaging using the synaesthetic possibilities of mobile expression. The goal is to enable the average mobile user to communicate in playful, creative, intimate, and novel ways.
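The coding system described above, a lexicon of recognisable signifiers mapped to video material, can be pictured in skeletal form. The sketch below is purely illustrative; the symbols and clip names are invented assumptions, not content from the project:

```python
# Illustrative sketch (hypothetical symbols and clip names): a minimal
# visual lexicon mapping emotional signifiers to video clips, used to
# compose a non-verbal "visual sentence" as an ordered clip sequence.

VISUAL_LEXICON = {
    "longing": "slow_pan_empty_chair.mp4",
    "joy":     "sunlight_through_leaves.mp4",
    "calm":    "water_ripples.mp4",
    "touch":   "hand_on_fabric_closeup.mp4",
}

def compose_visual_sentence(symbols):
    """Translate an ordered sequence of signifiers into a playable clip
    sequence, skipping any symbol the lexicon does not yet define."""
    return [VISUAL_LEXICON[s] for s in symbols if s in VISUAL_LEXICON]

print(compose_visual_sentence(["longing", "touch", "calm"]))
```

A shared dictionary like this is what makes the message decodable at the receiving end: sender and receiver agree on the signifiers, while the clips themselves remain personal and expressive.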
Another new work in development, also with Kate Sicchio, is a three-part performance exploration called Hacking the Body, using biofeedback sensors, mobile phones, the Kinect, Processing, Layar and other mobile apps for body-data, gestural and video sensing, and for generative mobile video performance installations and participatory interaction. This work is still in development, so I can't reveal the specific outcomes just yet. See my portfolio online at www.swampgirl67.net.