A glowing tetrahedron glides through the air, suspended above people’s heads from a 21-metre motorised rail holding the world’s largest delta robot. As the only light source in the room, the tetrahedron acts as entertainer and guide to the space, dancing with the audience and playfully encouraging them to become an active part of the performance. Through the interplay of luminous form and motion, ambiguity in visual perception is explored and manipulated in an unfolding interactive performance between the public and a kinetic installation.
Taking its title from William Blake’s poem “The Tyger”, the installation returns visitors to a primal state of hyper-awareness through advanced computer vision, robotics and interactive choreography, the sum of which creates an intense, visceral and primal way to experience the Tate’s Tanks. The work builds on earlier kinetic pieces, Motive Colloquies (2011, Centre Pompidou, Paris) and Performative Ecologies (2008, National Art Museum, Beijing).
Robotics – Vahid Aminzadeh (KCL) & Alex Zivanovic (Middx Uni)
Computer Vision – Paul Ferragut & George Profenza (UCL)
Mechanical Engineering – Neil (Spike) Melton (Middx Uni)
Sound Design – Emmett Glynn & Sam Conran
Light Engineering – Lianka Papakammenou (UCL)
Photography – Simon Kennedy
Puppetry Consultant – Ronnie Le Drew
Graphic Design – Amy Lewis
Filming – Ronan Glynn
Communication – Ollie Palmer (UCL) & Diony Kypraiou (UCL)
Fabrication Assistant – Djorn Fevrier
Thanks also to Ryan Mehanna, Frank Glynn, Stephen Gage, and Ranulph Glanville
Special mention to the Motive Colloquies team, particularly Ciriaco Castro, Miriam Dall’Igna and Enrique Ramos. Fearful Symmetry builds on the earlier work we produced for the Centre Pompidou in Paris in June 2011.
An interactive installation and performance, developed through a collaboration between interaction designers, architects and performance artists. Its principal performer, a 3-metre-high responsive robot, interacts with its audience’s gestures while it waits for the arrival of its human co-performers. The film shown here presents the first choreographed site-specific work to come from this colloquy. Titled ‘The Promise of Touch’, it was presented at the Centre Pompidou in Paris in June 2011, responding to two works within the gallery – Francis Bacon’s triptych ‘Three Figures in a Room’ (1964) and Pablo Picasso’s ‘Femmes devant la mer’ (1956).
Motive Colloquies is both the work and the people who have formed it: Ciriaco Castro, Miriam Dall’Igna, Ruairi Glynn, Enrique Ramos, Sigridur Reynisdottir, Nicholas Waters and Jemima Yong.
Investigating gestural forms of dialogue between inhabitants and an evolving environment, Performative Ecologies is a kinetic ‘conversational’ environment which examines what it means both to observe, and to be observed by, machines. It considers, in the light of developments in computer vision, sensing and artificial intelligence, how an ‘intelligent’ architecture can discuss its behaviour in relation to the goals and behaviours of the world around it.
“The role of the architect… I think, is not so much to design a building or city, as to catalyse them: to act that they may evolve.” Gordon Pask
Within the darkened installation space, a dance evolves as a community of autonomous but very sociable robotic sculptures perform with their illuminated tails for inhabitants. Rather than being pre-choreographed, these creatures propose and negotiate with their audience, learning how best to attract and maintain their attention. Using a genetic algorithm to evolve performances, and facial recognition to assess attention levels (fitness), the individual dancers learn from their successes and failures. As they gain experience, they share their knowledge with the larger ecology, dancing to each other, exchanging their most successful techniques, and negotiating future performances collaboratively.
It is an ecology constructed by both the robotic sculptures and the human inhabitants, an intertwining of networks rich in circularities of reciprocal gestures and adaptation. A dance is formed in which individual participants, both human and robotic, operate as performative agents, each acting independently but continually negotiating their choreography with each other. This social system revisits some of the concepts first considered in Gordon Pask’s artwork the ‘Colloquy of Mobiles’, exhibited at Cybernetic Serendipity (ICA, 1968). Like the Colloquy of Mobiles, it is an environment of active conversational participants, a physically constructed embodiment of his Conversation Theory; unlike it, this work uses new technologies unavailable to Pask and explores how his ideas can be extended with contemporary digital tools.
For more details see my paper ‘Conversational Environments Revisited’, awarded Best Paper at the 19th European Meeting of Cybernetics & Systems Research, Vienna, Austria, 2008.
Image of Performative Ecologies in a square arrangement from the VIDA 11.0 exhibition, Madrid, Spain, 2009.
The installation is physically composed of 4 independently responsive sculptures built from perspex, steel & aluminium. Each one is actuated by 4 servos: 2 in the ‘head’, 1 in the ‘tail’ & 1 up at ceiling level which orientates the body. Each tail has RGB lighting embedded within it so that it can perform a wide range of colour and lighting effects. Able to rotate 360 degrees, the sculptures each occupy 1.5m in diameter and hang facing their audience at an average eye height.
It has been exhibited in near-dark rooms to act as a contrast to the brightly illuminated tails. Alternatively, Performative Ecologies has also been presented in daylight, when the installation was shown at the Kunsthaus gallery in Graz, Austria. It was strategically positioned on the ground floor of the gallery, looking out at the people walking by on the street. In this setting the objects contextually adapted to their environment, learning not just how to attract people within the gallery but also out on the street, almost beckoning them to come inside. The vision of the robots was additionally transmitted onto BIX, the Kunsthaus gallery’s large media facade, presenting the activity of the installation out over the city.
The performances are generated from a gene pool of evolving dances within a Genetic Algorithm (G.A.), which uses facial recognition to assess the attention levels & orientation of the audience before & after each performance as a way of assigning a fitness value to each new choreography. Over time, successful manoeuvres are kept & recombined to produce new performances, while less effective ones are discarded. Mutation in the G.A. fluctuates based on how successful the sculptures become: if they get a lot of attention, mutation levels rise, as if they are getting arrogant &, as a result, becoming more experimental.
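The installation itself ran its G.A. in Processing; purely as an illustrative sketch of the logic described above, the attention-driven fitness and the success-scaled mutation could look like the following Python, where every function name, the gene encoding and all numeric parameters are assumptions, not the work’s actual code:

```python
import random

def fitness(faces_before, faces_after):
    """Score a performance by the change in audience attention,
    approximated here as the number of faces detected watching
    the sculpture before and after the dance."""
    return max(0, faces_after - faces_before)

def mutation_rate(recent_fitness, base=0.05, ceiling=0.5):
    """The more attention a sculpture wins, the more 'arrogant'
    and experimental it becomes: mutation rises with success."""
    avg = sum(recent_fitness) / len(recent_fitness) if recent_fitness else 0
    return min(ceiling, base + 0.1 * avg)

def mutate(chromosome, rate):
    """A chromosome is modelled as a list of servo angles (0-180);
    each gene is replaced with a random angle with probability `rate`."""
    return [random.randint(0, 180) if random.random() < rate else g
            for g in chromosome]
```

Unsuccessful dances would simply be dropped from the pool, while high-scoring ones would be recombined and mutated at the current rate to propose the next performance.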
When there are no people around, they turn to each other, teach each other their most successful performances & negotiate new performances together. They take the suggestions of their surrounding partners & compare their own gene pool of performances to the partners’ suggestions. If a suggestion is comparatively similar, it is accepted & replaces a chromosome from their own pool; if it is too different, it is rejected, as if they dislike the partner’s dance moves.
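This accept-or-reject negotiation can be sketched as follows; again this is a hypothetical Python illustration rather than the installation’s code, and the similarity measure and threshold are assumptions:

```python
def similarity(a, b):
    """Similarity between two chromosomes (equal-length lists of
    servo angles, 0-180): 1.0 when identical, falling towards 0
    as the average per-gene difference grows."""
    diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return 1.0 - diff / 180.0

def consider_suggestion(pool, suggestion, threshold=0.6):
    """Accept a partner's suggested dance only if it is close enough
    to something already in the gene pool; if accepted, it replaces
    the least similar chromosome. Otherwise it is rejected, as if the
    partner's moves are 'disliked'."""
    scores = [similarity(c, suggestion) for c in pool]
    if max(scores) < threshold:
        return False
    pool[scores.index(min(scores))] = suggestion
    return True
```

Replacing the least similar chromosome keeps each sculpture’s pool coherent while still letting a persuasive partner shift its repertoire over time.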
Currently this is done via a wireless network, but it is hoped that in later iterations the sculptures will be able to use their computer vision systems to interpret each other’s performances, adding interesting potential for degrees of misunderstanding to occur. The servos & lighting are controlled by Arduino microcontrollers receiving instructions from a G.A. running in Processing. Each head has a low-light camera on board, transmitting to facial recognition software built using the openCV library.
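At the data level, the pipeline described above (on-board camera, openCV face detection, G.A., Arduino driving the servos) can be sketched like this; the attention measure, which scores each detected face bounding box by the fraction of the frame it occupies (a rough proxy for proximity), and the single-byte servo message format are both assumptions for illustration:

```python
def attention_score(faces, frame_area):
    """faces: list of (x, y, w, h) bounding boxes, the output shape
    of a detector such as openCV's Haar cascade. Larger faces occupy
    more of the frame, so the viewer is closer and, we assume,
    paying more attention."""
    return sum(w * h for _, _, w, h in faces) / frame_area

def servo_command(gene):
    """Translate one gene (a servo angle, clamped to 0-180) into the
    kind of single-byte message a microcontroller might expect."""
    return bytes([max(0, min(180, gene))])
```

In the real installation these stages are split across devices, with the vision and G.A. stages on a computer and the actuation on the Arduinos.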
Inspired by the experimental photography of Gjon Mili, the installation ‘Signallers’ was built to explore the recording of gestural interactions between a robotic armature and a human arm playing a series of games.
‘Traceries with lights attached to foils’, photograph by Gjon Mili, 1942. Mili records the gestural interaction of two fencers by capturing the movement of light sources attached to the ends of their foils. On close inspection, subtle moments of interaction can be found in these traces.
Signallers was initially an investigation into generating kinetic behaviours for a robotic armature through light source tracking. However, it quickly became a project more about how a kinetic object could use these behaviours and learn from their successes and failures. Inspired by the experimental light-tracing photography of Gjon Mili, Signallers was an environment made up of a darkened room with a robotic armature centred within it. The armature actuated a light source on the tip of an acrylic rod through 360 degrees.
Introduction to Angels
The ‘Angel’ project investigates ways of constructing intelligent agents that work as independent spatial features or combine to assemble virtually infinite constructs. It plays with architecture’s historically rigid nature, looking at the possibilities of an architecture lighter than air, capable of sheltering us and even bringing communities together.
The initial concept developed out of a building proposal in which a conversation space could transform its spatial conditions, reacting to a set of protocols based on the inhabitants’ discourse. The constantly reconfiguring space was actuated by a series of agents that could descend, rise, approach and retreat from the people within the space, as well as articulate a range of behaviours. These “gestures” attempted to act as catalysts for the generation of new conversation and interaction. This investigation led to the exploration of LTA (Lighter Than Air) vehicles capable of acting independently or in flocks, constructing dynamic spaces for people to meet.
Below are initial concept images of how these flying, transforming agents would interact and transform.
Our research examined how simple behaviours actuated by the first iteration of Angels affect the experience of a ‘conversational’ space. The following images show our test environment, in which we were able to measure the success of the LTA vehicles’ movement and interaction with inhabitants.
A number of observations and recordings were made over two days of flight testing. The next stage was to critically analyse these and focus on the individual behaviours exhibited that were most successful. Part of our investigation was also to experiment with suitable forms of notation to express interaction in space. Initial drawings described the motion paths of the Angels and inhabitants, and were later followed by notation that correlated statistical data.
Using the Angels’ onboard vision system, transmitted wirelessly to a local computer, we processed real-time data of the conversation space with software we developed in MaxMSP Jitter, which generated formal representations to support our recording and notation of the interactions that occurred. These representations also provided an added form of feedback when projected into the conversation space. Below is a sequence of transformations over 3 seconds based on input data from the Angels’ onboard sensors, followed by a 60-second timeline exploring statistical representation as a tool for notation and analysis.
A collaborative project in 2005/2006 by the Interactive Architecture Workshop of the Bartlett School of Architecture, supported by an EPSRC grant & led by Professor Stephen Gage. My role focused particularly on developing a touch-sensitive floor and on developing animation & behaviour in collaboration with the school children. Coding was developed by Andy Huntingdon.
Finding methods by which to represent interaction within architectural representation is an ongoing interest of mine. The presuppositions of Euclidean space long ago proved inadequate as a model for both physical and metaphysical spatial considerations. Alternative geometrical models of space became available more than a century ago; higher-dimensional, or curved, spaces appeared more suitable to accommodate the needs of a broad range of disciplines.
The work is based on the fundamental relationship between form, reality and any human interaction with the external world. As part of the Research Spaces investigation at UCL, I began looking at the types of non-Euclidean models I could use to explore the relationship between subjectivity and alternative geometrical models of space.
Akin to one of the four “Research Spaces” conference strands [Conceptual Spaces], the piece investigated the relationship between space and the knowing subject, and the interrelationships between subjects and objects in art [and] architecture. Aiming to initiate participants into an alternative model of our physical space, the sculpture/notation/interactive installation sought primarily to alert participants to common assumptions about the physical and geometrical nature of space in our everyday perception. Participants were given the opportunity to engage with these new models of space through interaction.
Interactive Architecture as a field of research has key characteristics. These interactive spaces must sense and experience their inhabitants, and respond in a way that challenges the inhabitants to respond in turn. If a space fails to challenge their cognitive perception of it, it fails to engage its inhabitants and no reciprocal relationship is created.
Interactive video and audio installations have widely explored these relationships. Moving image and audio of course have a very real impact on our sense of space, but as a degree thesis project I examined how moving physical space could have a distinctive and potentially more reciprocal effect on its inhabitants, challenging neglected modes of cognition.
“derived from the particulars of the real world, from data and processes of the virtual world, or from numerous techniques of capturing the real and casting it into virtual, motion capture for instance. Since time is a feature of the model, if the model is fed time-based data, the form becomes animate, the architecture liquid.” – Marcos Novak