
May 2022. It was the culminating week of the BeFantastic Within Fellowship. Throughout the past three weeks of exploring the interplay of art and artificial intelligence through discussions, skill sharing and guided experiments, we’d been thinking about how AI might find its way into a live performance. From movement arts and dance to voice and text-based artwork, it was evident that integrating artificial intelligence into traditional performance formats would throw up interesting results.


Image: The first written draft of the Climateprov project during the pitching stage where we refined the idea and found collaborators to join the team.

Building blocks come together


After many rounds of virtual huddles and (optimistic, if cautious) imagination, the building blocks of Climateprov – improv theatre, generative AI and climate change – started falling into a cohesive structure, if only at a conceptual level. The choice of improvisational theatre as the artistic form was guided by two reasons. Firstly, our personal encounters with conversations about climate change have led us to believe they evoke a sense of helplessness, despair and anxiety in those listening; improvisational theatre, on the other hand, focuses on humorous, heartfelt and vivid explorations of a subject matter. Secondly, improvisational theatre relies on a series of suggestions from the audience (i.e. an input) to spontaneously create a story (i.e. an output) based on certain improvisational principles and rules (i.e. the parameters). Its ability to extrapolate meaning from seemingly unconnected prompts and generate a cohesive story is similar to how a generative AI model works.


We began testing some of our ideas through brief playtests and quick experiments. The first of these included training a sample GPT-3 model on the popular ‘Yes, and…’ improv game.


Video: Training a GPT-3 model to play the popular improv game ‘Yes, and…’


How does the game work? Human performers and the AI tell a story together, each starting their successive sentence with ‘Yes, and…’ and building the narrative further. As you see in the video, the model is first trained on the ‘Yes, and…’ structure through a few rounds of human-generated text and then tasked to generate a response. The positive demonstration was a significant moment for us, as a coherent and responsive interplay between the human performers and the AI was critical for this project.
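
For the technically curious, here is a minimal sketch of how such an interaction can be set up, assuming the 2022-era OpenAI completions API and a few-shot prompt rather than full fine-tuning; the model name and the example lines are purely illustrative, not our exact setup:

```python
# A minimal sketch of priming GPT-3 to play "Yes, and..." — assumes the
# 2022-era OpenAI completions API and a few-shot prompt; the model name and
# example lines are illustrative, not our exact setup.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# A couple of human-written rounds demonstrating the "Yes, and..." structure.
FEW_SHOT_PROMPT = """Two improvisers build a story one sentence at a time.
Every sentence must begin with "Yes, and..." and accept what came before.

Human: Yes, and the whole town woke up to find the river had turned bright green.
AI: Yes, and the mayor declared it a festival rather than admit she had no idea why.
"""

def yes_and_reply(human_line: str) -> str:
    """Ask GPT-3 for the next 'Yes, and...' line after the latest human turn."""
    response = openai.Completion.create(
        model="text-davinci-002",   # illustrative model choice
        prompt=FEW_SHOT_PROMPT + "Human: " + human_line + "\nAI:",
        max_tokens=60,
        temperature=0.9,            # keep the replies playful
        stop=["Human:"],            # hand the turn back to the performer
    )
    return response.choices[0].text.strip()

print(yes_and_reply("Yes, and the children started bottling the green water as souvenirs."))
```

In rehearsal, the returned line can simply be read aloud (or displayed) and the next human line appended to the running prompt, so the story keeps building turn by turn.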


Soon, our team of six was complete. Our diverse artistic and technical backgrounds had already allowed us to begin imagining a wide array of possibilities for this project. A few weeks later, we received the news that our pitch had been successful and the project was officially greenlit for development! The question we faced changed from a speculative ‘What is possible?’ to an exciting ‘How do we make this possible?’. And thus, we dived deep into the next phase of research.


We decided to go back to the three building blocks of the project – improvisational theatre, generative AI and climate change – and take these one step at a time.


Wrangling with the AI


With generative AI, we decided on training our own model using GPT-3 and then integrating it into the performance. Additionally, we have been exploring text-to-image models like DALL·E Mini, DALL·E 2, Midjourney and Stable Diffusion. One interesting learning while exploring these has been the need to create highly specific and targeted text prompts to generate better images. For this, the team researched this guide on prompts for DALL·E.
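
As a rough illustration of what ‘highly specific and targeted’ means in practice, here is a small sketch of how a prompt could be templated; the structure and example wording below are our own inventions for this post, not quotes from the guide:

```python
# A rough sketch of prompt templating for text-to-image models.
# The structure (subject + setting + style + detail modifiers) reflects the
# general advice in prompt guides; the wording below is illustrative only.
def build_image_prompt(subject: str,
                       setting: str,
                       style: str = "dramatic digital painting",
                       modifiers: str = "highly detailed, cinematic lighting") -> str:
    """Combine a climate-themed subject with style cues into one targeted prompt."""
    return f"{subject}, {setting}, {style}, {modifiers}"

# The kind of prompt this produces (hypothetical, not an actual rehearsal prompt):
print(build_image_prompt(
    subject="a polar bear wandering through a flooded city street",
    setting="melting glaciers on the horizon",
))
# -> "a polar bear wandering through a flooded city street, melting glaciers
#     on the horizon, dramatic digital painting, highly detailed, cinematic lighting"
```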


Here are some AI artworks we generated in rehearsals based on prompts related to climate change:



Another question confronting us is the visualization (read: personification) of the AI in the context of the live performance. Just as we’ll have human performers on stage to tell a story, we want to present the AI to the audience as an equal collaborator in the process. To do this, we must synthesize a public-facing avatar of the AI that can be shown during the performance.

Image: Looking at past visualizations of AI in media and popular culture

We decided to research how AI (and related concepts) have previously been visualized in mass media and popular culture. From Siri to 2001: A Space Odyssey, we found ourselves with even more questions. Should our AI have a name? Should it have a personality? What is its relation or position with respect to climate change? Does it have a relationship with the other human performers?


Photo: A few visualization mockups for our AI in the performance.

We are still navigating our way slowly through these questions, relying on other pieces of the puzzle to fall into place first. Additionally, we are prioritizing the development of the GPT-3 and other generative AI pieces so they can point us in a direction.


Figuring out the story spine


With improvisational theatre, there was a sense of comfort and familiarity. Blessin, Ranji and I have been working with improvisational formats for a long time and already had a sense of where to begin. We began researching ‘The Documentary!’, a long-form improvisational format created by Billy Merritt at the Upright Citizens Brigade Theatre, as the starting point for the performance’s narrative structure. A long-form improvisational format relies on taking suggestions from the audience at the beginning of the show to improvise the first few scenes, and then uses whatever came up in the previous scenes as the base to improvise every successive scene.


Image: A snippet from the draft narrative structure we are exploring for the performance.

With The Documentary format as our guide, we created a new long-form format that begins by asking the audience about a fictional climate crisis in the world (e.g. plastic is turning sentient, or polar bears are moving out of the poles). From there, a series of scenes unfolds where the human performers and the AI begin to build a narrative around this fictional crisis. Each scene incorporates the AI in different ways – text generation, image generation – and gives the output back to the human performers to fold into the action in real time.
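
To make that structure a little more concrete, here is a simplified, purely illustrative sketch of how such a scene sequence might be laid out; the scene names and AI roles are invented for this post and are not our actual draft script:

```python
# A simplified, purely illustrative sketch of a long-form show structure.
# Scene names, order and AI roles are invented for this post, not our draft script.
SHOW_STRUCTURE = [
    {"scene": "Audience ask",     "ai_role": None,
     "note": "Collect a fictional climate crisis suggestion from the audience."},
    {"scene": "Opening scenes",   "ai_role": "text",
     "note": "AI adds 'Yes, and...' lines as performers establish the crisis."},
    {"scene": "Documentary cuts", "ai_role": "image",
     "note": "AI-generated imagery of the crisis projected behind the performers."},
    {"scene": "Resolution",       "ai_role": "text",
     "note": "AI suggests an ending; performers improvise towards it."},
]

for step in SHOW_STRUCTURE:
    print(f"{step['scene']:16} | AI: {step['ai_role'] or '-':5} | {step['note']}")
```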


Reflecting on the process so far


As of writing this blog, we have begun playtests with a group of improvisers in Bangalore, India, who are rehearsing with our draft script structure to work out the kinks and glitches. Our goal is ultimately a structure that allows for creative integration of the AI into the story as an equal storyteller, while remaining fun, accessible and interesting for the audience.



Some of the challenges we can already foresee with using any of these models are high running costs (due to the frequency of testing and stabilising the output) and processing delays (which impede the “real-time” nature of the human-AI interaction). Another challenge is to streamline the data pipeline for the performance, which turns human speech into text, feeds that text as input to the model, generates an output, and converts that output into presentation-ready text and voice for the AI performer. For real-time animation of the AI performer using the model, we have been exploring Adobe CC, Unreal Engine, Wav2Lip and other lip-syncing / animation models that give our AI performer a life of its own.
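
To give a sense of what that pipeline looks like end to end, here is a minimal sketch; the library choices (SpeechRecognition for transcription, pyttsx3 for voice) and the model name are assumptions made for illustration, and the lip-sync step (e.g. Wav2Lip) that would consume the generated audio downstream is not shown:

```python
# A minimal end-to-end sketch of the pipeline described above:
# human speech -> text -> GPT-3 -> spoken reply for the AI performer.
# Library choices and model name are illustrative assumptions, not our final stack.
import openai
import pyttsx3                     # offline text-to-speech
import speech_recognition as sr

openai.api_key = "YOUR_API_KEY"    # placeholder

recognizer = sr.Recognizer()
tts = pyttsx3.init()

def listen() -> str:
    """Capture one utterance from the performer's microphone and transcribe it."""
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)   # cloud transcription adds latency

def generate_reply(performer_line: str) -> str:
    """Feed the transcribed line to GPT-3 and return the AI performer's response."""
    response = openai.Completion.create(
        model="text-davinci-002",               # illustrative model choice
        prompt=f"Continue the improv scene.\nPerformer: {performer_line}\nAI:",
        max_tokens=80,
        temperature=0.9,
        stop=["Performer:"],
    )
    return response.choices[0].text.strip()

def speak(text: str) -> None:
    """Voice the AI performer's line (this audio could then drive lip-sync animation)."""
    tts.say(text)
    tts.runAndWait()

if __name__ == "__main__":
    line = listen()
    reply = generate_reply(line)
    print("AI performer:", reply)
    speak(reply)
```

Each stage in this chain adds latency, which is exactly why the running-cost and processing-delay challenges above matter so much for a live show.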


Video: A demonstration of the input-output pipeline for the performance.


However, running the above pipeline in real time during the actual performance (and making sure that everything works properly!), alongside everything else that a stage performance requires (lights, sound, projection, props!), is the next big task ahead of us.



To help us visualise what the actual performance may look like, we created a few mockups like the one above. Perhaps our imagination can already prompt some of the challenges (read: opportunities) that await us!


The team is optimistic about what's coming ahead. For Ranji, the merging and co-creation of AI and improv to create something new is exciting. The same is echoed by Blessin, for whom two of his interests (AI and improv) come together neatly in this project. He's also excited that new AI models launch every other week or month, which widens the scope of possibilities and exploration further. Monica believes that the scope of public engagement with this project is high, for it allows the general public to actually interact with the AI. For Tajinder, the visual production and design aspects of the project invite a lot of thought and introspection, especially the way the practicalities are slowly being worked through. For myself, artistic research into the role of AI in the performing arts allows us to glimpse briefly into the future and think of ways to bring the two together.

Image: The team taking a break during one of our Zoom rehearsals


