Raindance Immersive 2019
Copyright: Raindance Immersive
For me it was such a fantastic event to attend because:
1) it was a privilege to listen and learn from those people forging ahead on this XR path and
2) it was such a generous and supportive environment.
Pretty much every speaker kept reiterating that "there is no right way" yet, that everyone is experimenting and making things up as they go, and that you should just get stuck into these emerging technologies and have fun!
Notes from the summit
The themes of this year's festival are female creatives and Brexit.
Raindance Immersive has only been running since 2016 (instigated and now curated by Mária Rakušanová).
Trends in 2019:
Tilt Brush has become THE TOOL to use for VR animation and design.
Acting talent is being drawn into the medium, as are other high-profile mainstream names (e.g. David Attenborough). XR is no longer a "fringe medium," although it still has a long way to go.
The rise of AR and mixed reality, such as Rise of Animals (using Magic Leap).
Gaming is also colliding with VR, for example Doctor Who: The Edge of Time and The Infinite Hotel.
Branching narrative is being pushed, such as After Life.
Computer generated experiences are becoming ever bolder: Ayahuasca and Heart of Darkness.
Live performance meets VR in real time with Box in the Desert and Cosmos Within Us.
Sound design is finally being recognised as a pivotal part of the experience, and awards are being created for this.
TV IP is moving into immersive content.
Rise of Animals
This is a mixed reality experience: users wear "glasses" which allow the creators to put animals "into the room." The headset maps the room in infrared and responds to eye tracking, and it also has directional speakers. It took four years for them to get to this stage.
Any new medium comes with new lessons. The four big challenges for our creators:
The space (real and visual)
Can this be used at home? They’re not yet sure. For now they have created a curated space for people to visit and wear the glasses.
This is a new kind of UX design: your head, eyes and hands become the controller (removing physical controllers).
They used a footpad anchor which allowed users to return there to trigger the next animal. To guide users' gaze they implemented a particle effect and ambisonic audio.
Their decision between Rift and Quest came down to the fact that they needed it to run on mobile hardware, which meant significantly reducing their poly count.
They discovered that lighting was absolutely key, and it was difficult to balance real versus virtual light.
They did a lot of user testing, which unearthed some not-so-obvious problems. So far they have tested with 150 people.
Moving forward plans include multiplayer options and using real assets.
The main thing is to embrace the change! Media is converging between TV, Film, AR & VR. Be multidisciplinary.
This was created on a very small budget (£2,000) and shot on an Insta360 Pro. It's a 12-minute monologue based on the creator's personal experience with an alcoholic parent.
Over the 12 minutes she moves between five different chairs representing the five stages of grief. They also added some graphics to give it a child-like theme. This was an issues-based/awareness-raising piece.
This was another small-budget piece, looking at immigration: a live-action branching narrative that has to conclude in 1-2 minutes, and which basically asks "how much of an arse are you?" The point is that you're not supposed to know you're making choices about real people. But they found that until people understood it was about real people, they didn't really engage. It was tricky to know how much to give away, because you do want people to care and have stakes. The piece was used as a catalyst for conversation.
The story is composed of 45 scenes shot in live-action VR.
There are 29 branching paths and 5000 possible interactions and multiple outcomes.
After Life Branching map
They designed gate-controlled paths. The branches happen when you follow a character; you don't consciously "choose," it's more organic.
After Life, example scene outline
They discovered that the space became a character, and the house itself was a character. Even if you have multiple branches in a story, you still structure it according to the three-act structure. There are no templates or "rules" for what a VR shot list or outline looks like, so they combined standard filmmaking knowledge with what they learned they needed in order to film in 360 degrees.
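The gate-controlled branching described above can be sketched in code. This is a minimal illustration, not After Life's actual system: the scene and character names are hypothetical, and the real production had 45 scenes, 29 branching paths and 5,000 possible interactions.

```python
# Sketch of gate-controlled branching: each scene holds "gates" that map
# the character the viewer follows to the next scene. There is no explicit
# menu; the branch is an organic side effect of who you follow.
# All names here are invented for illustration.

class Scene:
    def __init__(self, name, gates=None):
        self.name = name
        # gates: character followed -> name of the next scene
        self.gates = gates or {}

# A toy three-scene graph.
scenes = {
    "hallway": Scene("hallway", {"mother": "kitchen", "child": "bedroom"}),
    "kitchen": Scene("kitchen"),
    "bedroom": Scene("bedroom"),
}

def next_scene(current, followed_character):
    """Return the next scene's name; stay in place if no gate matches."""
    return scenes[current].gates.get(followed_character, current)

print(next_scene("hallway", "mother"))  # kitchen
```

Following the mother moves the viewer to the kitchen; following no one (or a character without a gate) leaves the scene unchanged, which mirrors how a viewer who stays put simply remains in the current branch.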
Panel discussion: Location-based Entertainment
A Box in the Desert | Nanna Gunnar and Owen Hindley, Directors, Huldufugl (UK)
Cosmos Within Us | Tupac Martir, Director (UK)
Alexia Kyriakopoulos, Arvore (Brazil, Greece)
Panel moderated by Nina Salomons
Box in the Desert, by Huldufugl, is an interactive digital fable. You are in a box, and a character (played by an actor) appears outside the box, telling you you're safe as long as you stay inside. But you can hear a third person speaking to you, telling you not to trust the external character. You have to decide who to trust.
Location-based VR immerses all your senses. In Box in the Desert you are interacting with people on a set; it's a role-playing theatre experience. So far they have found that because users don't feel there is an audience or another person present, people come out of themselves a bit more. The screen gives them a mask and an avatar.
Box in the Desert set up
In Cosmos Within Us the interactor becomes important because they frame the shots for the audience. Tupac (the director) can control the heat, volume and sound, so he can be very responsive; sometimes people start talking back! There are also smells in Cosmos Within Us, which is very powerful for directing people in the space.
In Box in the Desert you do sometimes get an interactor who is challenging or tries to push the limits. One question they ask is "Where do you want to go? Space?" One interactor did not want to be led, so said no and asked to go under the sea. They couldn't do that because they hadn't built it for the experience, so the compromise was taking them to a waterfall.
Extra sensory elements like smell and touch add to the feeling, and your brain fills in a lot too: some people are convinced they smelled something in Cosmos Within Us when in fact they didn't.
Funding is always an issue. Box in the Desert was made for fun and in spare time; because it only requires one actor, that made it doable. But the challenge in making it profitable is that you can only have one audience member at a time, so to sell 350 tickets you have to do 350 shows.
Cosmos Within Us got most of its money from Luxembourg. They have also been scaling up the audience: in Venice they had only 4 audience members, at Raindance they have 10, and hopefully in future they can have 40. If they take over a theatre, the distribution model changes, but that also brings new challenges: do they need extra screens for the audience?
In video games, people watch other people play, as on Twitch. Theatres should start taking on VR projects, because people will pay to watch others experience something if they can also get something from it themselves.
Could they also join experiences up? So you buy one ticket and can see three performances back to back over three hours (maybe with an interval)?
The story is about Herbie (a panda) and his break-up with his girlfriend Rice (a deer). The illustrations show Herbie working through the break-up.
Funding was a massive hurdle for this project. They recorded the script early on, so they essentially had a radio play and some illustrations but couldn't do much more.
Talking through funding
They learned that a new language is forming around VR creation. The film production process is well laid out, but that isn't the case for VR: for example, how do you storyboard in VR? This was a problem for funding: how do you explain the production process when you are still learning what the "correct" way to do it is yourself?
Immersive Games Panel
Doctor Who: The Edge of Time | Marcus Moresby, Maze Theory (UK)
The Curious Tale of the Stolen Pets | Andreas Juliusson, Fast Travel Games (Sweden)
The Infinite Hotel | Kevin Beimers, Italic Pig (UK)
Panel moderated by Jamie Feltham, Upload VR
Think about why you’re creating your game for VR: is it emotional or is it immersive? Otherwise why do it in VR?
Can you allow people to speak to a character? Unfortunately, that's asking a huge amount. Instead, think about how to trigger responses that feel unique to the player's actions. For example, how you open a door affects how a character responds: opening it slowly or quickly may indicate how the player feels, and so elicit a different response from the character.
Movement is a CHRONIC issue in VR games. It’s incredibly difficult to port over games like Zelda because “running” around a world is just impossible for now.
Most traditional game narratives are 3rd person. You may not “be” that person, but you may have a relationship with that person.
In traditional first-person shooters, you tend to be silent and holding a gun until you get to a cut scene.
In VR you can’t speak, but you really want to!
So you are seen but not heard. Dialogue in VR is just very complex. You can’t spark a conversation. Essentially you can say “yes” or “no.”
But you CAN use action instead.
Usually story is told through your ability to mess with the world, i.e. how much you can break.
What happens if you poke a character or slap etc.
Being 1st person you can’t really drive a scene. You also can’t have cut scenes. One solution is to have an AI who talks to you.
You can use subtitles, BUT think about where they sit and whether they further the narrative.
Punchdrunk works well because you walk through it as an invisible person and can pick up the story at different places, and it doesn't matter.
In VR, action carries all the meaning: when you move, something should happen.
But the key for VR is that how you feel is more important than how you interact.
If you have a story really think about what medium it fits.