The Problem
The initial idea for the app came to me after renewing my UK first aider certificate via a remote, video-based course. I found the content incredibly dry and not particularly memorable, and came out of the training wondering how much of the three-hour course I would actually retain.
The experience got me thinking that there must be a better way to teach first aid. This potentially life-saving set of skills calls for a far more engaging, hands-on pedagogy, so my thoughts immediately turned to creating a VR-based app.
The goal of the app was to make remote first aid training engaging and memorable by presenting the player with realistic scenarios and active, learn-by-doing tasks instead of passive learning followed by quizzes.
The MVP (Minimum Viable Product) Process
I pitched my idea during the XR Bootcamp and it was voted in as one of the MVP group projects. Working as a group of three, we started scoping potential app content. We quickly agreed to lean into the biggest opportunities a VR course could offer - focusing on soft skills and simple physical interaction tasks rather than, for example, the complexity of performing CPR without a physical dummy available.
We decided to test out the concept on a car collision scene, where the user has to:
1) Assess the incident scene, take charge of the situation, and remove any potential risks that could put them in danger before tending to the casualty
2) Conduct a primary survey with a responsive casualty - finding out whether they are in pain or bleeding, and calling the emergency services if necessary.
We established the following as the core features of our app, which we subsequently implemented during the MVP process:
- Step-by-step guided simulation through the process
- Environment-based physical interactions
- Interactive dialogue with branching decisions
- Gaze detection to guide the user toward what to look for within the environment (see the sketch after this list)
- Simulated contact with emergency services (based on UK NHS 999 call script)
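As a rough illustration of the gaze detection approach, the sketch below raycasts from the headset camera each frame and requires a short dwell time before confirming a look target. This is a minimal sketch rather than our exact implementation; the PointOfInterest tag, the dwell threshold, and the component name are all illustrative assumptions.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Hypothetical sketch: fires an event once the player has looked at a
// tagged point of interest for a short dwell time.
public class GazeDetector : MonoBehaviour
{
    [SerializeField] private Camera headCamera;        // the HMD camera
    [SerializeField] private float maxDistance = 10f;
    [SerializeField] private float dwellSeconds = 1.5f;
    [SerializeField] private UnityEvent onGazeConfirmed;

    private float dwellTimer;
    private bool fired;

    private void Update()
    {
        var ray = new Ray(headCamera.transform.position, headCamera.transform.forward);
        if (Physics.Raycast(ray, out RaycastHit hit, maxDistance) &&
            hit.collider.CompareTag("PointOfInterest"))
        {
            dwellTimer += Time.deltaTime;
            if (!fired && dwellTimer >= dwellSeconds)
            {
                fired = true;
                onGazeConfirmed.Invoke();   // e.g. reveal the next instruction
            }
        }
        else
        {
            dwellTimer = 0f;                // reset when the player looks away
            fired = false;
        }
    }
}
```

A dwell-based check like this can, for example, gate the next instruction until the player has actually scanned a hazard in the scene.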
UI Development
We knew from the outset that careful UI design throughout the simulated VR experience would be key to guiding users through the process.
We prototyped different UI interactions collectively using the Shapes XR app, importing a simplified version of our environment into it. This collaborative approach allowed us to make multiple decisions quickly and stay aligned, particularly as we were working in a distributed manner across three different time zones.
The UI comprised several panels across the scene, each appearing in front of the player with a smooth transition as a milestone was reached. We used glowing particle effects to highlight key physical interactions, sound effects throughout, and audio voiceover for character dialogue responses to enrich the variety of feedback the user received.
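For a sense of how a milestone-triggered panel might behave, here is a minimal sketch assuming a world-space canvas faded in via a CanvasGroup; the class name, fields, and placement maths are illustrative rather than our production code.

```csharp
using System.Collections;
using UnityEngine;

// Illustrative sketch: positions a world-space UI panel in front of the
// player when a milestone is reached, then fades it in via a CanvasGroup.
public class MilestonePanel : MonoBehaviour
{
    [SerializeField] private CanvasGroup canvasGroup;
    [SerializeField] private float fadeSeconds = 0.5f;
    [SerializeField] private Transform playerHead;      // HMD transform
    [SerializeField] private float panelDistance = 1.2f;

    public void Show()
    {
        // Place the panel in front of the player, facing them, before fading in.
        transform.position = playerHead.position + playerHead.forward * panelDistance;
        transform.rotation = Quaternion.LookRotation(transform.position - playerHead.position);
        StartCoroutine(Fade(0f, 1f));
    }

    public void Hide() => StartCoroutine(Fade(1f, 0f));

    private IEnumerator Fade(float from, float to)
    {
        for (float t = 0f; t < fadeSeconds; t += Time.deltaTime)
        {
            canvasGroup.alpha = Mathf.Lerp(from, to, t / fadeSeconds);
            yield return null;
        }
        canvasGroup.alpha = to;
    }
}
```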
The Result
The bootcamp timeline allowed us to spend circa 150 hours on the app spread over two weeks. The full list of app features is included below:
- Cross-platform development (Quest 2 & Valve Index)
- Autohand SDK for physics-based hand interactions
- Teleportation-based sequence manager, including activity checkpoints (a sketch follows this list)
- Guided interactions throughout
- Interactive dialogue with different branches, created using the Yarn Spinner package
- Integrated audio dialogue, using VoxBox text-to-speech
- Integrated UI following collaborative prototyping in Shapes XR
- Code refactored to fully incorporate OOP principles and allow future extensibility
- Unity Profiler used to optimise frame rate during gameplay
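To give a flavour of the teleportation-based sequence manager mentioned above, here is a hedged sketch of a checkpoint list that unlocks activities one at a time; the class shape and member names are assumptions for illustration, not our exact code.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of a checkpoint-driven sequence manager: the player
// teleports between checkpoints, and each activity must be completed
// before the next checkpoint unlocks. Names are hypothetical.
public class SequenceManager : MonoBehaviour
{
    [System.Serializable]
    public class Checkpoint
    {
        public string activityName;       // e.g. "Apply handbrake"
        public Transform teleportAnchor;  // where the player stands for this step
        public GameObject instructionPanel;
        public bool completed;
    }

    [SerializeField] private List<Checkpoint> checkpoints;
    private int currentIndex;

    public Checkpoint Current => checkpoints[currentIndex];

    // Called by interaction scripts (grab, dialogue choice, 999 call, ...)
    // when the current activity has been performed.
    public void CompleteCurrentActivity()
    {
        Current.completed = true;
        Current.instructionPanel.SetActive(false);

        if (currentIndex + 1 < checkpoints.Count)
        {
            currentIndex++;
            Current.instructionPanel.SetActive(true);
            // The teleporter would only allow moving to Current.teleportAnchor here.
        }
        else
        {
            Debug.Log("Scenario complete");
        }
    }
}
```

As we note under lessons learnt below, growing a linear checkpoint list like this into a full state machine would make branching scenarios easier to manage.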
Reflecting on the Project
Overall, we were very happy with the outcome of the two-week MVP effort - we successfully navigated working across different time zones by clearly organising task tickets and having regular catch-ups on Discord.
We established the following lessons learnt from the project:
- Plan out prefab groups: organising prefabs by “mini-scene” would have led to fewer scene merge conflicts when we serialised fields in Unity
- Find more co-working time across time zones
- Expand our teleport sequencer into a full state machine
Below are some of the items we would have liked to keep working on to expand the app experience:
- Different environment / incident scenarios
- Different types of casualties. How would you talk to a kid in the same scenario?
- Multiple casualties in the incident
- Unresponsive casualties, more physical interactions
- Different weather conditions? How would you make sure that the patient is comfortable till the ambulance arrives?
GitHub Tree