Learning to read can be difficult for any child, but for children who are severely or profoundly deaf it can be an insurmountable challenge. We were posed the question of how we would create an app that could help deaf children overcome this difficulty.
The result is StorySign: an app made in collaboration with FCB Inferno for Huawei that aims to help deaf children read collaboratively with their parents, using real children’s books to create a connection to the written word. In this article I’ll discuss our process in creating the app and look at the technology we used to make it a reality.
Above – children reading with StorySign.
When this project started we had no concept of the complexities deaf children face when they learn to read, or of the nuances sign language has versus the spoken word. The first step was to undertake research with the British Deaf Association (BDA) to gain an understanding of this process. Sign language’s syntax is different: words aren’t ordered the same way, may be omitted altogether, or are replaced with contextual signals. This is one of the reasons learning to read can be so confusing for a deaf child; the written word is based on spoken language, which is structured in a completely different way, so it’s all mixed up for them. We wanted to create a visual link between the sign and the word, to help create a connection in the reader’s mind and overcome this fundamental challenge.
Another huge barrier the majority of deaf children face is that they are born to hearing parents – so there’s a communication barrier between parent and child from the start. One of our secondary hopes for the project was that the app could help bring parent and child together. Story time is a special moment that no family should have to miss.
DESIGNING THE CHARACTER
We researched a large number of other learning products aimed at the deaf community to get an understanding of what was on the market and how successfully each achieved its goal. One thing we noticed, where animated content was attempted, was that the characters weren’t very appealing to a young audience and the fidelity of body and facial movement wasn’t close to their human counterparts. An equivalent would be using Google Translate on a menu and knowing that half of the words aren’t right or are missing altogether. For our app we wanted to ensure we created a character that resonated with a young audience, but also had the complexity of movement essential to imitating actual sign language.
Facial expression accounts for a high percentage of sign language, coupled with nuanced body movement and complex hand and finger movements. In terms of character design and animation, this posed some quite unique challenges. Often with character design you’ll strip features back to their most basic, drastically change proportions, or make animalistic or even alien designs; with Star we had to keep things humanistic whilst balancing that against making a character that kids would be excited by. We used a slightly oversized head to make her more cartoony, with very large expressive eyes and eyebrows. We chose fun, vibrant colours for her clothes and hair, made her slightly older than our audience so she felt aspirational (in a big sister kind of way) and a bit tomboyish to appeal to both boys and girls, and, most importantly, gave her a hearing aid.
Above – Star in her final form.
BRINGING STAR TO LIFE
When we approached animating Star, the animation team pored over videos of sign language performers; very quickly we realised that achieving the level of detail we wanted with hand animation would be incredibly difficult and time consuming. Another factor we considered was that the animators didn’t know sign language, which could lead to vital elements being missed. We decided the best approach was to motion capture real sign language actors to get the most accurate performance possible. Aardman’s CGI department took the 3D design they’d created of Star and went about the very complex process of designing a 3D armature control system that linked precisely with all the body and facial movements of the actors, so every physical movement would look realistic on-screen. We’ve worked with mo-cap on a number of projects, but nothing that required this level of fidelity, so we worked with Centroid, an industry-leading facility at Pinewood, to record both the face and hands. To show the process in more detail, here’s a behind the scenes video from the Pinewood shoot.
The process involved suiting up the actors in a motion capture suit with markers, recording their full body performance from over 70 infrared cameras placed around the studio. The markers gave us a live on-set feed of their performance driving our actual 3D model, so we could see how Star would look in the final renders (in the video you can see a shot of the actor’s movement driving the 3D rig that controls Star). A head camera was placed on the actor to record every minute facial movement. The camera was a difficult element for the performers to deal with; it could get in the way of their hand movements, so the actors had to adjust their performance to suit. The facial expressions were particularly complex to capture and were sent to a specialist facility – Cubic Motion, based in Manchester – to output and configure, where any issues were tidied up with hand animation, using video reference of the actor’s live performance. The face and body were then merged, lit and rendered at Aardman’s CGI department into the final form you can see in the app.
Above – the face camera records every minute movement of the actor’s face – the data is then used to drive Star’s facial expression.
StorySign is a simple concept that feels like a seamless experience for the user, but behind that simplicity lives some pretty complex technology. Not to get too geeky, but the team at Aardman’s interactive department worked with Huawei’s new AI technology to ensure that the printed words on the page (of potentially decades-old children’s books) were recognised and triggered the correct sentences in the books. Different devices had cameras of varying quality, which meant we had to work through a huge number of variables to ensure that StorySign was robust and worked on the devices that children would likely have access to. A bespoke database was designed so that all languages, and all potential future books, could be easily added to the system, ensuring the ecosystem could grow organically.
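The actual StorySign database and recognition pipeline are proprietary, but the idea of mapping recognised text to the correct signing clip can be sketched roughly. The class names, the book title and the sentence below are all hypothetical; the key points are normalising noisy OCR output before matching, and keying the index by language and book so new entries can simply be inserted as the ecosystem grows:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedSentence:
    text: str      # the printed sentence as it appears in the book
    clip_id: str   # id of a pre-rendered clip of Star signing the sentence

def normalise(s: str) -> str:
    """Strip case and punctuation so imperfect OCR output can still match."""
    kept = "".join(c for c in s.lower() if c.isalnum() or c.isspace())
    return " ".join(kept.split())

class BookDatabase:
    """Maps (language, book, normalised sentence) -> signing clip.

    New languages and books are added by inserting entries, mirroring
    the organically growing ecosystem described above.
    """
    def __init__(self) -> None:
        self._index: dict[tuple[str, str, str], SignedSentence] = {}

    def add(self, language: str, book: str, sentence: SignedSentence) -> None:
        self._index[(language, book, normalise(sentence.text))] = sentence

    def lookup(self, language: str, book: str, ocr_text: str):
        return self._index.get((language, book, normalise(ocr_text)))

# Hypothetical example entry and a noisy OCR result that still matches.
db = BookDatabase()
db.add("BSL", "Example Book", SignedSentence("Hello, little dog!", "bsl_example_001"))
hit = db.lookup("BSL", "Example Book", "hello little dog")
```

In practice, matching would also need to tolerate heavier OCR noise (e.g. fuzzy matching on edit distance), but the exact-match-after-normalisation step illustrates the shape of the lookup.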
USER TESTING & QA
With a project like this, with such a specific goal and user, it was essential to continually speak to our audience. At the start of the project we worked very closely with the British Deaf Association to understand the challenges facing deaf children as they start to read, and the underlying syntax of sign language. We designed an alpha version of the app with their continued feedback, to ensure we were focussing on the right problems and creating a solution that truly made a difference.
We took this alpha version into Elmfield School for Deaf Children in Bristol to test with children of all ages and abilities; it was the first time we’d seen the app used en masse, which is always rather scary! The children loved the app (phew) but also gave us some clever feedback, which we integrated by amending some of the user interface design.
Above – user testing at Elmfield School for Deaf Children.
When the project broadened from just British Sign Language (BSL), to encompass a further nine languages, we expanded our support network to include the European Union of the Deaf and local sign language interpreters and translators. Having this expertise on set allowed us to record the motion capture for each language and ensure the sign language was correct throughout.
It was a wonderful experience working with signers from all over Europe to bring Star to life, each bringing their own personal performance. If you get a chance to check out the ten different language versions*, please do, as each brings a different dimension to the performance.
It was amazing seeing the reactions of children from all over the globe when the project was launched in Lapland; the excitement on their faces when they saw Star come to life, and their connection between the app and the book, was magical to witness. Hats off to all the production teams at Aardman, FCB Inferno, Huawei, Centroid, Cubic Motion, Comtec and the publishers/owners of the properties that made this vision a reality.
Feedback from the deaf community, teachers, carers and parents has been so positive; a number of times it’s been said that it’s the best animation of sign language they’ve ever seen. For this, we are truly humbled and proud.
I see Star as a superhero, and her superpower is being able to sign. We want to make children proud that they share this special skill and have the confidence to begin their journey to learn to read.
Above – Wrapping the first book at Pinewood Studios, using the sign for clap.
* Languages currently included in the app: British Sign Language (BSL), Irish Sign Language (ISL), Nederlandse Gebarentaal (NGT), Vlaamse Gebarentaal (VGT), Lingua dei Segni Italiana (LIS), Lengua de Signos Española & Llengua de Signes Catalana (LSE & LSC), Langue des Signes Française (LSF), Língua Gestual Portuguesa (LGP), Deutschschweizer Gebärdensprache (DSGS), Deutsche Gebärdensprache (DGS).