Oxford Wordlist App
An interactive digital product to help young learners read.
Project
Oxford Wordlist iPad App
Prototype
Project Year
2017 - 2018
My Role
Lead UX Designer
Mad Tester
I remember smiling inside when this project came to our team. It was a breath of fresh air, and I saw colours. We had mostly been dealing with corporate clients that year, and things were getting dry. Nothing against you guys. We both know you are awesome! But you also know what I mean…
―
A little introduction
Oxford University Press has a rich history that can be traced back to 1668. Having moved firmly into the digital world, they came to us with a problem to solve…
How might we design and build a digital interactive product that helps early learners read high-frequency words?
―
Project
objective
To create an app-based prototype for iPads by researching voice recognition technology.
The prototype will be a word recognition tool that supports children in learning and practising essential high-frequency words, which are critical to learning to read.
To test whether voice recognition technology can be utilised to check children’s oral understanding of words while reading.
The app should be very easy and most importantly fun to use.
End users
Early learners aged 5 to 8 years.
Could also be used by people struggling with reading and understanding basic English words.
Technology
Voice recognition technology (VRT) and Artificial Intelligence (AI).
Prototype goals
Prototype will be built into an iOS app.
This project will utilise 10 of the 100 most frequently used words compiled from children’s writing in 2017 (Oxford Wordlist).
The prototype will be:
• Educational
• Easy to use and personalised
• Gamified (Rewarding the user as progress is made)
UX process used
- Understand
- Market and software research
- Analyse
- Design
- Launch
- Analyse again
―
The challenges
understanding
the problems:
Oh Lord! Here opens the box full of technology, APIs, and voice recognition tools.
Our development team researched and researched and researched. They suggested a couple of APIs we could use, which was good news, but it was also time for me to grab the biggest cappuccino in the world, sit down with my dev team, and learn about those APIs.
Long story short, we decided to use three different APIs (iSpeech, NeatSpeech and the native Speech framework), as we didn’t know which one would work best. AI technology was still new and we were a curious bunch.
One of many brainstorming, API-discussing, sometimes brain-numbing sessions with the team
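Since we didn’t know which API would win, the approach was to keep all three swappable behind one thin interface so the same word check could run against each. Here is a minimal sketch of that idea in Python (the real prototype was an iOS app; all names here — `SpeechRecognizer`, `MockRecognizer`, `check_word` — are hypothetical illustrations, not the production code):

```python
from abc import ABC, abstractmethod

class SpeechRecognizer(ABC):
    """Common interface so the three APIs can be swapped at test time."""
    name: str

    @abstractmethod
    def transcribe(self, audio: bytes) -> str:
        """Return the recognised word for a short audio clip."""

class MockRecognizer(SpeechRecognizer):
    """Stand-in for iSpeech / NeatSpeech / the native Speech framework."""
    def __init__(self, name: str, responses: dict):
        self.name = name
        self.responses = responses  # maps an audio clip to a fake transcript

    def transcribe(self, audio: bytes) -> str:
        return self.responses.get(audio, "")

def check_word(recognizer: SpeechRecognizer, audio: bytes, expected: str) -> bool:
    """Strict match of the transcript against the target word."""
    return recognizer.transcribe(audio).strip().lower() == expected.lower()

# An admin screen could run the same clip through all three recognisers:
recognizers = [
    MockRecognizer("iSpeech", {b"clip": "the"}),
    MockRecognizer("NeatSpeech", {b"clip": "tha"}),
    MockRecognizer("NativeSpeech", {b"clip": "the"}),
]
results = {r.name: check_word(r, b"clip", "the") for r in recognizers}
```

The point of the interface is only that the rest of the app never needs to know which API produced the transcript.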
After understanding how the technology worked, I now had to:
Create an intuitive app showcasing the voice recognition technology.
Make the flow easy enough that a 5-to-8-year-old child could use it after some supervision.
Create an admin section to conduct the testing using different APIs.
Use gamification to keep the kids entertained and interested at the same time.
Create a healthy reward and “try again” based user experience for the children. I had to come up with a flow that would reward users when they got a word right and encourage them to try again if they pronounced it incorrectly, without creating a sense of defeat or discouragement.
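That reward-and-retry flow boils down to a tiny bit of state: reward on success, encourage a retry on a miss, and quietly move on after repeated misses so the child never feels stuck. A rough sketch of the logic, with entirely hypothetical names and messages:

```python
def respond_to_attempt(correct: bool, failed_attempts: int):
    """Return (message, advance_to_next_word) for one pronunciation attempt.

    Rewards a correct word, encourages a retry on a miss, and after the
    third miss advances anyway to avoid a sense of defeat.
    """
    if correct:
        return ("Great job! You earned a star!", True)
    if failed_attempts + 1 >= 3:
        return ("Nice try! Let's look at a new word.", True)
    return ("Almost! Have another go.", False)
```

The only real design decision here is that the "failure" branch never blocks progress — the app always offers a way forward.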
―
The journey (some might call it the “solution”)
Working closely with the client and my team, we set off on the journey to create an interactive, fun prototype that could be tested in a classroom environment.
After the discovery workshop, I went through my notes and created some rough feature lists and user journeys.
Not so fancy sketches
Brainstorming session number ten thousand (probably) for features and calls to action.
With the help of the client, I was able to divide the user journeys into first-time and returning users.
The journeys had to be simple but effective. They had to be quick, and neither boring nor overly complicated, similar to how teachers use flash cards to teach children high-frequency words in a classroom setting.
The new addition was voice recognition and gamification.
We decided to add:
A welcome screen.
A step-by-step process for the AI to recognise the user’s voice.
A minimal profile the user creates while talking to the AI.
Monsters to choose from as an avatar.
A word list to choose words from (the list grows as you level up).
A “press and talk” button, so the voice recognition AI is not active by default.
A system check that tells the user whether they have pronounced the word correctly or not.
A flip screen with a sentence for each word.
An automatic level-up after a user struggles to pronounce a certain word more than three times, to avoid disappointment.
A results / “my board” screen showing the user’s progress, giving them something to come back to.
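The growing word list can be modelled as simply as gating the prototype’s 10 words behind levels. A hedged sketch — the word selection is drawn from the words we later tested, but their ordering and the three-words-per-level pacing are my illustrative assumptions:

```python
# 10 high-frequency words from the 2017 Oxford Wordlist used in the prototype
# (illustrative ordering; the real unlock order differed).
WORDS = ["the", "a", "I", "my", "was", "he", "they", "had", "there", "with"]

def unlocked_words(level: int, per_level: int = 3) -> list:
    """The visible word list grows as the child levels up."""
    return WORDS[: min(len(WORDS), level * per_level)]
```

At level 1 the child sees only a handful of words, and by level 4 the full prototype list is open.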
―
Wireframes ze magic
OMG you guys, this is like my favourite part! After the user journeys were locked in, I started creating wireframes using Sketch.
Several rounds of changes later we were able to create something which we were all happy with.
Sadly, this is how they actually look. But I still love my greyscale babies.
―
Design and
testing
The prototype was designed based on the colour palettes of the printed flash cards. We added fun characters and animations to engage the target audience.
Being a multicultural team helped us test the APIs with different accents. We tested the voice recognition thoroughly for weeks. Sometimes the APIs worked really well, and sometimes they struggled to recognise certain accents.
Especially mine. I mean what the hell!?
―
User testing
summary
Because of the unusual behaviour of the three APIs, we had so far been testing with adult voices. Now it was time to test with children.
We tested the app with 10 kids between the ages of 3 and 12.
Our original assumption was that children’s words should be recognisable no matter what accent or stage of oral language development, because the technology should be able to recognise the phonemes.
From testing with kids, we made an educated assumption that Native Speech was the superior API of the three tested. However, it had significant limitations in understanding young children’s speech. Lisps, braces and, most importantly, common pronunciations of words by young children with an Australian accent greatly impacted the API’s ability to register words that were being enunciated clearly and correctly.
Words most often unrecognised by Native Speech
A: all tests failed (app instructs subject to say ‘uh’, when API wants to register ‘aye’)
He: 7 failed tests out of 10 (subjects speaking clearly, yet API registers as ‘hey’)
Had: 6 failed tests out of 10 (API consistently registers clear pronunciation as ‘head’)
The: 6 failed tests out of 10 (API needs to recognise common pronunciation in Australia is ‘tha’)
They: 4 failed tests out of 10
My: 4 failed tests out of 10
Was: 4 failed tests out of 10
There: 4 failed tests out of 10
On: 3 failed tests out of 10
I: 3 failed tests out of 10
With: 3 failed tests out of 10
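One practical takeaway from these numbers is that a strict string match against the API’s transcript is too brittle for young Australian voices. A per-word set of accepted variants, seeded from the mishearings above, would absorb common child pronunciations. A sketch of that mitigation — the variant table below is an assumption based on our test notes, not an exhaustive mapping:

```python
# Accepted variants per target word, based on what the API actually returned
# when children were speaking clearly (illustrative, not exhaustive).
ACCEPTED = {
    "the": {"the", "tha"},
    "he": {"he", "hey"},
    "had": {"had", "head"},
    "a": {"a", "uh", "aye"},
}

def is_correct(target: str, transcript: str) -> bool:
    """Treat known mishearings of a clearly spoken word as correct."""
    heard = transcript.strip().lower()
    return heard in ACCEPTED.get(target.lower(), {target.lower()})
```

With a table like this, a child who says “the” in an Australian accent and is transcribed as “tha” still gets the reward they earned.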
―
User engagement
summary
While children loved the colours, animations, monsters and the profile-creation journey, they were frustrated every time the app marked their answer wrong even though they were pronouncing it correctly.
Common reactions we got:
I love the monsters.
It’s easy. I can do this without your help.
Why is it wrong when I said it right?
Why doesn’t it hear me?
Why do I have to wait so long to see if I got it right?
Can I just play with the app?
Can I use this and not talk to it?
―
The outcomes and takeaways
The intended tenets of this project were to ensure that the prototype was:
Educational
Easy to use and personalised
Gamified (Rewarding the user as progress is made)
While we achieved all three, success was limited due to the performance of the speech technology. Collaborating with my team and testing the technology in the user’s environment was a crucial part of this project. Without that we would have hit a dead end.
Sadly, no further improvements were made due to limited funding. But regardless, this has been one of my favourite projects. It was fun and I liked it!