Open Voice #05: Integrated Voice
‘Alexa, who’s the first speaker?’ Faithful Open Voice fans will know it’s Maarten Lens-FitzGerald on stage, accompanied by his smart assistant Alexa. After four Open Voice meetups at Mirabeau, we spread our wings and settled down at a brand new venue near the Zuidas: Epicenter, an innovation environment, network and flexible office space for companies looking for digital growth and knowledge exchange.
Maarten opened the show with a video spanning the ages of technology: from the mother of all demos, to the Macintosh launch, to Steve Jobs introducing the iPhone, to present-day Alexa and Google Assistant. Are you ready for the Voice revolution?
This edition of Open Voice was not just about smart speakers, but about integrated Voice: exploring how to use voice to interact with consumer products and devices, like your car and coffee machine. This is arguably the next level of voice interfaces, as they are likely to physically disappear while remaining omnipresent.
Your emotion is in control
Guido Jongen of Nuance, the leader in conversational AI, got to set the bar. As a Sales Director, he helps his clients integrate voice into their business. He stated that, after decades, speech recognition is no longer in Gartner’s hype cycle because mass adoption has started: by 2020, customers will manage 85% of their relationships with companies without interacting with a human. He illustrated this with two Nuance solutions that promise to bring a lot of comfort to our daily lives.
The first is Sky Q, voice-controlled navigation for the Sky set-top box that will soon be available. Via its voice interface, the system serves up exactly what you feel like watching, leaving behind the hassle of navigating with your remote control.
Secondly, he showed a video of a future voice-controlled car with multiple integrated services that will make drivers’ lives easier. Think of the great Augmented Reality feature that provides information about the things you see along the road. Besides speech recognition, Nuance experiments with gaze recognition, which can detect drowsiness and yawning when a driver gets tired. The car then suggests lowering the temperature or guides you to the nearest gas station to rest. Or the car gets more power when you are happy. But what happens when you smile at seeing your own kids playing in front of the house after a day of work? Is your emotion in control of your car? An interesting thought, but obviously not ready yet, and it stresses that testing and learning are crucial to designing a solution that guarantees safety.

Still, integrated Alexa has some challenges, which she amusingly confirmed when Roderick stated that users of the Spinn coffee machine really like it: Alexa interrupted him by suddenly saying ‘I’m not sure’.
State of Voice
After the break, Maarten gave a quick update on the movements in the world of Voice. Adoption has increased rapidly worldwide, with urban China at the forefront with an adoption rate of 22%, followed closely by the US (20%). This is remarkable, as the usage of voice in China is underexposed in the international media. Overall, the number of Voice devices stands at 2 billion. Amazon recently unveiled blueprints that let you create Alexa skills yourself, without the need for coding skills.
Furthermore, Samsung will equip all its devices, from phones to washing machines and fridges, with its voice assistant Bixby 2.0, and will open them up to other voice assistants too. This promises interesting cross-device services in the future.

Do you need the cloud to open your curtains?
Eric Bezzam, from the machine learning team of French voice assistant company Snips, started his talk by explaining what’s in a voice assistant in four steps (see the sketch after this list):
1. Wake word: the assistant listens for a trigger like ‘Alexa’ or ‘OK Google’
2. Automatic speech recognition (ASR): the computer maps speech to text
3. Natural Language Understanding (NLU): the assistant extracts the meaning of the text (your request)
4. Action/dialogue: the assistant answers by acting or talking, based on your request
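To make these four steps concrete, below is a minimal sketch in Python of such a pipeline. It is an illustration only: the ASR and NLU steps are stubbed with toy keyword rules, the wake word check works on plain text, and every name in it (WAKE_WORD, transcribe, parse_intent, handle, assistant) is a hypothetical placeholder rather than part of the actual Snips API.

# Toy sketch of the four-step voice assistant pipeline (illustration only).
WAKE_WORD = "hey snips"

def transcribe(utterance):
    # Step 2 (ASR) stub: a real system maps audio to text; here we just normalize text.
    return utterance.lower().strip()

def parse_intent(text):
    # Step 3 (NLU) stub: extract the request with simple keyword rules.
    if "curtains" in text:
        return "openCurtains"
    if "lights" in text:
        return "turnOnLights"
    return "unknown"

def handle(intent):
    # Step 4: act or answer based on the recognized request.
    actions = {
        "openCurtains": "Opening the curtains.",
        "turnOnLights": "Turning on the lights.",
    }
    return actions.get(intent, "Sorry, I didn't get that.")

def assistant(utterance):
    text = transcribe(utterance)
    # Step 1: stay silent unless the utterance starts with the wake word.
    if not text.startswith(WAKE_WORD):
        return ""
    return handle(parse_intent(text[len(WAKE_WORD):]))

print(assistant("Hey Snips, open the curtains"))  # -> Opening the curtains.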
Snips builds on-device solutions that handle the last three steps. They take a private-by-design approach for specialized requests, like turning on your lights, which require smaller models and less data.
Their services do not rely on remote computers or the cloud, like the assistants of Amazon and Google do. These tech giants pool the data of all your devices to train powerful models. But is that really necessary for habits around your home, like opening your curtains? Snips’ software learns each specific task on the device itself.
Furthermore, the regular smart assistants always listen for the wake word, which raises privacy issues. Your voice is a fundamental part of your identity; imagine all your voice data out there. With tools like Lyrebird, people can easily replicate your voice, with all the consequences that entails.
Make your machine talk
Always wanted to talk to your hoover or blender? Snips invites everyone to start experimenting with their open Maker Kit. Check out this blog on the New Hackster Development Contest.
About Open Voice
Maarten Lens-FitzGerald, Sam Warnaars (aFrogleap, a Merkle company), Daan Gönning (freelancer at Rabobank), Hayo Rubingh and Marna van Hal (both Mirabeau, a Cognizant Digital Business) are the founders of Open Voice: a series of interactive meetups in which we explore the possibilities of voice as a new channel.
Together we enable others to share best practices and learnings, show how voice and conversations fit into the customer journey, and connect people in the emerging voice industry.
Check out the slides, photos and audio recordings of Open Voice #05.
The sixth edition of Open Voice will take place on Tuesday 14 May at Mirabeau.
Want to stay up to date? Follow us on Twitter or subscribe to the Open Voice newsletter.
Do you want to learn more about Open Voice? Get in touch with Hayo Rubingh.