Alexa and her friends may be delighting users in the home by making everyday life easier, but some companies are taking the first bold steps into voice-controlled e-commerce. Conversion.com head of conversion strategy Kyle Hearnshaw explores
The smart-home revolution is in full swing. The success of the Amazon Echo and its Alexa ‘skills’ platform and the launch of Google Home have taken the idea of voice control and voice-controlled e-commerce from a novelty concept to a legitimate potential revenue channel for retailers willing to take the risk.
Early brands to explore this opportunity include Uber and Just Eat, and earlier this summer Domino’s Pizza launched its Alexa skill in the UK, after more than a year of offering it in the US. It allows you to order pizza with just a few words.
We’ve yet to see data on how many sales these brands are generating through their voice-control channels, but the phased deployment from Domino’s certainly suggests they are seeing enough value to justify the investment.
Designing a successful voice-controlled experience isn’t going to be easy. Looking at this from a user experience and conversion rate perspective, voice control is a whole new touch-point and interaction type to understand.
In traditional conversion rate optimisation for e-commerce sites, potential reasons why a user might abandon and not complete a purchase fall into two categories – usability and persuasion.
Usability issues would be anything that physically prevents the user from being able to complete their desired action – broken pages, links or problems with completing a form or online checkout.
As for persuasion – even a site with no usability issues wouldn’t convert 100 per cent of its visitors. Persuasion will always play a part in the user’s decision-making process. Have they been sufficiently convinced to purchase this product or service? Typical persuasion issues include failing to communicate the benefits of a product convincingly.
So what does the future look like in a voice-controlled world?
In traditional e-commerce, the user is free to make their own journey through a website, and we enable that freedom by displaying a range of content, products, deals and offers, navigation options and search functionality. With voice control, the possible journeys to purchase are far fewer and almost completely invisible to the user at the outset. With an Alexa skill, for example, the developer must define in advance the trigger phrases a user can say and the set of actions those phrases map to.
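As a rough illustration – this is not the real Alexa Skills Kit API, and the intent names and phrases below are entirely hypothetical – a skill’s interaction model boils down to a fixed mapping from sample utterances to intents:

```python
# Minimal sketch of how a voice skill maps spoken phrases to intents.
# Intent names and sample utterances are invented for illustration;
# a real skill defines these in its platform's interaction model.

INTENTS = {
    "OrderPizzaIntent": [
        "order my usual pizza",
        "order a large pepperoni pizza",
    ],
    "TrackOrderIntent": [
        "where is my order",
        "track my pizza",
    ],
}

def resolve_intent(utterance):
    """Return the intent whose sample utterances match what was said."""
    spoken = utterance.lower().strip()
    for intent, phrases in INTENTS.items():
        if spoken in phrases:
            return intent
    return None  # unrecognised -- the skill must decide how to respond
```

The point is the constraint, not the implementation: anything the user says that falls outside the pre-defined phrases returns no intent at all, which is why the choice of phrases matters so much.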
Skill and experience in voice interaction design will emerge as a crucial requirement for any team looking to develop this channel. Collecting and analysing data on how users are invoking your app/skill, what exact words and phrases they’re using, how they’re describing your products and service and how they’re talking to your app through their journey, will be an essential part of experience optimisation.
Another area that will dominate user experience for voice control will be how the app responds to user mistakes. Frustration will be the worst enemy of voice-controlled services, far more so than it is with websites now. If you’ve been unlucky enough to have to call an automated helpline that uses voice control, you will know how quickly the frustration builds when something goes wrong.
On a website, if the user gets stuck or confused on their journey, it’s relatively easy for them to go back or to navigate away from the page and try again. With voice control, this isn’t the case. If the user tries a command that isn’t recognised by the app, it can only reply with a brief error message. Failure to re-engage the user and keep them trying will quickly result in frustration and even abandonment.
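One defensive pattern – sketched here with invented helper names and messages, not any real SDK – is never to answer an unrecognised command with a dead-end error, but to reprompt with the options that are actually available, and to bail out gracefully after a couple of failures:

```python
# Sketch of a reprompt strategy for unrecognised voice commands.
# All names, limits and messages here are illustrative assumptions.

AVAILABLE_ACTIONS = ["order a pizza", "track your order", "repeat your last order"]
MAX_RETRIES = 2

def handle_unrecognised(retry_count):
    """Re-engage the user instead of returning a bare error."""
    if retry_count >= MAX_RETRIES:
        # Offer an exit route rather than trapping the user in a loop.
        return "Sorry, I'm having trouble understanding. You can also order on our website."
    options = ", ".join(AVAILABLE_ACTIONS)
    return f"Sorry, I didn't catch that. You can say: {options}. What would you like to do?"
```

Surfacing the valid phrases in the reprompt does double duty: it recovers the current session and teaches the user the vocabulary the skill understands.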
So how do you persuade a user to complete their purchase once they’ve started their voice-controlled interaction? How would you describe the benefits of a certain washing machine, laptop or TV when they can only be spoken, and spoken by a robotic voice at that?
The development of chatbots in the past couple of years has seen a lot of investment and progress in making automated responses appear human and engaging. But that progress has almost all been in presenting text responses rather than voice responses, and voice responses are inherently more complex.
Will developments in Alexa’s AI allow her to improvise responses based on prior knowledge of the user? Personalisation within the voice space could allow Alexa to make tailored recommendations based on my purchase history.
“Alexa, look on Currys for a new kettle.”
“OK Kyle. There’s a black Breville kettle that would look great with the Breville toaster you bought last month. It’s £39. Is that OK?”
“You bought your last kettle 18 months ago. Shall I add the three-year warranty on this one for an extra £9.99?”
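A tailored response like that could, in principle, be driven by nothing more exotic than the user’s order history. Here is a toy sketch – the catalogue, prices and brand-matching rule are entirely invented, and a real recommender would be far richer:

```python
# Toy sketch of brand-matched recommendations from purchase history.
# Products, prices and the ranking rule are invented for illustration.

PURCHASE_HISTORY = [{"product": "Breville toaster", "brand": "Breville"}]

CATALOGUE = [
    {"product": "Breville black kettle", "brand": "Breville", "price": 39.00},
    {"product": "Generic kettle", "brand": "Generic", "price": 19.00},
]

def recommend(query_category):
    """Prefer catalogue items whose brand matches something the user already owns."""
    owned_brands = {item["brand"] for item in PURCHASE_HISTORY}
    candidates = [p for p in CATALOGUE if query_category in p["product"].lower()]
    # Rank brand matches first, then cheaper items within each group.
    candidates.sort(key=lambda p: (p["brand"] not in owned_brands, p["price"]))
    return candidates[0] if candidates else None
```

Under this rule, asking for a kettle surfaces the Breville one first because the user already owns a Breville toaster – the same logic the imagined Alexa exchange above relies on.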