
Voice Applications

Voice apps help users schedule appointments, make purchases and search for goods and services.

Challenge: How to add voice tech to your app?

Voice technology is playing an increasing role in how people interact with data. We use voice tech to update our appointments, start exercise routines, change a song that is playing, buy groceries, and control the lights in the house. Almost any daily task can have a voice element included.

Adding voice to an Ecommerce or scheduling app can significantly increase conversion and long-term loyalty.

Smart speakers such as Alexa or Google Home work by listening for trigger words (“Alexa...” or “Hey Google…”), taking the words that follow, processing them into text, and passing that text as keywords into their respective search engines. Applications and websites can take advantage of this in much the same way that typing a string of keywords can lead directly to a product detail page or a specific item in a list. In a standard browser, keywords take us to a page full of information, where we can find the specific data point and then click to perform the subsequent action (add an item to our cart, look up a flight, and so on). The challenge with voice-initiated interaction is that Alexa or Google Home isn’t going to read us the entire contents of the landing page; it will only respond with a short spoken sentence about the first thing it finds.
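To make that concrete, below is a rough sketch of the speaker-side hand-off for an Alexa custom skill backed by a plain Python function (the raw request/response JSON, no SDK). The intent and slot names are illustrative, not a real skill definition.

def lambda_handler(event, context):
    # Alexa has already turned the spoken words into a structured IntentRequest.
    request = event.get("request", {})

    if request.get("type") == "IntentRequest":
        intent = request["intent"]
        if intent["name"] == "FlightStatusIntent":  # hypothetical intent
            flight = intent.get("slots", {}).get("flightNumber", {}).get("value")
            speech = f"Flight {flight} departs at 6:45 PM."  # would come from your data
        else:
            speech = "Sorry, I can't help with that yet."
    else:
        speech = "Welcome. What would you like to do?"

    # The smart speaker reads back only this one short sentence.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }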

The challenge is configuring your application or data set so that a simple, common-language voice command can reach a specific piece of information (e.g., when my flight leaves) or trigger a specific action (e.g., adding milk and orange juice to my curbside pickup).
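One way to picture that preparation: alongside the keyword search a browser user would land on, the app also exposes narrow, parameterized lookups that answer a single question or perform a single action. The function names and sample data here are hypothetical.

# Hypothetical data and granular "action points".
FLIGHTS = [
    {"confirmation": "ABC123", "route": "SFO to JFK", "departure": "6:45 PM"},
]

def search_flights(keywords):
    # Browser-style: returns a whole results page the user scans and clicks through.
    return [f for f in FLIGHTS if keywords.lower() in f["route"].lower()]

def get_departure_time(confirmation_code):
    # Voice-style: answers the single question "when does my flight leave?"
    flight = next(f for f in FLIGHTS if f["confirmation"] == confirmation_code)
    return flight["departure"]

# A voice command can hit get_departure_time("ABC123") directly and get "6:45 PM".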

Interplay has all the modules you need to create an interactive, customer-focused voice app...fast.

Solution

There are several steps here:

  • Prepare your data set to have hooks or action points at a granular level
  • Connect to the Alexa or Google Home interface to get the text result of the spoken command
  • Interpret that text as a specific action, using AI to understand phrases like “when is my ___”, “add ___ to my pickup”, etc.
  • Connect the appropriate AI-determined command to an action point (prepared in the first step), as sketched in the example after this list
  • Follow through with the entire chain of actions, including connections to payment gateways, reservation systems, etc.
  • Prepare the appropriate completion message and send it back to the smart speaker for audio confirmation
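As referenced in the list, here is a minimal sketch of how those steps might chain together: the AI-determined intent is routed to a matching action point, and the string it returns becomes the spoken confirmation. Every intent name, handler, and back-end call here is hypothetical.

def add_to_pickup_order(items):
    # In a real app this would call your order, inventory, and payment APIs.
    return f"I've added {' and '.join(items)} to your curbside pickup."

def flight_departure(confirmation_code):
    # In a real app this would call your reservation system.
    return f"Your flight {confirmation_code} leaves at 6:45 PM."

# The "intent switch": AI-determined intents mapped to action points.
ACTION_POINTS = {
    "AddToPickupIntent": lambda slots: add_to_pickup_order(slots["items"]),
    "FlightStatusIntent": lambda slots: flight_departure(slots["confirmation"]),
}

def handle_intent(intent_name, slots):
    action = ACTION_POINTS.get(intent_name)
    if action is None:
        return "Sorry, I didn't catch that."
    return action(slots)  # this string goes back to the speaker as the audio confirmation

# handle_intent("AddToPickupIntent", {"items": ["milk", "orange juice"]})
# -> "I've added milk and orange juice to your curbside pickup."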

To build this out, several pieces of technology come into play: the voice SDK from Alexa or Google Home to get the spoken words as text; preparation of your app or website to accept actions at a granular level; AI text processing to correctly parse and “understand” the text; connections to the appropriate APIs at the inventory, order, or content data level to perform the action; and messaging sent back to the smart speaker.
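For the parsing and “understanding” step, one option in the spirit of the NLP/RASA module listed below is a Rasa NLU model. As a rough sketch, assuming a Rasa server is running locally with its HTTP API enabled, a single request returns the parsed intent and entities; the intent name depends entirely on how your model was trained.

import requests

# Assumes a Rasa server started with `rasa run --enable-api` on its default port.
resp = requests.post(
    "http://localhost:5005/model/parse",
    json={"text": "add milk and orange juice to my pickup"},
)
parsed = resp.json()

print(parsed["intent"]["name"])                          # e.g. "add_to_pickup"
print([e["value"] for e in parsed.get("entities", [])])  # e.g. ["milk", "orange juice"]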

Interplay already has modules for all of these steps. We’ve connected voice commands to Ecommerce purchases, FAQs, curbside pickups, and more. We’re expanding these capabilities into appointment scheduling, services, and deeper database lookups. If you would like to add voice to your apps, let us know-- it could be faster and easier than you think.

Modules

  • Alexa/Google In to receive Alexa/Google input
  • Alexa/Google Out to send output back as Alexa/Google voice
  • Alexa/Google Intent Switch to determine which predefined intent applies
  • Alexa/Google Action Cards to perform various actions
  • NLP/RASA for custom language processing needs