
How to build an in-car voice assistant with Dasha AI

You have probably heard about the rising popularity of in-car assistants. In the U.S. alone, over 60% of people say they are inclined to buy a car with an assistant installed. Nearly 130 million people have already tried using one in their car, and over 80 million enjoyed the experience so much that they now use their in-car voice assistant constantly.

Let’s explore how you might go about creating your own in-car voice assistant app with Dasha conversational AI studio: an app that not only provides basic information like weather conditions and navigation, but goes a step further, analyzing the environment inside and outside the car and offering and executing actions.

Voice assistants for cars are trending and here is why

In-car voice assistants are about to become present in every car on the market. Research conducted by Capgemini predicts that 95% of car owners will be using an in-car conversational assistant by 2022, which is right around the corner.

The number of drivers in the U.S. increases by millions each year, as does the number of cars on the roads. Car manufacturers are factoring this fact in when designing and building vehicles since the top priority is the safety of the driver, passengers, other drivers, and pedestrians. A safe driving environment requires full concentration on the road and on the driving itself; however, since we live in a fast-paced world, we need things to be done on the go. With our hands and eyes busy, voice is our primary way of communication, which makes in-car assistants imperative.

The reasoning behind the popularity of in-car assistants is that the technology helps drivers keep a safe driving environment while performing tasks and actions on the go. Nowadays, voice assistants can not only take and make calls, play music, and navigate, but also book appointments and restaurant tables and suggest actions the driver might want to take based on road and weather conditions. According to Kardome, with the increasing popularity of in-car voice assistants, consumers are looking for more than basic functions: they expect their assistants to support various areas of life and help control in-car conditions. Those in-car controls include, to name a few, opening and closing the windows or the trunk, locking and unlocking the doors, help with parking, AC control, and managing car settings in response to the time of day and weather conditions. Let’s explore how to make a voice assistant app that improves the driving experience by offering to turn on the snow mode and the heat.

How to make an in-car voice assistant app with Dasha conversational AI

Let’s take a look at how you can create a simple in-car assistant conversational AI app using Dasha AI.

If this is your first introduction to the technology, please follow this link, which will guide you through the process of installing the programs you need to create a voice assistant for cars.

When you’re all set, go to GitHub and download the Dasha in-car assistant demo. Open Visual Studio Code and open the folder with the code; alternatively, you can clone the project directly from your terminal with git. Let’s see what’s happening in the code and how you can make your own app based on it.

Note: we’ll go through this particular code sample file by file.

Go to main.dsl

Let’s take a look at the first few lines of the code. In this part, the context is declared and the variables that we will use in this context are written out.

context {
    input phone: string,
    currentTarget: string? = null,
    currentAction: string? = null,
    newInfo: boolean = false;
    forgetTime: number = 15000;
    yesNoEnabled: boolean = false;
    lastIdleTime: number = 0;
}

type CallResultExtended = {
    success: boolean;
    details: string;
    action: string;
    target: string;
};

type CallResult = {
    success: boolean;
    details: string;
};

Now that we’ve got that part out of the way, we can move on to lines 29-48.

preprocessor digression targetFiller {
    conditions {
        on #messageHasData("car_function") or #messageHasData("car_part") priority 2000;
    }
    do {
        set $newInfo = true;
        if (#messageHasData("car_function")) {
            set $currentTarget = #messageGetData("car_function")[0]?.value;
            return;
        }
        if (#messageHasData("car_part")) {
            set $currentTarget = #messageGetData("car_part")[0]?.value;
            return;
        }
        return;
    }
}

In the 'preprocessor digression targetFiller' part, we program the conversational AI app to understand which car part or car function the driver is referring to and to store it as the current target.

At this point, the driver can mention either a car part or a car function, and the digression will pick it up as the target, as shown in line 33:

on #messageHasData("car_function") or #messageHasData("car_part") priority 2000;

Now that the AI has one piece of information, we need to teach it to collect the rest. If it classifies the received info as a target (a car part), it remembers it and asks what the driver would like to be done with that part. For example, if the driver says “the window”, the AI asks “What should I do with the window?”, and once it receives an “open” or “close” command, it proceeds accordingly.

Note that it’s not necessary for the driver to specify the action and the car part separately: they can say “open the window” and, since both fillers are then known, the AI will immediately open the window.

Speaking of actions and car parts, you can check them out under intents and entities in the data.json file. We’ll take a look at the actions below, but for now, let’s focus on the car parts.

"entities": { "car_part": { "open_set": false, "values": [ { "value": "trunk", "synonyms": ["boot", "luggage", "luggage boot"] }, { "value": "window", "synonyms": ["windows"]} ] },

Entities are the words or phrases that are extracted and categorized under a more general word (or phrase). Here we have "trunk" and "window", which are parts of the car. To make your in-car voice assistant understand that the driver wants to open the trunk when it hears the "open the boot" command, it’s necessary to add synonyms when writing out the entities.
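If you want the assistant to know more car parts, for example the doors mentioned earlier, you can add them to the same list. The "door" entry below is a hypothetical illustration and is not part of the demo:

"car_part": {
    "open_set": false,
    "values": [
        { "value": "trunk", "synonyms": ["boot", "luggage", "luggage boot"] },
        { "value": "window", "synonyms": ["windows"] },
        { "value": "door", "synonyms": ["doors", "car door"] }
    ]
},

Keep in mind that a new car part only becomes useful once the index.js side knows how to handle it as a target.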

Let’s move on to lines 50-68.

preprocessor digression actionFiller {
    conditions {
        on #messageHasAnyIntent(digression.actionFiller.commands) priority 2000;
    }
    var commands: string[] = ["turn on", "turn off", "open", "close"];
    do {
        set $newInfo = true;
        for (var command in digression.actionFiller.commands) {
            if (#messageHasIntent(command)) {
                set $currentAction = command;
                return;
            }
        }
        return;
    }
}

In the 'preprocessor digression actionFiller' part, we capture which action the driver wants performed once one of the commands is recognized ('on #messageHasAnyIntent(digression.actionFiller.commands) priority 2000;').

In this particular instance, we see that the commands are written out in line 56: "turn on", "turn off", "open", and "close". #messageHasAnyIntent is what triggers the digression when any of these commands is recognized.

{ "version": "v2", "intents": { "open": { "includes": [ "open", "open the (trunk)[car_part]", "open (trunk)[car_part]", "(trunk)[car_part] open" ] }, "close": { "includes": [ "close", "close the (trunk)[car_part]", "close (trunk)[car_part]", "(trunk)[car_part] close" ] }, "turn on": { "includes": [ "turn on", "enable", "turn on the (headlights)[car_function]", "turn on (light)[car_function]", "turn (light)[car_function] on", "turn the (light)[car_function] on", "enable the (light)[car_function]", "enable (light)[car_function]", "(light)[car_function] on" ] }, "turn off": { "includes": [ "turn off", "disable", "disable the (light)[car_function]", "disable (light)[car_function]", "turn off the (light)[car_function]", "turn off (light)[car_function]", "turn (light)[car_function] off" ] }

Let’s take a quick look at what is going on in the intents. Each of the commands we specified earlier is followed by “includes”, which means that the phrases listed under it are the triggers for that specific command. Notice that, for example, in the “turn off” part we have the word light in parentheses and car_function in brackets. This doesn’t mean the "turn off" command will only be triggered when the driver wants the lights turned off; it covers every car function that you program the app to know.
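If you want the assistant to react to more phrasings, you simply add them to the "includes" list of the corresponding intent. For example, a hypothetical extra trigger for "open" (not part of the demo) might look like this:

"open": {
    "includes": [
        "open",
        "open the (trunk)[car_part]",
        "open (trunk)[car_part]",
        "(trunk)[car_part] open",
        "pop the (trunk)[car_part]"
    ]
}

The (trunk)[car_part] notation marks an example value and the entity it belongs to, so the same phrasing works for any car part the model knows.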

For the purposes of this demo, we’ve defined four car functions, which are listed in the index.js file.

The default car settings, in this case, are that the light (and the AC) are turned off. The "true" value indicates that a command can be performed and "false" shows the contrary, hence 'light: { "turn on": true, "turn off": false }'. The same logic applies to the trunk and the windows, which are closed by default.
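The demo’s index.js isn’t reproduced in this post, so here is a minimal sketch of what that state, and the command external the DSL code calls, could look like on the Node.js side. All names and messages, and the assumption that the four functions are the light, the AC, the snow mode, and the heat, are illustrative only; check the demo’s index.js for the real implementation and for how the function is registered as an external with the Dasha SDK.

// A minimal sketch of the car state and the "command" external (not the demo's actual index.js).
const car = {
  light:       { "turn on": true, "turn off": false },
  AC:          { "turn on": true, "turn off": false },
  "snow mode": { "turn on": true, "turn off": false },
  heat:        { "turn on": true, "turn off": false },
  trunk:       { open: true, close: false },
  window:      { open: true, close: false },
};

// Which action undoes which, used to flip the availability flags after a command.
const opposite = { open: "close", close: "open", "turn on": "turn off", "turn off": "turn on" };

// Called from DSL as: external command($currentTarget, $currentAction)
// Returns a CallResult: { success, details }
function command(target, action) {
  const state = car[target];
  if (!state || !(action in state)) {
    return { success: false, details: `Sorry, I can't ${action} the ${target}.` };
  }
  if (!state[action]) {
    // The flag is false, meaning the target is already in the requested state.
    return { success: false, details: `The ${target} is already that way.` };
  }
  state[action] = false;            // this command is no longer available...
  state[opposite[action]] = true;   // ...but its opposite now is
  return { success: true, details: `Done, executing "${action}" for the ${target}.` };
}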

Now that the conversational AI app has received a command, it remembers it and sets the current action to that command ('set $currentAction = command;').

Lines 70-119 describe how our in-car voice assistant checks for specific conditions and asks the driver whether they want an action performed based on those conditions. Consider an example: it’s winter and it’s snowing outside. The car has a snow mode and is equipped with an anti-lock braking system and traction control. When the car voice assistant gets a signal that the owner is driving in a snowy environment, it says: “It's snowing outside. Would you like to turn the snow mode on?” Here the target is "snow mode" and the action is “turn on”. We’ve programmed the voice AI assistant to check for such conditions every 8000 ms (8 seconds; note that the timing can be changed).

Now it’s up to the driver whether to let the in-car voice assistant turn on the snow mode. Should the driver say “yes”, the assistant recognizes it as positive sentiment ('#messageHasSentiment("positive")') and follows the command. It’s important not to forget that the driver might repeat after the voice assistant and say something like “turn the snow mode on” ('#messageHasIntent($currentAction??"")'), and the AI will still recognize the command. After turning the snow mode on, the driver is told the result ('#sayText(result.details);'). If the driver declines, the assistant notes that the sentiment was negative and says “as you wish” ('#say("asYouWish");').

digression yesOrNo {
    conditions {
        on $yesNoEnabled;
    }
    do {
        if (#messageHasSentiment("positive") or #messageHasIntent($currentAction??"")) {
            // need both a target and an action before calling the external command
            if ($currentTarget is null or $currentAction is null) { return; }
            var result = external command($currentTarget, $currentAction);
            #sayText(result.details);
        }
        else if (#messageHasSentiment("negative")) {
            #say("asYouWish");
        }
        else {
            set $yesNoEnabled = false;
            return;
        }
        set $lastIdleTime = #getIdleTime();
        set $currentTarget = null;
        set $currentAction = null;
        set $yesNoEnabled = false;
        set $newInfo = false;
        return;
    }
}

preprocessor digression commandUpdater {
    conditions {
        on #getIdleTime() - $lastIdleTime > 8000 tags: ontick;
    }
    do {
        var result = external checkCommandUpdate();
        if (result.success) {
            #sayText(result.details);
            set $currentTarget = result.target;
            set $currentAction = result.action;
            set $lastIdleTime = #getIdleTime();
            set $yesNoEnabled = true;
        }
        set $lastIdleTime = #getIdleTime();
        return;
    }
}
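On the Node.js side, checkCommandUpdate is the external that decides whether there is something worth offering to the driver. Below is a minimal sketch under the same assumptions as the command sketch above: it reuses that car object, and readSensors is a made-up stub standing in for real vehicle telemetry. The demo’s index.js contains the actual implementation and the SDK wiring.

// Hypothetical stub standing in for real vehicle telemetry.
function readSensors() {
  return { snowing: true, cabinTemperature: 21 };
}

// Called from DSL every 8 seconds via "external checkCommandUpdate()".
// Returns a CallResultExtended: { success, details, action, target }.
function checkCommandUpdate() {
  const sensors = readSensors();
  if (sensors.snowing && car["snow mode"]["turn on"]) {
    return {
      success: true,
      details: "It's snowing outside. Would you like to turn the snow mode on?",
      action: "turn on",
      target: "snow mode",
    };
  }
  // Nothing to suggest right now.
  return { success: false, details: "", action: "", target: "" };
}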

After the snow mode has been turned on, the assistant resets the target and the action to their defaults and goes back to checking for new commands every 8 seconds. The forgetTarget digression below does a similar cleanup: if nothing new has happened for $forgetTime (15 seconds by default), it clears the stored target, action, and yes/no state.

preprocessor digression forgetTarget {
    conditions {
        on #getIdleTime() - $lastIdleTime > $forgetTime tags: ontick;
    }
    do {
        set $lastIdleTime = #getIdleTime();
        set $currentTarget = null;
        set $currentAction = null;
        set $yesNoEnabled = false;
        return;
    }
}

Another command that is useful in wintertime is for the assistant to check the temperature inside the car and offer to turn the heat on if the temperature drops below a certain point.
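Under the same assumptions, that check could be one more branch inside the checkCommandUpdate sketch above; the 15-degree threshold is an arbitrary value for illustration:

// Hypothetical extra branch for the checkCommandUpdate sketch above (threshold chosen arbitrarily).
if (sensors.cabinTemperature < 15 && car.heat["turn on"]) {
  return {
    success: true,
    details: "It's getting cold in the car. Would you like to turn the heat on?",
    action: "turn on",
    target: "heat",
  };
}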

Let’s consider a situation where the assistant offers to turn the heat on but the driver urgently needs to open the window. In this case, it’s critical for the conversational AI to recognize that the new command takes priority and to open the window; this is exactly what the command digression below is for (you can read more about digressions in the Dasha documentation).

digression command {
    conditions {
        on $newInfo;
    }
    do {
        set $newInfo = false;
        if ($currentAction is null) {
            #say("whatAction", { target: $currentTarget });
            return;
        }
        if ($currentTarget is null) {
            #say("whatTarget", { action: $currentAction });
            return;
        }
        var result = external command($currentTarget, $currentAction);
        #sayText(result.details);
        set $lastIdleTime = #getIdleTime();
        return;
    }
}

And that’s it! Now you have an in-car voice assistant that is able to turn different car modes on and change the conditions inside the car.

It’s time to run the demo and test it out!

Last but not least

We’ve explored how easy it is to create an in-car voice assistant with Dasha AI. Your new conversational-AI-as-a-service app can fit in a mere 200 lines of code or even fewer, depending on what capabilities you want it to have. Why not try playing with the demo and see your own car voice assistant come to life?
