Opinions expressed by Entrepreneur contributors are their own.
Artificial intelligence (AI) is the next big thing in product design. AI assistants will be an integral part of next-generation products because of their versatility: They can handle many different tasks, and users won't need to spend extra time learning the system, because interacting with AI products will resemble human conversation.
Here’s an overview of the foundational principles of human-computer interaction and some knowledge we’ve gained while designing AI assistants.
Understand the context of use
Context of use (where and how your product will be used) is the first thing you need to understand when creating a new AI experience. Many product-design decisions will be based on this understanding. For example, the interaction with a smart speaker in your room will be different from an interaction with a voice-enabled assistant in your car. Session time (seconds, minutes, etc.), interaction medium (voice, touch, etc.) and attention span (amount of time spent concentrating on a task) will be different.
Related: Google Assistant Comes to the iPhone
Define a core set of skills and key scenarios of interaction
Skills are AI-assistant abilities, and they should be added based on the functionality you want to provide in your product. Turning lights on and off, calling a taxi and playing music are examples of skills. Scenarios of interaction are examples of how people use those skills.
There are two types of AI assistants: universal assistants like Apple’s Siri or Amazon’s Alexa, which are capable of doing many different things, and niche assistants used in a particular domain (e.g., a finance AI assistant). No matter what type of assistant you design, you need to research user needs, carefully prioritize them and define a core set of skills and scenarios of interaction.
Here are a few recommendations that will help you define core skills:
Focus on utility. AI assistants should always serve a clear functional purpose. Learn about your target audience and their expectations about your product. Find specific user tasks that can be improved with the help of an AI assistant. For example, if you design a car-voice-based assistant, you can focus on daily tasks such as finding the nearest gas station or parking spot.
Think about discoverability. How will users know that a particular skill is available in your product? You need to introduce mechanisms that will help users discover functionality.
Collect feedback from your users. Introduce a feedback mechanism in your product that will allow users to submit requests for new skills. It will help you collect insights into what your users expect from your product.
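The three recommendations above can be sketched in code. The following is a minimal illustration, not a production design; the `Skill` and `Assistant` names and their structure are hypothetical, chosen only to show a skill registry with a discoverability query and a feature-request channel:

```python
from dataclasses import dataclass


@dataclass
class Skill:
    """One assistant ability, e.g. controlling lights or playing music."""
    name: str
    description: str


class Assistant:
    def __init__(self):
        self.skills = []            # core set of skills, added deliberately
        self.feature_requests = []  # user feedback about missing skills

    def register(self, skill: Skill):
        self.skills.append(skill)

    def list_skills(self):
        # Discoverability: lets the assistant answer "What can you do?"
        return [f"{s.name}: {s.description}" for s in self.skills]

    def request_skill(self, text: str):
        # Feedback mechanism: collect requests for skills you don't have yet
        self.feature_requests.append(text)
```

A user asking "What can you do?" would be answered from `list_skills()`, while unmatched requests can be logged via `request_skill()` to guide future prioritization.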
For each scenario of interaction, you need to do the following:
Write a general dialog of interaction. Interaction with AI assistants shouldn’t break patterns that have evolved over the years in human-to-human conversation. When you write a dialog, always think about your users — think about the exact phrases that a user might use. A storyboarding technique can help you describe how people will interact with AI assistants.
Capture all possible user intentions. Once you create a general dialog between a user and a machine, you need to outline all possible alternatives. Different people might use different words and follow different paths when interacting with the AI assistant. For example, when users ask an AI assistant to turn on the music, they might say “Assistant, turn the music on” or “Assistant, play jazz.” Create a dialog tree with all possible variations.
Take cognitive load into account. Users can’t keep a lot of information in their short-term memory. When designing an AI assistant, you need to minimize the length of phrases you use and the number of options you provide.
Design for happy and unhappy paths. A happy path is when everything goes as planned (the user achieves their goal with the help of an AI assistant), and unhappy paths represent situations when, for some reason, an AI assistant isn’t able to help the user. Users often evaluate product experience based on how well the product is designed for unhappy paths. Even when a product can’t solve a problem, the system should minimize the negative impression.
Give users more freedom. When users interact with an AI assistant, they might want to go back and modify data they’ve provided in previous steps. For example, in the scenario of calling a taxi, the user might want to specify the exact time they expect the cab to arrive. The scenario you design should support such behavior.
Practice your scenarios. By playing out your scenarios with an AI assistant, you will identify areas to improve. A technique called Wizard of Oz can help you validate your scenarios without building a product.
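The intent-capture and unhappy-path steps above can be sketched as a tiny dialog handler. This is a hedged illustration only; the intent names and trigger phrases are hypothetical, and a real assistant would use a natural-language-understanding model rather than substring matching:

```python
# Map different phrasings to the same intent, per the "play music" example:
# "turn the music on" and "play jazz" should both resolve to play_music.
INTENT_PHRASES = {
    "play_music": ["turn the music on", "play jazz", "play some music"],
    "call_taxi": ["call a taxi", "get me a cab"],
}


def match_intent(utterance: str) -> str:
    text = utterance.lower().strip()
    for intent, phrases in INTENT_PHRASES.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "fallback"  # unhappy path: the request didn't match any skill


def respond(utterance: str) -> str:
    intent = match_intent(utterance)
    if intent == "fallback":
        # Soften the failure and restate available options instead of
        # leaving the user with a bare error.
        return ("Sorry, I can't do that yet. "
                "You can ask me to play music or call a taxi.")
    return f"OK, handling: {intent}"
```

Note how the fallback branch keeps the response short (low cognitive load) and steers the user back toward supported skills, which is what "designing for the unhappy path" means in practice.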
Provide system feedback
Voice is the primary method of interaction for most AI assistants. But in voice-based interactions, there is a high risk that the system will misunderstand the user. There can be many reasons the system fails to decode a message. To avoid responding to an incorrectly recognized request, the system should echo the user’s query back as feedback. Showing the original query confirms that the AI assistant understood what the user said. Users should never have to wonder, “Did the system get what I said?” when they interact with the system.
Related: Your Human Virtual Assistant Will Soon Be an AI-Driven Digital Assistant
Another critical case where you need to provide system feedback is when a system needs some time to complete the operation. In this case, we should tell the user that the system is working on their request.
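Both feedback cases can be combined in one handler. A minimal sketch, assuming a hypothetical slow backend call (`find_gas_station` is invented for illustration): the assistant first echoes the recognized query, then acknowledges that work is in progress before delivering the result.

```python
import time


def find_gas_station() -> str:
    """Hypothetical stand-in for a slow backend lookup."""
    time.sleep(0.1)
    return "The nearest gas station is 2 miles ahead."


def handle_request(raw_query: str) -> str:
    # 1. Echo the recognized query so the user never has to wonder
    #    "Did the system get what I said?"
    print(f'You said: "{raw_query}"')
    # 2. Acknowledge long-running work before the result arrives.
    print("Working on it...")
    result = find_gas_station()
    print(result)
    return result
```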
Create familiar visual language
Since AI assistants are voice-first products, creating a visual design for an AI product seems like a secondary task. However, humans are visual creatures, and what we see has a significant impact on how we perceive products. Emotions play a tremendous role in user experience.
So when we create a visual language for an AI assistant, we need to think about how it will make users feel. Visual language includes shapes, color, typography and motion effects. They work together to create an impression for users. A finely crafted UI makes a positive impression on users; the aesthetic-usability effect states that users are more tolerant of minor usability issues when they find an interface visually appealing.
It’s essential to give your assistant a visual presence. While AI is a bit abstract, it is still possible to find objects that users will associate with it. 3D spheres or sound waves are two common objects that represent AI in modern products (thanks to sci-fi movies).
Motion effects strongly influence how users feel about an interaction because they introduce dynamism. As a user interacts with the assistant, the object should respond to user input with appropriate visual feedback. For example, a moving wave can mean that the system is listening to the user right now. All animated effects that you use in your product should be easy to follow and predictable. Avoid sudden changes or unclear movements because they can confuse users.
While designing, consider how the dialogue will drive the visual interface you present to the user. Aim for a fluid user interface: one in which new objects or functional controls appear based on user input.
Related: Here’s What AI Will Never Be Able to Do
The real magic of AI happens when the system is capable of solving a particular task and doing it in a way that makes users believe that the system truly knows them. AI should utilize all information about a user and turn it into value for the user. For example, AI assistants can learn users’ music preferences and surprise them with appropriate options whenever they choose to listen to music. It’s critical to find the right moment to show the power of AI; an improperly chosen moment can create a negative impression, annoying users or pushing them to do things they don’t want to do.
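The music-preferences example above can be sketched very simply. This is a deliberately naive illustration (the `MusicPreferences` class is hypothetical; real systems use recommendation models): the assistant counts plays per genre and falls back to asking the user when it has no history yet, rather than guessing at the wrong moment.

```python
from collections import Counter


class MusicPreferences:
    """Learn a user's favorite genre from their play history."""

    def __init__(self):
        self.plays = Counter()

    def record_play(self, genre: str):
        self.plays[genre] += 1

    def suggest(self) -> str:
        if not self.plays:
            # No history yet: ask instead of guessing, so the first
            # impression of "smart" behavior isn't a wrong one.
            return "What would you like to listen to?"
        genre, _ = self.plays.most_common(1)[0]
        return f"Playing {genre}, your usual favorite."
```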