Enhancing User Engagement with AI Question Suggestions
Learn how AI can provide better question suggestions for users.
Xiaobin Shen, Daniel Lee, Sumit Ranjan, Sai Sree Harsha, Pawan Sevak, Yunyao Li
― 8 min read
Enterprise conversational AI systems are like helpful office buddies that assist workers with tasks like marketing and customer management. However, when new users join, they sometimes feel lost about what questions to ask. This is especially challenging in advanced systems whose capabilities are constantly changing. To tackle this problem, a framework is proposed to improve question suggestions in such systems. This framework is designed to give users smart, context-based questions that help them find what they need and make the most of available features.
The Rise of AI Assistants
As technology marches forward, large language models have stepped into the limelight, making AI systems far more capable. Today, many companies are adding AI assistants to their tools to automate conversations and enhance the user experience. These assistants act like skilled receptionists for digital tasks, guiding users through structured workflows and improving how the platforms work overall.
Typically, enterprise AI assistants cover two main areas: sharing product information and providing operational insights. Our discussion focuses on explaining things to users, helping them gain clarity, and guiding them through the platform smoothly. Yet even with today's capable models, simply answering questions is often not enough: users can get stuck on what to ask next after receiving an answer. This is especially true for newcomers still getting acquainted with the system's features.
The User's Dilemma
Consider a scenario where a new employee in the marketing department asks, "How is profile richness calculated?" They receive a lengthy explanation about the metrics in Adobe Experience Platform (AEP). Although this response sheds light on profile richness, the user might still be pondering how to use this info in real life. What should they do next? How does this fit into their broader tasks? This confusion points to the challenge of formulating follow-up questions that unlock the system's full potential.
Question suggestions can step in to bridge this gap. They not only respond to users but also nudge them toward relevant inquiries they might not have thought about. For instance, after getting a response, suggestions like “What are the implications of exceeding the Profile Richness entitlement?” or “How can I monitor and manage Profile Richness effectively?” help the user see broader aspects of profile richness and spark curiosity around related features.
The Challenge of Question Suggestions
However, generating good question suggestions in these systems isn't without its troubles. Many enterprise systems lack substantial historical data, making it hard for traditional models to predict queries. Sometimes, users ask quirky or messy questions that don’t fit common patterns, complicating the process. Moreover, as AI assistants keep growing and changing, a gap appears between what the system can do and what users know about it. This gap can lead to less user engagement and fewer people making full use of the platform.
To tackle these challenges, a framework is proposed that enhances question suggestions in enterprise conversational AI systems. This approach employs real-world data from the AI Assistant in the Adobe Experience Platform (AEP). The focus is on generating proactive and categorized question suggestions to help users discover more about the platform.
The Contribution
In summary, the contributions of this study are:
- A new approach for generating follow-up questions in enterprise AI, linking user intent analysis with chat session interactions.
- Using advanced language models to create context-friendly questions based on current inquiries and past interactions.
- Conducting human evaluations to judge the effectiveness of the question suggestions based on various criteria like relevance and usefulness.
This research marks one of the first studies of the impact of question suggestions in a practical enterprise AI system.
Related Concepts
Question Suggestion Techniques
In the tech world, traditional methods of question suggestions have significantly improved user experiences in search engines. By predicting and recommending questions based on users' past activities, these techniques have made searching more user-friendly. Various approaches, from basic data analysis to complex neural networks, have been used to enhance exploration in large-scale web searches.
Some efforts even aim to diversify question suggestions, ensuring users receive different yet relevant options. However, these methods typically require a lot of data specific to tasks to train effective models. With advances in large language models and retrieval-augmented generation, the need for task-specific data has diminished. Instead, pre-trained models leverage existing knowledge to suggest relevant questions.
Discoverability in AI Assistants
Discoverability refers to how easy it is for users to find out what actions they can take within a system. While this idea has been studied in traditional software, it often falls through the cracks in complex AI systems. As platforms grow richer in features, users may struggle to recognize new capabilities, leading to decreased usage.
Past studies on discoverability mostly looked at desktop software, mobile apps, and voice interfaces. Many focused on suggesting commands relevant to users, improving their overall experience. Recent work also explores the benefits of proactive interactions in conversational AI. Studies have shown that timely suggestions can lead to better interactions and increased user satisfaction.
Despite this attention in other domains, discoverability in enterprise conversational AI remains under-explored. Users navigating complex business contexts, like customer management, often face difficulties. These users come from diverse backgrounds, making it essential for systems to support both immediate engagement and ongoing learning about platform features.
The Framework for Question Suggestions
The framework for next-question suggestion in enterprise conversational AI systems consists of two primary components:
- User Intent Analysis: Conducted across the entire user base to identify trends and needs among users.
- Chat Session Question Generation: Focused on crafting questions based on an individual user's history within a specific chat session.
This two-pronged approach allows the system to both understand shifts in user behavior and generate relevant questions tailored for each user’s interaction history.
User Intent Analysis
This stage identifies common patterns in user inquiries across the system. By understanding why users ask certain questions, the system can categorize user intents.
For example, if a user seeks to understand a process, the system may notice patterns leading to follow-up inquiries. This analysis allows for the generation of question categories that can help direct users toward related yet lesser-known features in the platform.
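To make this concrete, here is a minimal sketch of what population-level intent analysis could look like in code. The helper `classify_intent` and the example labels are assumptions for illustration; the paper does not publish its exact categorization method.

```python
from collections import Counter

def summarize_intents(all_queries, classify_intent):
    """Population-level intent analysis: tally coarse intent labels
    across the full query log. `classify_intent` is an assumed helper
    that maps one query to a label, e.g. via an LLM prompt or a
    lightweight classifier."""
    counts = Counter(classify_intent(q) for q in all_queries)
    return counts.most_common()

# Hypothetical output:
# [("how-to", 412), ("concept-explanation", 305), ("troubleshooting", 188)]
```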
Chat-Session Level Question Generation
This part uses the current interaction history to create question suggestions for the user. Inputs for this phase include the most recent user query, the AI’s response to that query, and any prior questions asked in that same session. By leveraging these real-time interactions, the framework aims to create suggestions that are not just relevant but also proactive in guiding users toward feature exploration.
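As a rough illustration of this step, the sketch below assembles the three inputs the section names (latest query, latest answer, prior session questions) into a single generation prompt. The function name, prompt wording, and intent-category conditioning are assumptions, not the paper's actual prompt.

```python
def build_suggestion_prompt(latest_query, latest_response,
                            prior_questions, intent_categories, k=3):
    """Combine session context with population-level intent categories
    into one prompt for a large language model."""
    history = "\n".join(f"- {q}" for q in prior_questions) or "- (none)"
    return (
        "You are an assistant for an enterprise platform.\n"
        f"Known intent categories: {', '.join(intent_categories)}\n"
        f"Earlier questions this session:\n{history}\n"
        f"Latest question: {latest_query}\n"
        f"Latest answer: {latest_response}\n"
        f"Suggest {k} follow-up questions that stay relevant to the "
        "latest answer and proactively surface lesser-known features."
    )
```

The model's completions would then be shown beneath the assistant's reply, like the Profile Richness suggestions in the earlier example.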
Evaluating the Framework
Evaluating the effectiveness of enhanced discoverability is a complex task, especially because there's a lack of standard datasets or metrics to measure success in this area. To assess the framework, data was collected from various interactions between users and the AI Assistant. Human evaluations were conducted to ensure a thorough assessment of the framework.
User Intent Analysis Findings
The findings reveal that over 35% of user queries had no connection to prior interactions in the same session, which highlights how hard it is to find patterns in user inquiries. It was also observed that users often ask expansion questions or follow-up queries; distinguishing these types helps capture the variety of user intents.
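One way to arrive at a number like the 35% above is to check each query's similarity to everything earlier in its session. The sketch below uses embedding cosine similarity with an arbitrary threshold, purely as an illustration of the measurement, not the paper's actual methodology.

```python
import numpy as np

def fraction_unrelated(session_queries, embed, threshold=0.4):
    """Share of queries (after the first) whose embedding is dissimilar
    to every earlier query in the same session. `embed` is an assumed
    text-embedding callable; the 0.4 threshold is arbitrary."""
    vecs = [np.asarray(embed(q), dtype=float) for q in session_queries]
    unrelated = 0
    for i in range(1, len(vecs)):
        sims = [
            float(np.dot(vecs[i], v)
                  / (np.linalg.norm(vecs[i]) * np.linalg.norm(v)))
            for v in vecs[:i]
        ]
        if max(sims) < threshold:
            unrelated += 1
    return unrelated / max(len(vecs) - 1, 1)
```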
Human Evaluation Process
To compare the new framework against the baseline, both sets of question suggestions were evaluated. Questions were rated on several criteria: relatedness, validity, usefulness, diversity, and potential for discoverability. Annotators assessed the suggestions without knowing which system produced them, keeping the evaluation impartial.
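For a sense of how such blind ratings might be aggregated, here is a small sketch; the scoring layout is an assumption, since the paper's annotation tooling isn't described here.

```python
from statistics import mean

CRITERIA = ["relatedness", "validity", "usefulness",
            "diversity", "discoverability"]

def aggregate_ratings(ratings):
    """Average per-criterion scores for each (blinded) system.
    `ratings` maps an anonymized system id to one {criterion: score}
    dict per annotator, so raters never see which system is which."""
    return {
        system: {c: mean(r[c] for r in annos) for c in CRITERIA}
        for system, annos in ratings.items()
    }
```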
General Insights
The findings underscore the challenges posed by sparse data in enterprise AI systems. Traditional methods of training models don't always work well here. Instead, large language models can provide an effective way to generate question suggestions.
Moreover, the results indicate that a one-size-fits-all approach is not the way to go. Different users have diverse intents when interacting with the system, and these varying perspectives should be accounted for in evaluating question suggestions.
Conclusion
This framework emphasizes the need for adaptable question suggestion strategies that can keep up with changes in user behavior and system capabilities. It aims to help users navigate complex platforms while also encouraging them to explore less-frequented features.
Future efforts can focus on how improved question suggestions affect user behavior in real-world environments. Metrics such as how often users click on suggested questions and how frequently they explore new features will be crucial in gauging the effectiveness of these refined suggestions.
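If such engagement signals were logged, computing these metrics could be as simple as the sketch below; the event schema here is entirely hypothetical.

```python
def engagement_metrics(events):
    """Each event is assumed to be a dict with 'suggestions_shown' (int),
    'suggestion_clicked' (bool), and 'new_feature_visited' (bool)."""
    shown = sum(e["suggestions_shown"] for e in events)
    clicks = sum(e["suggestion_clicked"] for e in events)
    explored = sum(e["new_feature_visited"] for e in events)
    return {
        "suggestion_ctr": clicks / shown if shown else 0.0,
        "feature_exploration_rate": explored / len(events) if events else 0.0,
    }
```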
In a nutshell, efficient question suggestions can be the friendly tour guide users need to fully enjoy the vast landscape of their enterprise AI systems. Let’s hope these systems soon become as popular as coffee breaks in the office!
Original Source
Title: Enhancing Discoverability in Enterprise Conversational Systems with Proactive Question Suggestions
Abstract: Enterprise conversational AI systems are becoming increasingly popular to assist users in completing daily tasks such as those in marketing and customer management. However, new users often struggle to ask effective questions, especially in emerging systems with unfamiliar or evolving capabilities. This paper proposes a framework to enhance question suggestions in conversational enterprise AI systems by generating proactive, context-aware questions that try to address immediate user needs while improving feature discoverability. Our approach combines periodic user intent analysis at the population level with chat session-based question generation. We evaluate the framework using real-world data from the AI Assistant for Adobe Experience Platform (AEP), demonstrating the improved usefulness and system discoverability of the AI Assistant.
Authors: Xiaobin Shen, Daniel Lee, Sumit Ranjan, Sai Sree Harsha, Pawan Sevak, Yunyao Li
Last Update: 2024-12-14
Language: English
Source URL: https://arxiv.org/abs/2412.10933
Source PDF: https://arxiv.org/pdf/2412.10933
Licence: https://creativecommons.org/licenses/by/4.0/
Changes: This summary was created with assistance from AI and may have inaccuracies. For accurate information, please refer to the original source documents linked here.
Thank you to arxiv for use of its open access interoperability.