Understand how your AI Agent works

Lovi’s AI Agent uses Large Language Models (LLMs), trained on extensive text data, to interpret customer inquiries and discern the intent behind them.

Utilizing generative AI, the agent formulates responses by assembling information from your knowledge base into natural, conversational replies.

To ensure response quality, the AI Agent first classifies each incoming query into one of three categories:

Data: the knowledge base contains information relevant to the query.

Smalltalk: the query is a generic question or comment that can be answered directly, without the knowledge base.

No match: the query is a specific question that the knowledge base does not cover but arguably should.
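The three-way classification above can be sketched in code. The category names match the list, but the inputs and the similarity threshold are assumptions for illustration, not Lovi’s actual logic:

```python
# Illustrative sketch of the three-way query classification.
# The 0.75 threshold and the inputs are assumed, not Lovi's real values.

def classify_query(best_match_score: float, is_generic: bool) -> str:
    """Map a query to one of the three categories described above."""
    if is_generic:
        return "smalltalk"            # generic question/comment, answer directly
    if best_match_score >= 0.75:      # assumed similarity threshold
        return "data"                 # knowledge base has relevant content
    return "no_match"                 # specific question, no matching content

print(classify_query(0.9, False))    # data
print(classify_query(0.2, False))    # no_match
print(classify_query(0.1, True))     # smalltalk
```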

Lovi’s Reasoning Engine further enhances the AI Agent’s capabilities by considering:

Conversation context: Previous interactions to provide coherent responses.

Knowledge base: Availability of pertinent information.

Business systems: Configured actions that allow the agent to retrieve the necessary information.

Based on this analysis, the AI Agent determines the appropriate course of action, such as asking follow-up questions, providing information from the knowledge base, or executing configured actions to assist the customer effectively.
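A minimal sketch of that decision step, assuming a simple priority order among the three courses of action; the real Reasoning Engine is internal to Lovi and considerably more nuanced:

```python
# Hypothetical priority order among the actions described above.

def decide(kb_has_answer: bool, action_available: bool) -> str:
    """Pick a course of action based on what the Reasoning Engine found."""
    if kb_has_answer:
        return "answer_from_knowledge_base"
    if action_available:
        return "execute_configured_action"
    return "ask_follow_up_question"   # gather more context from the customer

print(decide(True, False))   # answer_from_knowledge_base
print(decide(False, True))   # execute_configured_action
print(decide(False, False))  # ask_follow_up_question
```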

How your AI Agent generates content from your knowledge base

When you connect your knowledge base to Lovi’s AI Agent, it ingests your content so it can efficiently surface relevant information in response to customer inquiries. Here’s an overview of the process:

Ingesting Your Knowledge Base

  1. Content Import: Lovi’s AI Agent imports all the content from your knowledge bases and checks for any content updates every 24 hours.

  2. Content Chunking: The content is divided into smaller sections, each focusing on a single key concept. This segmentation allows the AI Agent to search more efficiently. Each chunk retains contextual information, including preceding headings.
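Heading-aware chunking of this kind can be illustrated with a short sketch. The function name and chunk format are hypothetical; the point is that each chunk carries its preceding heading as context:

```python
def chunk_by_heading(doc: str) -> list[dict]:
    """Split markdown-like text into chunks, one per section,
    keeping the preceding heading attached as context."""
    chunks, heading, lines = [], "", []
    for line in doc.splitlines():
        if line.startswith("#"):                  # new section begins
            if lines:
                chunks.append({"heading": heading, "text": "\n".join(lines)})
                lines = []
            heading = line.lstrip("# ").strip()
        elif line.strip():
            lines.append(line)
    if lines:                                     # flush the final section
        chunks.append({"heading": heading, "text": "\n".join(lines)})
    return chunks

doc = "# Refunds\nRefunds take 5 days.\n# Shipping\nWe ship worldwide."
for c in chunk_by_heading(doc):
    print(c["heading"], "->", c["text"])
```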

  3. Embedding Creation: Each chunk is processed by a Large Language Model (LLM) to generate numerical representations, known as embeddings, which capture the meaning of the content. These embeddings are stored in a database for quick retrieval.

Generating Responses

  1. Query Processing: When a customer submits a query, Lovi’s AI Agent converts it into an embedding using the LLM. It then performs a moderation check to filter out inappropriate or toxic content.

  2. Semantic Retrieval: The AI Agent compares the query’s embedding with those in its database to identify the most semantically similar content chunks. It selects the three most relevant chunks to formulate a response.
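Because embeddings are numeric vectors, “most similar” reduces to a vector comparison. Here is a minimal sketch of top-k retrieval using cosine similarity; the actual embedding model and similarity metric Lovi uses are not specified here:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_chunks(query_emb: list[float],
               chunk_embs: list[list[float]], k: int = 3) -> list[int]:
    """Return the indices of the k chunks most similar to the query."""
    ranked = sorted(range(len(chunk_embs)),
                    key=lambda i: cosine(query_emb, chunk_embs[i]),
                    reverse=True)
    return ranked[:k]

# Toy 2-D embeddings: chunk 0 matches exactly, chunk 2 is close.
print(top_chunks([1, 0], [[1, 0], [0, 1], [0.9, 0.1], [-1, 0]]))  # [0, 2, 1]
```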

  3. Contextual Understanding: For follow-up questions, the AI Agent may rephrase the query to incorporate previous conversation context, enhancing the relevance of the retrieved information.
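A common way to implement such rephrasing is to ask the LLM to rewrite the customer’s last message as a standalone question, given the conversation so far. The prompt below is purely illustrative; Lovi’s actual prompt is not public:

```python
# Hypothetical rephrasing prompt; only the overall shape is representative.
REPHRASE_PROMPT = """Given the conversation so far, rewrite the user's last \
message as a standalone question.

Conversation:
{history}

Last message: {message}
Standalone question:"""

def build_rephrase_prompt(history: list[str], message: str) -> str:
    """Fold prior turns and the latest message into the rephrasing prompt."""
    return REPHRASE_PROMPT.format(history="\n".join(history), message=message)

print(build_rephrase_prompt(
    ["User: How long do refunds take?",
     "Agent: Refunds take 5 business days."],
    "And for international orders?"))
```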

  4. Response Generation: The selected content chunks are sent to GPT to construct a natural-sounding response. This response undergoes three filters to ensure it is safe, relevant, and accurate before being delivered to the customer.
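The filter stage can be modeled as a chain of checks that a reply must pass before delivery. The three checks below are crude stand-ins for the safe, relevant, and accurate filters; in practice each would typically be a classifier or LLM call:

```python
from typing import Callable

def apply_filters(response: str, filters: list[Callable[[str], bool]]) -> bool:
    """Deliver the reply only if every filter in the chain passes."""
    return all(f(response) for f in filters)

# Placeholder checks standing in for the safe/relevant/accurate filters.
filters = [
    lambda r: len(r) > 0,              # "safe": stand-in moderation check
    lambda r: "refund" in r.lower(),   # "relevant": mentions retrieved topic
    lambda r: "guarantee" not in r,    # "accurate": no unsupported promises
]

print(apply_filters("Refunds take 5 business days.", filters))  # True
print(apply_filters("We guarantee instant refunds.", filters))  # False
```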

By following this process, Lovi’s AI Agent effectively utilizes your knowledge base to provide accurate and contextually appropriate responses to customer inquiries.