Week of Jan 22nd 2025

Updates include: GPT-4o mini available for testing in Generative AI solutions, an LLM Management page in the Management Console, the Data Deletion API, and Report Center updates.

Exact delivery dates may vary, and brands may therefore not have immediate access to all features on the date of publication. Please contact your LivePerson account team for the exact dates on which you will have access to the features.

The timing and scope of these features or functionalities remain at the sole discretion of LivePerson and are subject to change.

Conversational Cloud Infrastructure

Features

Generative AI solutions: GPT-4o mini is available for testing

If your solution uses Generative AI, and you’re using the LLMs made available via LivePerson (not your own LLM), this announcement is for you.

LivePerson is pleased to announce that GPT-4o mini is available for testing.

For several months now, LivePerson has been using GPT-4o mini in Early Access products not yet generally available. We’re very pleased with GPT-4o mini’s performance. It’s just plain better than GPT-3.5: It responds to our current prompts with fewer hallucinations and is often faster.

You can start using GPT-4o mini as soon as:

  • January 22, 2025 if your solution uses KnowledgeAI™ agents (North American and APAC regions only).
  • January 31, 2025 if your solution uses KnowledgeAI agents (EMEA region).
  • February 3, 2025 if your solution uses AutoSummarization or CoPilot Rewrite.
  • February 5, 2025 if your solution uses Conversation Assist (all regions).

Once GPT-4o mini is available for a solution, you can choose to upgrade your solution manually in your test environment.

Currently, when you use GPT-4o mini, data stored at rest remains in your specific region, but data may be processed for inferencing in any region. If you don’t have any related compliance requirements (which is largely based on your line of business), go ahead and start testing GPT-4o mini. If this is a concern, stay tuned for more updates!

FAQs

Will I need to change my custom prompts? 

Our internal testing has revealed that GPT-4o mini performs well with prompts that were originally optimized for GPT-3.5 Turbo. So, generally speaking, there’s no need for prompt customization to suit GPT-4o mini. 

That said, we have found that GPT-4o mini is just a bit more direct in its responses. While such a style is usually very beneficial in conversations, you might prefer to make some adjustments to match your desired persona.

How can I tell if my model is hallucinating less?

One of the benefits of adopting GPT-4o mini is that there are fewer hallucinations.

When you move to GPT-4o mini, solutions using Retrieval-Augmented Generation (RAG), for example KnowledgeAI™ agents, might uncover cases where information is missing from your knowledge base. When such content doesn’t exist and the model hallucinates less, you’re likely to see the model say, “I don’t know” more often. This is good, as it keeps responses grounded and surfaces knowledge/content gaps that you should address.

How do I test GPT-4o mini?

You can make the switch to using GPT-4o mini on the Advanced tab of the prompt in the Prompt Library. This requires changing the Subscription (first) and the LLM (second).


If you’re interested in testing and need help, contact your LivePerson representative.

Management Console

Features

Generative AI - More ways to manage your use of LLMs via LivePerson

If you’re using an LLM made available via LivePerson to power your Generative AI solution in Conversational Cloud, this note is for you.

In this release, we introduce an LLM Management page to the Management Console. Use this new page to manage your use of the LLMs made available via LivePerson:

  • Expose specific models for use in prompts: For various reasons (performance, cost, etc.), you might want to expose only specific LLMs to your prompt engineers who are creating the prompts for your Generative AI solution. And you can. Only the models that you select are exposed in the Prompt Library that’s used to create and manage prompts.

  • View a model’s cost and RPM limit: Want more visibility into limits, so you can use this data to inform your decisions about model selections? This info is now available.

  • Change the default models: You can customize which LLMs are used by default in prompts in the Prompt Library. Prompt engineers remain free to select a different model (from the list of models that you’ve specified as allowed).

Getting started

Before you can configure and manage the LLMs made available via LivePerson, you must activate our Generative AI features. Think of this as a broadly applied LivePerson Generative AI switch that must be turned on.

Keep in mind that, depending on the Generative AI feature you’re using (KnowledgeAI™ agents, Routing AI agents, automated conversation summaries, etc.), you might or might not need to take additional steps to activate the specific Generative AI feature too. To learn more, refer to the documentation on the given feature.

Generative AI - Onboard and manage your use of your in-house LLM

If you’ve invested in an in-house LLM, you can use it to power the Generative AI features in your Conversational Cloud solution. This lets you align the solution with your brand’s overall LLM strategy.


LivePerson now offers a self-service UI for onboarding and managing your use of your in-house LLM in your Conversational AI solution. Use the new LLM Management page in the Management Console to:

  • Add an LLM subscription: Add as many subscriptions as you require. No credentials beyond what the LLM provider requires are needed.

  • Change the default models: You can customize which LLMs are used by default in prompts in the Prompt Library. Prompt engineers remain free to select a different model (from the list of models that you’ve specified as allowed).

  • Disable or enable a subscription: As you’re testing models, you might want to hide certain models from users who are creating prompts in LivePerson’s Prompt Library. Disabling a subscription hides the subscription (and, by extension, its associated models) from users, so they can’t select it for use in a prompt. You can disable and re-enable a subscription at any time.
  • Delete a subscription: Delete a subscription that you no longer have use for. Once the subscription is deleted, prompts that use the associated models are automatically switched to use a default model. This ensures the prompts continue to work.

Getting started

Before you can onboard and manage your LLMs, you must activate our Generative AI features. Think of this as a broadly applied LivePerson Generative AI switch that must be turned on.

Keep in mind that, depending on the Generative AI feature you’re using (KnowledgeAI™ agents, Routing AI agents, automated conversation summaries, etc.), you might or might not need to take additional steps to activate the specific Generative AI feature too. To learn more, refer to the documentation on the given feature.

Data Deletion Self-Service

Features

Data Deletion API

The Data Deletion API is a set of REST endpoints that allow brands to permanently delete any personal data at the consumer's request. This data can include complete conversation transcripts, hosted files or links shared by the consumer, and the consumer's Personally Identifiable Information (PII).
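To give a feel for how such a REST deletion request might be assembled, here is a minimal sketch. The host, URL path, and endpoint structure below are hypothetical placeholders, not the documented API; consult the official Data Deletion API documentation for the real endpoints and authentication scheme.

```python
import urllib.request

def build_deletion_request(account_id: str, conversation_id: str,
                           token: str) -> urllib.request.Request:
    """Build (but do not send) a DELETE request for one conversation's data.

    The URL below is a made-up placeholder, not the documented endpoint.
    """
    url = (
        f"https://example.liveperson.net/data-deletion"
        f"/account/{account_id}/conversations/{conversation_id}"
    )
    return urllib.request.Request(
        url,
        method="DELETE",
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_deletion_request("12345678", "abc-123", "my-api-token")
print(req.get_method())  # DELETE
```

Actually sending the request (for example, with `urllib.request.urlopen`) would then permanently delete the conversation’s transcript, hosted files, and PII, per the API’s contract.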

Data Deletion Self-Service

The Data Deletion Self-Service feature lets customers manage personal data deletion requests directly from Conversational Cloud. The interface can be accessed in the Management Console by searching for “Data Deletion Self-Service.”



Enhancements

Secure Forms Studio, upgrading from V2 to V3

LivePerson is excited to share that we are transitioning to Secure Forms Studio V3, now available as part of the LE → Engage → Secure Forms Studio suite. In February, the previous version of Secure Forms Studio, accessible from LE → Management Console → Secure Forms Studio, will be retired as we move forward to this upgraded version.

For more information, refer to the Secure Forms Studio V3 documentation.


Report Center

Features


New Metrics

Summarization Metrics - The Report Center will introduce four new metrics related to summarization, designed to help brands track and analyze usage of the summary feature:

  • Conversation Summary Rate - Percentage of total conversations that have a summary generated
  • Total Summary - Total number of summaries generated, counting both transfer and close summaries
  • Total Transfer Summary - Total number of transfer summaries generated
  • Total Conversation Close Summary - Total number of close summaries generated
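To illustrate how the four metrics relate to one another, here is a small sketch with made-up sample counts (not real Report Center output):

```python
# Made-up sample counts, for illustration only.
total_conversations = 200
total_transfer_summaries = 30   # Total Transfer Summary
total_close_summaries = 90      # Total Conversation Close Summary

# Total Summary counts both transfer and close summaries.
total_summaries = total_transfer_summaries + total_close_summaries

# Conversation Summary Rate: percentage of conversations that had a
# summary generated (here assuming, for simplicity, that each
# summarized conversation got exactly one close summary).
conversations_with_summary = 90
conversation_summary_rate = 100 * conversations_with_summary / total_conversations

print(total_summaries)            # 120
print(conversation_summary_rate)  # 45.0
```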


Enhancements

CSAT Metric Enhancement:

We will update the CSAT metric in the Report Center to align with the calculation used in Analytics Builder.

Current Calculation:

The CSAT metric is currently measured in the Report Center using the following formula:

(csatOriginal - 1) * (100 - 1) / (5 - 1) + 1

New Calculation:

To ensure alignment across analytics tools, we will implement a new formula:

  • New metric name: CSAT Positive Responses Rate
  • New formula:  (Number of 4 or 5 responses) / (Total number of CSAT responses)
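As a quick worked comparison using made-up sample responses, the old formula rescales the average 1–5 score onto a 1–100 range, while the new metric is simply the share of 4 and 5 responses:

```python
responses = [5, 4, 2, 5, 3]  # sample CSAT responses on a 1-5 scale

# Old calculation: rescale the average score from the 1-5 range to 1-100.
csat_original = sum(responses) / len(responses)            # 3.8
old_score = (csat_original - 1) * (100 - 1) / (5 - 1) + 1  # 70.3

# New calculation: CSAT Positive Responses Rate.
positive = sum(1 for r in responses if r >= 4)
new_rate = positive / len(responses)                       # 3 of 5 -> 0.6

print(round(old_score, 1), new_rate)
```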


Key Changes:

  • The existing CSAT metric will be replaced with the CSAT Positive Responses Rate metric.
  • For existing reports, the CSAT score is populated according to the new calculation.
  • When a user edits an old chart, the measure name is updated to CSAT Positive Responses Rate in the “I want to measure *” field.
  • This change ensures consistency in how CSAT is measured across tools.

Fixes

Bug fixes include:

1. Improved Conversation List Performance for Export: We have enhanced the performance of the conversation list to prevent timeout errors when exporting large volumes of conversations. The export limit remains set at 10,000 conversations.

2. Conversation List Sorting: In the Conversation List area, the sort function was not working correctly for certain metrics. This issue has been resolved, and the sort function now works as expected for all metrics.

3. Aggregated Metrics: Extra records were returned when the first dimension was conversationStartTime and the conversation involved multiple agents, skills, and groups. When filtering by Skill, all skills involved in the conversation were returned, regardless of the filter selection. This issue has been addressed to ensure correct filtering based on the selected criteria.