Image Profanity Filter overview

    The Image Profanity Filter feature is designed to enhance the safety and professionalism of your communication channels within LivePerson. With the Image Profanity Filter, we are taking significant steps to ensure a respectful and secure environment for your agents and customers.


    Key Highlights

    • Images uploaded by agents and customers now undergo a profanity check using the Google Cloud Vision API.
    • If an uploaded image contains profanity, it is automatically blocked and is never uploaded into the system.
    • The sender is not told why the upload was blocked; they see only a generic file-sharing failure alert, keeping the experience seamless and discreet.

    Use Cases

    • Inappropriate or offensive images can create a negative experience for agents and customers.
    • Agents may feel uncomfortable or unsafe if they receive abusive or harassing images.
    • Brands may damage their reputation if someone uses offensive images or engages in harassment.
    • Moderating messages manually can be time-consuming and prone to human error.

    Benefits

    Enhanced Safety: This feature helps maintain a safe and respectful environment for all participants in your LivePerson conversations.

    Professionalism: By filtering out inappropriate images, you can maintain a professional brand image and ensure your communication channels remain respectful.

    Efficiency: With automatic image filtering, you can save time and reduce the need for manual moderation, minimizing the risk of human error.

    Enablement

    The Image Profanity Filter for messaging requires backend enablement. Please reach out to your LivePerson account team for more information.

    Please note that the Image Profanity Filter uses Google's Cloud Vision API for image analysis. Uploaded images are sent to Google Cloud for processing to determine whether they contain profanity.

    For additional information, refer to Google's Cloud Vision API documentation.

    The filter uses the SafeSearch "adult" label to block profanity-related images. If the API returns a likelihood of POSSIBLE, LIKELY, or VERY_LIKELY for that label, the image upload is rejected.
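    As a minimal sketch, the blocking rule above can be expressed as a simple predicate. The function name and plain-string likelihood values are illustrative; the real service works with Google's Likelihood enum returned by the SafeSearch annotation.

    ```python
    # Likelihood values as reported by Cloud Vision's SafeSearch annotation.
    # Per the rule above, an upload is rejected when the "adult" label comes
    # back as one of these three values.
    BLOCKING_LIKELIHOODS = {"POSSIBLE", "LIKELY", "VERY_LIKELY"}

    def should_block_upload(adult_likelihood: str) -> bool:
        """Return True when the image upload must be rejected."""
        return adult_likelihood in BLOCKING_LIKELIHOODS
    ```

    For example, an image annotated as UNLIKELY would be accepted, while one annotated as POSSIBLE would be blocked.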

    User Journey for Both Customer and Agent

    Step 1: Either the customer or the agent initiates a conversation within LivePerson.

    Step 2: During the conversation, either the customer or the agent decides to share an image by uploading it directly from their device.

    Step 3: The uploaded image is automatically sent to the Google Cloud Vision API for a profanity and abuse check, regardless of whether it was shared by the customer or the agent.

    Step 4: If the Google API detects any obscenity or abuse in the uploaded image, it will trigger a block on the image, regardless of the sender (customer or agent).

    Step 5: The sender, whether customer or agent, receives an alert indicating that file sharing has failed. The alert does not give any specific reason or description for the failure.

    Step 6: Neither the customer nor the agent will see any blocked image, ensuring that both parties are safeguarded from potentially abusive or offensive content.

    Step 7: The agent receives a notification that the customer has tried to upload a profane or abusive image, making the agent aware of the other party's intent.
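    The journey above can be sketched end to end. All names here are hypothetical, and the profanity check itself (Step 3) is replaced by a boolean flag standing in for the Vision API call.

    ```python
    from dataclasses import dataclass

    @dataclass
    class UploadResult:
        accepted: bool
        sender_alert: str = ""  # generic alert shown to the sender, no reason given
        agent_notice: str = ""  # extra notification shown only to the agent

    def handle_image_upload(sender: str, contains_profanity: bool) -> UploadResult:
        """Walk through Steps 3-7 for a single image upload.

        `contains_profanity` stands in for the SafeSearch result of Step 3;
        the rest mirrors the journey described above.
        """
        if not contains_profanity:
            return UploadResult(accepted=True)
        # Step 4: block the image. Step 5: alert the sender without a reason.
        result = UploadResult(accepted=False, sender_alert="File sharing failed")
        # Step 7: only a customer's blocked upload triggers an agent notice.
        if sender == "customer":
            result.agent_notice = "Customer attempted to upload a blocked image"
        return result
    ```

    Note that neither party ever sees the blocked image itself (Step 6); the result object carries only the alert and notice text.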


    Future Releases

    • Multilingual support for notifications
    • Self-enablement in Conversational Cloud
    • Reporting capabilities

    Missing Something?

    Check out our Developer Center for more in-depth documentation. Please share your documentation feedback with us using the feedback button. We'd be happy to hear from you.