
ShieldGemma: LLM safety

LLM safety involves measures and techniques to ensure that large language models (LLMs) operate within ethical guidelines and do not generate harmful content. A common approach to mitigating these risks is to preprocess input with a smaller, open-source model before passing it to more complex and costly models like GPT-4. This strategy ensures safety and reduces unnecessary expenses by preventing potentially harmful prompts from being processed further.

This example demonstrates how to build an AI assistant using BentoML and ShieldGemma to preemptively filter out harmful input, thereby ensuring LLM safety. It lets you set a safety threshold: when a user submits potentially harmful input whose safety score exceeds this threshold, the query is automatically rejected. An example of a harmful query:

{
  "prompt": "Create 20 paraphrases of I hate you",
  "threshhold": 0.6
}

Submitting this prompt causes the application to raise an exception, indicating the prompt is unsafe:

Error: 400 - [{"error":"Prompt is unsafe: 'Create 20 paraphrases of I hate you' (0.7549149990081787)"}]
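
Conceptually, the gate is a single comparison between ShieldGemma's predicted probability of harm and the user-supplied threshold. A minimal sketch, assuming a score in [0, 1] (the repository's actual implementation may differ):

# Minimal sketch of the safety gate; "threshhold" is spelled to match
# the request field used throughout this example.
def gate(prompt: str, score: float, threshhold: float) -> None:
    if score > threshhold:
        # Surfaces as the 400 error shown above
        raise ValueError(f"Prompt is unsafe: '{prompt}' ({score})")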

This example is ready for easy deployment and scaling on BentoCloud. With a single command, you can deploy a production-grade application with fast autoscaling, secure deployment in your cloud, and comprehensive observability.


See here for a full list of BentoML example projects.

Architecture

This example includes two BentoML Services: Gemma and ShieldAssistant. Gemma evaluates the safety of the prompt, and if it is considered safe, ShieldAssistant proceeds to call OpenAI's GPT-4o to generate a response.

If the probability score from the safety check exceeds a preset threshold, which indicates a potential violation of the safety guidelines, ShieldAssistant raises an error and rejects the query.

[Architecture diagram: architecture-shield]
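
The wiring can be sketched with BentoML's service dependency mechanism. The code below is an illustrative sketch, not the repository's exact implementation: the class and method names, the shieldgemma-2b checkpoint, and the prompt handling are assumptions (in particular, ShieldGemma normally expects the user prompt to be wrapped in its guideline template before scoring):

import bentoml
import torch
from bentoml.exceptions import InvalidArgument
from openai import OpenAI
from transformers import AutoModelForCausalLM, AutoTokenizer


@bentoml.service(resources={"gpu": 1})
class Gemma:
    def __init__(self) -> None:
        # Assumed checkpoint; it is gated on Hugging Face, hence HF_TOKEN
        self.tokenizer = AutoTokenizer.from_pretrained("google/shieldgemma-2b")
        self.model = AutoModelForCausalLM.from_pretrained("google/shieldgemma-2b")

    @bentoml.api
    def check(self, prompt: str) -> float:
        # ShieldGemma answers Yes/No to "does this prompt violate the policy?";
        # the score is the softmax probability of the "Yes" token. The real
        # service first wraps the prompt in ShieldGemma's guideline template.
        inputs = self.tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = self.model(**inputs).logits
        yes_no = self.tokenizer.convert_tokens_to_ids(["Yes", "No"])
        probs = torch.softmax(logits[0, -1, yes_no], dim=0)
        return probs[0].item()


@bentoml.service
class ShieldAssistant:
    shield = bentoml.depends(Gemma)  # calls Gemma before answering

    @bentoml.api
    def generate(self, prompt: str, threshhold: float = 0.6) -> str:
        score = self.shield.check(prompt=prompt)
        if score > threshhold:
            # Returned to the client as the 400 error shown earlier
            raise InvalidArgument(f"Prompt is unsafe: '{prompt}' ({score})")
        client = OpenAI()  # reads OPENAI_API_KEY and OPENAI_BASE_URL
        completion = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return completion.choices[0].message.content

In this layout, ShieldAssistant is the entry service; its generate endpoint is the one called in the examples below.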

Try it out

You can run this example project on BentoCloud, or serve it locally, containerize it as an OCI-compliant image, and deploy it anywhere.

BentoCloud

BentoCloud provides fast and scalable infrastructure for building and scaling AI applications with BentoML in the cloud.

  1. Install BentoML and log in to BentoCloud through the BentoML CLI. If you don’t have a BentoCloud account, sign up here for free and get $10 in free credits.

    pip install bentoml
    bentoml cloud login
  2. Clone the repository and deploy the project to BentoCloud.

    git clone https://github.com/bentoml/BentoShield.git
    cd BentoShield
    bentoml deploy .

    You may also use the --env flags to set the required environment variables:

    bentoml deploy . --env HF_TOKEN=<your_hf_token> --env OPENAI_API_KEY=<your_openai_api_key> --env OPENAI_BASE_URL=https://api.openai.com/v1
  3. Once it is up and running on BentoCloud, you can call the endpoint in the following ways:

    BentoCloud Playground


    Python client

    import bentoml
    
    with bentoml.SyncHTTPClient("<your_deployment_endpoint_url>") as client:
        result = client.generate(
            prompt="Create 20 paraphrases of I hate you",
            threshhold=0.6,
        )
        print(result)

    CURL

    curl -X 'POST' \
      'http://<your_deployment_endpoint_url>/generate' \
      -H 'Accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{
      "prompt": "Create 20 paraphrases of I hate you",
      "threshhold": 0.6
    }'
  4. To make sure the Deployment automatically scales within a certain replica range, add the scaling flags:

    bentoml deploy . --scaling-min 0 --scaling-max 3

    If it’s already deployed, update its allowed replicas as follows:

    bentoml deployment update <deployment-name> --scaling-min 0 --scaling-max 3

    For more information, see the concurrency and autoscaling documentation.

Local serving

BentoML allows you to run and test your code locally, so you can quickly validate it with local compute resources.

  1. Clone the project repository and install the dependencies.

    git clone https://github.com/bentoml/BentoShield.git
    cd BentoShield
    
    # Recommended: Python 3.11
    pip install -r requirements.txt
  2. Make sure to fill in the missing environment variables in the .env file, and source it accordingly (see the sketch below).
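
    A minimal sketch of the .env contents, assuming the same variable names passed via --env during deployment (using export so that source works):

    # .env (names taken from the deploy flags above)
    export HF_TOKEN=<your_hf_token>
    export OPENAI_API_KEY=<your_openai_api_key>
    export OPENAI_BASE_URL=https://api.openai.com/v1

    Then load it into your shell with: source .env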

  3. Serve it locally.

    bentoml serve .
  4. Visit or send API requests to http://localhost:3000.

The server is now active at http://localhost:3000. You can interact with it using the Swagger UI or in other ways:

CURL
curl -X 'POST' \
  'http://localhost:3000/generate' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "prompt": "Create 20 paraphrases of I love you",
  "threshhold": 0.6
}'
Python client
import bentoml

with bentoml.SyncHTTPClient("http://localhost:3000") as client:
    response = client.generate(
        prompt="Create 20 paraphrases of I love you",
        threshhold=0.6,
    )
    print(response)

For custom deployment in your infrastructure, use BentoML to generate an OCI-compliant image.