VexLLM generates Vulnerability-Exploitability eXchange (VEX) information using an LLM, so as to silence negligible CVE alerts produced by Trivy.
Two installation methods are available:
Option 1: As a standalone program:
```bash
go install github.com/AkihiroSuda/vexllm/cmd/vexllm@latest
```

Option 2: As a Trivy plugin:
```bash
trivy plugin install github.com/AkihiroSuda/vexllm
alias vexllm="trivy vexllm"
```
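Either way, you can confirm the installation by printing the command overview (the `--help` flag also appears in the command reference below):

```bash
vexllm --help
```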
```bash
# Set OpenAI API key
export OPENAI_API_KEY=...

# Specify OpenAI model
export OPENAI_MODEL=gpt-4o-mini

# Generate a report using Trivy
trivy image python:3.12.4 --format=json --severity HIGH,CRITICAL >python.json

# Generate .trivyignore using VexLLM
vexllm generate python.json .trivyignore \
  --hint-not-server \
  --hint-compromise-on-availability \
  --hint-used-commands=python3 \
  --hint-unused-commands=git,wget,curl,apt,apt-get

# Print the report, using the generated .trivyignore
trivy convert --format=table python.json
```
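The same pipeline can emit an OpenVEX document instead of a `.trivyignore` file, via the `--output-format` flag documented in the command reference below; a minimal sketch (the output filename here is arbitrary):

```bash
# Emit OpenVEX instead of .trivyignore
vexllm generate python.json python.openvex.json --output-format=openvex
```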
The following hints are passed to the LLM:
- The image is not used as a server program
- Confidentiality and Integrity matter more than Availability for this non-server image
- `python3` command is known to be used
- `git`, `wget`, `curl`, `apt`, `apt-get` commands are known to be unused
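Beyond these structured hints, the `--hint` flag documented in the command reference below accepts arbitrary free text; a hypothetical example (the hint wording is illustrative, not from the original docs):

```bash
vexllm generate python.json .trivyignore \
  --hint-not-server \
  --hint="the container only runs a read-only batch job with no inbound network access"
```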
Output of `.trivyignore`:
```
# {"vulnerability":{"@id":"CVE-2024-32002","description":"Git is a revision control system. Prior to versions 2.45.1, 2.44.1, 2.43.4, 2.42.2, 2.41.1, 2.40.2, and 2.39.4, repositories with submodules can be crafted in a way that exploits a bug in Git whereby it can be fooled into writing files not into the submodule's worktree but into a `.git/` directory. This allows writing a hook that will be executed while the clone operation is still running, giving the user no opportunity to inspect the code that is being executed. The problem has been patched in versions 2.45.1, 2.44.1, 2.43.4, 2.42.2, 2.41.1, 2.40.2, and 2.39.4. If symbolic link support is disabled in Git (e.g. via `git config --global core.symlinks false`), the described attack won't work. As always, it is best to avoid cloning repositories from untrusted sources."},"products":[{"@id":"git-man@1:2.39.2-1.1"}],"status":"not_affected","justification":"vulnerable_code_not_in_execute_path","impact_statement":"{\"confidence\":0.6,\"reason\":\"This RCE vulnerability is specific to recursive clones in Git, which is not a commonly used feature in the context of a Python container image.\"}"}
CVE-2024-32002
# [...]
# {"vulnerability":{"@id":"CVE-2023-45853","description":"MiniZip in zlib through 1.3 has an integer overflow and resultant heap-based buffer overflow in zipOpenNewFileInZip4_64 via a long filename, comment, or extra field. NOTE: MiniZip is not a supported part of the zlib product. NOTE: pyminizip through 0.2.6 is also vulnerable because it bundles an affected zlib version, and exposes the applicable MiniZip code through its compress API."},"products":[{"@id":"zlib1g-dev@1:1.2.13.dfsg-1"}],"status":"not_affected","justification":"vulnerable_code_not_in_execute_path","impact_statement":"{\"confidence\":0.7,\"reason\":\"The zlib vulnerability related to MiniZip is not a concern as the artifact does not involve using MiniZip functionality.\"}"}
CVE-2023-45853
```
The `confidence` score and the `reason` string in the `impact_statement` property are generated by the LLM.
Other properties are duplicated from the original input.
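Because each suppressed CVE ID is preceded by a comment carrying the full VEX JSON, the LLM's decisions can be audited mechanically; a minimal sketch with `jq`, assuming the `# {...}` comment layout shown above:

```bash
# Print each CVE ID together with the LLM's confidence score
grep '^# {' .trivyignore |
  sed 's/^# //' |
  jq -r '[.vulnerability."@id", (.impact_statement | fromjson | .confidence)] | @tsv'
```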
VexLLM is tested with OpenAI GPT-4o mini and Anthropic Claude 3.5 Sonnet.
The following env vars are recognized:
- OpenAI
  - `OPENAI_API_KEY` (necessary)
  - `OPENAI_MODEL`, e.g., `gpt-3.5-turbo` (default), `gpt-4o-mini` (recommended)
  - `OPENAI_BASE_URL`
  - `OPENAI_API_BASE`
  - `OPENAI_ORGANIZATION`
- Anthropic
  - `ANTHROPIC_API_KEY` (necessary)
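For example, switching the walkthrough above to Anthropic only requires the corresponding key; with the default `--llm auto` the backend is presumably detected from the available keys, or it can be forced explicitly:

```bash
export ANTHROPIC_API_KEY=...
vexllm generate python.json .trivyignore --llm=anthropic
```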
VexLLM may also work with Google AI and Ollama, but these backends are not tested. See `pkg/llm/...`.
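If you want to experiment with the untested Ollama backend anyway, it can at least be selected explicitly via the `--llm` flag; a hypothetical invocation, assuming a locally running Ollama daemon (how the model is configured for this backend is not documented here):

```bash
vexllm generate python.json .trivyignore --llm=ollama
```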
#### vexllm generate

Generate Vulnerability-Exploitability eXchange (VEX) information using LLM, so as to silence negligible CVE alerts that are produced by Trivy.

```
Usage:
  vexllm generate INPUT OUTPUT

Examples:
  # Basic usage
  export OPENAI_API_KEY=...
  export OPENAI_MODEL=gpt-4o-mini
  trivy image python:3.12.4 --format=json --severity HIGH,CRITICAL >python.json
  vexllm generate python.json .trivyignore \
    --hint-not-server \
    --hint-compromise-on-availability \
    --hint-used-commands=python3 \
    --hint-unused-commands=git,wget,curl,apt,apt-get
  trivy convert --format=table python.json

Flags:
  -h, --help                              help for generate
      --hint stringArray                  Hint, as an arbitrary text
      --hint-compromise-on-availability   Hint: focus on Confidentiality and Integrity rather than on Availability
      --hint-not-server                   Hint: not a server program
      --hint-unused-commands strings      Hint: list of unused shell commands
      --hint-used-commands strings        Hint: list of used shell commands
      --input-format string               Input format ([auto trivy]) (default "auto")
      --llm string                        LLM backend ([auto openai ollama anthropic googleai]) (default "auto")
      --llm-batch-size int                Number of vulnerabilities to be processed in a single LLM API call (default 10)
      --llm-temperature float             Temperature
      --output-format string              Output format ([auto trivyignore openvex]) (default "auto")

Global Flags:
      --debug   debug mode [$DEBUG]
```
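Two of these flags are worth noting for tuning: a smaller `--llm-batch-size` means more API calls but fewer vulnerabilities per prompt, and `--llm-temperature` controls sampling randomness; the values below are illustrative, not recommendations from the original docs:

```bash
vexllm generate python.json .trivyignore \
  --llm-batch-size=5 \
  --llm-temperature=0.1 \
  --hint-not-server
```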
#### vexllm completion bash

```
Generate the autocompletion script for the bash shell.

This script depends on the 'bash-completion' package.
If it is not installed already, you can install it via your OS's package manager.

To load completions in your current shell session:

	source <(vexllm completion bash)

To load completions for every new session, execute once:

#### Linux:

	vexllm completion bash > /etc/bash_completion.d/vexllm

#### macOS:

	vexllm completion bash > $(brew --prefix)/etc/bash_completion.d/vexllm

You will need to start a new shell for this setup to take effect.

Usage:
  vexllm completion bash

Flags:
  -h, --help              help for bash
      --no-descriptions   disable completion descriptions

Global Flags:
      --debug   debug mode [$DEBUG]
```