This is a library built on PyTorch and HuggingFace Transformers to measure the gap between neural text and human text with the MAUVE measure, introduced in this NeurIPS 2021 paper (Outstanding Paper Award) and this JMLR 2023 paper.
MAUVE is a measure of the gap between neural text and human text. It is computed using the Kullback–Leibler (KL) divergences between the two text distributions in a quantized embedding space of a large language model. MAUVE can identify differences in quality arising from model sizes and decoding algorithms.
New: MAUVE is available via HuggingFace Evaluate!
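A minimal sketch of that route, following the Evaluate metric card (this assumes the evaluate package is installed; predictions are the model generations and references the human text):

import evaluate

# loads the MAUVE metric wrapper; the first call downloads the metric script
mauve = evaluate.load('mauve')
predictions = ["hello there", "general kenobi"]
references = ["hello there", "general kenobi"]
mauve_results = mauve.compute(predictions=predictions, references=references)
print(mauve_results.mauve)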
Features:
- MAUVE with quantization using k-means.
- Adaptive selection of k-means hyperparameters.
- Compute MAUVE using pre-computed GPT-2 features (i.e., terminal hidden state), or featurize raw text using HuggingFace transformers + PyTorch.
- MAUVE can also be used for other modalities (e.g. images or audio): pass in pre-computed feature embeddings to our API.
Further details can be found below.
For scripts to reproduce the experiments in the paper, please see this repository.
For a direct install, run this command from your terminal:
pip install mauve-text
If you wish to edit or contribute to MAUVE, install from source:
git clone git@github.com:krishnap25/mauve.git
cd mauve
pip install -e .
Some functionality requires more packages. Please see the requirements below.
The installation command above installs the main requirements, which are:
numpy>=1.18.1
scikit-learn>=0.22.1
faiss-cpu>=1.7.0
tqdm>=4.40.0
In addition, if you wish to use featurization within MAUVE, you need to manually install:
torch>=1.1.0: see the PyTorch installation instructions
transformers>=3.2.0: simply run pip install transformers after PyTorch has been installed (see the detailed HuggingFace installation instructions)
Let p_text and q_text each be a list of strings, where each string is a complete generation (including context). As a best practice, MAUVE needs at least a few thousand generations each for p_text and q_text (the paper uses 5000 each). For our demo, we use 100 generations each for fast running time.
To demonstrate the functionality of this package on real data, this repository provides scripts to download and use sample data in the ./examples folder (these are not part of the MAUVE package; you need to clone the repository to use them).
Let us download some Amazon product reviews as well as machine generations, provided by the GPT-2 output dataset repo, by running this command in our shell (the download is ~17 MB):
python examples/download_gpt2_dataset.py
The data is downloaded into the ./data folder.
We can load the data (100 samples out of the available 5000) in Python as follows:
from examples import load_gpt2_dataset
p_text = load_gpt2_dataset('data/amazon.valid.jsonl', num_examples=100) # human
q_text = load_gpt2_dataset('data/amazon-xl-1542M.valid.jsonl', num_examples=100) # machine
We can now compute MAUVE as follows (note that this requires installation of PyTorch and HF Transformers).
import mauve
# call mauve.compute_mauve using raw text on GPU 0; each generation is truncated to 256 tokens
out = mauve.compute_mauve(p_text=p_text, q_text=q_text, device_id=0, max_text_length=256, verbose=False)
print(out.mauve) # prints 0.9917
This first downloads the GPT-2 large tokenizer and pre-trained model (if you do not already have them downloaded). Even if you have the model cached locally, loading it for the first time can take up to 30 seconds.
out now contains the fields:
- out.mauve: MAUVE score, a number between 0 and 1. Larger values indicate that P and Q are closer.
- out.frontier_integral: Frontier Integral, a number between 0 and 1. Smaller values indicate that P and Q are closer.
- out.mauve_star and out.frontier_integral_star: their corresponding versions computed with Krichevsky-Trofimov smoothing. See this JMLR 2023 paper on why these could be preferable.
- out.divergence_curve: a numpy.ndarray of shape (m, 2); plot it with matplotlib to view the divergence curve.
- out.p_hist: a discrete distribution, which is a quantized version of the text distribution p_text.
- out.q_hist: same as above, but for q_text.
You can plot the divergence curve as follows:
# Make sure matplotlib is installed in your environment
import matplotlib.pyplot as plt
plt.plot(out.divergence_curve[:, 1], out.divergence_curve[:, 0])
plt.show()
For each text (in both p_text and q_text), MAUVE internally uses the terminal hidden state from GPT-2 large as a feature representation. Of course, more recent LLMs can also be used; generally, the better the feature embeddings, the better the performance of MAUVE.
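For instance, here is a minimal sketch of featurizing text yourself with a HuggingFace model and passing the result in as pre-computed features. The helper terminal_hidden_state is ours for illustration, not part of the mauve API:

import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2-large"  # any HF language model works; better embeddings help
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval()

@torch.no_grad()
def terminal_hidden_state(text):
    # hidden state of the final token, truncating each text to 256 tokens
    ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    return model(**ids).last_hidden_state[0, -1].numpy()

p_feats = np.stack([terminal_hidden_state(t) for t in p_text])
q_feats = np.stack([terminal_hidden_state(t) for t in q_text])
out = mauve.compute_mauve(p_features=p_feats, q_features=q_feats)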
There are multiple ways to use this package. For instance, you can use cached hidden states directly (this does not require PyTorch and HF Transformers to be installed):
# call mauve.compute_mauve using features obtained directly
# p_feats and q_feats are `np.ndarray`s of shape (n, dim)
# we use a synthetic example here
import numpy as np
p_feats = np.random.randn(100, 1024) # feature dimension = 1024
q_feats = np.random.randn(100, 1024)
out = mauve.compute_mauve(p_features=p_feats, q_features=q_feats)
Note that this API can be used to evaluate other modalities such as images or audio with MAUVE.
You can also compute MAUVE using the tokenized (BPE) representation from the GPT-2 vocabulary (e.g., obtained from an explicit call to transformers.GPT2Tokenizer).
# call mauve.compute_mauve using tokens on GPU 1
# p_toks, q_toks are each a list of LongTensors of shape [1, length]
# we use synthetic examples here
import torch
p_toks = [torch.LongTensor(np.random.choice(50257, size=(1, 32), replace=True)) for _ in range(100)]
q_toks = [torch.LongTensor(np.random.choice(50257, size=(1, 32), replace=True)) for _ in range(100)]
out = mauve.compute_mauve(p_tokens=p_toks, q_tokens=q_toks, device_id=1, max_text_length=1024)
To view the progress messages, pass in the argument verbose=True to mauve.compute_mauve. You can also use different forms as inputs for p and q, e.g., p via p_text and q via q_features.
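For example, a minimal sketch mixing the two forms; q_feats_cached below is a random placeholder, and in practice the cached features must come from the same featurization as p (gpt2-large hidden states are 1280-dimensional):

# p given as raw text (featurized with gpt2-large on the fly),
# q given as pre-computed features of matching dimension
q_feats_cached = np.random.randn(100, 1280)  # placeholder for real cached features
out = mauve.compute_mauve(p_text=p_text, q_features=q_feats_cached, device_id=0)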
mauve.compute_mauve takes the following arguments:
- p_features: numpy.ndarray of shape (n, d), where n is the number of generations
- q_features: numpy.ndarray of shape (n, d), where n is the number of generations
- p_tokens: list of length n, where each entry is a torch.LongTensor of shape (1, length); length can vary between generations
- q_tokens: list of length n, where each entry is a torch.LongTensor of shape (1, length); length can vary between generations
- p_text: list of length n, where each entry is a string
- q_text: list of length n, where each entry is a string
- num_buckets: the size of the histogram to quantize P and Q. Options: 'auto' (default) or an integer
- pca_max_data: the number of data points to use for PCA dimensionality reduction prior to clustering. If -1, use all the data. Default -1
- kmeans_explained_var: amount of variance of the data to keep in dimensionality reduction by PCA. Default 0.9
- kmeans_num_redo: number of times to redo k-means clustering (the best objective is kept). Default 5
- kmeans_max_iter: maximum number of k-means iterations. Default 500
- featurize_model_name: name of the model from which features are obtained. Default 'gpt2-large'. Use one of ['gpt2', 'gpt2-medium', 'gpt2-large', 'gpt2-xl']
- device_id: device for featurization. Supply a GPU id (e.g., 0 or 3) to use the GPU; if no GPU with this id is found, the CPU is used
- max_text_length: maximum number of tokens to consider. Default 1024
- divergence_curve_discretization_size: number of points to consider on the divergence curve. Default 25
- mauve_scaling_factor: "c" from the paper. Default 5
- verbose: if True (default), print running time updates
- seed: random seed to initialize k-means cluster assignments
- batch_size: batch size for feature extraction
Note: p and q can be of different lengths, but it is recommended that they be the same length.
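Since comparisons are only meaningful under identical hyperparameters (see the usage guidelines below), it can help to pin the relevant ones explicitly. A sketch reusing the synthetic features from above:

out = mauve.compute_mauve(
    p_features=p_feats, q_features=q_feats,
    num_buckets=10,            # 0.1 * the 100 samples per distribution used here
    kmeans_explained_var=0.9,  # PCA variance retained before clustering
    mauve_scaling_factor=5,    # "c" from the paper
    seed=25,                   # fixes the k-means initialization
    verbose=False,
)
print(out.mauve)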
The best way to contact the authors in case of any questions or clarifications (about the package or the paper) is by raising an issue on GitHub. We are not able to respond to queries over email.
If you find any bugs, please raise an issue on GitHub. If you would like to contribute, please submit a pull request. We encourage and highly value community contributions.
Some features which would be good to have are:
- featurization in HuggingFace Transformers with a JAX backend.
MAUVE is quite different from most metrics in common use, so here are a few guidelines on proper usage of MAUVE:
- Relative comparisons:
  - We find that MAUVE is best suited for relative comparisons, while the absolute MAUVE score is less meaningful.
  - For instance, if we wish to find which of model1 and model2 better matches the human distribution, we can compare MAUVE(text_model1, text_human) and MAUVE(text_model2, text_human).
  - The absolute number MAUVE(text_model1, text_human) can vary based on the hyperparameters selected below, but the relative trends remain the same.
  - Make sure that the hyperparameters are exactly the same for the MAUVE scores under comparison. Some hyperparameters are described below.
- Number of generations:
  - MAUVE computes the similarity between two distributions, so each distribution must contain at least a few thousand samples (we use 5000 each).
  - MAUVE with a smaller number of samples is biased towards optimism (that is, MAUVE typically goes down as the number of samples increases) and exhibits a larger standard deviation between runs.
- Number of clusters (discretization size):
  - We take num_buckets to be 0.1 times the number of samples.
  - The performance of MAUVE is quite robust to this, provided the number of generations is not too small.
- MAUVE is too large or too small:
  - The parameter mauve_scaling_factor controls the absolute value of the MAUVE score without changing the relative ordering between various methods. Its main purpose is to help with interpretability.
  - If you find that all your methods get a very high MAUVE score (e.g., 0.995, 0.994), try increasing the value of mauve_scaling_factor (note: this also increases the per-run standard deviation of MAUVE).
  - If you find that all your methods get a very low MAUVE score (e.g., < 0.4), try decreasing the value of mauve_scaling_factor.
- MAUVE takes too long to run:
  - You can try reducing the number of clusters using the argument num_buckets. The clustering algorithm's run time scales as the square of the number of clusters, and clustering really starts to slow down once the number of clusters exceeds 500. In that case, it can help to set the number of clusters to 500 by overriding the default of num_data_points / 10 (so do this when the number of samples for each of p and q is over 5000).
  - You can also try reducing the clustering hyperparameters: set kmeans_num_redo to 1, and if this does not suffice, kmeans_max_iter to 100. This lets the clustering run faster at the cost of returning a worse clustering. A concrete sketch of these settings follows after this list.
- MAUVE's variance is large relative to the differences we try to quantify:
  - We observed that it is quite easy to capture basic errors with MAUVE, but much harder to quantify subtle differences (e.g., when trying to improve over nucleus sampling).
  - To measure subtle differences with confidence, the best solution is to use better embeddings, if you have access to them.
  - You might also want to run more random repetitions to reduce the variance: more k-means seeds (cheapest in terms of compute), more generation seeds (for sampling-based algorithms), or a larger number of text samples. See the seed-averaging sketch after this list.
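As a concrete sketch of the speed-focused settings above (only sensible when each side has several thousand samples):

out = mauve.compute_mauve(
    p_features=p_feats, q_features=q_feats,
    num_buckets=500,      # cap the clusters, overriding the num_data_points / 10 default
    kmeans_num_redo=1,    # a single clustering attempt instead of the default 5
    kmeans_max_iter=100,  # fewer k-means iterations than the default 500
)

And a minimal sketch of averaging over k-means seeds, the cheapest way to reduce variance:

import numpy as np

# featurization is reused via p_features / q_features; only the quantization is re-run
scores = [
    mauve.compute_mauve(p_features=p_feats, q_features=q_feats,
                        seed=s, verbose=False).mauve
    for s in range(5)
]
print(f"MAUVE = {np.mean(scores):.3f} +/- {np.std(scores):.3f}")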
If you find this package useful, or you use it in your research, please cite the following papers:
@article{pillutla-etal:mauve:jmlr2023,
title={{MAUVE Scores for Generative Models: Theory and Practice}},
author={Pillutla, Krishna and Liu, Lang and Thickstun, John and Welleck, Sean and Swayamdipta, Swabha and Zellers, Rowan and Oh, Sewoong and Choi, Yejin and Harchaoui, Zaid},
journal={JMLR},
year={2023}
}
@inproceedings{pillutla-etal:mauve:neurips2021,
title={MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers},
author={Pillutla, Krishna and Swayamdipta, Swabha and Zellers, Rowan and Thickstun, John and Welleck, Sean and Choi, Yejin and Harchaoui, Zaid},
booktitle={NeurIPS},
year={2021}
}
@inproceedings{liu-etal:mauve-theory:neurips2021,
title={{Divergence Frontiers for Generative Models: Sample Complexity, Quantization Effects, and Frontier Integrals}},
author={Liu, Lang and Pillutla, Krishna and Welleck, Sean and Oh, Sewoong and Choi, Yejin and Harchaoui, Zaid},
booktitle={NeurIPS},
year={2021}
}
This work was supported by NSF DMS-2134012, NSF CCF-2019844, NSF DMS-2023166, the DARPA MCS program through NIWC Pacific (N66001-19-2-4031), the CIFAR "Learning in Machines & Brains" program, a Qualcomm Innovation Fellowship, and faculty research awards.