Mining Millions of Search Result Pages of Hundreds of Search Engines from 25 Years of Web Archives.
Start now by running your custom analysis/experiment, scraping your query log, or looking at our example files.
The data in the Archive Query Log is highly sensitive (still, you can re-crawl everything from the Wayback Machine). For that reason, we ensure that custom experiments or analyses cannot leak sensitive data (please get in touch if you have questions) by using TIRA as a platform for custom analyses/experiments. In TIRA, you submit a Docker image that implements your experiment. Your software is then executed in a sandbox (without an internet connection) to ensure that it does not leak sensitive information. After your software execution has finished, administrators will review your submission and unblind it so that you can access the outputs.
Please refer to our dedicated TIRA tutorial as the starting point for your experiments.
To run the CLI and crawl a query log on your own machine, please refer to the instructions for single-machine deployments. If you instead want to scale up and run the crawling pipelines on a cluster, please refer to the instructions for cluster deployments.
To run the Archive Query Log CLI on your machine, you can either use our PyPI package or the Docker image. (If you absolutely need to, you can also install the Python CLI or the Docker image from source.)
First, install Python 3.12 and pipx (pipx installs the AQL CLI into an isolated virtual environment). Then, install the Archive Query Log CLI by running:
pipx install archive-query-log
Now you can run the Archive Query Log CLI:
aql --help
First, install Python 3.12 and then clone this repository. From inside the repository directory, create a virtual environment and activate it:
python3.12 -m venv venv/
source venv/bin/activate
Install the Archive Query Log by running:
pip install -e .
Now you can run the Archive Query Log CLI:
aql --help
You only need to install Docker.
Note: The commands below use the syntax of the PyPI installation. To run the same commands with the Docker installation, replace aql with docker run -it -v "$(pwd)"/config.override.yml:/workspace/config.override.yml ghcr.io/webis-de/archive-query-log, for example:
docker run -it -v "$(pwd)"/config.override.yml:/workspace/config.override.yml ghcr.io/webis-de/archive-query-log --help
First, install Docker and clone this repository. From inside the repository directory, build the Docker image like this:
docker build -t aql .
Note: The commands below use the syntax of the PyPI installation. To run the same commands with the Docker installation, replace aql with docker run -it -v "$(pwd)"/config.override.yml:/workspace/config.override.yml aql, for example:
docker run -it -v "$(pwd)"/config.override.yml:/workspace/config.override.yml aql --help
Crawling the Archive Query Log requires access to an Elasticsearch cluster and some S3 block storage. To configure access, create a config.override.yml file in the current directory with the following contents, replacing the placeholders with your actual credentials:
es:
  host: "<HOST>"
  port: 9200
  username: "<USERNAME>"
  password: "<PASSWORD>"
s3:
  endpoint_url: "<URL>"
  bucket_name: archive-query-log
  access_key: "<KEY>"
  secret_key: "<KEY>"
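To sanity-check the Elasticsearch credentials before crawling, you can, for example, query the cluster health endpoint directly (a sketch using the placeholders from above; adjust the scheme and port if your cluster is set up differently):
curl -u "<USERNAME>:<PASSWORD>" "https://<HOST>:9200/_cluster/health?pretty"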
The crawling pipeline of the Archive Query Log can best be understood by looking at a small toy example. Here, we want to crawl and parse SERPs of the ChatNoir search engine from the Wayback Machine.
TODO: Add example instructions.
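Until those instructions are in place, here is a rough sketch of how the toy example maps onto the pipeline commands described in the following sections (the interactive prompts for the archive and provider details are assumptions):
aql archives add               # register the Wayback Machine
aql providers add              # register ChatNoir and its URL prefixes
aql sources build              # build archive-provider source pairs
# ...fetch the captures for each source pair (see below)...
aql serps parse url-query      # keep only captures with a query in the URL
aql serps download warc        # download the SERP contents into the WARC cache
aql serps upload warc          # bundle the WARC records and upload them to S3
aql serps parse serp-query     # parse the query as it appears on the SERP
aql serps parse serp-snippets  # parse the result snippets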
Add new web archive services (e.g., the Wayback Machine) to the AQL by running:
aql archives add
We maintain a list of compatible web archives below.
Below, we provide a curated list of web archives. In this list, archives that have both a CDX API and a Memento API are compatible with the Archive Query Log crawler and can be used to mine SERPs.
Name | CDX API | Memento API | Size | Funding | Notes | AQL |
---|---|---|---|---|---|---|
Wayback Machine | 🟩 | 🟩 | 928B | non-profit | - | 🟩 |
Stanford Web Archive | 🟩 | 🟩 | - | university | Websites selected by subject specialists | 🟩 |
Arquivo.pt | 🟩 | 🟩 | 47M | government | Focus on Portuguese websites | 🟩 |
Icelandic Web Archive | 🟩 | 🟩 | - | government | Only .is-domains and hand-picked Icelandic websites of other TLDs | 🟩 |
Estonian Web Archives | 🟩 | 🟩 | 75k | government | Only .ee-domains and hand-picked Estonian websites of other TLDs | 🟩 |
Australian Web Archive | 🟩 | 🟩 | 8B | government | Mostly .au-domains and other Australia-related websites | ? |
New Zealand Web Archive | 🟩 | 🟩 | 47k | government | Websites about New Zealand and the Pacific | ? |
MNMKK OSZK Webarchívum | 🟩 | 🟩 | - | government | Focus on Hungarian websites | ? |
UK Web Archive | 🟨 | 🟨 | - | government | UK websites | 🟨¹ |
archive.today | 🟥 | 🟩 | - | private | Also known as archive.is and archive.ph | 🟥 |
Perma.cc | 🟥 | 🟥 | - | university | Maintained by the Harvard Law School Library | 🟥 |
¹ The UK Web Archive is currently unavailable due to a cyber-attack.
Selected archives available as Archive-It collections
- PRONI Collections
- Harvard Library
- National Library of Ireland
- National Central Library of Florence
- Stanford University Archives
- Stanford University, Social Sciences Resource Group
- California State Library
- Ivy Plus Libraries Confederation
- University of Texas at San Antonio Libraries Special Collections
- Kentucky Department for Libraries and Archives
- University of California, San Francisco
- Montana State Library
- Columbia University Libraries
- North Carolina State Archives and State Library of North Carolina
- International Internet Preservation Consortium
- EU Web Archive
See below on how to import all public Archive-It archives automatically.
Further archives with unclear status (not yet examined)
- Pagefreezer
- archive.st
- FreezePage
- WebCite
- ウェブ魚拓 (Megalodon)
- Ina
- Web-Archiv des Deutschen Bundestages
- WARP Web Archiving Project
- Kulturarw3
- Langzeitarchivierung im Bibliotheksverbund Bayern
- Ghostarchive
- Webarchiv Γsterreich
- EuropArchive
- Luxembourg Web Archive
- Web Archive Singapore
- DIR Slovak Internet Archive
- Spletni Arhiv Narodne
- The Web Archive of Catalonia
- Web Archive Switzerland
- 臺灣網站典藏庫 (Taiwan Web Archive)
- UK Government Web Archive
- UK Parliament Web Archive
- EU Exit Web Archive
- End of Term Web Archive
- Web Archiving Project of the Pacific Islands
- Library of Congress Web Archives
- Национальный цифровой архив России (National Digital Archive of Russia)
- CyberCemetery
- Πύλη Αρχείου Ελληνικού Ιστού (Greek Web Archive Portal)
- York University Libraries Wayback Machine
- NYARC Web Archive
- NLM Web Collecting and Archiving
- Common Crawl
- Webarchiv der Deutschen Nationalbibliothek
- Hrvatski Arhiv Weba
- Webarchiv
- Netarkivet
- Suomalainen Verkkoarkisto
- ארכיון האינטרנט הישראלי (Israeli Internet Archive)
We last checked Wikipedia's list of web archiving initiatives and the Memento Depot on April 3, 2025. Archives listed there but missing above were found to be unavailable or broken.
If you know any other web archive service, we would appreciate a pull request adding the details to this list.
Add new search providers (e.g., Google) to the AQL by running:
aql providers add
A search provider can be any website that offers some search functionality. Ideally, you should also specify common prefixes of the URLs of its search result pages (e.g., /search for Google). Narrowing down URL prefixes avoids crawling too many captures that do not contain search results.
Refer to the import instructions below to import providers from the AQL-22 YAML file format.
Once you have added at least one archive and one search provider, you can crawl archived captures of SERPs for each search provider and each archive service. That is, we compute the cross-product of the archives and the search providers' domains and URL prefixes (roughly: archive × provider). Start building source pairs (i.e., archive–provider pairs) by running:
aql sources build
Running the command again after adding more archives or providers will automatically create the missing source pairs.
For each source pair, we now fetch captures from the archive service, restricted to the provider's domain and URL prefix given in the source pair. Again, rerunning the command after adding more source pairs fetches just the missing captures.
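A sketch of this step, assuming the CLI exposes capture fetching as a captures subcommand (check aql --help for the exact name):
aql captures fetch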
Not every capture necessarily points to a search engine result page (SERP). But usually, SERPs contain the user query in the URL, so we can filter out non-SERP captures by parsing the URLs.
aql serps parse url-query
Parsing the query from the capture URL will add SERPs to a new, more focused index that only contains SERPs. From the SERPs, we can also parse the page number and offset of the SERP, if available.
aql serps parse url-page
aql serps parse url-offset
All the above commands can be run in parallel, and they can be run multiple times to update the SERP index. Already parsed SERPs will be skipped.
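For example, on a single machine you could simply run the three URL parsers as background jobs:
aql serps parse url-query &
aql serps parse url-page &
aql serps parse url-offset &
wait  # wait for all three parsers to finish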
Up to this point, we have only fetched the metadata of the captures, most prominently the URL. However, the snippets of the SERPs are not contained in the metadata but only on the web page. So, we need to download the actual web pages from the archive service.
aql serps download warc
This command downloads the contents of each SERP to a WARC file that is stored, along with a reference to the SERP, in the configured cache directory on disk. In real-life scenarios, you will probably want to parallelize this step and write to a cache directory that is accessible from all workers, because downloads from the Internet Archive and other archives tend to be slow (though the archives usually handle parallel requests fine).
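A simple way to parallelize on a single machine is to start a few identical download workers that share the configured cache directory (a sketch; for large crawls, the cluster deployment described below is the better fit):
for i in 1 2 3 4; do
  aql serps download warc &  # each worker downloads SERPs into the shared cache
done
wait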
The local WARC cache consists of many small WARC files, which is good for parallel download stability but inefficient for storage. Hence, in this next step, we pick up WARC records from multiple smaller cache files and upload them as fewer, larger bundles to an S3-compatible block storage:
aql serps upload warc
A pointer to the WARC block in S3 is stored in the SERP index so that we can efficiently access a specific SERP's contents later.
From the WARC contents, we can now parse the query as it appears on the SERP (which can sometimes differ from the query encoded in the URL).
aql serps parse serp-query
More importantly, we can parse the snippets of the SERP.
aql serps parse serp-snippets
Parsing the snippets from the SERP's WARC contents will also add the SERP's results to a new index.
We support automatically importing providers and parsers from the AQL-22 YAML file format (see data/selected-services.yaml). To import the services and parsers from the AQL-22 YAML file, run the following commands:
aql providers import
aql parsers url-query import
aql parsers url-page import
aql parsers url-offset import
aql parsers warc-query import
aql parsers warc-snippets import
We also support importing a previous crawl of captures from the AQL-22 file system backend:
aql captures import aql-22
Last, we support importing all archives from the Archive-It web archive service:
aql archives import archive-it
Running the Archive Query Log on a cluster is recommended for large-scale crawls. We provide a Helm chart that automatically starts crawling and parsing jobs for you and stores the results in an Elasticsearch cluster.
Just install Helm and configure kubectl for your cluster.
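Before deploying, it is worth verifying that both tools point at the right cluster, for example:
kubectl config current-context  # the cluster the chart will be installed into
helm version                    # confirm that Helm is installed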
Crawling the Archive Query Log requires access to an Elasticsearch cluster and some S3 block storage. Configure the Elasticsearch and S3 credentials in a values.override.yaml file like this:
elasticsearch:
  host: "<HOST>"
  port: 9200
  username: "<USERNAME>"
  password: "<PASSWORD>"
s3:
  endpoint_url: "<URL>"
  bucket_name: archive-query-log
  access_key: "<KEY>"
  secret_key: "<KEY>"
Let us deploy the Helm chart on the cluster (we first do a dry run with --dry-run to check that everything works):
helm upgrade --install --values ./helm/values.override.yaml --dry-run archive-query-log ./helm
If everything works and the output looks good, you can remove the --dry-run flag to actually deploy the chart.
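That is, the actual deployment becomes:
helm upgrade --install --values ./helm/values.override.yaml archive-query-log ./helm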
If you no longer need the chart, you can uninstall it:
helm uninstall archive-query-log
If you use the Archive Query Log dataset or the crawling code in your research, please cite the following paper describing the AQL and its use cases:
Jan Heinrich Reimer, Sebastian Schmidt, Maik FrΓΆbe, Lukas Gienapp, Harrisen Scells, Benno Stein, Matthias Hagen, and Martin Potthast. The Archive Query Log: Mining Millions of Search Result Pages of Hundreds of Search Engines from 25 Years of Web Archives. In Hsin-Hsi Chen et al., editors, 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023), pages 2848β2860, July 2023. ACM.
You can use the following BibTeX entry for citation:
@InProceedings{reimer:2023,
author = {Jan Heinrich Reimer and Sebastian Schmidt and Maik Fr{\"o}be and Lukas Gienapp and Harrisen Scells and Benno Stein and Matthias Hagen and Martin Potthast},
booktitle = {46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023)},
doi = {10.1145/3539618.3591890},
editor = {Hsin{-}Hsi Chen and Wei{-}Jou (Edward) Duh and Hen{-}Hsen Huang and Makoto P. Kato and Josiane Mothe and Barbara Poblete},
isbn = {9781450394086},
month = jul,
numpages = 13,
pages = {2848--2860},
publisher = {ACM},
site = {Taipei, Taiwan},
title = {{The Archive Query Log: Mining Millions of Search Result Pages of Hundreds of Search Engines from 25 Years of Web Archives}},
url = {https://dl.acm.org/doi/10.1145/3539618.3591890},
year = 2023
}
Refer to the local Python installation instructions to set up the development environment and install the dependencies.
Then, also install the test dependencies:
pip install -e .[tests]
After implementing a new feature, please check the code format, lint, static typing, and security, and run all unit tests with the following commands:
ruff check .                   # Code format and lint
mypy .                         # Static typing
bandit -c pyproject.toml -r .  # Security
pytest .                       # Unit tests
At the moment, our workflow for adding new tests for parsers goes like this (see the sketch after this list):
- Select the number of tests to run per service and the number of services.
- Auto-generate unit tests and download the test WARCs with generate_tests.py.
- Run the tests.
- Failing tests will open a diff editor showing the approval diff and a web browser tab with the Wayback URL.
- Use the web browser dev tools to find the query input field and the search result CSS paths.
- Close diffs and tabs and re-run tests.
- Kaggle dataset of the manual test SERPs, thanks to @DiTo97
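A minimal sketch of the generate-and-test loop from the workflow above (the exact arguments of generate_tests.py may differ; check the script itself):
python generate_tests.py  # auto-generate unit tests and download the test WARCs
pytest .                  # run the (re-)generated parser tests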
If you have found an important search provider missing from this query log, please suggest it by creating an issue. We also gratefully accept pull requests for adding search providers or new parser configurations!
If you are unsure about anything, post an issue or contact us:
- heinrich.merker@uni-jena.de
- s.schmidt@uni-leipzig.de
- maik.froebe@uni-jena.de
- lukas.gienapp@uni-leipzig.de
- harry.scells@uni-leipzig.de
- benno.stein@uni-weimar.de
- matthias.hagen@uni-jena.de
- martin.potthast@uni-leipzig.de
We are happy to help!
This repository is released under the MIT license. Files in the data/ directory are exempt from this license. If you use the AQL in your research, we would be glad if you could cite us.
The Archive Query Log (AQL) is a previously unused, comprehensive query log collected at the Internet Archive over the last 25 years. Its first version includes 356 million queries, 166 million search result pages, and 1.7 billion search results across 550 search providers. Although many query logs have been studied in the literature, the search providers that own them generally do not publish their logs to protect user privacy and vital business data. Of the few query logs publicly available, none combines size, scope, and diversity. The AQL is the first to do so, enabling research on new retrieval models and (diachronic) search engine analyses. Provided in a privacy-preserving manner, it promotes open research as well as more transparency and accountability in the search industry.