This repository contains a Python notebook demonstrating how to use the GPT-4 model to answer questions about Dungeons & Dragons 5e. With the power of LLMs and embeddings stored in a vector database, this D&D Assistant can answer the most heated questions your players might argue about.
The notebook takes a text question as input and returns an answer, consulting the SRD snippets that the similarity search finds most relevant.
There is also a demo of using the function calling API to retrieve spell descriptions, sketched below.
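The sketch below shows roughly how such a spell lookup could be wired up with the OpenAI chat completions `tools` interface. The `get_spell_description` helper, the spell data, and the model name are illustrative assumptions rather than the notebook's exact code, and the notebook may use the older `functions` parameter instead.

```python
import json

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical local lookup standing in for the notebook's spell data source.
SPELLS = {"fireball": "A bright streak flashes from your pointing finger to a point you choose..."}


def get_spell_description(name: str) -> str:
    return SPELLS.get(name.lower(), "That spell is not in the SRD excerpt.")


tools = [{
    "type": "function",
    "function": {
        "name": "get_spell_description",
        "description": "Look up the SRD description of a D&D 5e spell by name.",
        "parameters": {
            "type": "object",
            "properties": {"name": {"type": "string", "description": "The spell's name, e.g. 'Fireball'."}},
            "required": ["name"],
        },
    },
}]

messages = [{"role": "user", "content": "What does Fireball do?"}]
first = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)

# Assumes the model chose to call the tool; run it locally and return the result.
call = first.choices[0].message.tool_calls[0]
messages.append(first.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": call.id,
    "content": get_spell_description(**json.loads(call.function.arguments)),
})

final = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)
print(final.choices[0].message.content)
```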
- Loading the Rules: The 5e SRD rules text is loaded from a Markdown document.
- Chunking Text: The SRD text is then split into manageable chunks so that each piece can be embedded and retrieved on its own.
- Creating Embeddings: Each chunk of SRD text is then run through an embedding model to produce a vector representation.
- Storing Embeddings: The generated embeddings are stored in a vector database for quick and easy retrieval (these indexing steps are sketched after this list).
- Answering Questions: When a question is asked, an embedding is generated for the question and the database is searched for the most similar embeddings (i.e., the most relevant SRD text chunks). The selected chunks are passed into a prompt for a GPT-4 model, which uses that context to answer the user's question (also sketched below).
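A minimal sketch of the indexing steps (loading, chunking, embedding, and storing) is shown below. The file path, chunking strategy, embedding model, and collection name are illustrative assumptions; the notebook's actual choices may differ.

```python
from pathlib import Path

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

# Load the 5e SRD rules text from a Markdown file (the path is an assumption).
srd_text = Path("srd.md").read_text(encoding="utf-8")

# Naive chunking on blank-line-separated blocks; the notebook may chunk differently.
chunks = [block.strip() for block in srd_text.split("\n\n") if block.strip()]

# Embed each chunk with a Sentence Transformers model (model choice is an assumption).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(chunks)

# Store the embeddings in an in-memory Qdrant collection, keeping the raw text as payload.
qdrant = QdrantClient(":memory:")
qdrant.create_collection(
    collection_name="srd_rules",
    vectors_config=VectorParams(
        size=encoder.get_sentence_embedding_dimension(),
        distance=Distance.COSINE,
    ),
)
qdrant.upsert(
    collection_name="srd_rules",
    points=[
        PointStruct(id=i, vector=vector.tolist(), payload={"text": chunk})
        for i, (vector, chunk) in enumerate(zip(embeddings, chunks))
    ],
)
```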
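Continuing from the indexing sketch above (it reuses `encoder` and `qdrant`), the following sketch shows the question-answering step: embed the question, retrieve the nearest chunks from Qdrant, and pass them as context to GPT-4. The OpenAI v1+ SDK interface is assumed, and the prompt wording is illustrative.

```python
from openai import OpenAI

openai_client = OpenAI()  # expects OPENAI_API_KEY in the environment


def answer_question(question: str, top_k: int = 3) -> str:
    # Embed the question with the same encoder used for the SRD chunks.
    query_vector = encoder.encode(question).tolist()

    # Retrieve the most similar SRD chunks from Qdrant.
    hits = qdrant.search(collection_name="srd_rules", query_vector=query_vector, limit=top_k)
    context = "\n\n".join(hit.payload["text"] for hit in hits)

    # Pass the retrieved chunks and the question to GPT-4.
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer D&D 5e rules questions using only the provided SRD excerpts."},
            {"role": "user", "content": f"SRD excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


print(answer_question("Does casting a spell as a bonus action limit my other spellcasting that turn?"))
```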
- Qdrant for vector database management
- Sentence Transformers for generating embeddings
- OpenAI as the LLM backend
Pull requests for improvements to the demo are welcome.
This repository is licensed under the MIT License unless otherwise stated.
The D&D 5e SRD rules text used in this demonstration is dual-licensed by Wizards of the Coast under the terms of CC-BY-4.0 or OGL 1.0a. Full details are available at: https://dnd.wizards.com/resources/systems-reference-document