This repo contains the work done for our summer project: building a conversational bot. :robot:
Group:
Varun Khatri, Prateek Jain, Adit Khokhar, Atharva Umbarkar, Ishir Roongta
(P.S. For checkpoints, contact any of the members mentioned above 😃)
The requirements are listed in the requirements.txt file:

```bash
pip install -r requirements.txt
python3 -m spacy download en
```
The main file to focus on is tf_attention_model.py.
To run the ChatBot:

```bash
python3 chatbot.py
```

To run the ConversationalBot:

```bash
python3 conversational_bot.py
```
The main aim of this project was to build a conversational bot that takes audio input and outputs a meaningful reply, keeping in mind factors like the context and intent of the user's input.
The three main parts of this project were (a minimal end-to-end sketch follows the list):
- Speech to text
- Topic attention (to generate a response)
- Text to speech
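To make the flow concrete, below is a minimal sketch of how the three stages chain together. Every function here is a hypothetical placeholder standing in for the repo's actual modules, not its real API.

```python
# Minimal pipeline sketch; each stage function is a hypothetical placeholder
# standing in for the repo's actual modules.

def speech_to_text(audio_path: str) -> str:
    raise NotImplementedError("replace with the speech-to-text model")

def generate_response(user_text: str) -> str:
    raise NotImplementedError("replace with the topic-attention seq2seq model")

def text_to_speech(reply: str, out_path: str) -> None:
    raise NotImplementedError("replace with the gTTS wrapper")

def converse_once(audio_path: str, out_path: str = "reply.mp3") -> str:
    user_text = speech_to_text(audio_path)   # 1. audio -> text
    reply = generate_response(user_text)     # 2. topic-aware response
    text_to_speech(reply, out_path)          # 3. text -> audio
    return reply
```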
This model converts the user's audio messages into text. Please look at the file for the full implementation. Dataset used for training: LibriSpeech.
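As a stand-in illustration only (the repo trains its own model; this uses the off-the-shelf SpeechRecognition package instead), the speech-to-text step could look like:

```python
# Stand-in illustration: uses the SpeechRecognition package, not the
# repo's trained model.
import speech_recognition as sr

def transcribe(audio_path: str) -> str:
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:  # expects WAV/AIFF/FLAC input
        audio = recognizer.record(source)
    # Google's free web API; requires an internet connection
    return recognizer.recognize_google(audio)
```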
This model covers the response-generation part of the conversational bot. We trained it on the OpenSubtitles dataset.
This model adds topic awareness to the ENCODER - DECODER model for better response generation, by focusing its "attention" on specific parts of the input rather than the whole sentence.
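For context, here is a minimal sketch of a generic Bahdanau-style attention layer in TensorFlow. The topic-aware variant in tf_attention_model.py extends this idea by also attending over topic words, so treat this as the baseline form rather than the repo's exact code.

```python
import tensorflow as tf

class BahdanauAttention(tf.keras.layers.Layer):
    """Generic additive (Bahdanau) attention over encoder outputs."""

    def __init__(self, units):
        super().__init__()
        self.W1 = tf.keras.layers.Dense(units)  # projects the decoder state
        self.W2 = tf.keras.layers.Dense(units)  # projects the encoder outputs
        self.V = tf.keras.layers.Dense(1)       # scores each source position

    def call(self, query, values):
        # query: decoder hidden state, shape (batch, hidden)
        # values: encoder outputs, shape (batch, src_len, hidden)
        query_with_time = tf.expand_dims(query, 1)
        score = self.V(tf.nn.tanh(self.W1(query_with_time) + self.W2(values)))
        weights = tf.nn.softmax(score, axis=1)             # attention distribution
        context = tf.reduce_sum(weights * values, axis=1)  # weighted sum of encoder outputs
        return context, weights
```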
This graph shows the optimal number of topics to set for the news-articles dataset.
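A graph like this is typically produced by training LDA at several topic counts and scoring each model. Below is a minimal sketch using gensim's CoherenceModel, with a toy corpus standing in for the real news-articles dataset (all variable names are illustrative):

```python
# Illustrative sweep over topic counts to pick num_topics; the tiny corpus
# below is a placeholder for the real news-articles dataset.
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel
import matplotlib.pyplot as plt

texts = [
    ["stock", "market", "trade", "price"],
    ["match", "team", "goal", "score"],
    ["election", "vote", "party", "poll"],
    ["stock", "price", "profit", "market"],
]
id2word = Dictionary(texts)
corpus = [id2word.doc2bow(doc) for doc in texts]

topic_counts = range(2, 11, 2)
scores = []
for k in topic_counts:
    lda = LdaModel(corpus=corpus, id2word=id2word, num_topics=k,
                   passes=10, random_state=42)
    cm = CoherenceModel(model=lda, texts=texts, dictionary=id2word,
                        coherence="c_v")
    scores.append(cm.get_coherence())

plt.plot(list(topic_counts), scores, marker="o")
plt.xlabel("Number of topics")
plt.ylabel("Coherence score (c_v)")
plt.show()
```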
The key parameters of gensim's LdaModel:

- corpus — Stream of document vectors or a sparse matrix of shape (num_terms, num_documents).
- id2word – Mapping from word IDs to words. It is used to determine the vocabulary size, as well as for debugging and topic printing.
- num_topics — The number of requested latent topics to be extracted from the training corpus.
- random_state — Either a randomState object or a seed to generate one. Useful for reproducibility.
- update_every — Number of documents to be iterated through for each update. Set to 0 for batch learning, > 1 for online iterative learning.
- chunksize — Number of documents to be used in each training chunk.
- passes — Number of passes through the corpus during training.
- alpha — set to 'auto' to learn an asymmetric prior from the corpus.
- per_word_topics — If True, the model also computes a list of topics, sorted in descending order of most likely topics for each word, along with their phi values multiplied by the feature length (i.e. word count).
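Putting these parameters together, a minimal sketch of the model construction in gensim (the corpus and id2word variables are assumed to come from preprocessing, and the specific values shown are illustrative):

```python
# Illustrative construction of the LDA model with the parameters listed above;
# `corpus` and `id2word` are assumed to come from preprocessing.
from gensim.models import LdaModel

lda_model = LdaModel(
    corpus=corpus,         # BoW representation of the documents
    id2word=id2word,       # gensim Dictionary mapping word IDs to words
    num_topics=10,         # latent topics (chosen via the coherence graph)
    random_state=100,      # for reproducibility
    update_every=1,        # online iterative learning
    chunksize=100,         # documents per training chunk
    passes=10,             # passes over the corpus
    alpha="auto",          # learn an asymmetric prior from the corpus
    per_word_topics=True,  # also compute per-word topic assignments
)
```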
In the topic visualisation:

- The size of a bubble tells us how dominant that topic is across all the documents (our corpus).
- The words on the right are the keywords driving that topic.
- The closer two bubbles are, the more similar the topics; the farther apart they are, the less similar.
- Preferably, we want non-overlapping bubbles spread as much as possible across the chart.
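A chart like this can be produced with pyLDAvis; a short sketch, assuming the trained lda_model, corpus, and id2word from above:

```python
# Sketch: render the interactive topic chart with pyLDAvis.
import pyLDAvis
import pyLDAvis.gensim_models  # in older pyLDAvis versions: pyLDAvis.gensim

vis = pyLDAvis.gensim_models.prepare(lda_model, corpus, id2word)
pyLDAvis.save_html(vis, "lda_topics.html")  # open the HTML file in a browser
```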
gTTS, a Python library, was used to write a function that outputs audio from the generated responses.
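A minimal sketch of such a function (the function name and output path are illustrative, not the repo's actual code):

```python
from gtts import gTTS

def speak(text: str, out_path: str = "response.mp3") -> None:
    # Convert the generated response text to speech and save it as an MP3.
    gTTS(text=text, lang="en").save(out_path)
```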