The sorcery of Natural Language Processing: from basic techniques such as Word2Vec and Topic Modelling, to LSTMs and pre-trained transformers.
If you have any questions or suggestions, don't hesitate to reach out!
pip install -r requirement.txt
1. Basics
This section covers the fundamentals of natural language processing (NLP). It introduces Spacy, a powerful NLP library, and covers topics like tokenization, lemmatization, stopword removal, and pattern matching.
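For illustration, a minimal spaCy sketch of these steps might look like the following (the `en_core_web_sm` model and the example sentence are assumptions, not the notebooks' exact code):

```python
import spacy
from spacy.matcher import Matcher

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("The striped bats were hanging on their feet and ate best fishes.")

# Tokenization, lemmatization, and stopword flags come straight from the pipeline.
for token in doc:
    print(token.text, token.lemma_, token.is_stop)

# Rule-based pattern matching: find a form of "hang" followed by "on".
matcher = Matcher(nlp.vocab)
matcher.add("HANG_ON", [[{"LEMMA": "hang"}, {"LOWER": "on"}]])
for match_id, start, end in matcher(doc):
    print("Match:", doc[start:end].text)
```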
2. POS and NER
This section focuses on Part-of-Speech (POS) tagging and Named Entity Recognition (NER). It explores how to assign POS tags to words in a sentence and how to identify and classify named entities like names, locations, and organizations.
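A hedged sketch of both tasks with spaCy (sentence and model name are illustrative only):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# POS tagging: coarse (pos_) and fine-grained (tag_) labels per token.
for token in doc:
    print(f"{token.text:<10} {token.pos_:<6} {token.tag_}")

# NER: labelled spans such as ORG, GPE, and MONEY.
for ent in doc.ents:
    print(ent.text, ent.label_)
```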
3. Text Classification

Text classification is the task of assigning predefined categories or labels to text data. This section demonstrates how to build a text classifier to detect spam messages using machine learning techniques.
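As a rough sketch, a spam classifier can be built as a TF-IDF + Naive Bayes pipeline in scikit-learn (the notebooks may use a different vectorizer or model, and the toy dataset below is an assumption):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Tiny illustrative dataset; a real spam corpus would have thousands of messages.
texts = ["Win a FREE prize now!!!", "Are we still meeting for lunch?",
         "URGENT: claim your reward", "Can you send me the report?"]
labels = ["spam", "ham", "spam", "ham"]

# Vectorize the text with TF-IDF, then fit a Multinomial Naive Bayes classifier.
clf = Pipeline([("tfidf", TfidfVectorizer()), ("nb", MultinomialNB())])
clf.fit(texts, labels)

print(clf.predict(["Claim your free reward today"]))  # -> ['spam'] on this toy data
```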
4. Word2Vec
Word2Vec is a popular word embedding technique that represents words as numerical vectors. This section delves into unsupervised sentiment analysis and how to create word embeddings using the Word2Vec model.
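A minimal sketch of training embeddings with gensim's Word2Vec (the corpus and hyperparameters below are assumptions for illustration):

```python
from gensim.models import Word2Vec

# Toy corpus: each document is a list of tokens.
sentences = [["the", "movie", "was", "great"],
             ["the", "film", "was", "fantastic"],
             ["the", "plot", "was", "boring"]]

# Train small 50-dimensional embeddings on the toy corpus.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=50)

print(model.wv["movie"].shape)          # 50-dimensional vector for "movie"
print(model.wv.most_similar("movie"))   # nearest neighbours in embedding space
```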
5. Topic Modelling

Topic modelling is a technique used to discover hidden thematic patterns in a collection of documents. This section explores Latent Dirichlet Allocation (LDA), a popular topic modelling algorithm.
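A minimal LDA sketch with gensim (the notebooks may use a different library; the tiny corpus and `num_topics=2` are illustrative assumptions):

```python
from gensim import corpora
from gensim.models import LdaModel

docs = [["cat", "dog", "pet", "vet"],
        ["stock", "market", "trade", "price"],
        ["dog", "vet", "pet", "food"],
        ["price", "market", "economy", "trade"]]

dictionary = corpora.Dictionary(docs)                 # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in docs]    # bag-of-words vectors

# Fit a 2-topic LDA model and print the top words per topic.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=20, random_state=0)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```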
6. Text Generation

Text generation involves creating new text based on existing patterns. This section demonstrates how to generate text using Long Short-Term Memory (LSTM) neural networks and provides resources for text generation from the novel "Moby Dick."
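A highly simplified word-level sketch of the idea, assuming Keras/TensorFlow; the actual notebook trains on the full "Moby Dick" text and will differ in tokenization, sequence length, and model size:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

# Tiny illustrative "corpus" (the opening words of Moby Dick).
text = "call me ishmael some years ago never mind how long precisely".split()
vocab = sorted(set(text))
word2idx = {w: i for i, w in enumerate(vocab)}

# Build (3-word context -> next word) training pairs.
seq_len = 3
X = np.array([[word2idx[w] for w in text[i:i + seq_len]]
              for i in range(len(text) - seq_len)])
y = np.array([word2idx[text[i + seq_len]] for i in range(len(text) - seq_len)])

# Embedding -> LSTM -> softmax over the vocabulary.
model = Sequential([
    Embedding(len(vocab), 16),
    LSTM(32),
    Dense(len(vocab), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=200, verbose=0)

# Greedy generation: repeatedly predict the next word from the last seq_len words.
seed = text[:seq_len]
for _ in range(5):
    idx = np.array([[word2idx[w] for w in seed[-seq_len:]]])
    seed.append(vocab[int(model.predict(idx, verbose=0).argmax())])
print(" ".join(seed))
```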