Sign Language Detection

The Sign Language Detection project is an image processing and object detection application, written in Python, that detects and recognizes hand gestures in still images and in real-time video captured from a webcam. It uses the MobileNet SSD architecture with TensorFlow for detection and OpenCV for image capture and processing.
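To make the pipeline concrete, here is a minimal sketch of single-image inference, assuming the model is exported as a TensorFlow SavedModel (as the TensorFlow Object Detection API exporter produces for SSD MobileNet). The export path and label map below are placeholders, not the repository's actual files:

```python
# Hedged sketch, not the repository's exact code: run an exported TensorFlow
# SavedModel detector (e.g. SSD MobileNet fine-tuned on sign gestures) on one image.
import cv2
import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_model/saved_model")  # assumed export path
LABELS = {1: "hello", 2: "thanks", 3: "yes", 4: "no"}          # hypothetical label map

image = cv2.imread("sample_sign.jpg")                          # BGR image from disk
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
input_tensor = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)

detections = detect_fn(input_tensor)
boxes = detections["detection_boxes"][0].numpy()      # normalized [ymin, xmin, ymax, xmax]
classes = detections["detection_classes"][0].numpy().astype(int)
scores = detections["detection_scores"][0].numpy()

h, w = image.shape[:2]
for box, cls, score in zip(boxes, classes, scores):
    if score < 0.5:                                    # keep only confident detections
        continue
    ymin, xmin, ymax, xmax = [int(v) for v in box * [h, w, h, w]]
    print(f"{LABELS.get(cls, cls)}: {score:.2f} at ({xmin}, {ymin}) -> ({xmax}, {ymax})")
```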

Installation

To get started, make sure all the necessary dependencies are installed. All of the installation commands are included in the Jupyter Notebook files, so you can get up and running quickly.

Here's how to install the dependencies:

  • Clone the project repository to your local machine.
  • Open the Jupyter Notebook.
  • Run each cell in the Notebook.
  • That's it! The required dependencies are installed automatically. A typical install cell is sketched after this list.
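For reference, an install cell in the notebook usually looks something like the following. The exact package list and versions come from the notebook itself, so treat this as an illustration rather than the project's pinned requirements:

```python
# Illustrative notebook install cell (assumed packages; check the notebook
# for the exact list this project uses).
!pip install tensorflow opencv-python numpy matplotlib
```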

Prerequisites

  • Python 3.6 or later
  • Jupyter Notebook
  • TensorFlow
  • OpenCV
  • MobileNet SSD

Setup

  1. Clone the repository
  2. Create a virtual environment using the command python -m venv env
  3. Activate the virtual environment with source env/bin/activate (for Linux) or env\Scripts\activate (for Windows)
  4. Install the required dependencies using pip install -r requirements.txt
  5. Download and install the MobileNet SSD object detection model (a download sketch is shown after this list)
  6. Launch the Jupyter Notebook with the command jupyter notebook
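For step 5, one way to fetch a pretrained SSD MobileNet checkpoint is to download it from the TensorFlow 2 Detection Model Zoo. The specific model and URL below are an assumption; the notebook may point to a different checkpoint:

```python
# Hedged sketch: download and extract a pretrained SSD MobileNet archive from
# the TF2 Detection Model Zoo (assumed URL; the notebook may use another model).
import tensorflow as tf

MODEL_URL = (
    "http://download.tensorflow.org/models/object_detection/"
    "tf2/20200711/ssd_mobilenet_v2_320x320_coco17_tpu-8.tar.gz"
)

# Downloads into the Keras cache (~/.keras) and extracts the tar.gz archive.
archive_path = tf.keras.utils.get_file(
    fname="ssd_mobilenet_v2_320x320_coco17_tpu-8.tar.gz",
    origin=MODEL_URL,
    extract=True,
)
print("Downloaded to:", archive_path)
```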

Usage

The project can be used to detect and recognize hand gestures in still images or in real-time video captured through the webcam. To use the application, run the Jupyter Notebook and follow the instructions it provides. A minimal real-time detection loop is sketched below.
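The sketch below shows what such a real-time loop might look like, assuming the same exported SavedModel and placeholder label map as in the earlier still-image example. It illustrates the general approach, not the notebook's exact code:

```python
# Hedged sketch: read webcam frames with OpenCV, run an exported SSD MobileNet
# SavedModel on each frame, and draw boxes for confident predictions.
import cv2
import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_model/saved_model")  # assumed export path
LABELS = {1: "hello", 2: "thanks", 3: "yes", 4: "no"}          # hypothetical label map

cap = cv2.VideoCapture(0)                  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    input_tensor = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
    detections = detect_fn(input_tensor)

    h, w = frame.shape[:2]
    boxes = detections["detection_boxes"][0].numpy()
    classes = detections["detection_classes"][0].numpy().astype(int)
    scores = detections["detection_scores"][0].numpy()

    for box, cls, score in zip(boxes, classes, scores):
        if score < 0.5:                    # keep only confident detections
            continue
        ymin, xmin, ymax, xmax = [int(v) for v in box * [h, w, h, w]]
        cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
        cv2.putText(frame, f"{LABELS.get(cls, cls)} {score:.2f}",
                    (xmin, max(ymin - 10, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)

    cv2.imshow("Sign Language Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```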

Contributing

Contributions to the project are welcome. To contribute, please fork the repository and create a pull request. Please ensure that your code adheres to the PEP 8 style guide and that all tests pass before submitting your pull request.
