Note that Filip Jerga is listed as an author because this project uses his boilerplate code (https://github.com/Jerga99/electron-react-boilerplate).
Demo and more details can be found at https://devpost.com/software/ctrlairspace.
A desktop application that lets users execute computer commands with air gestures, using movement recognition models. Hack the North 2020++ 🏆! The current iteration is not perfect, so use at your own risk 😎.
Current actions:
- Mouse and click
- Scroll
- Volume control
- Switching between windows
- Typing via voice input
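Under the hood, actions like these boil down to calls into an automation library such as pyautogui (one of the dependencies listed below). As a rough, hypothetical sketch of how a recognized gesture label could be dispatched to an OS action (the function and gesture names here are illustrative, not the project's actual mapping):

```python
import pyautogui

# Hypothetical dispatcher from a recognized gesture label to an OS action.
# Gesture names are illustrative; the real mapping lives in the Server code.
def perform_action(gesture, x=None, y=None):
    if gesture == "move" and x is not None and y is not None:
        pyautogui.moveTo(x, y)          # cursor follows the tracked hand
    elif gesture == "click":
        pyautogui.click()
    elif gesture == "scroll_up":
        pyautogui.scroll(5)             # positive amount scrolls up
    elif gesture == "scroll_down":
        pyautogui.scroll(-5)
    elif gesture == "volume_up":
        pyautogui.press("volumeup")     # media key supported by pyautogui
    elif gesture == "volume_down":
        pyautogui.press("volumedown")
    elif gesture == "switch_window":
        pyautogui.hotkey("alt", "tab")  # cycle windows on Windows
```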
- Set up Python (see below)
- (Optional) If you want the voice recognition feature, follow the instructions below.
- Install UI dependencies:

  ```
  npm install
  ```

- In one terminal, compile the React code:

  ```
  npm run watch
  ```

- In another terminal, start the Electron app:

  ```
  npm start
  ```
- `src\assets`: Change the file paths for `thepath1`, ..., `thepath6` (gesture demonstrations)
- `src\components\GestureMatch.js`: Change the file path for the logo
Python libraries to install (you may not have these yet):
- mediapipe
- opencv-python
- scikit-learn
- keyboard
- pyautogui
- azure-cognitiveservices-speech
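All of these are available on PyPI, so a single pip command should cover them:

```
pip install mediapipe opencv-python scikit-learn keyboard pyautogui azure-cognitiveservices-speech
```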
To run just the Python portion of the computer control, go to the `Server` folder and run: `python hello.py`
To run a demo of the gesture recognition without computer control, go to the `Server` folder and run: `python gesture_detector.py`
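For a sense of what `gesture_detector.py` builds on, here is a minimal sketch of the MediaPipe + OpenCV pattern (not the project's actual detector): read webcam frames, run MediaPipe Hands, and inspect the landmarks it returns.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# Minimal hand-tracking loop: grab webcam frames, run MediaPipe Hands,
# and print the index fingertip position (landmark 8).
cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            tip = results.multi_hand_landmarks[0].landmark[8]
            print(f"index fingertip at ({tip.x:.2f}, {tip.y:.2f})")
        cv2.imshow("hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```

A classifier (e.g. with scikit-learn, which is among the dependencies) can then map those landmark coordinates to gesture labels.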
- Create an Azure Speech Service resource (Speech Service setup example).
- In the `Server` folder, create a copy of `settings_template.json` and rename it to `settings.json`.
- From Azure, get the key (e.g. "2a27343dbaca44059d48a0f8a23bd905") and resource region (e.g. "eastus") and add them to the JSON file.
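Once `settings.json` is filled in, the azure-cognitiveservices-speech SDK consumes the key and region roughly like this (a minimal sketch; the `"key"` and `"region"` field names are assumptions, so match whatever `settings_template.json` actually uses):

```python
import json
import azure.cognitiveservices.speech as speechsdk

# Field names below are assumptions; match settings_template.json.
with open("settings.json") as f:
    settings = json.load(f)

speech_config = speechsdk.SpeechConfig(
    subscription=settings["key"], region=settings["region"])
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# One-shot recognition from the default microphone.
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("You said:", result.text)
```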
- Improve robustness and reliability of gesture detection
- Connect config settings from UI to Python
- Add an exit program feature
- Add post-hackathon comments/documentation