This project uses a local Roboflow Inference server to detect hand gestures that trigger movement in a robot. The robot moves when it detects two palms and stops when it does not.
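In other words, the control loop reduces to a simple gate on the number of detected palms. Here is a minimal sketch of that idea (the `"palm"` class name and the returned actions are illustrative assumptions, not the exact code in `palm_detection.py`):

```python
def decide_action(predictions):
    """Decide what the robot should do from one frame's detections.

    predictions: a list of detection dicts as returned by Roboflow, e.g.
    [{"class": "palm", "confidence": 0.91, ...}, ...]
    """
    palms = [p for p in predictions if p["class"] == "palm"]
    return "move" if len(palms) == 2 else "stop"
```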
- Raspberry Pi 4 Model B - 8GB RAM
- Freenove Hexapod Robot Kit
- This project uses this model of my hands: https://universe.roboflow.com/workspace-bdfjh/robot-gesture
- If you want to train your own model, you can run `photoshoot.py` to capture training images from the robot's camera.
- Update the Roboflow variables here for your new project; a sketch of what they look like follows.
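For orientation, the variables look roughly like this. This is a sketch using the `roboflow` Python package; the API key and version number are placeholders, and the `local=` argument points `predict()` calls at the local inference server started later in these steps:

```python
from roboflow import Roboflow

# Placeholders: substitute your own API key and model version.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("workspace-bdfjh").project("robot-gesture")
# local= routes predict() calls to the local inference server on port 9001.
model = project.version(1, local="http://localhost:9001/").model
```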
- Follow Freenove's instructions to set up the robot (docs)
- The robot files in this repo were taken directly from Freenove's repository here. Look for the latest updates to these files there:
- Command.py
- Control.py
- IMU.py
- Kalman.py
- PCA9685.py
- PID.py
- Servo.py
- point.txt
- Set up Raspberry Pi OS Bookworm (docs)
- Pull down this repo on the Raspberry Pi.
- Install global packages:
```bash
sudo apt install -y python3-picamera2 python3-libcamera
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```
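A quick sanity check that both installs landed (the import works because `python3-picamera2` is a system package):

```bash
docker --version
python3 -c "from picamera2 import Picamera2; print('picamera2 OK')"
```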
- In the project directory:
- Set up the virtual environment:
```bash
python3 -m venv --system-site-packages venv
```
***`--system-site-packages` gives your venv access to `libcamera`. This will not work without it.***
```bash
source venv/bin/activate
pip install -r requirements.txt
```
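To confirm the venv can actually see the system camera stack (this is exactly what `--system-site-packages` buys you):

```bash
python -c "import libcamera; print('libcamera visible from venv')"
```

If this fails, delete `venv` and re-create it with the flag above.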
- Start the local inference server:
```bash
sudo docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu
```
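The first run pulls the image, which can take a while on a Pi. To confirm the container is up and listening on port 9001:

```bash
sudo docker ps --filter ancestor=roboflow/roboflow-inference-server-arm-cpu
```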
- In this project directory, ensure the virtual environment is activated:
```bash
source venv/bin/activate
```
- Run the robot:
```bash
python palm_detection.py
```
- Put your palms out and see if it works. There will be some lag. The print statements will tell you whether the delay is in fetching the inference response or whether the object detection simply isn't working; a minimal timing sketch follows.
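If you want to see where the time goes, you can wrap the inference call with a timer like this. The `model` object is the one from the Roboflow setup sketch above; `"frame.jpg"` is a hypothetical image path, and `confidence=40` is just a common Roboflow default:

```python
import time

t0 = time.monotonic()
result = model.predict("frame.jpg", confidence=40).json()
print(f"inference round-trip: {time.monotonic() - t0:.2f}s, "
      f"{len(result['predictions'])} detections")
```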
- This could be a good candidate for a Raspberry Pi cluster. This could be split up into several nodes, e.g.:
- Inference server node
- Camera controller node
- Robot controller node
- Some of the lag may be due to the camera needing to write the image to disk so that the image path can be passed to Roboflow's `predict()` function (example). Performance may improve if a binary stream (e.g. with `io.BytesIO()`) could be passed to Roboflow instead of an image path, so that the image could be serialized in memory rather than written to disk. One possible in-memory approach is sketched below.
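As an untested sketch of that idea: the `inference_sdk` client that ships alongside the inference server can accept an in-memory numpy array directly, skipping the disk write entirely. The model ID, API key, and camera configuration here are assumptions:

```python
from inference_sdk import InferenceHTTPClient
from picamera2 import Picamera2

# "robot-gesture/1" is a placeholder model ID; substitute your own.
client = InferenceHTTPClient(api_url="http://localhost:9001",
                             api_key="YOUR_API_KEY")

picam2 = Picamera2()
# Request a plain RGB frame; the default format includes an alpha channel.
picam2.configure(picam2.create_still_configuration(main={"format": "RGB888"}))
picam2.start()

frame = picam2.capture_array()  # numpy array, never touches the disk
result = client.infer(frame, model_id="robot-gesture/1")
print(result["predictions"])
```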