- The aim of this project was to develop an Autonomous Car Prototype with automatic steering control, traffic sign recognition, traffic light detection and other object detection features.
- The project runs on a model car built around a Raspberry Pi 4B+, assisted by one to three external computing devices depending on available GPU memory. The model car collects input from a camera module and an ultrasonic sensor and sends the data to an external computer over IP. The computer processes this input for movement control, object detection (traffic signs and traffic lights) and collision avoidance.
- These features are achieved using Machine Learning algorithms, Artificial Neural Networks, Sensor Fusion and Computer Vision.
- For the dataset and training procedures used to train the traffic sign model, refer to my other repository - ML-Traffic-Sign-Classification
- For the dataset and training procedures used to train the driver model, refer to my other repositories:
  - For the dataset - GhostCar-TrainingData
  - For the training - GhostCar-Training
The following programs are run on an external system with sufficient GPU compute capacity.
angle.txt
: File buffer for TCP data transfer.
modelall.h5
: Trained model file to control the Steering for the Prototype Car.
modelleft.h5
: Trained model file to control the Steering for the Prototype Car - only left turns.
modelright.h5
: Trained model file to control the Steering for the Prototype Car - only right turns.
run.bat
: Batch File to manage multiple windows.
uploadTCP.py
: Establishes TCP connection to the prototype car.
videoPredict.py
: Program that predicts the steering angle using a trained Convolutional Neural Network (see the sketch below).
- For the dataset used to train the above models, refer to my other repository - GhostCar-TrainingData
- For the training procedures and techniques, refer to my other repository - GhostCar-Training
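The exact implementation lives in videoPredict.py and uploadTCP.py; below is only a minimal sketch of the intended flow, assuming an MJPEG stream URL from the RPi and a 66x200 input size (both assumptions, not confirmed by the actual scripts):

```python
# Minimal sketch (not the exact videoPredict.py): read frames from the RPi's
# IP stream, predict a steering angle with the trained CNN, and write it to
# angle.txt so uploadTCP.py can forward it to the car.
import cv2
from tensorflow.keras.models import load_model

STREAM_URL = "http://<YourRPiIP>:<PortNumber>/stream.mjpg"  # placeholder endpoint

model = load_model("modelall.h5")        # trained steering model from this folder
cap = cv2.VideoCapture(STREAM_URL)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocessing is assumed: resize/normalise to whatever the model was trained on.
    x = cv2.resize(frame, (200, 66)) / 255.0
    angle = float(model.predict(x[None, ...], verbose=0)[0][0])
    with open("angle.txt", "w") as f:    # file buffer read by uploadTCP.py
        f.write(str(angle))

cap.release()
```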
lanesImage.py
: Detects Lanes on Road - for Image Input.
lanesTuner.py
: Helps in tuning lanesImage.py
lanesVideo.py
: Detects Lanes on Road - for Video Input.
test_images.jpg
: Sample images to test lanesImage.py
test_video.mp4
: Sample videos to test lanesVideo.py
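The lane detection programs follow the classic OpenCV pipeline (grayscale, Canny edges, region mask, Hough lines). A minimal sketch is below; the thresholds and region shape are placeholder assumptions, and tuning them is what lanesTuner.py is for:

```python
# Minimal sketch of a Canny + Hough lane detector (thresholds are assumptions).
import cv2
import numpy as np

def detect_lanes(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)

    # Keep only a triangular region of interest in front of the car.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w, h), (w // 2, int(h * 0.6))]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    cropped = cv2.bitwise_and(edges, mask)

    lines = cv2.HoughLinesP(cropped, 2, np.pi / 180, 100,
                            minLineLength=40, maxLineGap=5)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(image, (x1, y1), (x2, y2), (0, 255, 0), 5)
    return image

if __name__ == "__main__":
    out = detect_lanes(cv2.imread("test_images.jpg"))
    cv2.imwrite("lanes_out.jpg", out)
```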
coco.names
: COCO is a large-scale object detection, segmentation, and captioning dataset; see the COCO dataset website to learn more.
OD.py
: Object Detection Program - for Video Input.
ODimage.py
: Object Detection Program - for Image Input.
test.mp4
: Sample Video for Testing Object Detection.
yolov3(380).cfg
: YOLOv3 config file.
yolov3(608).cfg
: YOLOv3 config file - Standard.
yolov3-tiny.cfg
: YOLOv3-tiny config file - Lightweight.
yolov3.weights
: YOLOv3 neural network weights file - Standard.
yolov3-tiny.weights
: YOLOv3-tiny neural network weights file - Lightweight.
- YOLOv3 config and weight files obtained from YOLOv3 by Joseph Chet Redmon.
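A minimal sketch of how the config, weights and coco.names can be loaded with OpenCV's DNN module for YOLOv3 inference; the 416x416 blob size and 0.5 confidence threshold are assumptions, and OD.py/ODimage.py may differ:

```python
# Minimal sketch of YOLOv3 inference with OpenCV's DNN module, using the
# config, weights and class names shipped in this folder.
import cv2
import numpy as np

with open("coco.names") as f:
    classes = [line.strip() for line in f]

net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")

cap = cv2.VideoCapture("test.mp4")   # sample video in this folder
ok, frame = cap.read()
cap.release()

blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

h, w = frame.shape[:2]
for output in outputs:
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = scores[class_id]
        if conf > 0.5:                                   # confidence threshold (assumed)
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            x, y = int(cx - bw / 2), int(cy - bh / 2)
            cv2.rectangle(frame, (x, y), (x + int(bw), y + int(bh)), (0, 255, 0), 2)
            cv2.putText(frame, classes[class_id], (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

cv2.imwrite("detections.jpg", frame)
```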
The following programs are run on the Raspberry Pi mounted on the Car Prototype.
GhostCarDrive.py
: Program that controls the Prototype Car's components for autonomous driving.
IPStreaming.py
: Program that streams car's camera input to external computer over IP.
SampleControlServoMotor.py
: Sample program to test Servo Motors manually.
SampleTestMotor.py
: Sample program to test DC motor through L298N driver.
client.py
: Sample TCP Client Program - Runs on RPi to establish connection with External GPU.
server.py
: Sample TCP Server Program - Runs on laptop to send back data to RPi.
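A minimal sketch of the request/response exchange that client.py and server.py illustrate; the IP address, port and message format below are assumptions, not the project's actual protocol:

```python
# Minimal sketch of the RPi-to-laptop TCP exchange (host, port and message
# format are assumptions, not the exact client.py/server.py protocol).
import socket

HOST, PORT = "192.168.1.10", 5005   # assumed laptop IP and port

# --- server side (laptop): reply with the latest steering angle ---
def serve():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while conn.recv(16):                 # car asks for an update
                with open("angle.txt") as f:     # written by videoPredict.py
                    conn.sendall(f.read().encode())

# --- client side (RPi): request the angle and act on it ---
def request_angle():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"angle?")
        return float(cli.recv(64).decode())
```

In the actual project, uploadTCP.py plays the server role on the laptop and the RPi-side drive program plays the client role.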
drive relay.py
: Program that connects with UDACITY Autonomous Driving Simulator.
drive.py
: Program to test trained models on the simulator.
model speed.h5
: Trained Model to control the Car's Speed.
model steering.h5
: Trained Model to control the Car's Steering.
model throttle.h5
: Trained Model to control the Car's Throttle.
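drive.py talks to the simulator over its socket.io telemetry interface; below is a minimal sketch of that loop, with the image preprocessing assumed rather than taken from the actual script:

```python
# Minimal sketch of the simulator loop: receive a telemetry frame, predict
# steering with the trained model, and send the control command back.
import base64
from io import BytesIO

import eventlet
import eventlet.wsgi
import numpy as np
import socketio
from PIL import Image
from tensorflow.keras.models import load_model

sio = socketio.Server()
model = load_model("model steering.h5")

@sio.on("telemetry")
def telemetry(sid, data):
    # The simulator sends the centre-camera frame as a base64-encoded JPEG.
    image = np.asarray(Image.open(BytesIO(base64.b64decode(data["image"]))))
    image = image[60:135] / 255.0            # crop/normalise (assumed preprocessing)
    angle = float(model.predict(image[None, ...], verbose=0)[0][0])
    sio.emit("steer", data={"steering_angle": str(angle), "throttle": "0.2"})

if __name__ == "__main__":
    app = socketio.WSGIApp(sio)
    eventlet.wsgi.server(eventlet.listen(("", 4567)), app)  # simulator's default port
```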
TrafficLight.py
: Program that recognises Traffic Lights.
Sample_Video.mp4
: Sample video to test Traffic light Recognition.
Sample_Output.mp4
: Sample video output for the Traffic light Recognition.
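A minimal sketch of colour-threshold traffic-light detection on Sample_Video.mp4; the HSV ranges and pixel-count threshold are assumptions, and TrafficLight.py may use a different approach:

```python
# Minimal sketch of HSV colour-thresholding for traffic-light state detection.
import cv2

def light_state(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    masks = {
        "red":    cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)),
        "yellow": cv2.inRange(hsv, (20, 120, 120), (35, 255, 255)),
        "green":  cv2.inRange(hsv, (45, 100, 100), (90, 255, 255)),
    }
    # The colour with the most lit pixels wins, if it is prominent enough.
    state, mask = max(masks.items(), key=lambda kv: cv2.countNonZero(kv[1]))
    return state if cv2.countNonZero(mask) > 500 else "none"

cap = cv2.VideoCapture("Sample_Video.mp4")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    print(light_state(frame))
cap.release()
```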
Traffic Signs Detection.py
: Program to predict traffic signs - for Video Input.
trafficmodel.h5
: CNN model for traffic sign prediction.
- For the dataset and training procedures used to train the following model, refer to my other repository - ML-Traffic-Sign-Classification
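A minimal sketch of running trafficmodel.h5 frame by frame; the 32x32 input size, normalisation and confidence threshold are assumptions (see ML-Traffic-Sign-Classification for the real training setup):

```python
# Minimal sketch of frame-by-frame sign classification with trafficmodel.h5.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("trafficmodel.h5")

cap = cv2.VideoCapture(0)            # or a video file path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    x = cv2.resize(frame, (32, 32)).astype("float32") / 255.0   # assumed input size
    probs = model.predict(x[None, ...], verbose=0)[0]
    class_id, confidence = int(np.argmax(probs)), float(np.max(probs))
    if confidence > 0.9:             # act only on confident predictions (threshold assumed)
        print(f"sign class {class_id} ({confidence:.2f})")
cap.release()
```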
Udacity Simulator - Windows64 - Installer.zip
: Windows installer for the open-source simulation software developed by UDACITY using the Unity Engine for autonomous car simulation.
For the actual project, refer to the following link: https://github.com/udacity/self-driving-car-sim
videoCapture.py
: Program to record video from a camera, an mp4 file or an mjpeg stream.
videoFlipper.py
: Program that flips the video horizontally or vertically to simulate different camera positions.
videoFramer.py
: Program that saves every frame of a video input into a separate folder, along with a config file listing all the frame file names; usually used for dataset generation (see the sketch below).
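A minimal sketch of the frame-dumping behaviour described for videoFramer.py; the output layout, the frames.txt config format and the input path are assumptions:

```python
# Minimal sketch of dumping a video into per-frame images plus an index file,
# as videoFramer.py is described (file layout and config format are assumptions).
import os
import cv2

def frame_video(video_path, out_dir="frames"):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    names = []
    i = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        name = f"frame_{i:05d}.jpg"
        cv2.imwrite(os.path.join(out_dir, name), frame)
        names.append(name)
        i += 1
    cap.release()
    # Config file listing every frame, handy for dataset generation.
    with open(os.path.join(out_dir, "frames.txt"), "w") as f:
        f.write("\n".join(names))

frame_video("input.mp4")   # hypothetical input path
```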
The onboard hardware used on the prototype was a Raspberry Pi 4 B+ (4 GB RAM) model.
- Install
Python 3.6.0+
- Copy the 'RPi Programs' folder onto the Raspberry Pi.
- Run the following command in your terminal
pip3 install -r Requirements.txt
The external hardware used in this project consists of two laptops, running Steering Control and Object Detection respectively.
- Install
Python 3.6.0+
- Copy the entire project file system.
- Run the following command in your terminal
pip install -r Requirements.txt
- Testing: Run the sample programs under each sub-folder to test the functioning of all the features.
- Training (optional): Run the training programs along with the relevant dataset to customize the neural network models as per requirements.
- Actuation: Test all the hardware components and use the Pin Configuration table to connect all the components on the model prototype.
- Camera Input over IP Stream: Run
python3 IPStreaming.py
on the RPi4 and check for output in a web browser at http://[YourRPiIP]:[PortNumber]/index.html NOTE: Make sure to use a dedicated Raspberry Pi Camera Module.
- Steering Control Prediction: Run
python videoPredict.py
on the external system to start steering control prediction using the CNN.
- Establish TCP Communication: Create a TCP server to send data back to the prototype model. Run
python uploadTCP.py
to start a server that sends data upon request from the RPi.
- Self-driving of the prototype: Run
python3 GhostCarDrive.py
on the RPi4 and wait for the program to perform the GPIO pin setup and establish a connection with the TCP server. Once these actions are complete, the car will start self-driving.
NOTE: To terminate the program, press Ctrl + C only once! Pressing it multiple times will forcefully terminate the program and cause the GPIO pins to misbehave due to improper termination (see the shutdown sketch below).
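The reason a single Ctrl + C matters is that the drive loop must reach GPIO.cleanup() before exiting; a minimal sketch of that shutdown pattern is shown below (the pin number and duty cycles are placeholders, not the project's actual pin map):

```python
# Minimal sketch of the clean-shutdown pattern behind the Ctrl + C note:
# catch KeyboardInterrupt once and release the GPIO pins before exiting.
import time
import RPi.GPIO as GPIO

MOTOR_PIN = 18   # placeholder pin, not the project's pin map

GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)
pwm = GPIO.PWM(MOTOR_PIN, 50)
pwm.start(0)

try:
    while True:
        pwm.ChangeDutyCycle(7.5)   # the drive loop would set this from the predicted angle
        time.sleep(0.1)
except KeyboardInterrupt:
    pass                           # the first Ctrl + C lands here
finally:
    pwm.stop()
    GPIO.cleanup()                 # leaves the pins in a safe state
```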