The main goal of this project is to develop an autonomous car in a Unity 3D environment and to work through the NHTSA Levels of Automation one by one. See Roadmap and Technologies for more detail.
- Simple Unity 3D Environment
  - The environment should be suitable for object detection tasks.
  - Car controls must be realistic enough to simulate an autonomous vehicle.
  - Integration with stereo camera sensors for realistic input data (a stereo-depth sketch follows the roadmap).
- Road Segmentation and Lane Detection (a lane-detection sketch follows the roadmap)
- Model Training with Reinforcement Learning for Level 1 Vehicle (Steering) Assist (a training sketch follows the roadmap)
- 3D Object Detection Model (KITTI Dataset)
  - Train a 3D object detection model using the KITTI dataset (a label-parsing sketch follows the roadmap).
  - The autonomous car should rely on stereo cameras only (no LiDAR).
  - Apply the trained model to detect and localize objects in the Unity environment.
- Bird’s Eye View (BEV) Transformer System
  - Implement a BEV transformer system for 3D space prediction.
  - Utilize stereo vision to predict the 3D structure of the environment (a depth-to-BEV sketch follows the roadmap).
  - Explore combining the stereo vision system with more advanced models for enhanced accuracy.
- Level 2 Vehicle Assist
- Rule-based System Design for Realistic Traffic Ride (a rule-based agent sketch follows the roadmap)
- Level 3 Vehicle Assist
- Level 4 Vehicle (Parking in Selected Parking Areas and Highway Riding) Assist
- Level 5 Vehicle (Currently Dreaming) Assist
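The stereo camera item above is the backbone of the LiDAR-free perception stack. Below is a minimal sketch of turning a rectified left/right pair captured from the Unity cameras into a metric depth map with OpenCV's semi-global matcher; the file paths, focal length, and baseline are placeholders, not the project's actual calibration.

```python
import cv2
import numpy as np

# Load a rectified stereo pair captured from the Unity cameras.
# File names and camera parameters below are placeholders.
left = cv2.imread("captures/left_000.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("captures/right_000.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching produces a dense disparity map.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth from disparity: Z = f * B / d.
FOCAL_PX = 720.0    # placeholder: read from the Unity camera projection
BASELINE_M = 0.3    # placeholder: distance between the two virtual cameras
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
```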
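For the Road Segmentation and Lane Detection item, a classic non-learned baseline (Canny edges plus a probabilistic Hough transform over a region of interest) can serve as a starting point before a segmentation model is trained. The region-of-interest proportions and thresholds below are guesses that would need tuning for the Unity camera placement.

```python
import cv2
import numpy as np

def detect_lane_lines(bgr_frame):
    """Classic lane detection: edges -> region of interest -> Hough lines."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only a trapezoid in front of the car (proportions are placeholders).
    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h), (int(0.55 * w), int(0.6 * h)),
                         (int(0.45 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)
    masked = cv2.bitwise_and(edges, roi)

    # Probabilistic Hough transform returns candidate lane segments.
    lines = cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=100)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```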
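For the Level 1 (Steering) Assist item, one possible training loop is PPO from Stable-Baselines3 driving a Unity build exposed through ML-Agents' Gym wrapper. The build path, observation layout, and reward design are assumptions for illustration, not the project's actual configuration.

```python
# Minimal PPO training sketch with Stable-Baselines3. The Unity side is assumed
# to be exposed through ML-Agents' Gym wrapper; the build path is a placeholder.
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.envs.unity_gym_env import UnityToGymWrapper
from stable_baselines3 import PPO

unity_env = UnityEnvironment(file_name="builds/steering_track")  # placeholder path
env = UnityToGymWrapper(unity_env)  # continuous steering action, camera/state observations

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=200_000)
model.save("ppo_steering_assist")
env.close()
```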
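The KITTI item relies on the dataset's label_2 annotation format (class, 2D box, 3D dimensions, location, and yaw per line). A minimal parser is sketched below; the file path is a placeholder.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class KittiObject:
    cls: str            # e.g. "Car", "Pedestrian", "Cyclist"
    bbox_2d: tuple      # left, top, right, bottom in image pixels
    dimensions: tuple   # height, width, length in meters
    location: tuple     # x, y, z of the 3D box bottom-center in camera coords
    rotation_y: float   # yaw around the camera Y axis

def load_kitti_labels(label_file: str) -> list[KittiObject]:
    """Parse one KITTI label_2 .txt file into 3D object annotations."""
    objects = []
    for line in Path(label_file).read_text().splitlines():
        f = line.split()
        if not f or f[0] == "DontCare":
            continue
        objects.append(KittiObject(
            cls=f[0],
            bbox_2d=tuple(map(float, f[4:8])),
            dimensions=tuple(map(float, f[8:11])),
            location=tuple(map(float, f[11:14])),
            rotation_y=float(f[14]),
        ))
    return objects
```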
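Until a learned BEV transformer is in place, a simple geometric baseline for the Bird's Eye View item is to back-project the stereo depth map into a top-down occupancy grid. The intrinsics, grid size, and cell resolution below are illustrative values only.

```python
import numpy as np

def depth_to_bev_occupancy(depth, focal_px, cx,
                           grid_size=200, cell_m=0.25, max_range_m=50.0):
    """Back-project a metric depth map into a top-down occupancy grid.

    `depth` is an HxW array in meters (e.g. from the stereo sketch above);
    focal_px and cx are pinhole intrinsics of the left camera. All defaults
    are placeholder values chosen for illustration.
    """
    h, w = depth.shape
    u = np.tile(np.arange(w), h)   # column index of every pixel (row-major flatten)
    z = depth.reshape(-1)
    valid = (z > 0) & (z < max_range_m)

    # Pinhole back-projection: lateral X (right) and forward Z in camera coordinates.
    x = (u[valid] - cx) * z[valid] / focal_px
    z = z[valid]

    # Discretize into a grid centered laterally on the ego vehicle; row 0 is the car.
    col = np.clip((x / cell_m + grid_size / 2).astype(int), 0, grid_size - 1)
    row = np.clip((z / cell_m).astype(int), 0, grid_size - 1)

    bev = np.zeros((grid_size, grid_size), dtype=np.uint8)
    bev[row, col] = 1
    return bev
```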
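For the rule-based traffic item, one common shape is an ordered list of rules evaluated for each NPC vehicle every tick, where the first matching rule decides the action. The state fields and thresholds below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class TrafficState:
    # Minimal perception summary for one NPC vehicle; field names are placeholders.
    gap_to_lead_m: float    # distance to the vehicle ahead in the same lane
    lead_speed_mps: float   # speed of that vehicle
    speed_mps: float        # own speed
    light_is_red: bool      # state of the next traffic light
    dist_to_light_m: float

SPEED_LIMIT = 13.9   # ~50 km/h, placeholder
SAFE_GAP = 8.0       # minimum following gap in meters, placeholder

def decide_action(s: TrafficState) -> str:
    """Ordered rules: the first matching rule decides the action."""
    if s.light_is_red and s.dist_to_light_m < 25.0:
        return "brake"
    if s.gap_to_lead_m < SAFE_GAP:
        return "brake"
    if s.gap_to_lead_m < 2 * SAFE_GAP and s.speed_mps > s.lead_speed_mps:
        return "coast"        # fall back to the lead vehicle's speed
    if s.speed_mps < SPEED_LIMIT:
        return "accelerate"
    return "hold"
```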
See the detailed documentation.
See my notes.
See the LICENSE.
Screenshots will be updated.
Technologies will be updated.
Installation will be updated.
Local Run will be updated.
For support, email orhun868@gmail.com.
All contributions are welcome.
See CONTRIBUTING for ways to get started.
Please adhere to this project's CODE OF CONDUCT.
Appendix will be updated.