add more tests for search, update readme, update max missed frames, and debug
iBoot32 committed Dec 21, 2024
1 parent b18d1b8 commit 5efb5bd
Showing 10 changed files with 2,200 additions and 53 deletions.
42 changes: 40 additions & 2 deletions README.md
@@ -1,8 +1,38 @@
# auto-aiming
Revitalized auto-aiming suite for Purdue RoboMaster Club 2024-2025.

## Overall Suite Requirements
### Functional Requirements:
- [ ] **Detect an enemy armor plate in the camera's FOV.**
- [ ] Meet the following detection rate and accuracy requirements:
- [ ] 5 meters: 90% detection rate, 5% pixel loss
- [ ] 3 meters: 95% detection rate, 5% pixel loss
- [ ] 2 meters: 95% detection rate, 5% pixel loss
- [ ] Reduce search area around previously detected plates ("search area reduction").
- [ ] Achieve 120 Hz detection frequency.
- [ ] Classify the robot type based on its armor plate sticker.
- [ ] **Compute camera-relative XYZ pose via PnP solving with 5% error margin** (see the PnP sketch after this list).
- [ ] **Filter out false positives and noise in the detection results.**
- [ ] Use a Kalman filter to smooth XYZ pose estimates (see the filter sketch after this list).
- [ ] Apply a "validity filter" for erroneous detection/pose results (e.g., based on distance, XYZ shifts).
- [ ] **Compute the gimbal angles (yaw and pitch) required to accurately land projectiles on the detected armor plate.**
- [ ] Compute pitch using an easily-adjustable lookup table or ballistic model based on distance to target.
- [ ] Compute yaw using a predictive model based on the detected armor's XYZ pose, rotation, and velocity (a simple geometric baseline is sketched after this list).
- [ ] **Send the computed gimbal angles to the STM32 control board via UART.**
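To make the PnP requirement concrete, here is a minimal, self-contained sketch using OpenCV's `cv::solvePnP`. It is illustrative only: the armor dimensions, camera intrinsics, and image points below are placeholder values, and the keypoint ordering simply mirrors the four corner points the detector publishes.

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Minimal PnP sketch. ARMOR_WIDTH/ARMOR_HEIGHT and the camera matrix are
// placeholder values, not the project's calibrated parameters.
int main()
{
    const float ARMOR_WIDTH = 0.135f;  // meters (assumed small-armor width)
    const float ARMOR_HEIGHT = 0.055f; // meters (assumed light-bar height)

    // 3D corners of the plate in its own frame:
    // left-top, left-bottom, right-top, right-bottom (z = 0, coplanar)
    std::vector<cv::Point3f> object_points = {
        {-ARMOR_WIDTH / 2, -ARMOR_HEIGHT / 2, 0},
        {-ARMOR_WIDTH / 2,  ARMOR_HEIGHT / 2, 0},
        { ARMOR_WIDTH / 2, -ARMOR_HEIGHT / 2, 0},
        { ARMOR_WIDTH / 2,  ARMOR_HEIGHT / 2, 0}};

    // The four keypoints the detector publishes (here hard-coded for the example)
    std::vector<cv::Point2f> image_points = {
        {600, 340}, {600, 380}, {680, 342}, {680, 382}};

    // Placeholder pinhole intrinsics; a real run would load calibration data
    cv::Mat camera_matrix = (cv::Mat_<double>(3, 3) << 900, 0, 640,
                                                       0, 900, 360,
                                                       0,   0,   1);
    cv::Mat dist_coeffs = cv::Mat::zeros(5, 1, CV_64F);

    // IPPE is suited to planar targets like an armor plate
    cv::Mat rvec, tvec;
    bool ok = cv::solvePnP(object_points, image_points, camera_matrix, dist_coeffs,
                           rvec, tvec, false, cv::SOLVEPNP_IPPE);
    // tvec now holds the camera-relative XYZ of the plate center (meters)
    return ok ? 0 : 1;
}
```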

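For the Kalman-filter requirement, a minimal constant-velocity filter over `[x, y, z, vx, vy, vz]` using OpenCV's `cv::KalmanFilter` could look like the sketch below; the time step and noise covariances are illustrative assumptions, not the project's tuning.

```cpp
#include <opencv2/video/tracking.hpp>
#include <opencv2/core.hpp>

// Constant-velocity Kalman filter over [x, y, z, vx, vy, vz].
// dt and the covariances below are illustrative, not tuned values.
cv::KalmanFilter makeXYZFilter(float dt)
{
    cv::KalmanFilter kf(6, 3, 0, CV_32F); // 6 states, 3 measurements, no control input
    kf.transitionMatrix = (cv::Mat_<float>(6, 6) <<
        1, 0, 0, dt, 0, 0,
        0, 1, 0, 0, dt, 0,
        0, 0, 1, 0, 0, dt,
        0, 0, 0, 1, 0, 0,
        0, 0, 0, 0, 1, 0,
        0, 0, 0, 0, 0, 1);
    cv::setIdentity(kf.measurementMatrix);                        // we observe x, y, z directly
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-3));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-2));
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1.0));
    return kf;
}

// Per frame: predict, then correct with the PnP translation (tvec).
cv::Point3f smooth(cv::KalmanFilter &kf, const cv::Point3f &measured)
{
    kf.predict();
    cv::Mat z = (cv::Mat_<float>(3, 1) << measured.x, measured.y, measured.z);
    cv::Mat s = kf.correct(z);
    return {s.at<float>(0), s.at<float>(1), s.at<float>(2)};
}
```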
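For the gimbal-angle requirement, the sketch below shows the simplest possible baseline: pitch interpolated from a distance-indexed lookup table and yaw aimed straight at the current XYZ. The table values are made up, and a real implementation would add the prediction and ballistics described above.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative pitch lookup table: distance (m) -> pitch offset (deg).
// The numbers are placeholders, not measured ballistics.
struct PitchEntry { float distance_m; float pitch_deg; };
static const std::vector<PitchEntry> kPitchTable = {
    {1.0f, 0.5f}, {2.0f, 1.2f}, {3.0f, 2.1f}, {5.0f, 4.0f}};

constexpr float kPi = 3.14159265358979f;

// Linearly interpolate pitch for a given distance, clamping at the table ends.
float pitchFromTable(float distance_m)
{
    if (distance_m <= kPitchTable.front().distance_m) return kPitchTable.front().pitch_deg;
    if (distance_m >= kPitchTable.back().distance_m) return kPitchTable.back().pitch_deg;
    for (std::size_t i = 1; i < kPitchTable.size(); i++)
    {
        if (distance_m <= kPitchTable[i].distance_m)
        {
            float t = (distance_m - kPitchTable[i - 1].distance_m) /
                      (kPitchTable[i].distance_m - kPitchTable[i - 1].distance_m);
            return kPitchTable[i - 1].pitch_deg +
                   t * (kPitchTable[i].pitch_deg - kPitchTable[i - 1].pitch_deg);
        }
    }
    return kPitchTable.back().pitch_deg;
}

// Yaw straight at the current XYZ (camera frame assumed: x right, z forward).
// A predictive model would instead aim at the plate's extrapolated position.
float yawFromXYZ(float x, float z)
{
    return std::atan2(x, z) * 180.0f / kPi;
}
```
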
### Non-Functional Requirements:
- [ ] **Performance**
- Ensure real-time end-to-end performance of 120 Hz.
- [ ] **Testability**
- Include a comprehensive suite of unit tests for all modules to verify component correctness.
- [ ] **Maintainability**
- Maintain modularity by separating ROS2 and C++ logic into `xyzNode.cpp` and `xyz.cpp` files.
- Provide thorough documentation, including doxygen-style comments for functions and README files for modules.



## Usage
To use the run script, execute the following commands in your terminal. The script supports multiple options:
Clone this repository into your `ros2-ws` directory. We provide a `run` script that can be used to build, run, test, and clean the workspace with the following options:

- **Build**
- `./auto-aiming/run build`
Expand All @@ -17,4 +47,12 @@ To use the run script, execute the following commands in your terminal. The scri
- `--debug`: Builds with debug flags enabled.

### Example:
`./auto-aiming/run --debug --quiet run video2detector.py`
```
./auto-aiming/run --debug --quiet run video2detector.py
```

## Architecture Diagram

<div style="max-width: 600px; margin: auto;">
<img src="https://user-content.gitlab-static.net/e4204bbed045ad52aa41d39922ba810a488a8b23/68747470733a2f2f6769746875622e636f6d2f526f626f4d61737465722d436c75622f507572647565524d2d57696b692f626c6f622f67682d70616765732f646f63732f616c676f726974686d2f7265736f75726365732f616c677465616d706c6f742e6a70673f7261773d74727565" alt="alt text">
</div>
2 changes: 1 addition & 1 deletion run
@@ -68,7 +68,7 @@ test() {
colcon test >> "$LOG_FILE" 2>&1
colcon test-result --verbose >> "$LOG_FILE" 2>&1
else
colcon test > /dev/null 2>&1
colcon test
colcon test-result --verbose
fi

4 changes: 2 additions & 2 deletions src/prm_launch/launch/video2detector.py
@@ -7,15 +7,15 @@

def generate_launch_description():
webcam_publisher = get_package_share_path('webcam_publisher')
video_path = "/home/tom/Videos/far_back_spin_and_move.avi" # example, can change to your liking
video_path = "/home/tom/Videos/close.avi" # example, can change to your liking
return LaunchDescription([
Node(
package='webcam_publisher',
output='screen',
emulate_tty=True,
executable='VideoCaptureNode',
parameters=[{'source': str(video_path),
'fps': 6,
'fps': 10,
'frame_id': 'video',
}]
),
75 changes: 41 additions & 34 deletions src/prm_vision/opencv_armor_detector/src/OpenCVArmorDetector.cpp
@@ -24,7 +24,7 @@ void OpenCVArmorDetector::setConfig(DetectorConfig config)

std::vector<_Float32> OpenCVArmorDetector::search(cv::Mat &frame)
{
std::vector<_Float32> keypoints_msg(8, 0);
std::vector<_Float32> detected_keypoints(8, 0);
static auto last_time = std::chrono::steady_clock::now(); // Static to persist across calls

if (_reset_search_area)
@@ -39,6 +39,7 @@ std::vector<_Float32> OpenCVArmorDetector::search(cv::Mat &frame)
// Detect the armor in the cropped frame
std::vector<cv::Point2f> points = detectArmorsInFrame(croppedFrame);

// Print FPS every 500 frames
if (_frame_count % 500 == 0 && _frame_count != 0)
{
// Calculate and print FPS
Expand All @@ -49,45 +50,51 @@ std::vector<_Float32> OpenCVArmorDetector::search(cv::Mat &frame)
}
_frame_count++;

// Display the cropped frame
// Display the cropped frame for debugging
#ifdef DEBUG
cv::imshow("detect", croppedFrame);
cv::waitKey(1);
cv::resize(croppedFrame, croppedFrame, cv::Size(WIDTH / 2, HEIGHT / 2));

// Create a static window name
const std::string window_name = "Detection Results";
cv::imshow(window_name, croppedFrame);

// Update the window title (OpenCV >= 4.5)
cv::setWindowTitle(window_name,
"detected: " + std::to_string(_detected_frame) + " / " +
std::to_string(_frame_count) + " (" +
std::to_string(_detected_frame * 100 / _frame_count) + "%)");

cv::waitKey(10);
#endif

// If we didn't find an armor for 3 frames, reset the search area (this is configurable)
// If we didn't find an armor for a few frames (ROS2 param), reset the search area
if (points.size() == 0)
{
_missed_frames++;
if (_missed_frames >= _max_missed_frames)
if (_missed_frames > _max_missed_frames)
{
_reset_search_area = true;
_missed_frames = 0;
}
}
else
{
// We found an armor, so reset the missed frames and return the keypoints
_missed_frames = 0;
// TODO: Perform armor classification here

// Convert the points to the message format
std::vector<cv::Point2f> image_points;
for (int i = 0; i < 4; i++)
{
keypoints_msg[i * 2] = points.at(i).x + _search_area[0];
keypoints_msg[i * 2 + 1] = points.at(i).y + _search_area[1];

image_points.emplace_back(cv::Point2f(points.at(i).x + _search_area[0], points.at(i).y + _search_area[1]));
detected_keypoints[i * 2] = points.at(i).x + _search_area[0];
detected_keypoints[i * 2 + 1] = points.at(i).y + _search_area[1];
}

if (_reduce_search_area)
{
// Change the search area to the bounding box of the armor with a 50 pixel buffer
_reset_search_area = false;
int x_min = (int)std::min(keypoints_msg[0], keypoints_msg[2]);
int x_max = (int)std::max(keypoints_msg[4], keypoints_msg[6]);
int y_min = (int)std::min(keypoints_msg[1], keypoints_msg[5]);
int y_max = (int)std::max(keypoints_msg[3], keypoints_msg[7]);
_reset_search_area = false; // We got a detection, so don't reset the search area next frame
int x_min = (int)std::min(detected_keypoints[0], detected_keypoints[2]);
int x_max = (int)std::max(detected_keypoints[4], detected_keypoints[6]);
int y_min = (int)std::min(detected_keypoints[1], detected_keypoints[5]);
int y_max = (int)std::max(detected_keypoints[3], detected_keypoints[7]);
_search_area[0] = std::max(x_min - 50, 0);
_search_area[1] = std::max(y_min - 50, 0);
_search_area[2] = std::min(x_max + 50, WIDTH);
Expand All @@ -96,11 +103,7 @@ std::vector<_Float32> OpenCVArmorDetector::search(cv::Mat &frame)
_detected_frame++;
}

// cv::resize(croppedFrame, croppedFrame, cv::Size(WIDTH, HEIGHT));
// cv::imshow("detect", croppedFrame);
// cv::waitKey(300);

return keypoints_msg;
return detected_keypoints;
}

/**
@@ -136,9 +139,8 @@ std::vector<cv::Point2f> OpenCVArmorDetector::detectArmorsInFrame(cv::Mat &frame
}

// Find contours in the masked image
cv::Canny(result, result, 30.0, 90.0, 3, false);

std::vector<std::vector<cv::Point>> contours;
cv::Canny(result, result, 30.0, 90.0, 3, false);
cv::findContours(result, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

// We find the light bar candidates among the contours
Expand All @@ -147,18 +149,17 @@ std::vector<cv::Point2f> OpenCVArmorDetector::detectArmorsInFrame(cv::Mat &frame
{
if (contour.size() > 20)
{
// Fit an ellipse to the contour, and check if it's likely a light bar
cv::RotatedRect rect = cv::fitEllipseDirect(contour);
if (!isLightBar(rect))
if (isLightBar(rect))
{
continue;
light_bar_candidates.push_back(rect);
}

// Add each candidate to a vector
light_bar_candidates.push_back(rect);
}
}

// Give priority to the light bar with the leftmost center
// TODO: Have a better metric such as distance from last detected armor
std::sort(light_bar_candidates.begin(), light_bar_candidates.end(), [](cv::RotatedRect &a, cv::RotatedRect &b)
{ return a.center.x < b.center.x; });

Expand All @@ -172,15 +173,21 @@ std::vector<cv::Point2f> OpenCVArmorDetector::detectArmorsInFrame(cv::Mat &frame
cv::RotatedRect rect1 = light_bar_candidates[i];
cv::RotatedRect rect2 = light_bar_candidates[j];

// Check if the pair of light bars likely form an armor plate
if (isArmor(rect1, rect2))
{
// We have found a match, return the pair (sorted left, right)
auto &first = (rect1.center.x < rect2.center.x) ? rect1 : rect2;
auto &second = (rect1.center.x < rect2.center.x) ? rect2 : rect1;

// Draw the ellipses on the frame
cv::ellipse(frame, rect1, cv::Scalar(255, 0, 0), 1); // Blue ellipse for rect1 with 1px thickness
cv::ellipse(frame, rect2, cv::Scalar(0, 255, 0), 1); // Green ellipse for rect2 with 1px thickness
std::vector<cv::Point2f> armor_points_1 = rectToPoint(first);
std::vector<cv::Point2f> armor_points_2 = rectToPoint(second);

// Draw a dot on the top and bottom of each light bar using rectToPoint
cv::circle(frame, armor_points_1[0], 0, cv::Scalar(0, 255, 0), -1);
cv::circle(frame, armor_points_1[1], 0, cv::Scalar(0, 255, 0), -1);
cv::circle(frame, armor_points_2[0], 0, cv::Scalar(0, 255, 0), -1);
cv::circle(frame, armor_points_2[1], 0, cv::Scalar(0, 255, 0), -1);

return {rectToPoint(first)[0], rectToPoint(first)[1], rectToPoint(second)[0], rectToPoint(second)[1]};
}
@@ -11,7 +11,7 @@ OpenCVArmorDetectorNode::OpenCVArmorDetectorNode(
_saturation_lower_limit = this->declare_parameter("_saturation_lower_limit", 100);
_value_lower_limit = this->declare_parameter("_value_lower_limit", 150);
_target_color = this->declare_parameter("_target_red", true) ? RED : BLUE;
_max_missed_frames = this->declare_parameter("_max_missed_frames", 2);
_max_missed_frames = this->declare_parameter("_max_missed_frames", 1);
_reduce_search_area = this->declare_parameter("_reduce_search_area", true);

// Callbacks and pub/sub
Binary file not shown.
