Measuring fire size from bounding box #20
-
Hello! Looking for some directions on the following:
I have forked this repo and set up a simple script that reads frames from video and runs the included YOLO model on the (2) Raw video from the Zenmuse X4S cameras for one specific pile from the FLAME dataset. Here's the script: Github Link. My next goal, pending further clarification, is to plot the relative changes in the normalized width and height of the bounding boxes; a rough sketch of that step is below. I would also like to work on improving the detection model. Since this is primarily a prototype, I would greatly appreciate any feedback. Am I on the right track?
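In case it helps frame the question, here is a minimal sketch of the plotting step I have in mind (not the script linked above). It assumes the YOLO detections have already been collected per frame as normalized (x_center, y_center, width, height) tuples; the `detections_per_frame` dictionary is made-up example data.

```python
# Sketch: plot the relative change of total bounding-box area per frame.
# `detections_per_frame` is hypothetical example input in YOLO-normalized form.
import matplotlib.pyplot as plt

# {frame_index: [(x_center, y_center, width, height), ...]} with values in [0, 1]
detections_per_frame = {
    0: [(0.48, 0.52, 0.10, 0.12)],
    1: [(0.49, 0.52, 0.11, 0.13)],
    2: [(0.50, 0.51, 0.13, 0.14)],
}

frames = sorted(detections_per_frame)
# naive per-frame size: sum of width * height over all boxes
# (overlapping boxes are not yet handled here)
sizes = [sum(w * h for _, _, w, h in detections_per_frame[f]) for f in frames]

# relative change with respect to the first frame that has a detection
baseline = next(s for s in sizes if s > 0)
relative = [s / baseline for s in sizes]

plt.plot(frames, relative, marker="o")
plt.xlabel("frame")
plt.ylabel("box area relative to first detection")
plt.title("Relative change in normalized fire bounding-box size")
plt.show()
```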
-
Hi Tahsen,
Yes, the size of a bounding box is its width * height, but we need to exclude the overlapping boxes. For now, we only need to consider the sizes on the image, not the actual fire sizes on the ground. I believe you are on the right track and you already have a good start here; the YOLO model you trained so far might just need some improvements. I've trained several YOLO models before, and they produced many overlapping boxes. When calculating the box sizes, the overlapping regions need to be excluded, that is, each area should be counted only once (see the sketch after this reply).
Thanks,
Yali Wang
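As a concrete illustration of the "count overlapped area only once" point in the reply above, here is a minimal sketch (not code from this repo) that rasterizes the boxes onto a boolean mask the size of the image and sums the mask, so regions covered by several boxes contribute only once. The function name `covered_area`, the example boxes, and the image size are all hypothetical.

```python
import numpy as np

def covered_area(boxes_xyxy, img_w, img_h):
    """Area (in pixels) covered by the union of axis-aligned boxes.

    Overlapping regions are counted only once by marking every box
    on a boolean mask the size of the image and summing the mask.
    Boxes are (x1, y1, x2, y2) in pixel coordinates.
    """
    mask = np.zeros((img_h, img_w), dtype=bool)
    for x1, y1, x2, y2 in boxes_xyxy:
        # clamp to the image and round to integer pixel indices
        x1, x2 = max(0, int(x1)), min(img_w, int(round(x2)))
        y1, y2 = max(0, int(y1)), min(img_h, int(round(y2)))
        if x2 > x1 and y2 > y1:
            mask[y1:y2, x1:x2] = True
    return int(mask.sum())

# two partially overlapping boxes: union area is 100 + 100 - 25 = 175
boxes = [(0, 0, 10, 10), (5, 5, 15, 15)]
print(covered_area(boxes, img_w=1280, img_h=720))  # -> 175
```

Dividing the result by `img_w * img_h` gives the normalized fraction of the frame covered by fire boxes, which should be directly comparable across frames.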