Failed to build Superpoint engine. #174
Comments
Hello, I haven't developed programs on Windows before. The provided source code relies on APIs from libraries used on Ubuntu, and I'm not sure if they work the same way on Windows.
Hi, it actually seems to be related to the 40-series GPU (I am on a 4090). I think these cards require TensorRT 10. Is that possible?
I have tested it on a 4080, and TensorRT 8.6 works properly.
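(As a side note, a quick way to confirm which TensorRT version a binary is compiled and linked against, using the standard NvInfer API; this is not AirSLAM-specific code:)

```cpp
#include <NvInfer.h>
#include <cstdio>

int main() {
  // Version the translation unit was compiled against (from NvInferVersion.h).
  std::printf("Compiled against TensorRT %d.%d.%d\n",
              NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH);
  // Version of the TensorRT library actually loaded at runtime.
  std::printf("Linked against TensorRT %d\n",
              static_cast<int>(getInferLibVersion()));
  return 0;
}
```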
Strange. I wonder if it's a Windows thing. With AirVO I had no issues building on Windows.
Okay @xukuanHIT, I rebuilt everything and the engine is built! Yay! Now I have a new crash. Running the test feature application, the code crashes here: Line 160 in 2a23e66
If I comment out that line, it passes the function; if I leave it in, it crashes. I am using the EuRoC dataset, everything default. What might be happening here? Thanks again!
Ah, there was a problem with my Windows conversion. I have test feature running now! Next step: visual odometry!
Hi again, I have visual odometry running great; however, when I run map refinement or relocalization, I have a crash on loading the vocabulary file.
What version of Boost should I be using? Can you think of anything that might cause this? Thank you.
@antithing Hi, we use Boost 1.71.0 on Ubuntu. Can you confirm that the dictionary path is correct?
Hi, yes, the path is correct; I am actually hardcoding it just before it is loaded. Is the DBoW SuperPoint training code included here? I can try training the vocabulary again, perhaps.
@antithing Hi, you can refer to and modify this code to train the dictionary. |
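(For reference, a rough sketch of what DBoW-style vocabulary training typically looks like. This assumes the DBoW3 API; the actual AirSLAM training code for SuperPoint descriptors may use a modified vocabulary class and differ in detail:)

```cpp
#include <DBoW3/DBoW3.h>
#include <opencv2/core.hpp>
#include <vector>

int main() {
  // One descriptor matrix per training image (rows = keypoints, cols = 256).
  std::vector<cv::Mat> features;
  // ... fill `features` with SuperPoint descriptors extracted offline ...

  // Branching factor k = 10, depth L = 5, TF-IDF weighting, L1 scoring.
  DBoW3::Vocabulary voc(10, 5, DBoW3::TF_IDF, DBoW3::L1_NORM);
  voc.create(features);

  // A ".yml" name makes DBoW3 write through cv::FileStorage (portable text).
  voc.save("superpoint_voc.yml");
  return 0;
}
```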
Thank you! What dataset did you train your dictionary on? |
@antithing Hi, please refer to Section VI-A of the paper.
Hi, I have looked into this more, and it looks like Boost binary files are not portable between Linux and Windows. If so, would you be able to upload a portable version? Thank you!
@antithing I have uploaded a txt version of the vocabulary. However, in the offline optimization stage, AirSLAM builds a scene-specific binary vocabulary, and I am not sure whether it can run on Windows.
I am still struggling with this. Now it's the map loading. It saves fine from visual odometry, but I can't load the txt or binary version without a crash! Do you have any thoughts on where I could look to debug this? Thank you!
Aha! By adding the BOOST_SERIALIZATION_SHARED_PTR(Map) macro and copying the _map data to a new Map object before I save it, I can now load the map bin! I am still stuck with loading the voc file, however; I get this error: "incompatible native format - size of long". Are you able to share the exact code to create this file? Thank you!
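(For anyone hitting the same map save/load crash, a minimal sketch of the pattern described above. The Map class here is a placeholder, not the real AirSLAM type, which serializes keyframes, mappoints, and so on:)

```cpp
#include <boost/archive/binary_iarchive.hpp>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/serialization/access.hpp>
#include <boost/serialization/shared_ptr.hpp>
#include <fstream>
#include <memory>
#include <string>

// Placeholder Map class standing in for the real AirSLAM map type.
class Map {
  friend class boost::serialization::access;
  template <class Archive>
  void serialize(Archive& ar, const unsigned int /*version*/) {
    ar & _dummy;  // the real code would serialize keyframes, mappoints, ...
  }
  int _dummy = 0;
};

// Register shared_ptr serialization for Map, as described in the comment above.
BOOST_SERIALIZATION_SHARED_PTR(Map)

void SaveMap(const std::shared_ptr<Map>& map, const std::string& path) {
  std::ofstream ofs(path, std::ios::binary);
  boost::archive::binary_oarchive oa(ofs);
  oa << map;
}

std::shared_ptr<Map> LoadMap(const std::string& path) {
  std::ifstream ifs(path, std::ios::binary);
  boost::archive::binary_iarchive ia(ifs);
  std::shared_ptr<Map> map;
  ia >> map;
  return map;
}
```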
Sorry, I am currently busy with some tasks and may be unable to reorganize the code in the short term. Can you still not load the txt vocabulary?
The txt voc gives the same error, unfortunately. I am building AirSLAM inside Docker on WSL, so I may be able to use boost::portable_binary_archive, but I am struggling with ROS as I have never used it before. Will keep testing!
@xukuanHIT sorry to bother you again, I have been fighting with this for weeks. Are you able to share the vocabulary training code that you used so I can just make a new voc file from scratch? Thank you very much! |
Aha! After all that digging, I was able to save a vocabulary in Linux using the inbuilt DBoW save to cv::FileStorage, and load it on Windows using the corresponding load (rough sketch below).
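(A rough sketch of that round-trip, assuming a DBoW3-style save/load through cv::FileStorage; the actual AirSLAM vocabulary class may differ:)

```cpp
#include <DBoW3/DBoW3.h>
#include <opencv2/core.hpp>
#include <string>

// Save the vocabulary through cv::FileStorage on Linux (text, portable)...
void SaveVocPortable(const DBoW3::Vocabulary& voc, const std::string& path) {
  cv::FileStorage fs(path, cv::FileStorage::WRITE);  // e.g. "voc.yml.gz"
  voc.save(fs, "vocabulary");
}

// ...and load it on Windows with the matching call.
void LoadVocPortable(DBoW3::Vocabulary& voc, const std::string& path) {
  cv::FileStorage fs(path, cv::FileStorage::READ);
  voc.load(fs, "vocabulary");
}
```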
One more thing... :) When I run visual odometry on EuRoC, I get a frame time of 25-30 ms (aside from keyframe creation). Once I refine the map and run the relocalization application, I see a frame time of around 50 ms. Relocalization should be faster than VO, right? Is there a way to run super-fast localization in the optimized map? Thanks!
Well done! If you are particularly concerned about the speed of relocalization, you can try the following methods to accelerate it:
We provide an ablation study on this; please refer to Section VII.E and Table 3 of the paper.
Thank you! I will take a look at those. What I want is to run odometry in localisation mode, so tracking runs on an existing map without adding any new keyframes.
Also, one more question :) I am now trying to run live on my own camera. I have a factory-calibrated stereo camera that gives me the following data:
I have added this to a camera config like so:
But I think I need a transpose on the T matrix. I get:
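(One note on the transpose question: for a rigid-body extrinsic, the inverse of the 4x4 transform is not its transpose. A quick Eigen check of both conventions; all names and values here are placeholders for the factory calibration, not the real camera config keys:)

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::Matrix3d R = Eigen::Matrix3d::Identity();  // rotation from calibration
  Eigen::Vector3d t(0.12, 0.0, 0.0);                // translation, e.g. baseline

  // One convention: T = [R t; 0 1].
  Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
  T.block<3, 3>(0, 0) = R;
  T.block<3, 1>(0, 3) = t;

  // The opposite convention: T^-1 = [R^T, -R^T t; 0 1].
  Eigen::Matrix4d T_inv = Eigen::Matrix4d::Identity();
  T_inv.block<3, 3>(0, 0) = R.transpose();
  T_inv.block<3, 1>(0, 3) = -R.transpose() * t;

  std::cout << "T:\n" << T << "\nT_inv:\n" << T_inv << std::endl;
  return 0;
}
```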
You can refer to this issue #154 to verify the stereo rectification. |
Thanks! I have resolved this and am running on my own data. Back to the relocalization question: what I actually want is to pre-map, and then track in the existing map, without having to add keyframes or optimize. So just the tracking thread, running on a pre-created map, like ORB-SLAM does with localization mode. Would you be able to point me at any code changes to do this?
Sorry, I haven't tried doing it this way. You can follow the ORB-SLAM approach, and we can discuss further if you encounter any issues. |
Hi @xukuanHIT, one more question! I am trying to use the point matcher to match mappoints (with existing descriptors) to feature points; however, the mappoint descriptor has 256 cols:
while the point features have 259:
Can I pad the mappoints with zeros to get to 259 and run:
Or is there a better way to match to mappoints? Thank you!
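(For what the zero-padding might look like, a small OpenCV sketch. Whether the three extra columns go in front or at the back, and what they encode, is an assumption about the feature layout, not something confirmed by the code here:)

```cpp
#include <opencv2/core.hpp>

// Prepend three zero columns to a 256-column mappoint descriptor matrix so its
// width matches the 259-column point-feature matrix.
cv::Mat PadMapPointDescriptors(const cv::Mat& desc256) {
  CV_Assert(desc256.cols == 256);
  cv::Mat pad = cv::Mat::zeros(desc256.rows, 3, desc256.type());
  cv::Mat desc259;
  cv::hconcat(pad, desc256, desc259);  // [3 zero cols | 256 descriptor cols]
  return desc259;
}
```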
Hi, and thank you for this code! I am compiling on Windows, with CUDA 12.1, TensorRT 8.6, and an RTX 4090 GPU.
When running the test_features application, I get
Error in SuperPoint building
triggered here: AirSLAM/src/feature_detector.cc
Line 24 in 2a23e66
Digging in more, this errors at:
AirSLAM/src/super_point.cpp
Line 23 in 2a23e66
What could this problem be?
I have tried upgrading to TensorRT 10, but I get a lot of compile errors.
Thanks!
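(As a debugging aid, a minimal sketch of attaching a verbose TensorRT logger so the underlying builder error behind "Error in SuperPoint building" gets printed. This uses the standard TensorRT 8.x API and is not the AirSLAM build path itself:)

```cpp
#include <NvInfer.h>
#include <iostream>

// Logger that prints every message, including kVERBOSE, to reveal the cause
// of an engine build failure.
class VerboseLogger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    std::cerr << msg << std::endl;
  }
};

int main() {
  VerboseLogger logger;
  nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
  if (!builder) {
    std::cerr << "createInferBuilder failed" << std::endl;
    return 1;
  }
  // ... create the network / parse the ONNX model / build the engine as usual ...
  delete builder;
  return 0;
}
```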