
Running RTABMAP on Robot #1196

Open
imranhussain14 opened this issue Aug 13, 2024 · 35 comments


@imranhussain14

imranhussain14 commented Aug 13, 2024

I am using a depthai camera (Model: OAK-D PRO POE-W) with rtabmap to create a map, but I am facing a problem with TF. As I am new to this field, I don't know what the problem is. I am following the steps below:
1: roslaunch robot_bringup robot_base_2.launch (this launch file includes the robot.urdf.xacro file, robot_state_publisher and joint_state_publisher, along with a cansend_message node that sends CAN control messages to the robot for movement)
2: roslaunch depthai_examples stereo_inertial_node.launch
3: rosrun imu_filter_madgwick imu_filter_node imu/data_raw:=/stereo_inertial_publisher/imu imu/data:=/stereo_inertial_publisher/imu/data _use_mag:=false _publish_tf:=false
4: roslaunch rtabmap_launch rtabmap.launch args:="--delete_db_on_start" rgb_topic:=/stereo_inertial_publisher/color/image depth_topic:=/stereo_inertial_publisher/stereo/depth camera_info_topic:=/stereo_inertial_publisher/color/camera_info imu_topic:=/stereo_inertial_publisher/imu/data frame_id:=oak-d_frame approx_sync:=true wait_imu_to_init:=true
After this, my robot's TF frames are broken and I get the following in rviz:
image

When I set the fixed frame argument to base_footprint, the camera and IMU links are broken. I am not sure what's wrong. Can anyone help me with this?
I am also uploading the frame PDFs:
frame_id_oak_d_frame.pdf
frame_id_base_footprint.pdf

@matlabbe
Member

For the rtabmap launch, you should use frame_id:=base_footprint to not break the tree. Not sure why the camera and IMU links are broken, because in your base_footprint tf tree, everything seems to be there.
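
For example, a sketch adapting the command from the first post (topic names assumed unchanged):

roslaunch rtabmap_launch rtabmap.launch \
    args:="--delete_db_on_start" \
    rgb_topic:=/stereo_inertial_publisher/color/image \
    depth_topic:=/stereo_inertial_publisher/stereo/depth \
    camera_info_topic:=/stereo_inertial_publisher/color/camera_info \
    imu_topic:=/stereo_inertial_publisher/imu/data \
    frame_id:=base_footprint \
    approx_sync:=true \
    wait_imu_to_init:=true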

@imranhussain14
Author

Thank you for your reply @matlabbe,
This is what happens in rviz when I launch rtabmap.launch and switch the fixed frame from base_footprint to map.
Fixed frame set to base_footprint:
image
Fixed frame set to map:
image
I am not sure why this is happening, even though my transform trees are fine, as attached above.
Any suggestion or help from your side would be appreciated.
Thank you

@matlabbe
Member

matlabbe commented Aug 21, 2024

Is the camera upside down? Maybe there is a missing rotation in the URDF.

@imranhussain14
Author

Thank you for your reply. I fixed this by deleting the static transform in my robot_base launch file. My next plan is to use rtabmap for autonomous navigation. I already followed https://wiki.ros.org/rtabmap_ros/Tutorials/MappingAndNavigationOnTurtlebot but it didn't work for me. Can you kindly help me with this?

@matlabbe
Member

The navigation demo didn't work? Which step failed?

@imranhussain14
Author

imranhussain14 commented Aug 23, 2024

Actually, after creating the map, I am running rtabmap in localization mode like this:

roslaunch rtabmap_launch rtabmap.launch \
    args:="--delete_db_on_start:=false" \
    rgb_topic:=/stereo_inertial_publisher/color/image \
    depth_topic:=/stereo_inertial_publisher/stereo/depth \
    camera_info_topic:=/stereo_inertial_publisher/color/camera_info \
    imu_topic:=/stereo_inertial_publisher/imu/data \
    frame_id:=base_footprint \
    approx_sync:=true \
    wait_imu_to_init:=true \
    localization:=true \
    database_path:=/home/darlab/Maps/rtabmap.db

and then I am running the move_base node and giving the goal using rviz, but the robot is not moving. I am not sure whether I am doing it the right way or not. Can you help me?

Here is my move_base launch file:

<launch>

<!-- Arguments -->
<!--arg name="map_file" default="$(find hada_navigation)/maps/map1.yaml"/-->
<arg name="map_file" default="$(find hada_navigation)/maps/rtabmap.yaml"/>

<arg name="odom_frame" default="rtabmap/odom"/>
<arg name="base_frame" default="base_footprint"/>
<arg name="scan_topic" default="/rtabmap/scan_map"/>
<arg name="use_sim_time" default="false"/>

<!-- Use the robot's map -->
<node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)" />


<!-- Move Base -->
<node name="move_base" pkg="move_base" type="move_base" respawn="false" output="screen">
    <param name="base_global_planner" value="navfn/NavfnROS"/>
    <param name="base_local_planner" value="dwa_local_planner/DWAPlannerROS"/>
    <remap from="odom" to="/rtabmap/localization_pose"/>

        <!-- Costmaps -->
        <rosparam file="$(find hada_navigation)/config/costmap_common_params.yaml" command="load" ns="global_costmap" />
        <rosparam file="$(find hada_navigation)/config/costmap_common_params.yaml" command="load" ns="local_costmap" />
        <rosparam file="$(find hada_navigation)/config/local_costmap_params.yaml" command="load" />
        <rosparam file="$(find hada_navigation)/config/global_costmap_params.yaml" command="load" />
        <rosparam file="$(find hada_navigation)/config/base_local_planner_params.yaml" command="load" />

    <!-- Goal Tolerance Parameters -->
    <param name="controller_frequency" value="10.0"/>
    <param name="planner_patience" value="5.0"/>
    <param name="controller_patience" value="3.0"/>
    <param name="max_planning_retries" value="5"/>
    <param name="oscillation_timeout" value="10.0"/>
    <param name="oscillation_distance" value="0.5"/>
    
</node>
</launch>

@imranhussain14
Author

@matlabbe Can you kindly help me? I am still facing issues with navigation.

@matlabbe
Member

matlabbe commented Sep 5, 2024

 <remap from="odom" to="/rtabmap/localization_pose"/>

That should be remapped to an odometry topic (in your case, it seems to be the visual odometry topic /rtabmap/odom), not the output localization pose. If you launch rtabmap in localization mode, you don't need to launch a map_server at the same time; rtabmap will already publish the map. Just make sure to remap it at the right place for move_base.

The scan_topic should be the original laser scan topic.

odom_frame is odom by default.

You may try the turtlebot3 example on https://wiki.ros.org/rtabmap_ros/Tutorials/MappingAndNavigationOnTurtlebot to see how topics are connected, and also look at the TF tree (rosrun tf2_tools view_frames.py).
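
For example, a minimal sketch of the move_base remappings (assuming rtabmap runs in its default /rtabmap namespace, so visual odometry is on /rtabmap/odom and the occupancy grid on /rtabmap/grid_map):

<node name="move_base" pkg="move_base" type="move_base" respawn="false" output="screen">
    <!-- odometry from rtabmap's visual odometry node, not the localization pose -->
    <remap from="odom" to="/rtabmap/odom"/>
    <!-- occupancy grid published by rtabmap itself, so no map_server is needed -->
    <remap from="map" to="/rtabmap/grid_map"/>
    ...
</node>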

cheers,
Mathieu

@imranhussain14
Author

@matlabbe,
I followed the above tutorial but the robot is still not moving, and rostopic echo /cmd_vel is not giving me any data. I am not sure if the problem is with my map or the navigation algorithm. Secondly, I am also not getting a good 2D map; I think this is the main problem. I am not sure whether it is a problem with the camera sensor or with the point cloud not being properly converted into a laser scan. Can you help me with this?
map1

Thank You

@matlabbe
Member

Can you share the database? With stereo indoors, the default Grid/ parameters may not be the best for a clean occupancy grid. You could try Grid/NormalsSegmentation=false (simple passthrough) with Grid/MaxGroundHeight=0.2, Grid/MaxObstacleHeight=1, Grid/3D=false and Grid/RayTracing=true.
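
These can be passed on the command line through the args argument of rtabmap.launch (a sketch; the rest of the command stays as before):

roslaunch rtabmap_launch rtabmap.launch \
    args:="--Grid/NormalsSegmentation false --Grid/MaxGroundHeight 0.2 --Grid/MaxObstacleHeight 1 --Grid/3D false --Grid/RayTracing true" \
    ...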

@imranhussain14
Author

Here is the link to my database:
https://drive.google.com/file/d/1enbrCAwQY-832kBlXwIbF-PYxLPLm6qT/view?usp=sharing
I will follow your guidelines, run the algorithm again, and let you know about it.
Thank you for your response.

@matlabbe
Member

Peek 2024-09-12 19-57

It seems the frame_id is set on the camera, not on the base of the robot. It is roughly 1.5 meters over the ground. It will be easier to set the Grid ground height and obstacle height thresholds if the frame_id is a frame at ground level.

With the current frame_id at camera height, I adjusted the parameters above to Grid/MaxGroundHeight=-1.2, Grid/MaxObstacleHeight=0.1. For the second session in the database:
Before
Screenshot from 2024-09-12 20-14-37

After
Screenshot from 2024-09-12 20-14-20

Another issue is that there seems to be quite a bit of drift in roll/pitch and also in Z. Based on your original post, can I assume you are using visual odometry for rtabmap? I looked at the images and, I don't know if I am hallucinating this, but they don't look rectified. Here the beam doesn't look straight:
image

If they still have distortion, visual odometry won't work well.

@imranhussain14
Author

imranhussain14 commented Sep 13, 2024 via email

@matlabbe
Member

What is the transform between base_footprint and oak-d-base-frame? Is it xyz=0,0,0? If so, you should change the TF between base_footprint and the camera base frame to match your robot measurements. But yeah, you should use base_footprint as frame_id for rtabmap.

For better visual odometry, I think you may have to re-calibrate your camera to remove any distortion, then feed the rectified images to rtabmap.

@imranhussain14
Author

Actually, I don't have a direct transform between base_footprint and oak-d-base-frame. I have three different launch files.
The first one is my robot_base launch file, in which I include the URDF file that defines the transform between the camera and the robot base with a tree. The second one is the camera launch (for that, I am directly using depthai_examples/stereo_inertial_node.launch). The third one is for rtabmap.
Here is my urdf.xacro file:
https://drive.google.com/file/d/1J54Hi-HtSCofu030d5E7vG0qkuJt1Eb7/view?usp=sharing
and the frame PDFs are as follows:
base_with_camera_2.pdf
frame_id_base_footprint.pdf

Is this correct, or do I need a direct transformation between base_footprint and the camera base frame as well?

For visual odometry, I recalibrated my camera and am checking again.
Thank you

@imranhussain14
Author

imranhussain14 commented Sep 20, 2024

@matlabbe, I again created an outdoor map with an artificial-tree setup of 5 rows and 3 columns, but this time I attached my camera directly to the base_link. The map looks fine, but I don't know what happened: when I turned at the end of the first row, the map didn't turn; it kept going straight and didn't update.
Here is the graph:
https://drive.google.com/file/d/1V1RvtO1phvpXtdtN11xOFuMfXG4ZNqB9/view?usp=sharing
Here is the rtabmap.db:
https://drive.google.com/file/d/1J54Hi-HtSCofu030d5E7vG0qkuJt1Eb7/view?usp=sharing
Is there any way to tune parameters in the rtabmap.launch file for better results? And if I visit the same place many times during mapping to map the area more accurately, will it create any problem with loop closure?

@matlabbe
Member

The database link is the same as the one in your previous post. For TF, there is no need for a direct TF link between base_footprint and the camera link; the TF tree looks fine. In the URDF, the camera should have been at 1.18 m in z, though in the first database I saw there was no offset. I'll wait to see the new database.

@imranhussain14
Author

Hello @matlabbe,
Sorry for the wrong link. Here is my link for the database:
https://drive.google.com/file/d/1enbrCAwQY-832kBlXwIbF-PYxLPLm6qT/view?usp=drivesdk
Secondly, it is very difficult to measure the exact same distance on the real robot, so if there is a small error of about 0.05 m, will that be fine?
Thank you for the help. I am waiting for your response.

@matlabbe
Member

The frustum looks to be at the right place now, and the stereo point cloud is aligned with the xy plane. There are a couple of issues:

  1. As observed in a previous post, the camera is not correctly calibrated. As long as there are distortions in the images, rtabmap's visual odometry will be pretty bad.
    Screenshot from 2024-09-23 20-57-26

  2. Reg/Strategy=1? You don't use a lidar, so set it to 0. For outdoor operation, I would not use Reg/Force3DoF=true either.

  3. Your visual odometry processing time is too long (~400 ms):
    Screenshot from 2024-09-23 20-43-55
    I suggest using a lower resolution. If you cannot do it with your camera driver, you can set Odom/ImageDecimation=2 for the odometry node, and Mem/ImagePreDecimation=2 and Mem/ImagePostDecimation=2 for the rtabmap node (see the sketch after this list).

  4. Related to the previous point, high-resolution images in that kind of environment make the detector extract all the features in the background, which have less depth accuracy:
    Screenshot from 2024-09-23 21-01-10
    Decrease the resolution as in the previous point, and you may also increase GFTT/MinDistance from 7 to 10 or 12 pixels so that features are extracted everywhere in the image.

  5. I see multiple sessions in the same database. Is it on purpose? Unless you want to merge multiple sessions together, I would create one database for each test (that would be more efficient in terms of processing time and memory).
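
For points 3 and 4, a sketch of how the parameters could be passed to rtabmap.launch (assuming its args and odom_args arguments forward parameters to the rtabmap and odometry nodes respectively; the rest of the command stays as before):

roslaunch rtabmap_launch rtabmap.launch \
    args:="--Mem/ImagePreDecimation 2 --Mem/ImagePostDecimation 2" \
    odom_args:="--Odom/ImageDecimation 2 --GFTT/MinDistance 10" \
    ...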

cheers,
Mathieu

@imranhussain14
Author

Thank you @matlabbe,
I followed your guidelines and this time the map looks better than the old one, but the visual odometry still gets lost sometimes. If this is a calibration problem, I have already calibrated my camera several times, and the calibration errors are:
calibaration
I am using the depthai OAK-D-PRO POE camera from Luxonis. I contacted them before and they said the calibration errors are fine. Secondly, I want to compare the results of both RGB-D and stereo odometry, so I tried to use stereo_odometry instead of rgbd_odometry. I am getting the error: stereo baseline (-0.075725) should be positive (baseline=-Tx/fx). We assume a horizontal left/right stereo setup where the Tx (or P(0,3)) is negative in the right camera info msg. [rtabmap/rtabmap_viz-3] killing on exit.
I need your guidance on this as well, and I also need your suggestions on the parameters you recommended above. My camera height from the ground is 1.56 m and from the base_link is 1.53 m, so what would be the best values for Grid/MaxGroundHeight and Grid/MaxObstacleHeight? Also, which of the following settings would be better for indoor and for outdoor mapping:
Grid/3D=false, Grid/RayTracing, Grid/NormalsSegmentation?

@matlabbe
Member

A 1.4 reprojection error looks high to me, but the left/right errors look fine (<0.5). Maybe the distortion is coming from a bad calibration of the RGB camera. You can compare by using the right IR camera + depth aligned to the right camera (which should already be the case). To do so (refer to this tutorial, following the depthai "IR-Depth Mode" example):

$ roslaunch depthai_examples stereo_inertial_node.launch depth_aligned:=false

$ rosrun imu_filter_madgwick imu_filter_node \
   imu/data_raw:=/stereo_inertial_publisher/imu \
   imu/data:=/stereo_inertial_publisher/imu/data  \
   _use_mag:=false \
   _publish_tf:=false

$ roslaunch rtabmap_launch rtabmap.launch \
    args:="--delete_db_on_start" \
    rgb_topic:=/stereo_inertial_publisher/right/image_rect \
    depth_topic:=/stereo_inertial_publisher/stereo/depth \
    camera_info_topic:=/stereo_inertial_publisher/right/camera_info \
    imu_topic:=/stereo_inertial_publisher/imu/data \
    frame_id:=base_link \
    approx_sync:=true \
    approx_sync_max_interval:=0.001 \
    wait_imu_to_init:=true

For the stereo mode, you can try this tutorial following the depthai instructions. For the baseline error, the camera_info may not be correctly generated (inverted sign on the P(0,3) value).

The Grid/MaxGroundHeight parameter is relative to the frame_id you set. If it is base_link and the value is Grid/MaxGroundHeight=0.05, then in your case all points below 8 cm from the ground will be considered ground. A common way to set the height relative to the real ground instead of base_link is to add in the URDF a base_footprint frame as parent of base_link, just under it but aligned with the ground. In your case, that transform could be base_footprint -> base_link = 0 0 0.03 0 0 0; then make rtabmap use base_footprint instead of base_link.
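
For example, a minimal URDF sketch of that frame (assuming base_link sits 3 cm above the ground, as above):

<link name="base_footprint"/>
<joint name="base_footprint_joint" type="fixed">
    <parent link="base_footprint"/>
    <child link="base_link"/>
    <origin xyz="0 0 0.03" rpy="0 0 0"/>
</joint>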

Grid/MaxObstacleHeight should be set slightly higher than the maximum height of the robot. Too low means the robot could collide with tall obstacles; too high means tall obstacles would appear as obstacles even if the robot could physically move under them (e.g., a ceiling).

For a ground robot moving on relatively flat ground, Grid/3D=false could work most of the time and is a lot less computationally expensive than Grid/3D=true. Grid/RayTracing helps to get a denser occupancy grid with empty cells, and also helps to clear noise and dynamic obstacles. Grid/NormalsSegmentation=true can be useful when the ground is not even, with slopes that can be over Grid/MaxObstacleHeight at Grid/RangeMax distance (this is assuming that the robot can climb the slope, and we don't want the slope to appear as an obstacle). I also suggest reading the beginning of Section V of this paper, which shows some trade-offs between 2D and 3D maps (note that Grid/NormalsSegmentation=true is used in that paper, Figure 7).

cheers,
Mathieu

@imranhussain14
Author

Thank you for the detailed answer. I tried both the right IR + depth and the stereo (left/right) modes. With the right IR, the odometry gets lost every time I move the camera, but the stereo (left/right) mode is working well. However, in both cases I am getting the error: Overwriting previous data! Make sure IMU is published faster than data rate. (last image stamp buffered=1727833812.942230 and new one is 1727833813.008896, last imu stamp received=1727833812.823021)
As I am using the IMU from the camera, I checked the average rate of IMU publishing, and it looks like:
subscribed to [/stereo_inertial_publisher/imu]
average rate: 198.449
min: 0.000s max: 0.137s std dev: 0.02000s window: 192
average rate: 195.862
min: 0.000s max: 0.140s std dev: 0.02094s window: 380
average rate: 193.268
min: 0.000s max: 0.162s std dev: 0.02153s window: 562

How do I resolve this error? Secondly, in your previous reply you showed a plot of the visual odometry processing time. I tried to plot that graph from rtabmap but I didn't get the data; what I am getting is just the odometry trajectory. Can you tell me how to get that data?

Here is my new map with stereo:
graph
database link:
https://drive.google.com/file/d/1bmUuWNmqSmYjiHP6p2APzQT_ujGkR1zr/view?usp=sharing

Regards,
Imran

@matlabbe
Member

matlabbe commented Oct 5, 2024

Hi,

For the IMU data, the max latency looks pretty high; it should be at most ~1 ms, not >=137 ms. That is maybe why you get the warning (the IMU msg is received after the image msg; as image latency should be higher than IMU latency, that should not happen).
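
To quantify the rate and latency, standard ROS checks can be used (topic name taken from your earlier output; rostopic delay shows the difference between the header stamp and the receive time):

rostopic hz /stereo_inertial_publisher/imu
rostopic delay /stereo_inertial_publisher/imu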

For the right/IR approach, I am not sure why it didn't work; I would have to check the data. In your database, the stereo images look a lot better calibrated than the color images you sent previously. However, did you use the approx_sync_max_interval:=0.001 argument? There is a huge issue with left/right image synchronization. Here is an example of two consecutive frames added to rtabmap (with left/right overlapped images in both cases):
Screenshot from 2024-10-05 12-53-54
On the right, we can see a huge disparity on far features, which shouldn't be the case (unlike the left example). Looking at the corresponding point clouds (red=left stereo images, blue=right stereo images), we see the scale difference better:
Peek 2024-10-05 12-55

I strongly suggest using approx_sync_max_interval:=0.001 if you didn't. If you did, there is something wrong with the camera driver not giving close stamps (<1 ms) for stereo images taken at the same time.

For odometry processing time, it still seems high (>200 ms). You may apply the previous suggestion to use Odom/ImageDecimation=2, or set the camera driver to send 360p images instead of 720p. To show the plot, you can open the Statistics tab in rtabmap_viz or rtabmap-databaseViewer and do:

2024-10-05_12-58

cheers,
Mathieu

@imranhussain14
Author

imranhussain14 commented Oct 6, 2024

Hello,

I have already set approx_sync_max_interval:=0.001, and after testing the mapping again, I noticed an improvement. The left and right images now appear much better than before, and the odometry time estimation has decreased to less than 55 ms. Here is the link to the database:
https://drive.google.com/file/d/1uk6xdgG05d4YLS5Oj7VSrwzWSgxfPat0/view?usp=sharing

I am not sure how to compare the disparity between the left and right stereo images. Could you kindly explain that to me? I apologize if this is a basic question, as I am still new to working with visual SLAM with rtabmap.

Regarding the IMU publishing rate, I am working on resolving that as soon as possible.
Also, I am trying to use g2o and GTSAM for the graph optimization.
I installed both g2o and GTSAM from source and both installed successfully, but when I run catkin_make, I am getting the error:
/usr/bin/ld: warning: libopencv_calib3d.so.410, needed by /usr/local/lib/librtabmap_core.so.0.21.6, may conflict with libopencv_calib3d.so.4.2
/usr/bin/ld: warning: libopencv_core.so.410, needed by /usr/local/lib/librtabmap_core.so.0.21.6, may conflict with libopencv_core.so.4.2
/usr/bin/ld: /usr/local/lib/librtabmap_core.so.0.21.6: undefined reference to `g2o::OptimizableGraph::Edge::clone() const'
/usr/bin/ld: /usr/local/lib/librtabmap_core.so.0.21.6: undefined reference to `g2o::OptimizableGraph::Vertex::clone() const'
/usr/bin/ld: /usr/local/lib/librtabmap_core.so.0.21.6: undefined reference to `g2o::VertexSBAPointXYZ::VertexSBAPointXYZ()'
/usr/bin/ld: /usr/local/lib/librtabmap_core.so.0.21.6: undefined reference to `g2o::Factory::registerType(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, g2o::AbstractHyperGraphElementCreator*)'
collect2: error: ld returned 1 exit status
make[2]: *** [rtabmap_ros/rtabmap_examples/CMakeFiles/rtabmap_external_loop_detection_example.dir/build.make:357: /home/darlab/hada_ws_final/devel/lib/rtabmap_examples/external_loop_detection_example] Error 1
make[1]: *** [CMakeFiles/Makefile2:9594: rtabmap_ros/rtabmap_examples/CMakeFiles/rtabmap_external_loop_detection_example.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
Invoking "make -j8 -l8" failed

Thank you for your ongoing support and help.

@matlabbe
Member

matlabbe commented Oct 6, 2024

To examine the disparity, you can do it in rtabmap-databaseViewer:
image

Show the 3D view to see the resulting point cloud. You can enable the grid (right-click) to see if the scale looks good (the grid has 1 meter cells). I checked the new database and I don't see the stereo sync issue anymore. You can also right-click on the Image view and enable "Depth Image" (which in this case is the right image) to see the left and right overlap.

Which g2o version are you using? It should build with the ros-noetic-libg2o version. For GTSAM, same thing: ros-noetic-gtsam.

sudo apt install ros-noetic-libg2o ros-noetic-gtsam

For OpenCV, it looks like you have two different versions installed. It would make everything easier if you uninstalled that 4.10 version and kept only the one ROS is installing (the default Ubuntu version). If you want to rebuild OpenCV from source, I recommend building the same version as the system binary one (or the one ROS is using), so that the resulting libraries are compatible.
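
To see which OpenCV libraries rtabmap actually links against, a standard check (library path taken from your error log):

ldd /usr/local/lib/librtabmap_core.so.0.21.6 | grep opencv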

@imranhussain14
Author

Thank you.
The g2o and GTSAM problem has been resolved. Could you let me know if it is possible to add ground truth from an external sensor, such as an RTK-GPS, to RTAB-Map in order to compare the trajectories and calculate the RMSE error?

What I aim to do is assess the error and perform a comparison. Do you have any suggestions for a better approach?

@matlabbe
Member

matlabbe commented Oct 9, 2024

It is possible; the only thing needed is a TF representing the ground truth. You can then set the fixed and moving frames with these parameters under the rtabmap node:

<param name="ground_truth_frame_id" type="string" value="world"/>
<param name="ground_truth_base_frame_id" type="string" value="base_link_gt"/>

where world is the fixed frame of the ground truth system, and base_link_gt is a frame on the robot that is not linked to the robot's TF tree. Afterwards, when opening the database in rtabmap-databaseViewer, open the Graph view and you will see a gray path under the normal blue graph; the gray path is the ground truth. An RMSE value is also automatically computed under the Graph view for convenience. You can also use the rtabmap-report tool to compare results and export poses.
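
For example (a sketch; check rtabmap-report --help for the exact options, and the default database path is assumed):

rtabmap-report --poses ~/.ros/rtabmap.db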

Example of usage can be found in this example:

<param name="ground_truth_frame_id" type="string" value="world"/>
<param name="ground_truth_base_frame_id" type="string" value="kinect_gt"/>

@imranhussain14
Author

imranhussain14 commented Oct 10, 2024

I followed your instructions and used an RTK-GPS as the ground truth, but there is significant error, likely due to the inherent inaccuracies of the GPS system. Is there another method to obtain more accurate ground truth? I also didn't fully understand the example you provided earlier, specifically what type of ground truth was used. (I installed the GPS antenna on the camera.)
Here is my graph:
image
Here is the database link:
https://drive.google.com/file/d/1_-wS6ewX1oUykb4MuZrZKdk3f_1Ps26P/view?usp=sharing

Here are the frames:
frames_gt.pdf

@matlabbe
Member

matlabbe commented Oct 11, 2024

Hello,

The ground truth is wrong: it should be relative to a local frame and in meters/rad (starting at 0,0,0), with all 6DoF estimated if you are comparing to 6DoF poses like this.
2024-10-10_19-08

The IMU TF frame looks wrong: in yellow below is the gravity estimated from that frame; it should be more like the red arrow that I added. It makes the whole map fold at a strange roll/pitch value. I remember having to fix the IMU rotation in https://github.com/luxonis/depthai-ros/blob/6c07c27324201013befb751139fe7252ee9a93e9/depthai_descriptions/urdf/include/base_macro.urdf.xacro#L70-L97 for a depthai camera because it was inverted (or make sure you use the right model).
2024-10-10_19-20
Screenshot from 2024-10-10 19-10-20

In rtabmap-databaseViewer, when setting Optimizer/GravitySigma=0 (see the Core Parameters panel) to disable gravity optimization, and checking only GPS under Graph View (click the "Span to all maps" checkbox twice to refresh if you just changed a core parameter), the paths are roughly more aligned (though there could be a scale issue from the stereo calibration):
Screenshot from 2024-10-10 19-11-32

I don't know if the GPS values are super accurate or not, but if the VSLAM path here is wrong, maybe try not to tilt the camera too much toward the sky; features in the sky are very far away and make position estimation difficult (ideally, point horizontally or slightly down).

cheers,
Mathieu

@imranhussain14
Author

imranhussain14 commented Oct 11, 2024 via email

@matlabbe
Member

You may use code like https://gist.github.com/MikeK4y/1d99b93f806e7d535021b15afd5bb04f to convert GPS values to a local ENU frame (the format that should be sent on TF). The ground truth should look better afterwards (to compute valid RMSE values).
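
As an illustration, here is a minimal rospy sketch of that idea (a hypothetical node, not the linked gist: it takes the first fix as the local origin, converts each fix to ENU through ECEF, and broadcasts world -> base_link_gt; topic and frame names are assumptions matching the parameters above):

#!/usr/bin/env python
import math
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped
from sensor_msgs.msg import NavSatFix

A = 6378137.0          # WGS84 semi-major axis [m]
E2 = 6.69437999014e-3  # WGS84 first eccentricity squared

def geodetic_to_ecef(lat, lon, alt):
    lat, lon = math.radians(lat), math.radians(lon)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat)**2)
    x = (n + alt) * math.cos(lat) * math.cos(lon)
    y = (n + alt) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt) * math.sin(lat)
    return x, y, z

class GpsToEnuTf(object):
    def __init__(self):
        self.origin = None  # first fix defines the local ENU origin
        self.br = tf2_ros.TransformBroadcaster()
        rospy.Subscriber('/gps/fix', NavSatFix, self.callback)

    def callback(self, fix):
        if self.origin is None:
            self.origin = (fix.latitude, fix.longitude, fix.altitude)
        lat0, lon0, alt0 = self.origin
        x0, y0, z0 = geodetic_to_ecef(lat0, lon0, alt0)
        x, y, z = geodetic_to_ecef(fix.latitude, fix.longitude, fix.altitude)
        dx, dy, dz = x - x0, y - y0, z - z0
        # rotate the ECEF delta into the ENU frame at the origin
        lat, lon = math.radians(lat0), math.radians(lon0)
        e = -math.sin(lon) * dx + math.cos(lon) * dy
        n = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy
             + math.cos(lat) * dz)
        u = (math.cos(lat) * math.cos(lon) * dx
             + math.cos(lat) * math.sin(lon) * dy
             + math.sin(lat) * dz)
        t = TransformStamped()
        t.header.stamp = fix.header.stamp
        t.header.frame_id = 'world'        # ground_truth_frame_id
        t.child_frame_id = 'base_link_gt'  # ground_truth_base_frame_id
        t.transform.translation.x = e
        t.transform.translation.y = n
        t.transform.translation.z = u
        t.transform.rotation.w = 1.0  # RTK gives position only, identity rotation
        self.br.sendTransform(t)

if __name__ == '__main__':
    rospy.init_node('gps_to_enu_tf')
    GpsToEnuTf()
    rospy.spin()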

Do you use a D-GPS? With a calibrated base station, you could get more accurate GPS values (<1 cm error). This would work only outside though, and the GPS signal should be fairly stable.

@imranhussain14
Author

imranhussain14 commented Oct 16, 2024

The IMU issue has been resolved, but I am still facing problems with the ground truth. I might be confused about which topic to publish the ENU values on. Should I publish them on the /gps/fix topic, and will that be sent to the transform as well?

Here are the steps I have taken so far. Based on the example you shared, I added the following transform in my rtabmap.launch file:

<!-- TF FRAMES -->
 <node pkg="tf" type="static_transform_publisher" name="world_to_map" args="0.0 0.0 0.0 0.0 0.0 0.0 /world /map 100" />

Additionally, I configured the following parameters in the same launch file:

<arg name="ground_truth_frame_id" default="world"/>
<arg name="ground_truth_base_frame_id" default="base_link_gt"/>

However, I am still unsure how to correctly set up the GPS node so that it works properly with RTAB-Map. Do you have any examples or suggestions that might help?

For the GPS, I am using a calibrated RTK GPS, but the solution is float rather than fixed, which could be causing an error of around 2-3 cm. I am also working on that, but I need to resolve the ground truth first. Here is the link to my new database; I am still not receiving ground truth information:
Database: https://drive.google.com/file/d/1v1dDYsu33AbKJlG-WqwxioTo-VJ1b9NT/view?usp=sharing

GPS_Code:
hada_gps_2.txt

Thanks for helping,

@matlabbe
Member

The /gps/fix topic should contain the GPS output directly (latitude/longitude/altitude/bearing). For the TF ground truth, the GPS values should be converted to local ENU (so we have a 3D pose in meters, relative to the initial GPS position). Note that if you are not sure how to do that, just don't provide a ground truth TF, as you are already giving the same data through /gps/fix. I see in the database that the ground truth poses seem noisier than the GPS values; maybe something is wrong in the conversion. Blue is VSLAM, green is GPS and gray is ground truth.

image
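
For reference, a sketch of how the GPS topic is given to rtabmap (assuming the gps_topic argument of rtabmap.launch; the rest of the command stays as before):

roslaunch rtabmap_launch rtabmap.launch \
    gps_topic:=/gps/fix \
    ...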

@imranhussain14
Author

imranhussain14 commented Oct 18, 2024

Thank you for your response. If I understand correctly, you're suggesting that I use the /gps/fix topic and not rely on any ground truth data. Then, I can compare the GPS trajectory with the visual robot trajectory. However, I'm unsure how to align both graphs to the same axes as my robot's.

Additionally, I have another question. I tested localization indoors in an orchard-like environment (since my focus is on orchards). During mapping, when I mapped the environment in just one straight pass, I successfully achieved loop closure detection along the path I followed. However, in the opposite direction, I didn't detect any loop closure. I then tried mapping the entire environment by covering all areas from different directions, but many loops were rejected, and I only managed to achieve 20 successful loop closures despite covering all areas.

I've also encountered some errors, which I’ve attached in the output file below. Could you suggest any improvements to enhance the mapping process, so the robot can detect loop closures consistently from all angles?
Output Log File:
output.txt

Database with Simple Way Mapping:
https://drive.google.com/file/d/1q40ijOaweB2btEcQds3osjHfMYnBU5tv/view?usp=sharing

Database with full Area:
https://drive.google.com/file/d/1kG08sER1IFplmU68kzbdA32lBxTdli4w/view?usp=sharing

Thank you for helping and for your contribution.

@matlabbe
Member

Looking at this map without user loop closures:
Screenshot from 2024-10-19 11-40-18

I think most of the loop closures it can detect are detected. Visual loop closures with a single stereo camera are limited to locations with a similar point of view (around the same position and orientation). Having forward and backward cameras would make it easier to detect loop closures when traversing an area in the opposite direction. Otherwise, with a single camera and a little bit of planning, there could be a path that maximizes loop closures in the same direction, like this:
Screenshot from 2024-10-19 11-53-47
