Vins-Mono / Vins-Fusion usage #259

Open
aimiliosnrobotics opened this issue Sep 26, 2024 · 0 comments

Hello, and thank you very much for the nice work! This question is about using the VINS-Mono and VINS-Fusion packages with my own data. The problem is that I cannot get VINS-Fusion to work. VINS-Mono runs successfully with my setup (custom monocular camera + Phidget Spatial IMU) and gives correct results, but when I run VINS-Fusion with the exact same configuration, the camera_pose drifts heavily away, as can be seen in the following images.

VINS-Mono: (screenshot: vins_mono_term)
VINS-Fusion: (screenshot: vins_fusion_drift)

Has something changed between the two packages? It is also worth mentioning that for VINS-Mono I had to inflate the IMU noise parameters to make it run successfully, even though I obtained the IMU parameters from an Allan variance analysis and got very accurate results with those same parameters in other VI-SLAM packages such as ORB-SLAM. Is VINS more sensitive to bad IMU measurements, or to exclusively planar motion with approximately constant acceleration?

My ultimate goal is visual-inertial multi-camera SLAM for a wheeled robot. The whole setup works well with ORB-SLAM3, but in some cases there is drift and wrong scale, and after a lot of tests I concluded this is caused by the planar setup and approximately constant acceleration. That is why I ultimately want to use the vins-gps-wheel package to fuse wheel odometry, which should recover the missing scale information, and optionally GPS. Since VINS-Mono runs almost successfully (apart from the inflated IMU noise values), I expect the VINS-wheel-odometry coupling to work as well, since that package is based on VINS-Mono (I can already see the feature point cloud being published). However, I am also trying VINS-Fusion because it is said to support multiple cameras, so I could eventually combine multi-camera VI-SLAM with wheel odometry fusion, along the lines of the sketch below.

Does anyone have any insight into why VINS-Fusion is not running correctly? Thank you very much in advance!
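As far as I understand, extending my current VINS-Fusion config to a second camera should only require entries like the ones below. This is just a sketch based on the commented-out lines in my own config and on the stereo example configs shipped with VINS-Fusion; the /camera1 topic, the cam1.yaml file, and the body_T_cam1 values are untested placeholders.

num_of_cam: 2                          # enable the second camera
image0_topic: "/camera0/image_raw"
image1_topic: "/camera1/image_raw"     # placeholder topic for the second camera
cam0_calib: "cam0.yaml"
cam1_calib: "cam1.yaml"                # placeholder intrinsic calibration for the second camera
body_T_cam1: !!opencv-matrix           # extrinsic of the second camera w.r.t. the IMU (still to be calibrated)
   rows: 4
   cols: 4
   dt: d
   data: [1, 0, 0, 0,
          0, 1, 0, 0,
          0, 0, 1, 0,
          0, 0, 0, 1]                  # identity placeholder, not a real calibration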

The configuration files are as follows:

VINS-MONO:

%YAML:1.0

#common parameters
#support: 1 imu 1 cam; 1 imu 2 cam; 2 cam
imu_topic: "/imu/data_raw"
image_topic: "/camera0/image_raw"
#image1_topic: "/camera1/image_raw"
output_path: "~/output/"

# model_type: KANNALA_BRANDT
# camera_name: camera0
# image_width: 640
# image_height: 480
# distortion_parameters:
#    k1: 0.40000818354392403
#    k2: -0.12081171870317779
#    p1: -0.05019646239461993
#    p2: 0.03191883340304046
# projection_parameters:
#    fx: 249.11357612128313
#    fy: 249.11690732189138
#    cx: 326.68403379291203
#    cy: 243.31467455420616   

model_type: PINHOLE
camera_name: camera0
image_width: 640
image_height: 480
distortion_parameters:
   k1: -0.06155816710653534
   k2: -0.013961160967415768
   p1: 0.0007546126094149863
   p2: -0.0008787572974087528
projection_parameters:
   fx: 258.51567840106213
   fy: 258.5752935695983
   cx: 321.1744166775173
   cy: 242.3615494668013

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0   # 0  Have accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam; do not change them.
                        # 1  Have an initial guess about the extrinsic parameters. We will optimize around your initial guess.

#### KANNALABRANDT #############
# extrinsicRotation: !!opencv-matrix
#    rows: 3
#    cols: 3
#    dt: d
#    data: [-0.99926158, -0.02532093, -0.02889875, 
#          -0.02768236, -0.04713959, 0.99850466, 
#          -0.02664534, 0.99856733, 0.04640383]
# #Translation from camera frame to imu frame, imu^T_cam
# extrinsicTranslation: !!opencv-matrix
#    rows: 3
#    cols: 1
#    dt: d
#    data: [0.04549169,0.31585158, 0.03692635]

#### PINHOLE #############
extrinsicRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [-0.99948037, -0.01843143, -0.0264439, 
          -0.02545083, -0.0521769, 0.99831349,
          -0.01978011, 0.99846775, 0.0516807]
#Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [0.04378348,0.32805217, 0.03791052]

#feature tracker parameters
max_cnt: 150            # max feature number in feature tracking
min_dist: 30            # min distance between two features 
freq: 10                # frequency (Hz) at which the tracking result is published. At least 10 Hz for good estimation. If set to 0, the frequency will match the raw image rate
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish tracking image as topic
equalize: 1             # if the image is too dark or too bright, turn on equalization to find enough features
fisheye: 0              # if using a fisheye lens, turn this on. A circular mask will be loaded to remove noisy edge points

#optimization parameters
max_solver_time: 0.04  # max solver iteration time (s), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

# #imu parameters       The more accurate parameters you provide, the better performance
# acc_n: 0.0032171141081418185          # accelerometer measurement noise standard deviation. 
# gyr_n: 0.00047514840397045084         # gyroscope measurement noise standard deviation.     
# acc_w: 0.002008277168849152        # accelerometer bias random walk noise standard deviation.  
# gyr_w: 1.4322615524203976e-05       # gyroscope bias random walk noise standard deviation.     
# g_norm: 9.81007     # gravity magnitude

acc_n: 0.08          # accelerometer measurement noise standard deviation. #0.2   0.04
gyr_n: 0.004         # gyroscope measurement noise standard deviation.     #0.05  0.004
acc_w: 0.00004         # accelerometer bias random walk noise standard deviation.  #0.02
gyr_w: 2.0e-6       # gyroscope bias random walk noise standard deviation.     #4.0e-5
g_norm: 9.81007     # gravity magnitude



#loop closure parameters
loop_closure: 1                    # start loop closure
load_previous_pose_graph: 0        # load and reuse previous pose graph; load from 'pose_graph_save_path'
fast_relocalization: 0             # useful for real-time operation and large-scale projects
pose_graph_save_path: "/home/shaozu/output/pose_graph/" # save and load path

#unsynchronization parameters
estimate_td: 0                      # online estimate time offset between camera and imu
td: 0                             # initial value of the time offset, unit: s. read image clock + td = real image clock (IMU clock)

#rolling shutter parameters
rolling_shutter: 0                  # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0               # unit: s. rolling shutter read out time per frame (from data sheet). 

#visualization parameters
save_image: 1                   # save images in the pose graph for visualization purposes; you can disable this by setting it to 0
visualize_imu_forward: 0        # output IMU forward propagation to achieve low-latency, high-frequency results
visualize_camera_size: 0.4      # size of camera marker in RVIZ

VINS-FUSION:

%YAML:1.0

#common parameters
#support: 1 imu 1 cam; 1 imu 2 cam; 2 cam
imu: 1         
num_of_cam: 1  

imu_topic: "/imu/data_raw"
image0_topic: "/camera0/image_raw"
#image1_topic: "/camera1/image_raw"
output_path: "~/output/"

cam0_calib: "cam0.yaml"
#cam1_calib: "cam1.yaml"
image_width: 640
image_height: 480
   

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0   # 0  Have accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam; do not change them.
                        # 1  Have an initial guess about the extrinsic parameters. We will optimize around your initial guess.

#### KANNALABRANDT #############
# body_T_cam0: !!opencv-matrix
#    rows: 4
#    cols: 4
#    dt: d
#    data: [-0.99926158, -0.02532093, -0.02889875, 0.04549169, 
#           -0.02768236, -0.04713959, 0.99850466, 0.31585158,
#           -0.02664534, 0.99856733, 0.04640383, 0.03692635,
#           0,0,0,1]

#### PINHOLE #############
body_T_cam0: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [-0.99948037, -0.01843143, -0.0264439, 0.04378348,
          -0.02545083, -0.0521769, 0.99831349, 0.32805217,
          -0.01978011, 0.99846775, 0.0516807, 0.03791052,
          0,0,0,1]

#Multiple thread support
multiple_thread: 1

#feature tracker parameters
max_cnt: 150            # max feature number in feature tracking
min_dist: 30            # min distance between two features 
freq: 10                # frequency (Hz) at which the tracking result is published. At least 10 Hz for good estimation. If set to 0, the frequency will match the raw image rate
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish tracking image as topic
flow_back: 1            # perform forward and backward optical flow to improve feature tracking accuracy

#optimization parameters
max_solver_time: 0.04  # max solver iteration time (s), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

#imu parameters       The more accurate parameters you provide, the better performance
# acc_n: 0.0032171141081418185         # accelerometer measurement noise standard deviation. 
# gyr_n: 0.00047514840397045084         # gyroscope measurement noise standard deviation.     
# acc_w: 0.002008277168849152        # accelerometer bias random walk noise standard deviation.  
# gyr_w: 1.4322615524203976e-05       # gyroscope bias random walk noise standard deviation.     
# g_norm: 9.81007     # gravity magnitude

acc_n: 0.08          # accelerometer measurement noise standard deviation. #0.2   0.04
gyr_n: 0.004         # gyroscope measurement noise standard deviation.     #0.05  0.004
acc_w: 0.00004         # accelerometer bias random walk noise standard deviation.  #0.02
gyr_w: 2.0e-6       # gyroscope bias random walk noise standard deviation.     #4.0e-5
g_norm: 9.81007     # gravity magnitude

# acc_n: 0.1          # accelerometer measurement noise standard deviation. 
# gyr_n: 0.01         # gyroscope measurement noise standard deviation.     
# acc_w: 0.001        # accelerometer bias random walk noise standard deviation.  
# gyr_w: 0.0001       # gyroscope bias random walk noise standard deviation.     
# g_norm: 9.81007     # gravity magnitude

#unsynchronization parameters
estimate_td: 0                      # online estimate time offset between camera and imu
td: 0                             # initial value of the time offset, unit: s. read image clock + td = real image clock (IMU clock)

#loop closure parameters
load_previous_pose_graph: 0        # load and reuse previous pose graph; load from 'pose_graph_save_path'
pose_graph_save_path: "~/output/pose_graph/" # save and load path
save_image: 1                   # save images in the pose graph for visualization purposes; you can disable this by setting it to 0

cam0.yaml

%YAML:1.0
---
model_type: PINHOLE
camera_name: camera
image_width: 640
image_height: 480
distortion_parameters:
   k1: -0.06155816710653534
   k2: -0.013961160967415768
   p1: 0.0007546126094149863
   p2: -0.0008787572974087528
projection_parameters:
   fx: 258.51567840106213
   fy: 258.5752935695983
   cx: 321.1744166775173
   cy: 242.3615494668013

# model_type: KANNALA_BRANDT
# camera_name: camera0
# image_width: 640
# image_height: 480
# distortion_parameters:
#    k1: 0.40000818354392403
#    k2: -0.12081171870317779
#    p1: -0.05019646239461993
#    p2: 0.03191883340304046
# projection_parameters:
#    fx: 249.11357612128313
#    fy: 249.11690732189138
#    cx: 326.68403379291203
#    cy: 243.31467455420616
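
For completeness, one variant of the VINS-Fusion config I have been considering (an assumption on my side, not something I have verified) is to let the estimator refine the camera-IMU calibration and the time offset online, since the camera and the Phidget IMU are not hardware-synchronized:

estimate_extrinsic: 1   # optimize around the body_T_cam0 initial guess instead of trusting it completely
estimate_td: 1          # also estimate the camera-IMU time offset online
td: 0.0                 # initial guess for the time offset, in seconds

If the time offset or the extrinsics could explain the difference in behaviour between VINS-Mono and VINS-Fusion on the same data, any pointers on that would also help.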

Best regards
Aimilios Petroulas
