On the uncertainty estimation for robust sensor fusion

To improve the robustness of state estimation in robotics, optimization over multiple sensor observations should rest on a reliable measurement model. In this thesis, we present a measurement error model that focuses on the uncertainty of different sensor observations, with particular attention to sensor fusion applications where reliable uncertainty is useful. Uncertainty estimation provides a general way to weight the more trustworthy among complementary estimates: although uncertainty does not improve the individual estimates themselves, it does improve the robustness of the entire system by selecting reliable observations.

First, we examine different state parameterizations for the robot pose. We find that a state represented on Lie groups is advantageous, particularly for reliable uncertainty propagation, because it does not suffer from linearization error. To demonstrate this, we analyze the monotonicity of propagated uncertainty over different state spaces (pose-only versus pose-and-velocity) and different geometric spaces (Euclidean vector space versus Lie groups). We show that including the velocity state makes spurious shrinkage of uncertainty during exploration less likely, but that the ultimate solution is to propagate uncertainty on Lie groups.

Second, we introduce covariance learning for visual odometry, particularly with integration with another sensor in mind. Departing from the existing supervised approaches, we propose an unsupervised loss for uncertainty modeling that learns balanced uncertainties. Most importantly, we overcome the limitation of learning a single sensor's uncertainty by newly introducing uncertainty balancing between different sensor modalities. In doing so, we alleviate the inter-sensor uncertainty balancing issue often encountered in multi-sensor SLAM applications.

Third, we propose a robust back-end that is applicable even when the measurement uncertainty is unknown. We solve this problem by treating measurements from different sensors as pairs. Specifically, we penalize measurements with large relative errors compared to their paired counterparts, instead of penalizing measurements with large absolute errors. As a result, the proposed method is less sensitive to user-tuned parameters, such as the initial switching value γ in Switchable Constraints (SC) or the scaling prior Φ in Dynamic Covariance Scaling (DCS).
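The pairing idea in the last contribution can be sketched in a few lines. This is an illustrative toy, not the thesis's actual cost function: the function name, the normalized weighting scheme, and the residual values are all assumptions made for the example. It shows how a measurement can be down-weighted by its error *relative to its paired counterpart*, so that no absolute threshold (like γ in SC or Φ in DCS) needs to be tuned:

```python
import numpy as np

def pairwise_robust_weights(err_a, err_b, eps=1e-9):
    """Weight a pair of measurements of the same quantity from two
    sensors by their RELATIVE error: each sensor's weight grows with
    its partner's error and shrinks with its own.  Hypothetical
    sketch of the pairing idea; no absolute-error threshold needed."""
    ea, eb = np.abs(err_a), np.abs(err_b)
    total = ea + eb + eps  # eps avoids division by zero
    w_a = eb / total       # A is trusted when B's error dominates
    w_b = ea / total       # and vice versa; weights sum to ~1
    return w_a, w_b

# Toy example: sensor A's residual is 10x larger than sensor B's,
# so A is penalized and B is trusted.
w_a, w_b = pairwise_robust_weights(1.0, 0.1)
```

Because the weights depend only on the ratio of the paired errors, rescaling both residuals by a common factor leaves them unchanged, which is what makes such a scheme insensitive to user-chosen scale parameters.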
Advisors
Kim, Ayoung (김아영)
Publisher
KAIST (Korea Advanced Institute of Science and Technology)
Issue Date
2021
Identifier
325007
Language
eng
Description

Thesis (Ph.D.) - KAIST : Department of Civil and Environmental Engineering, 2021.8, [V, 63 :]

Keywords

Map Based Localization; Uncertainty Estimation; Visual Odometry; Deep Learning; SLAM

URI
http://hdl.handle.net/10203/292512
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=962574&flag=dissertation
Appears in Collection
CE-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
