On the uncertainty estimation for robust sensor fusion

DC Field: Value
dc.contributor.advisor: Kim, Ayoung (김아영)
dc.contributor.author: Kim, Youngji
dc.date.accessioned: 2022-04-13T05:40:12Z
dc.date.available: 2022-04-13T05:40:12Z
dc.date.issued: 2021
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=962574&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/292512
dc.description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST), Department of Civil and Environmental Engineering, 2021.8, [v, 63 p.]
dc.description.abstract: To improve the robustness of state estimation in robotics, optimization over multiple sensor observations should be based on a reliable measurement model. In this thesis, we present a measurement error model focusing on the uncertainty of different sensor observations, with particular attention to sensor fusion applications where reliable uncertainty is useful. Uncertainty estimation provides a general way to weight the better mean value among complementary estimates: although uncertainty does not improve the individual estimates, it does improve the robustness of the entire system by selecting reliable observations.

First, we examine different state parameterizations for the robot pose. We find that a state represented on Lie groups is advantageous, especially for reliable uncertainty propagation, because it does not suffer from linearization error. To show this, we analyze the monotonicity of the propagated uncertainty on different state spaces (pose-only versus pose-and-velocity) and different geometric spaces (Euclidean vector space versus Lie groups). We show that including the velocity state makes shrinkage of uncertainty during exploration less likely, but the ultimate solution is to propagate uncertainty on Lie groups.

Second, we introduce covariance learning for visual odometry, particularly with regard to integration with another sensor. Departing from the existing supervised approaches, we propose an unsupervised loss for uncertainty modeling that learns balanced uncertainties. Most importantly, we overcome the limitation of learning a single sensor's uncertainty by introducing uncertainty balancing between different sensor modalities. In doing so, we alleviate the uncertainty balancing issue between sensors that is often encountered in multi-sensor SLAM applications.

Third, we propose a robust back-end that is applicable even when the measurement uncertainty is unknown. We solve this problem by considering different sensor measurements as a pair: we penalize measurements with large relative errors compared to their paired counterparts, instead of penalizing measurements with large absolute errors. As a result, the proposed method is less sensitive to user-tuned parameters, such as the initial switching value γ in Switchable Constraints (SC) or the scaling prior Φ in Dynamic Covariance Scaling (DCS).
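As background for the first contribution, the sketch below illustrates the kind of first-order covariance compounding on SE(3) that the abstract alludes to. This is a generic illustration in the style of Barfoot's pose-uncertainty formulation, not the thesis's implementation; the twist ordering [translation; rotation], the right-perturbation convention T = T̄·exp(ξ^), and the helper names are assumptions made for this example.

```python
import numpy as np

def skew(v):
    # Skew-symmetric matrix so that skew(v) @ u == np.cross(v, u).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def adjoint(T):
    # 6x6 adjoint of a 4x4 SE(3) matrix, for twists ordered [translation; rotation].
    R, t = T[:3, :3], T[:3, 3]
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[:3, 3:] = skew(t) @ R
    Ad[3:, 3:] = R
    return Ad

def compound(T_a, Sigma_a, T_rel, Sigma_rel):
    # Compose pose T_a (covariance Sigma_a in its right tangent space) with an
    # independent relative motion T_rel (covariance Sigma_rel). To first order:
    #   Sigma_b = Ad(T_rel^-1) Sigma_a Ad(T_rel^-1)^T + Sigma_rel
    T_b = T_a @ T_rel
    Ad_inv = adjoint(np.linalg.inv(T_rel))
    return T_b, Ad_inv @ Sigma_a @ Ad_inv.T + Sigma_rel

# Example: one odometry step of 1 m along x with small pose noise.
T_a = np.eye(4)
Sigma_a = np.diag([1e-3, 1e-3, 1e-3, 1e-4, 1e-4, 1e-4])
T_rel = np.eye(4)
T_rel[0, 3] = 1.0
Sigma_rel = Sigma_a.copy()
T_b, Sigma_b = compound(T_a, Sigma_a, T_rel, Sigma_rel)
```

The skew(t) @ R block of the adjoint is what couples rotational uncertainty into translational covariance as poses are chained; treating the pose as a plain Euclidean vector drops exactly this coupling, which is one way to read the abstract's claim that Lie-group propagation behaves better than vector-space propagation.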
dc.language: eng
dc.publisher: 한국과학기술원 (Korea Advanced Institute of Science and Technology)
dc.subject: Map Based Localization; Uncertainty Estimation; Visual Odometry
dc.subject: Deep Learning
dc.subject: SLAM
dc.subject: 맵기반 위치인식 (map-based localization); 불확실성 추정 (uncertainty estimation); 비주얼 오도메트리 (visual odometry); 딥러닝 (deep learning); 슬램 (SLAM)
dc.title: On the uncertainty estimation for robust sensor fusion
dc.title.alternative: 강인한 센서 융합을 위한 불확실성 연구 (A study on uncertainty for robust sensor fusion)
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology, Department of Civil and Environmental Engineering
dc.contributor.alternativeauthor: 김영지 (Kim, Youngji)
Appears in Collection
CE-Theses_Ph.D. (doctoral theses)
Files in This Item
There are no files associated with this item.
