000060641 001__ 60641
000060641 005__ 20190529115143.0
000060641 0247_ $$2doi$$a10.1007/s11263-011-0492-5
000060641 0248_ $$2sideral$$a76654
000060641 037__ $$aART-2012-76654
000060641 041__ $$aeng
000060641 100__ $$aSolà, J.
000060641 245__ $$aImpact of landmark parametrization on monocular EKF-SLAM with points and lines
000060641 260__ $$c2012
000060641 5060_ $$aAccess copy available to the general public$$fUnrestricted
000060641 5203_ $$aThis paper explores the impact that landmark parametrization has on the performance of monocular, EKF-based, 6-DOF simultaneous localization and mapping (SLAM) in the context of undelayed landmark initialization. Undelayed initialization in monocular SLAM challenges the EKF because of the combination of non-linearity with the large uncertainty associated with the unmeasured degrees of freedom. In the EKF context, the goal of a good landmark parametrization is to improve the model’s linearity as much as possible, thus improving filter consistency and achieving more robust and accurate localization and mapping. This work compares the performance of eight different landmark parametrizations: three for points and five for straight lines. It highlights and justifies the keys to satisfactory operation: the use of parameters that behave proportionally to inverse distance, and landmark anchoring. A unified EKF-SLAM framework is formulated as a benchmark for points and lines, independent of the parametrization used. The paper also defines a generalized linearity index suited to the EKF, and uses it to compute and compare the degree of linearity of each parametrization. Finally, all eight parametrizations are benchmarked employing analytical tools (the linearity index) and statistical tools (based on Monte Carlo error and consistency analyses), with simulations and real imagery data, using the standard and robocentric EKF-SLAM formulations.
000060641 536__ $$9info:eu-repo/grantAgreement/ES/MICINN/DPI2009-07130$$9info:eu-repo/grantAgreement/EC/FP7/248942/EU/RoboEarth: robots sharing a knowledge base for world modelling and learning of actions/RoboEarth
000060641 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000060641 590__ $$a3.623$$b2012
000060641 591__ $$aCOMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE$$b10 / 115 = 0.087$$c2012$$dQ1$$eT1
000060641 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/acceptedVersion
000060641 700__ $$aVidal-Calleja, T.
000060641 700__ $$0(orcid)0000-0003-1368-1151$$aCivera, J.$$uUniversidad de Zaragoza
000060641 700__ $$0(orcid)0000-0002-3627-7306$$aMartínez Montiel, J.M.$$uUniversidad de Zaragoza
000060641 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát.
000060641 773__ $$g97, 3 (2012), 339-368$$pInt. j. comput. vis.$$tINTERNATIONAL JOURNAL OF COMPUTER VISION$$x0920-5691
000060641 8564_ $$s1315080$$uhttps://zaguan.unizar.es/record/60641/files/texto_completo.pdf$$yPostprint
000060641 8564_ $$s98580$$uhttps://zaguan.unizar.es/record/60641/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint
000060641 909CO $$ooai:zaguan.unizar.es:60641$$particulos$$pdriver
000060641 951__ $$a2019-05-29-11:32:21
000060641 980__ $$aARTICLE