

