This SLAM-for-AR competition is sponsored exclusively by SenseTime Group Limited.
As one of the key enabling techniques for AR applications, SLAM has reached a level of maturity and entered the product stage. At this stage, more and more effort is being devoted to improving the overall performance of a SLAM system rather than individual technical indicators. In addition, compared to other applications such as robotics, AR products face higher requirements for handling a variety of challenging situations, since a home user may not move the AR device carefully and the real environment may be quite complex.
In this context, this year we launch a SLAM competition specifically designed for AR applications, emphasizing the overall performance of SLAM systems. We provide a visual-inertial dataset that mimics the usage scenarios of AR. The visual-inertial data are captured by a mobile phone (a Xiaomi Mi 8), and the ground-truth data are obtained by a VICON motion capture system. All sensor data are calibrated and synchronized. Based on this dataset, we evaluate the overall performance of a SLAM system considering tracking accuracy, initialization quality, tracking robustness, relocalization time, and computation efficiency.
If you are interested in this SLAM for AR competition, please fill out the registration form and send it to firstname.lastname@example.org. If you have any requests or suggestions, please feel free to contact us.
Visual SLAM and visual-inertial SLAM will be evaluated separately. In each category, there will be three winners with the following prizes:
- 1st place: 3,000 US dollars
- 2nd place: 1,500 US dollars
- 3rd place: 750 US dollars
The finalist teams will be invited to present their work at the SLAM for AR Competition workshop. We also encourage the invited teams to give a live demo on site.
- Training dataset (ground truth included) release – August 6th, 2019
  - Download the training dataset: http://www.zjucvg.net/eval-vislam/ismar19-slam-competition/. The data format is the same as that of the competition dataset.
  - Develop and improve your algorithm with the training dataset.
  - Evaluation tool: https://github.com/zju3dv/eval-vislam
- First Round – August 6th–September 20th, 2019
  - Register for the competition; you will receive a download link for the competition dataset.
  - Run your system on the competition dataset and submit the results (email to email@example.com) before the deadline.
  - Write a description of your system (at least two pages) and submit it before the deadline:
    - Explanation of your system.
    - Explanation of its novelties.
    - Any format is acceptable, but we recommend the template (TEX, DOC) from IEEE VGTC.
    - Cite your related published papers (arXiv papers are acceptable), if any. If you already have a paper that describes your system exactly, you may submit that paper directly.
  - The teams with the highest scores will be notified and invited to attend the final round and present their work at the workshop.
- Final Round – Sep.–Oct., 2019
  - More competition data will be used in the final round. Running time will also be taken into account.
  - Each team shall provide an executable version of its system, which will be run on our benchmarking PC in a controlled environment (Linux or Windows).
- Workshop – Oct. 14th, 2019
  - Each team will give a presentation about the techniques used in their system.
  - Final rankings and winners will be announced in the Closing Session on Oct. 17th, 2019.
- Your system must run at 30 FPS or above on average on a normal desktop PC (e.g., an Intel i7 CPU with a single NVIDIA graphics card and no more than 16 GB of memory).
- If your system runs at less than 30 FPS on our benchmarking PC, its final score will be lowered.
Submission Format & Evaluation Criteria
- The estimated 6-DoF camera poses (from the camera coordinate system to the world coordinate system) and running time are required to evaluate performance.
- We evaluate the overall performance of a SLAM system considering tracking accuracy, initialization quality, tracking robustness, relocalization time, and computation efficiency. Running time will be taken into account in the final-round competition.
- Please refer to the evaluation instruction file for details.
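Tracking accuracy for this kind of benchmark is commonly reported as absolute trajectory error (ATE) after aligning the estimated trajectory with the ground truth. As an illustration only (the official metrics are defined by the eval-vislam tool, not by this snippet), here is a minimal Python sketch of ATE RMSE using closed-form rigid (SE(3)) alignment; the function name and the assumption of time-associated (N, 3) position arrays are ours:

```python
import numpy as np

def ate_rmse(gt, est):
    """Absolute trajectory error (RMSE) after rigid SE(3) alignment.

    gt, est: (N, 3) arrays of time-associated positions.
    Uses the closed-form Umeyama/Horn solution for the best-fit
    rotation R and translation t mapping est onto gt.
    """
    mu_gt, mu_est = gt.mean(axis=0), est.mean(axis=0)
    gc, ec = gt - mu_gt, est - mu_est
    # SVD of the cross-covariance yields the optimal rotation.
    U, _, Vt = np.linalg.svd(gc.T @ ec)
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:
        S[2, 2] = -1  # correct an improper (reflection) solution
    R = U @ S @ Vt
    t = mu_gt - R @ mu_est
    aligned = est @ R.T + t
    err = np.linalg.norm(aligned - gt, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```

A full evaluation would additionally associate poses by timestamp, and monocular visual SLAM is usually aligned with a Sim(3) transform instead, since absolute scale is unobservable without inertial data.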
Zhejiang University, China
Beijing Institute of Technology, China
University of Delaware, USA