Tutorials

T1. OpenARK — Tackling Augmented Reality Challenges via an Open-Source Software Development Kit

 

The aim of this tutorial is to present OpenARK, an open-source augmented reality development kit. OpenARK was founded by Dr. Allen Yang at UC Berkeley in 2015, and the project has since received high-impact awards and broad visibility. OpenARK is currently used by several industrial partners, including HTC Vive, Siemens, Ford, and State Grid. In 2018, OpenARK won the only Mixed Reality Award at the Microsoft Imagine Cup Global Finals. In the same year, OpenARK also won a Gold Medal at the Internet+ Innovation and Entrepreneurship Competition, the largest such competition in China. OpenARK also receives funding support from a research grant by the Intel RealSense project and from the NSF.

 

Website: https://vivecenter.berkeley.edu/courses/openark-ismar-2019-tutorial/

 

Presenters:

Allen Y. Yang, UC Berkeley

Luisa Caldas, UC Berkeley

Joseph Menke, UC Berkeley

 

Dates: October 14, 2019

 

T2. Interaction Paradigms in MR – Lessons from Art

 

This tutorial shares our best practices in teaching artists to express themselves through interactive technology, and, in turn, what we have learnt from being part of their creative journey. Examples include navigating VR with LeapMotion, changing a narrative based on the user's head orientation, interacting with visualisations of mathematical equations using hand gestures, an audio-visual interface in which participants use their voice to control lighting in the scene, and controlling the flow of particles through breathing as part of a meditation experience. The tutorial will also include a review of the state of the art in brain-computer interfaces and neurophysiological interfaces for XR.
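To give a flavour of such mappings, here is a minimal Python sketch (ours, not the presenters' code) of the voice-to-lighting idea: microphone loudness drives a light-intensity value. It assumes the numpy and sounddevice packages; the smoothing factor and decibel range are illustrative choices.

import numpy as np
import sounddevice as sd

SMOOTH = 0.9                      # exponential smoothing for the envelope
DB_FLOOR, DB_CEIL = -60.0, 0.0    # loudness range mapped onto [0, 1]
envelope = 0.0

def audio_callback(indata, frames, time_info, status):
    """Compute a smoothed RMS loudness and map it to a light intensity."""
    global envelope
    rms = float(np.sqrt(np.mean(indata ** 2))) + 1e-12
    db = 20.0 * np.log10(rms)
    level = np.clip((db - DB_FLOOR) / (DB_CEIL - DB_FLOOR), 0.0, 1.0)
    envelope = SMOOTH * envelope + (1.0 - SMOOTH) * level
    # In a real scene this value would drive a light's brightness;
    # here we simply print it.
    print(f"light intensity: {envelope:.2f}", end="\r")

with sd.InputStream(channels=1, samplerate=16000, callback=audio_callback):
    sd.sleep(10_000)  # listen for ten seconds

In an actual installation, the printed value would instead be written to the rendering engine's light parameter each frame; the smoothing keeps the lighting from flickering with every syllable.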

 

Website: https://home.doc.gold.ac.uk/~xpan001/humancentric-ARVR/

 

Presenters:

Xueni Pan, Goldsmiths, University of London

William Latham, Goldsmiths, University of London

Doron Friedman, Sammy Ofer School of Communications, IDC Herzliya

 

Dates: October 18, 2019


T3. Bridging the Gap between Research and Practice in AR

 

AR spans many domains, and this interdisciplinarity boosts AR research and applications. However, many research results cannot readily be applied in practice, because some studies lack user feedback and therefore address only academic issues. This tutorial examines this gap between research and practice and shares our experience in bridging it.

The first topic concerns the engineering gap in traditional 6DOF tracking. We will introduce the NetEase AR-oriented VIO dataset and new metrics that account for user experience, rather than the commonly used ATE (absolute trajectory error) or RPE (relative pose error). We will illustrate how these differ from previous datasets and metrics. We hope our dataset helps researchers develop more user-friendly VIO systems for AR.
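For reference, since the tutorial contrasts its user-experience metrics with ATE, here is a minimal sketch of the conventional ATE computation: the estimated positions are rigidly aligned to ground truth (Umeyama alignment without scale), then the RMSE of the residuals is reported. The NetEase metrics themselves are not reproduced here; this is the standard baseline they move beyond.

import numpy as np

def align_umeyama(est, gt):
    """Find rotation R and translation t minimizing ||gt - (R @ est + t)||."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    cov = (gt - mu_g).T @ (est - mu_e) / len(est)
    U, _, Vt = np.linalg.svd(cov)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ S @ Vt, mu_g - (U @ S @ Vt) @ mu_e

def ate_rmse(est, gt):
    """Absolute trajectory error: RMSE of aligned positions (N x 3 arrays)."""
    R, t = align_umeyama(est, gt)
    err = gt - (est @ R.T + t)
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))

One limitation this makes visible: ATE averages error over the whole trajectory, so it cannot distinguish a brief but very noticeable jitter from a steady small drift, which is exactly the kind of perceptual difference user-experience metrics aim to capture.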

The second topic covers gaps in AI-based virtual content generation for AR, using our melody-driven choreography (MDC) and embodied conversational agent (ECA) research as examples. Both MDC and ECA already power multiple NetEase video games.

The third topic presents a data-driven approach to generating natural head animation for talking, human-like virtual characters. The synthesized head movements reflect the prosody of the accompanying speech. The synthesis is fully automatic, relying on deep learning models that run in under one second, without any manual intervention. The course will first teach the relevant sequence models, from LSTMs to Seq2Seq architectures such as the Transformer. We will then describe the implementation details of head animation generation. Finally, we will discuss practical issues in visual prosody generation and upcoming challenges.
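To make the modelling step concrete, here is a minimal PyTorch sketch of the kind of sequence model the course builds up to: a network that regresses per-frame head rotations from per-frame acoustic features. The architecture, feature dimensions, and stand-in data below are illustrative assumptions, not the presenters' actual model.

import torch
import torch.nn as nn

class HeadMotionLSTM(nn.Module):
    def __init__(self, n_audio_feats=40, hidden=256, n_pose=3):
        super().__init__()
        self.encoder = nn.LSTM(n_audio_feats, hidden, num_layers=2,
                               batch_first=True)
        self.head = nn.Linear(hidden, n_pose)  # per-frame (yaw, pitch, roll)

    def forward(self, audio_feats):
        """audio_feats: (batch, frames, n_audio_feats) -> (batch, frames, 3)."""
        h, _ = self.encoder(audio_feats)
        return self.head(h)

# Training-step sketch: regress captured head poses from speech features.
model = HeadMotionLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
feats = torch.randn(8, 200, 40)  # stand-in for e.g. mel filterbank features
poses = torch.randn(8, 200, 3)   # stand-in for motion-captured head rotations
loss = nn.functional.mse_loss(model(feats), poses)
loss.backward(); opt.step()

A Seq2Seq or Transformer variant replaces the LSTM encoder with attention layers while keeping the same regression target and loss, which is the progression the course follows.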

 

Website: https://ar.163.com/ismar2019_tutorial

 

Presenters:

Haiwei Liu, Hangzhou EasyXR Co., Ltd., China

Xiang Wen, Fuxi AI Lab, NetEase, China

Zhimeng Zhang, Fuxi AI Lab, NetEase, China

 

Dates: October 18, 2019




Thanks to our sponsors

We thank our sponsors for supporting the ISMAR 2019 conference.

Sponsor tiers: Diamond, Platinum, Gold, Silver, Bronze
