Motion and Expression

Date: Monday, November 18th
Time: 11:00am - 12:45pm
Venue: Plaza Meeting Room P3
Session Chair: Wan-Chun Alex Ma, Google, United States of America

Fast Terrain-Adaptive Motion Generation Using Deep Neural Networks
Author(s)/Presenter(s): Moonwon Yu, NCSOFT, South Korea; Byungjun Kwon, NCSOFT, South Korea; Jongmin Kim, Kangwon National University, South Korea; Shinjin Kang, Hongik University, South Korea; Hanyoung Jang, NCSOFT, South Korea
Abstract: Our neural network system makes it possible to generate terrain-adaptive motions for a large number of game characters. In addition, the generated motions retain human nuances.

Interactive editing of performance-based facial animation
Author(s)/Presenter(s): Yeongho Seol, Weta Digital, New Zealand; Michael Cozens, Weta Digital, New Zealand
Abstract: We present a set of interactive editing solutions for performance-based facial animation. The presented solutions allow artists to enhance the result of the automatic solve-retarget with a few tweaks.

Piku Piku Interpolation: An artist-guided sampling algorithm for synthesizing detail applied to facial animation
Author(s)/Presenter(s): Richard Andrew Roberts, CMIC, Victoria University of Wellington, New Zealand; Rafael Kuffner dos Anjos, CMIC, Victoria University of Wellington, New Zealand; Ken Anjyo, CMIC, Victoria University of Wellington; OLM Digital, Inc., Japan; J.P. Lewis, Victoria University of Wellington, United States of America
Abstract: We present a new sampling algorithm that adds realism to early-stage facial animation by recreating detail observed in FACS data extracted from videos.
Saliency Diagrams: A tool for analyzing animation through the relative importance of keyposes
Author(s)/Presenter(s): Nicolas Xuan Tan Nghiem, Visual Media Lab, KAIST; École Polytechnique, France; Richard Roberts, CMIC, Victoria University of Wellington, New Zealand; JP Lewis, Victoria University, New Zealand; Junyong Noh, Visual Media Lab, KAIST, South Korea
Abstract: In this paper, we take inspiration from keyframe animation to compute what we call the saliency diagram of an animation, which can be used to analyze the motion.