Inverse Dynamic Hair Modeling with Frictional Contact

Alexandre Derouet-Jourdan, Florence Bertails-Descoubes, Gilles Daviet, Joëlle Thollot
ACM Transactions on Graphics, November 2013 (Proceedings of the ACM SIGGRAPH Asia 2013 Conference)


In recent years, considerable progress has been made in accurately acquiring the geometry of human hair, largely improving the realism of virtual characters. In parallel, rich and robust physics-based simulators have been successfully designed to capture the intricate dynamics of hair due to contact and friction. However, there currently exists no consistent pipeline for converting a given hair geometry into a realistic physics-based hair model. Current approaches simply initialize the hair simulator with the input geometry in the absence of external forces. This results in an undesired sagging effect when the dynamic simulation is started, which ruins much of the effort put into the accurate design and/or capture of the input hairstyle. In this paper we propose the first method that consistently and robustly accounts for surrounding forces (gravity and frictional contacts, including hair self-contacts) when converting a geometric hairstyle into a physics-based hair model. Taking an arbitrary hair geometry as input, together with a corresponding body mesh, we interpret the hair shape as a static equilibrium configuration of a hair simulator in the presence of gravity as well as hair-body and hair-hair frictional contacts. Assuming hair parameters are homogeneous and lie in a plausible range of physical values, we show that this large, underdetermined inverse problem can be formulated as a well-posed constrained optimization problem, which can be robustly and efficiently solved by leveraging the frictional contact solver of the direct hair simulator. Our method was successfully applied to the animation of various hair geometries, ranging from synthetic hairstyles manually designed by an artist to the most recent human hair data reconstructed from capture.
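To give a flavor of the inverse-statics idea in the simplest possible setting (this is a toy illustration, not the paper's method, which handles elastic rods with frictional contact): model a single strand as a chain of rigid segments linked by torsional springs, with the root clamped. Requiring the target pose to be a static equilibrium under gravity yields, joint by joint, a closed-form expression for the rest (natural) joint angles. All segment parameters below are illustrative placeholders.

```python
import math

def inverse_statics_rest_angles(thetas, seg_len=0.02, seg_mass=1e-4,
                                stiffness=5e-4, g=9.81):
    """thetas[i]: target absolute angle of segment i, measured from the
    downward vertical. Returns rest joint angles such that the target
    pose is a static equilibrium under gravity (contacts ignored)."""
    n = len(thetas)
    # Joint positions, root clamped at the origin, y pointing up.
    xs, ys = [0.0], [0.0]
    for t in thetas:
        xs.append(xs[-1] + seg_len * math.sin(t))
        ys.append(ys[-1] - seg_len * math.cos(t))
    rest = []
    for i in range(n):
        # Gravity torque about joint i from all segments at or below it;
        # the lever arm is each segment midpoint's horizontal offset.
        tau_g = 0.0
        for j in range(i, n):
            x_cm = 0.5 * (xs[j] + xs[j + 1])
            tau_g += -seg_mass * g * (x_cm - xs[i])
        # Relative joint angle in the target pose (root measured from vertical).
        phi = thetas[i] - (thetas[i - 1] if i > 0 else 0.0)
        # Equilibrium: spring torque k * (rest - phi) must cancel tau_g.
        rest.append(phi - tau_g / stiffness)
    return rest
```

A vertical target pose yields zero rest angles, while a tilted pose yields rest angles that curl beyond the target, pre-compensating the sag that gravity would otherwise introduce. The paper's actual formulation generalizes this balance to super-helical rods and adds the non-smooth Coulomb friction constraints, which is what makes the problem a constrained optimization rather than a per-joint closed form.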



We would like to thank Laurence Boissieux for creating the character meshes and motions as well as the synthetic wavy hairstyle, and Romain Casati for producing many of the final renderings using the open-source YafaRay raytracer. We are also very grateful to Tomas Lay Herrera, Arno Zinke, Andreas Weber (Bonn University) and Linjie Luo, Hao Li, Szymon Rusinkiewicz (Princeton University) for sharing with us their latest captured hair data. Finally, we would like to thank the anonymous reviewers for their useful comments.