44-Issue 6
Browsing 44-Issue 6 by Subject "behavioural animation" (2 items)
Herds From Video: Learning a Microscopic Herd Model From Macroscopic Motion Data
The Eurographics Association and John Wiley & Sons Ltd., 2025
Gong, Xianjin; Gain, James; Rohmer, Damien; Lyonnet, Sixtine; Pettré, Julien; Cani, Marie-Paule; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger

We present a method for animating herds that automatically tunes a microscopic herd model based on a short video clip of real animals. Our method handles videos with dense herds, where individual animal motion cannot be separated out. Our contribution is a novel framework for extracting macroscopic herd behaviour from such video clips and then deriving the microscopic agent parameters that best match this behaviour. To support this learning process, we extend standard agent models to provide a separation between leaders and followers, better match the occlusion and field-of-view limitations of real animals, support differentiable parameter optimization, and improve authoring control. We validate the method by showing that, once optimized, the social force and perception parameters of the resulting herd model are accurate enough to predict subsequent frames in the video, even for macroscopic properties not directly incorporated in the optimization process. Furthermore, the extracted herding characteristics can be applied to any terrain with a palette and region-painting approach that generalizes to different herd sizes and leader trajectories. This enables the authoring of herd animations in new environments while preserving learned behaviour.

MPACT: Mesoscopic Profiling and Abstraction of Crowd Trajectories
The Eurographics Association and John Wiley & Sons Ltd., 2025
Lemonari, Marilena; Panayiotou, Andreas; Kyriakou, Theodoros; Pelechano, Nuria; Chrysanthou, Yiorgos; Aristidou, Andreas; Charalambous, Panayiotis; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger

Simulating believable crowds for applications like movies or games is challenging due to the many components that comprise a realistic outcome. Users typically need to manually tune a large number of simulation parameters until they reach the desired results. We introduce MPACT, a framework that leverages image-based encoding to convert unlabelled crowd data into meaningful and controllable parameters for crowd generation. In essence, we train a parameter prediction network on a diverse set of synthetic data, which includes pairs of images and corresponding crowd profiles. The learned parameter space enables: (a) implicit crowd authoring and control, allowing users to define desired crowd scenarios using real-world trajectory data, and (b) crowd analysis, facilitating the identification of crowd behaviours in the input and the classification of unseen scenarios through operations within the latent space. We quantitatively and qualitatively evaluate our framework, comparing it against real-world data and selected baselines, while also conducting user studies with expert and novice users. Our experiments show that the generated crowds score high in terms of simulation believability, plausibility and crowd behaviour faithfulness.
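To make the Herds From Video idea concrete, the sketch below simulates a minimal leader/follower herd with attraction and separation forces, then recovers a cohesion gain so that a macroscopic statistic (mean distance to the leader) matches a target. Everything here is an illustrative assumption: the force model, the parameter names, and the grid search stand in for the paper's own agent model and its differentiable optimizer.

```python
import numpy as np

def herd_step(pos, leader_idx, cohesion=0.05, separation=0.5, dt=0.1):
    """One synchronous update of a toy leader/follower herd.
    pos is an (N, 2) float array; the leader drifts slowly rightward
    while followers are attracted to it and repelled by close neighbours."""
    leader = pos[leader_idx]
    new_pos = pos.copy()
    for i in range(len(pos)):
        if i == leader_idx:
            new_pos[i] = new_pos[i] + np.array([0.02, 0.0]) * dt
            continue
        # attraction toward the leader (cohesion force)
        attract = cohesion * (leader - pos[i])
        # repulsion from neighbours closer than the separation radius
        repel = np.zeros(2)
        for j in range(len(pos)):
            if j == i:
                continue
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d)
            if 1e-6 < dist < separation:
                repel += d / dist**2
        new_pos[i] = new_pos[i] + (attract + 0.01 * repel) * dt
    return new_pos

def macroscopic_spread(pos, leader_idx):
    """Mean follower distance to the leader: a macroscopic statistic of
    the kind one could measure from video without tracking individuals."""
    d = np.linalg.norm(pos - pos[leader_idx], axis=1)
    return d.sum() / (len(pos) - 1)

def fit_cohesion(target_spread, pos0, leader_idx, steps=50):
    """Pick the cohesion gain whose simulated spread best matches the
    target (a crude grid search, standing in for gradient-based fitting)."""
    best_c, best_err = None, float("inf")
    for c in np.linspace(0.01, 0.2, 20):
        p = pos0.copy()
        for _ in range(steps):
            p = herd_step(p, leader_idx, cohesion=c)
        err = abs(macroscopic_spread(p, leader_idx) - target_spread)
        if err < best_err:
            best_c, best_err = c, err
    return best_c
```

Because the fitting loss is computed only on a macroscopic statistic, the same recipe works even when individual animals cannot be tracked in the source footage, which is the regime the paper targets.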
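MPACT's pipeline rests on an image-based encoding of crowd data. A minimal sketch of that idea is to rasterise a set of 2D trajectories into a fixed-size density image that a parameter prediction network could consume; the function below is an assumption for illustration, not the paper's actual encoder.

```python
import numpy as np

def trajectories_to_image(trajs, res=32, bounds=(-1.0, 1.0)):
    """Accumulate trajectory points into a res x res density image,
    normalised to sum to 1. trajs is a list of (T, 2) position arrays
    assumed to lie (mostly) within the square given by bounds."""
    lo, hi = bounds
    img = np.zeros((res, res))
    for traj in trajs:
        # map world positions into pixel indices, clipping outliers
        idx = ((traj - lo) / (hi - lo) * res).astype(int)
        idx = np.clip(idx, 0, res - 1)
        for x, y in idx:
            img[y, x] += 1.0
    total = img.sum()
    return img / total if total > 0 else img
```

A fixed-size image like this gives unlabelled trajectory data a uniform shape, which is what lets a single network map diverse crowd recordings into one parameter space for authoring and analysis.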