PG2024 Conference Papers and Posters
Browsing PG2024 Conference Papers and Posters by Issue Date
Now showing 1–20 of 57
Item Img2PatchSeqAD: Industrial Image Anomaly Detection Based on Image Patch Sequence (The Eurographics Association, 2024)
Liu, Yang; Ji, Ya Tu; Xue, Xiang; Xu, H. T.; Ren, Qing Dao Er Ji; Shi, Bao; Wu, N. E.; Lu, M.; Xu, Xuan Xuan; Guo, H. X.; Wang, L.; Dai, L. J.; Yao, Miao Miao; Li, Xiao Mei; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
In the domain of industrial Visual Anomaly Detection (VAD), methods based on image reconstruction are the most popular and successful approaches. However, current image reconstruction methods rely on global image information, which proves to be both blind and inefficient for anomaly detection tasks. Our approach tackles these limitations by taking advantage of neighboring image patches to assess the presence of anomalies in the current image and then selectively reconstructing those patches. In this paper, we introduce a novel architecture for image anomaly detection, named Img2PatchSeqAD. Specifically, we employ a row-wise scanning method to construct sequences of image patches and design a network framework based on an image patch sequence encoder-decoder structure. Additionally, we utilize the KAN model and the ELA attention mechanism to develop methods for image patch vectorization and establish an image reconstruction pipeline. Experimental results on the MVTec-AD and VisA datasets demonstrate the effectiveness of our approach, achieving localization and detection scores of 81.3 (AUROC) and 91.9 (AP) on the multi-class MVTec-AD dataset.

Item Continuous Representation based Internal Self-supporting Structure via Ellipsoid Hollowing for 3D Printing (The Eurographics Association, 2024)
Wang, Shengfa; Yang, Jun; Hu, Jiangbei; Lei, Na; Luo, Zhongxuan; Liu, Ligang; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Hollowing is an effective way to achieve lightweight objectives by removing material from the interior volume while maintaining feasible mechanical properties.
However, hollowed models often necessitate the use of additional support materials to prevent collapse during the printing process, which can substantially negate the benefits of weight reduction. We introduce a framework for designing and optimizing self-supporting infill cavities, which are represented and optimized directly using continuous functions based on ellipsoids. Ellipsoids are favored as filling structures due to their advantageous properties, including their self-supporting nature, precise mathematical definability, variable controllability, and stress-concentration mitigation capabilities. Thanks to this explicit definability, we formulate the creation of self-supporting infill cavities as a structural stiffness optimization problem using function representations. The function representation eliminates the need for remeshing to depict structures and shapes, thereby enabling the direct computation of integrals and gradients on the functions. Based on these representations, we propose an efficient optimization strategy to determine the shapes, positions, and topology of the infill cavities, with the goal of achieving multiple objectives, including minimizing material cost, maximizing structural stiffness, and ensuring self-support. We perform various experiments to validate the effectiveness and convergence of our approach. Moreover, we demonstrate the self-supporting property and stability of the optimized structures through actual 3D printing trials and real mechanical testing.

Item CKD-LQPOSE: Towards a Real-World Low-quality Cross-Task Distilled Pose Estimation Architecture (The Eurographics Association, 2024)
Liu, Tao; Yao, Beiji; Huang, Jun; Wang, Ya; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Although human pose estimation (HPE) methods have achieved promising results, they still struggle in real-world low-quality (LQ) scenarios.
Moreover, due to the general lack of modeling of LQ information in current public HPE datasets, it is difficult to accurately evaluate the performance of HPE methods in LQ scenarios. Hence, we propose the novel CKD-LQPose architecture, the first HPE architecture to fuse cross-task feature information, using a cross-task distillation method to merge HPE information with well-quality (WQ) information. The CKD-LQPose architecture effectively enables adaptive feature learning from LQ images and improves their quality to enhance HPE performance. Additionally, we introduce the PatchWQ-Gan module to obtain WQ information and the refined transformer decoder (RTD) module to refine the features further. In the inference stage, CKD-LQPose removes the PatchWQ-Gan and RTD modules to reduce the computational burden. Furthermore, to accurately evaluate HPE methods in LQ scenarios, we develop the RLQPose-DS test benchmark. Extensive experiments on RLQPose-DS, real-world images, and LQ versions of well-known datasets such as COCO, MPII, and CrowdPose show that CKD-LQPose outperforms state-of-the-art approaches by a large margin, demonstrating its effectiveness in real-world LQ scenarios.

Item MGS-SLAM: Dense RGB-D SLAM via Multi-level Gaussian Splatting (The Eurographics Association, 2024)
Wang, Xu; Liu, Ying; Chen, Xiaojun; Wu, Jialin; Zhang, Xiaohao; Li, Ruihui; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Simultaneous localization and mapping (SLAM) is a key technology for scene perception, localization, and map construction. 3D Gaussian Splatting (3DGS), as a powerful method for geometric and appearance representation, has brought higher performance to SLAM systems. However, existing methods based on 3D Gaussian representation use a single level of 3D Gaussians for the entire scene, which prevents them from effectively capturing the geometric shapes and texture details of all objects in the scene.
In this work, we propose a monocular dense RGB-D SLAM system that integrates multi-level features, achieved by using different levels of Gaussians to separately reconstruct geometric shapes and texture details. Specifically, through the Fourier transform, we capture the geometric shapes (low frequency) and texture details (high frequency) of the scene in the frequency domain, serving as the initial conditions for the Gaussian distributions. Additionally, to address the issue of differing rendering outcomes (such as specular reflections) for the same 3D Gaussian under different viewpoints, we integrate locally adaptive Gaussians and local optimization techniques to compensate for the discrepancies introduced by the 3D Gaussians across different viewpoints. Extensive quantitative and qualitative results demonstrate that our method outperforms the state-of-the-art methods.

Item Inverse Rendering of Translucent Objects with Shape-Adaptive Importance Sampling (The Eurographics Association, 2024)
Son, Jooeun; Jung, Yucheol; Lee, Gyeongmin; Kim, Soongjin; Lee, Joo Ho; Lee, Seungyong; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Subsurface scattering is ubiquitous in organic materials and has been widely researched in computer graphics. Inverse rendering of subsurface scattering, however, is often constrained by the planar-geometry assumption of traditional analytic Bidirectional Surface Scattering Reflectance Distribution Functions (BSSRDFs). To address this issue, a shape-adaptive BSSRDF model has been proposed to render translucent objects on curved geometry with high accuracy. In this paper, we leverage this model to estimate subsurface scattering parameters for inverse rendering. We compute the finite difference of the rendering equation for subsurface scattering and iteratively update the material parameters.
We demonstrate the performance of our shape-adaptive inverse rendering model by analyzing the estimation accuracy and comparing it to inverse rendering with plane-based BSSRDF models and volumetric methods.

Item Pacific Graphics 2024 - Conference Papers and Posters: Frontmatter (The Eurographics Association, 2024)
Chen, Renjie; Ritschel, Tobias; Whiting, Emily; Chen, Renjie; Ritschel, Tobias; Whiting, Emily

Item Data Parallel Ray Tracing of Massive Scenes based on Neural Proxy (The Eurographics Association, 2024)
Xu, Shunkang; Xu, Xiang; Xu, Yanning; Wang, Lu; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Data-parallel ray tracing is an important method for rendering massive scenes that exceed local memory. Nevertheless, its efficacy is markedly contingent on bandwidth, owing to the substantial ray data transfer during the rendering process. In this paper, we advance the utilization of neural representation geometries in data-parallel rendering to reduce ray forwarding and intersection overheads. To this end, we introduce a lightweight geometric neural representation, denoted a ''neural proxy.'' Utilizing our neural proxies, we propose an efficient data-parallel ray tracing framework that significantly reduces ray transmission and intersection overheads. Compared to state-of-the-art approaches, our method achieves a 2.29–3.36× speedup with almost imperceptible image quality loss.

Item P-NLOS: A Prompt-Based Method for Robust NLOS Imaging (The Eurographics Association, 2024)
Su, Xiongfei; Zhu, Tianyi; Liu, Lina; Zhang, Yuanlong; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
The field of non-line-of-sight (NLOS) imaging is experiencing rapid advancement, offering the potential to reveal hidden scenes that are otherwise obscured from direct view. Despite this promise, NLOS systems face obstacles in managing a variety of sampling noise, as well as spatial and temporal variations, which limit their practical deployment.
This paper introduces a novel strategy to overcome these challenges. It employs prompts to encode latent information, which is then leveraged to dynamically guide the NLOS reconstruction network. The proposed method, P-NLOS, consists of two branches: a reconstruction branch that handles the restoration of sampled information, and a prompting branch that captures the original information. The prompting branch supplies reliable content to the reconstruction branch, thereby better guiding the reconstruction process and improving the quality of the recovered images. Overall, P-NLOS demonstrates robustness in real-world applications by effectively handling a wide range of corruption types in NLOS reconstruction tasks, including varying noise levels, diverse blur kernels, and temporal resolution variations.

Item TPAM: Transferable Perceptual-constrained Adversarial Meshes (The Eurographics Association, 2024)
Kang, Tengjia; Li, Yuezun; Zhou, Jiaran; Xin, Shiqing; Dong, Junyu; Tu, Changhe; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Triangle meshes are widely used in 3D data representation due to their efficacy in capturing complex surfaces. Mesh classification, crucial in various applications, has typically been tackled by Deep Neural Networks (DNNs) following advances in deep learning. However, these mesh networks have been proven vulnerable to adversarial attacks, where slight distortions to meshes can cause large prediction errors, posing significant security risks. Although several mesh attack methods have been proposed recently, two key aspects, Stealthiness and Transferability, remain underexplored. This paper introduces a new method called Transferable Perceptual-constrained Adversarial Meshes (TPAM) to further investigate these aspects of adversarial attacks.
Specifically, we present a Perceptual-constrained objective term to restrict distortions and introduce an Adaptive Geometry-aware Attack Optimization strategy that adjusts attack strength iteratively based on local geometric frequencies, striking a good balance between stealthiness and attack accuracy. Moreover, we propose a Bayesian Surrogate Network to enhance transferability and introduce a new metric, the Area Under Accuracy (AUACC), for comprehensive performance evaluation. Experiments on various mesh classifiers demonstrate the effectiveness of our method in both white-box and black-box settings, enhancing attack stealthiness and transferability across multiple networks. Our research can enhance the understanding of DNNs, thus improving the robustness of mesh classifiers. The code is available at https://github.com/Tengjia-Kang/TPAM.

Item Physics-Informed Neural Fields with Neural Implicit Surface for Fluid Reconstruction (The Eurographics Association, 2024)
Duan, Zheng; Ren, Zhong; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Recovering fluid density and velocity from multi-view RGB videos poses a formidable challenge. Existing solutions typically assume knowledge of obstacles and lighting, or are designed for simple fluid scenes without obstacles or complex lighting. Addressing these challenges, our study presents a novel hybrid model named PINFS, which fuses the capabilities of Physics-Informed Neural Fields (PINF) and Neural Implicit Surfaces (NeuS) to accurately reconstruct scenes containing smoke. By combining the capabilities of SIREN-NeRFt in PINF for creating realistic smoke representations with the accuracy of NeuS in depicting solid obstacles, PINFS excels at providing detailed reconstructions of smoke scenes with improved visual authenticity and physical precision. PINFS distinguishes itself by incorporating the solid's view-independent opaque density and addressing Neumann boundary conditions through signed distances from NeuS.
This results in a more realistic and physically plausible depiction of smoke behavior in dynamic scenarios. Comprehensive evaluations on synthetic and real-world datasets confirm the model's superior performance in complex scenes with obstacles. PINFS introduces a novel framework for realistic and physically consistent rendering of complex fluid dynamics scenarios, pushing the boundaries of mixed physical and neural-based approaches. The code is available at https://github.com/zduan3/pinfs_code.

Item DreamMapping: High-Fidelity Text-to-3D Generation via Variational Distribution Mapping (The Eurographics Association, 2024)
Cai, Zeyu; Wang, Duotun; Liang, Yixun; Shao, Zhijing; Chen, Ying-Cong; Zhan, Xiaohang; Wang, Zeyu; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Score Distillation Sampling (SDS) has emerged as a prevalent technique for text-to-3D generation, enabling 3D content creation by distilling view-dependent information from text-to-2D guidance. However, SDS-based methods frequently exhibit shortcomings such as over-saturated color and excess smoothness. In this paper, we conduct a thorough analysis of SDS and refine its formulation, finding that its core design is to model the distribution of rendered images. Following this insight, we introduce a novel strategy called Variational Distribution Mapping (VDM), which expedites the distribution modeling process by regarding the rendered images as instances of degradation from diffusion-based generation. This design enables efficient training of the variational distribution by skipping the calculation of the Jacobians in the diffusion U-Net. We also introduce timestep-dependent Distribution Coefficient Annealing (DCA) to further improve distillation precision. Leveraging VDM and DCA, we use Gaussian Splatting as the 3D representation and build a text-to-3D generation framework.
Extensive experiments and evaluations demonstrate the capability of VDM and DCA to generate high-fidelity and realistic assets with optimization efficiency.

Item GazeMoDiff: Gaze-guided Diffusion Model for Stochastic Human Motion Prediction (The Eurographics Association, 2024)
Yan, Haodong; Hu, Zhiming; Schmitt, Syn; Bulling, Andreas; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Human motion prediction is important for many virtual and augmented reality (VR/AR) applications such as collision avoidance and realistic avatar generation. Existing methods have synthesised body motion only from observed past motion, despite the fact that human eye gaze is known to correlate strongly with body movements and is readily available in recent VR/AR headsets. We present GazeMoDiff, a novel gaze-guided denoising diffusion model to generate stochastic human motions. Our method first uses a gaze encoder and a motion encoder to extract gaze and motion features respectively, then employs a graph attention network to fuse these features, and finally injects the gaze-motion features into a noise prediction network via a cross-attention mechanism to progressively generate multiple plausible future human motions. Extensive experiments on the MoGaze and GIMO datasets demonstrate that our method outperforms the state-of-the-art methods by a large margin in terms of multi-modal final displacement error (17.3% on MoGaze and 13.3% on GIMO). We further conducted a human study (N=21) and validated that the motions generated by our method were perceived as both more precise and more realistic than those of prior methods.
Taken together, these results reveal the significant information content of eye gaze for stochastic human motion prediction, as well as the effectiveness of our method in exploiting it.

Item TSDN: Transport-based Stylization for Dynamic NeRF (The Eurographics Association, 2024)
Gong, Yuning; Song, Mingqing; Ren, Xiaohua; Liao, Yuanjun; Zhang, Yanci; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
While previous Neural Radiance Fields (NeRF) stylization methods achieve visually appealing results in transferring color style to static NeRF scenes, they lack the ability to stylize dynamic NeRF scenes with geometrically stylized features (such as brushstrokes or feature elements from artists' works), which is also important for style transfer. However, directly stylizing each frame of a dynamic NeRF independently with geometrically stylized features leads to flickering results due to poor feature alignment. To overcome these problems, we propose Transport-based Stylization for Dynamic NeRF (TSDN), a new dynamic NeRF stylization method that is able to stylize geometric features and align them with the motion in the scene. TSDN utilizes stylization-guiding velocity fields to advect a dynamic NeRF toward stylized results and then transfers these velocity fields between frames to maintain feature alignment. To deal with noisy stylized results arising from the ambiguity of the deformation field, we also propose a feature advection scheme and a novel regularization function specific to dynamic NeRF. Experimental results show that our method can stylize dynamic scenes with detailed geometrically stylized features from videos or multi-view image inputs, while preserving the original color style if desired.
This capability is not present in previous video stylization methods.

Item High-Quality Cage Generation Based on SDF (The Eurographics Association, 2024)
Qiu, Hao; Liao, Wentao; Chen, Renjie; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Cages are widely used in various applications of computer graphics, including physically based rendering, shape deformation, and physical simulation. Given an input shape, we present an efficient and robust method for the automatic construction of a high-quality cage. Our method follows the envelope-and-simplify paradigm. In the enveloping stage, an isosurface enclosing the model is extracted from the signed distance field (SDF) of the shape. Leveraging the versatility of the SDF, we propose a straightforward modification to it that enables the resulting isosurface to have a better topological structure and capture the details of the shape well. In the simplification stage, we use the quadric error metric to simplify the isosurface and construct a cage, while rigorously ensuring that the cage remains enclosing and does not self-intersect. We further optimize various qualities of the cage for different applications, including the distance to the original mesh and meshing quality. The cage generated by our method is guaranteed to strictly enclose the input shape, be free of self-intersection, have the user-specified complexity, and provide a good approximation to the input, as required by various applications. Through extensive experiments, we demonstrate that our method is robust and efficient for a wide variety of shapes with complex geometry and topology.

Item Geodesic Distance Propagation Across Open Boundaries (The Eurographics Association, 2024)
Chen, Shuangmin; Yue, Zijia; Wang, Wensong; Xin, Shiqing; Tu, Changhe; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
The computation of geodesic distances on curved surfaces is a fundamental operation in digital geometry processing.
Throughout distance propagation, each surface point assumes the dual role of receiver and transmitter. Despite substantial research on watertight triangle meshes, algorithms designed for broken surfaces, i.e., those afflicted with open-boundary defects, remain scarce. Current algorithms primarily focus on bridging holes and gaps in the embedding space to facilitate distance propagation across boundaries, but fall short in addressing large open-boundary defects in highly curved regions. In this paper, we explore the prospect of inferring defect-tolerant geodesics exclusively within the intrinsic space. We observe that open-boundary defects can give rise to a ''shadow'' region, where the shortest path touches open boundaries. Based on this observation, we make three key adaptations to the fast marching method (FMM). First, boundary points now function exclusively as distance receivers, impeding any further distance propagation. Second, bidirectional distance propagation is permitted, allowing geodesic distances in the shadow region to be predicted from those in the visible region (even if the visible region is somewhat more distant from the source). Last, we redefine priorities to harmonize distance propagation between the shadow and visible regions. Being fully intrinsic, our algorithm distinguishes itself from existing counterparts. Experimental results showcase its exceptional performance and accuracy, even in the presence of large and irregular open boundaries.

Item 3D-SSGAN: Lifting 2D Semantics for 3D-Aware Compositional Portrait Synthesis (The Eurographics Association, 2024)
Liu, Ruiqi; Zheng, Peng; Wang, Ye; Ma, Rui; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Existing 3D-aware portrait synthesis methods can generate impressive high-quality images while preserving strong 3D consistency. However, most of them cannot support fine-grained part-level control over the synthesized images.
Conversely, some GAN-based 2D portrait synthesis methods can achieve clear disentanglement of facial regions, but they cannot preserve view consistency due to a lack of 3D modeling ability. To address these issues, we propose 3D-SSGAN, a novel framework for 3D-aware compositional portrait image synthesis. First, a simple yet effective depth-guided 2D-to-3D lifting module maps the generated 2D part features and semantics to 3D. Then, a volume renderer with a novel 3D-aware semantic mask renderer is utilized to produce the composed face features and corresponding masks. The whole framework is trained end-to-end by discriminating between real and synthesized 2D images and their semantic masks. Quantitative and qualitative evaluations demonstrate the superiority of 3D-SSGAN in controllable part-level synthesis while preserving 3D view consistency.

Item Deep-PE: A Learning-Based Pose Evaluator for Point Cloud Registration (The Eurographics Association, 2024)
Gao, Junjie; Wang, Chongjian; Ding, Zhongjun; Chen, Shuangmin; Xin, Shiqing; Tu, Changhe; Wang, Wenping; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
In point cloud registration, the most prevalent pose evaluation approaches are statistics-based, identifying the optimal transformation by maximizing the number of consistent correspondences. However, registration recall decreases significantly when point clouds exhibit a low overlap ratio, despite efforts in designing feature descriptors and establishing correspondences. In this paper, we introduce Deep-PE, a lightweight, learning-based pose evaluator designed to enhance the accuracy of pose selection, especially in challenging point cloud scenarios with low overlap. Our network incorporates a Pose-Aware Attention (PAA) module to simulate and learn the alignment status of point clouds under various candidate poses, alongside a Pose Confidence Prediction (PCP) module that predicts the likelihood of successful registration.
These two modules facilitate the learning of both local and global alignment priors. Extensive tests across multiple benchmarks confirm the effectiveness of Deep-PE. Notably, on 3DLoMatch, which has a low overlap ratio, Deep-PE significantly outperforms state-of-the-art methods by at least 8% and 11% in registration recall under the handcrafted FPFH and learning-based FCGF descriptors, respectively. To the best of our knowledge, this is the first study to utilize deep learning to select the optimal pose without the explicit need for input correspondences.

Item Real-Time Rendering of Glints in the Presence of Area Lights (The Eurographics Association, 2024)
Kneiphof, Tom; Klein, Reinhard; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Many real-world materials are characterized by a glittery appearance. Reproducing this effect in physically based renderings is a challenging problem due to its discrete nature, especially in real-time applications, which require a consistently low runtime. Recent work focuses on glittery appearance illuminated by infinitesimally small light sources only. For light sources like the sun, this approximation is reasonable. In the real world, however, all light sources are fundamentally area light sources. In this paper, we derive an efficient method for rendering glints illuminated by spatially constant diffuse area lights in real time. To this end, we require an adequate estimate of the probability that a single microfacet is correctly oriented for reflection from the source to the observer. A good estimate is achieved either using linearly transformed cosines (LTCs) for large light sources, or a locally constant approximation of the normal distribution for small spherical caps of light directions. To compute the resulting number of reflecting microfacets, we employ a counting model based on the binomial distribution.
In our evaluation, we demonstrate the visual accuracy of our approach, which is easily integrated into existing real-time rendering frameworks, especially if they already implement shading for area lights using LTCs and a counting model for glint shading under point and directional illumination. Beyond the overhead of these preexisting constituents, our method adds little to no additional overhead.

Item CNCUR: A simple 2D Curve Reconstruction Algorithm based on constrained neighbours (The Eurographics Association, 2024)
Antony, Joms; Reghunath, Minu; Muthuganapathy, Ramanathan; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Given a planar point set S = {v1, ..., vn} ⊂ R² sampled from an unknown curve Σ, the goal is to obtain a piecewise-linear reconstruction of the curve from S that best approximates Σ. In this work, we propose a simple and intuitive Delaunay triangulation (DT)-based algorithm for curve reconstruction. We start by constructing a Delaunay triangulation of the input point set. Next, we identify the set of edges E_Np in the natural neighborhood of each point p in the DT. From E_Np, we retain the two shortest edges connected to each point. To handle open curves, one of the retained edges has to be removed based on a parameter δ, the allowable ratio between the maximum and minimum edge lengths, which determines when the longer of the two edges is eliminated.
Our algorithm inherently handles self-intersections, multiple components, sharp corners, and different levels of Gaussian noise, all without requiring any further parameters, pre-processing, or post-processing.

Item PhysHand: A Hand Simulation Model with Physiological Geometry, Physical Deformation, and Accurate Contact Handling (The Eurographics Association, 2024)
Sun, Mingyang; Kou, Dongliang; Yuan, Ruisheng; Yang, Dingkang; Zhai, Peng; Zhao, Xiao; Jiang, Yang; Li, Xiong; Li, Jingchen; Zhang, Lihua; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
In virtual Hand-Object Interaction (HOI) scenarios, the authenticity of the hand's deformation is important to the immersive experience, e.g., for natural manipulation or tactile feedback. Unrealistic deformation arises from simplified hand geometry, neglect of the hand's different physics attributes, and penetration due to imprecise contact handling. To address these problems, we propose PhysHand, a novel hand simulation model that enhances the realism of deformation in HOI. First, we construct a physiologically plausible geometry: a layered mesh with a ''skin-flesh-skeleton'' structure. Second, to satisfy the distinct physics features of different soft tissues, we adopt a constraint-based dynamics framework with carefully designed layer-corresponding constraints that keep the flesh attached and the skin smooth. Finally, we employ an SDF-based method to eliminate the penetration caused by contacts and enhance its accuracy by introducing a novel multi-resolution querying strategy. Extensive experiments demonstrate the outstanding performance of PhysHand in computing deformations and handling contacts. Compared to existing methods, PhysHand: 1) computes both physiologically and physically plausible deformation; 2) significantly reduces the depth and count of penetrations in HOI.
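Of the abstracts above, the CNCUR entry describes its algorithm concretely enough to sketch: build a Delaunay triangulation, then keep only the two shortest Delaunay edges incident to each sample point. The sketch below follows that description; the mutual-retention rule (an edge survives only if both of its endpoints keep it) and the closed-curve test case are our own assumptions, not the paper's exact formulation, and the δ-based pruning step for open curves is omitted.

```python
# Rough sketch of a CNCUR-style DT-based curve reconstruction.
import numpy as np
from scipy.spatial import Delaunay

def cncur_sketch(points):
    """Reconstruct a piecewise-linear curve from 2D samples.

    points : (n, 2) array of samples of a densely sampled closed curve.
    Returns a set of index pairs (i, j), i < j, forming the curve edges.
    """
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:            # every triangle contributes 3 edges
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = int(simplex[a]), int(simplex[b])
            edges.add((min(i, j), max(i, j)))
    # gather the Delaunay edges incident to each point, with their lengths
    incident = {}
    for i, j in edges:
        d = float(np.linalg.norm(points[i] - points[j]))
        incident.setdefault(i, []).append((d, (i, j)))
        incident.setdefault(j, []).append((d, (i, j)))
    # each point votes for its two shortest incident edges
    keep = {p: {e for _, e in sorted(lst)[:2]} for p, lst in incident.items()}
    # retain an edge only when both endpoints voted for it (our assumption)
    return {(i, j) for (i, j) in edges if (i, j) in keep[i] and (i, j) in keep[j]}
```

On a densely sampled closed curve such as an ellipse, this mutual rule recovers the sampling polygon (every vertex ends up with degree two); real open curves would additionally need the δ-based edge removal described in the abstract.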