44-Issue 7
Browsing 44-Issue 7 by Issue Date
Now showing 1-20 of 49
Item: Hybrid Sparse Transformer and Feature Alignment for Efficient Image Completion (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Chen, L.; Sun, Hao; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
In this paper, we propose an efficient single-stage hybrid architecture for image completion. Existing transformer-based image completion methods often struggle with accurate content restoration, largely due to their ineffective modeling of corrupted channel information and the attention noise introduced by softmax-based mechanisms, which results in blurry textures and distorted structures. Additionally, these methods frequently fail to maintain texture consistency, either relying on imprecise mask sampling or incurring substantial computational costs from complex similarity calculations. To address these limitations, we present two key contributions: a Hybrid Sparse Self-Attention (HSA) module and a Feature Alignment Module (FAM). The HSA module enhances structural recovery by decoupling spatial and channel attention with sparse activation, while the FAM enforces texture consistency by aligning encoder and decoder features via a mask-free, energy-gated mechanism without additional inference cost. Our method achieves state-of-the-art image completion results with the fastest inference speed among single-stage networks, as measured by PSNR, SSIM, FID, and LPIPS on the CelebA-HQ, Places2, and Paris datasets.
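To make the idea of decoupled, sparsified attention concrete, here is a minimal sketch of an attention layer split into spatial and channel branches, with softmax replaced by ReLU-renormalized weights so that negative-score positions contribute exactly zero. Shapes, names, and details are illustrative assumptions, not the authors' HSA implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledSparseAttention(nn.Module):
    """Illustrative attention with separate spatial and channel branches.
    ReLU-based weights replace softmax, so irrelevant (negative-score)
    positions are zeroed out instead of receiving small nonzero weight."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (B, N, C), N = H*W tokens
        B, N, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.reshape(B, N, self.heads, -1).transpose(1, 2)   # (B, h, N, d)
        k = k.reshape(B, N, self.heads, -1).transpose(1, 2)
        v = v.reshape(B, N, self.heads, -1).transpose(1, 2)

        # Spatial branch: token-to-token scores, sparsified by ReLU.
        attn = F.relu(q @ k.transpose(-2, -1)) / (k.shape[-1] ** 0.5)
        attn = attn / (attn.sum(-1, keepdim=True) + 1e-6)      # renormalize
        spatial = (attn @ v).transpose(1, 2).reshape(B, N, C)

        # Channel branch: channel-to-channel scores, same sparsification.
        cattn = F.relu(q.transpose(-2, -1) @ k) / (N ** 0.5)   # (B, h, d, d)
        cattn = cattn / (cattn.sum(-1, keepdim=True) + 1e-6)
        channel = (v @ cattn).transpose(1, 2).reshape(B, N, C)

        return self.proj(spatial + channel)
```

Rectifying and renormalizing scores is one common way to suppress the attention noise the abstract mentions, since softmax forces every position, relevant or not, to receive nonzero weight.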
Item: Feature Disentanglement in GANs for Photorealistic Multi-view Hair Transfer (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Xu, Jiayi; Wu, Zhengyang; Zhang, Chenming; Jin, Xiaogang; Ji, Yaohua; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Fast and highly realistic multi-view hair transfer plays a crucial role in evaluating the effectiveness of virtual hair try-on systems. However, GAN-based generation and editing methods face persistent challenges in feature disentanglement. Achieving pixel-level, attribute-specific modifications, such as changing hairstyle or hair color without affecting other facial features, remains a long-standing problem. To address this limitation, we propose a novel multi-view hair transfer framework that leverages a hair-only intermediate facial representation and a 3D-guided masking mechanism. Our approach disentangles triplane facial features into spatial geometric components and global style descriptors, enabling independent and precise control over hairstyle and hair color. By introducing a dedicated intermediate representation focused solely on hair and incorporating a two-stage feature fusion strategy guided by the generated 3D mask, our framework achieves fine-grained local editing across multiple viewpoints while preserving facial integrity and improving background consistency. Extensive experiments demonstrate that our method produces visually compelling and natural results in side-to-front view hair transfer tasks, offering a robust and flexible solution for high-fidelity hair reconstruction and manipulation.

Item: MF-SDF: Neural Implicit Surface Reconstruction using Mixed Incident Illumination and Fourier Feature Optimization (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Zhou, Xueyang; Shen, Xukun; Hu, Yong; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
The utilization of a neural implicit surface as a geometry representation has proven to be an effective multi-view surface reconstruction method. Despite the promising results achieved, reconstructing geometry from objects in real-world scenes remains challenging due to the interaction between surface materials and complex ambient light, as well as shadow effects caused by self-occlusion, making it a highly ill-posed problem. To address this challenge, we propose MF-SDF, a method that uses a hybrid neural network and spherical Gaussian representation to model environmental lighting, so that the model can express multiple light sources, including directional light such as outdoor sunlight, in real-world scenarios. Benefiting from this, our method effectively reconstructs coherent surfaces and accurately locates shadows on the surface. Furthermore, we adopt a shadow-aware multi-view photometric consistency loss, which mitigates the erroneous reconstructions of previous methods on surfaces containing shadows, thereby improving the overall smoothness of the surface. Additionally, unlike previous approaches that directly optimize spatial features, we propose a Fourier feature optimization method that directly optimizes the tensorial features in the frequency domain. By optimizing the high-frequency components, this approach further enhances the details of surface reconstruction. Finally, through experiments, we demonstrate that our method outperforms existing methods in terms of reconstruction accuracy on real captured data.

Item: Introducing Unbiased Depth into 2D Gaussian Splatting for High-accuracy Surface Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Yang, Yixin; Zhou, Yang; Huang, Hui; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Recently, 2D Gaussian Splatting (2DGS) has demonstrated superior geometry reconstruction quality to the popular 3DGS by using 2D surfels to approximate thin surfaces. However, it falls short when dealing with glossy surfaces, resulting in visible holes in these areas. We find that reflection discontinuity causes the issue: to fit the jump from diffuse to specular reflection at different viewing angles, a depth bias is introduced into the optimized Gaussian primitives. To address this, we first replace the depth distortion loss in 2DGS with a novel depth convergence loss, which imposes a strong constraint on depth continuity. Then, we rectify the depth criterion used to determine the actual surface, so that it fully accounts for all the intersecting Gaussians along the ray. Qualitative and quantitative evaluations across various datasets reveal that our method significantly improves reconstruction quality, with more complete and accurate surfaces than 2DGS. Code is available at https://github.com/XiaoXinyyx/Unbiased_Surfel.
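For orientation, splatting-style renderers typically composite a per-ray depth from all primitives the ray crosses (standard alpha compositing; the notation is ours, not the paper's):

$$\hat d \;=\; \sum_{i} T_i\,\alpha_i\,d_i, \qquad T_i \;=\; \prod_{j<i}\bigl(1-\alpha_j\bigr),$$

where $d_i$ and $\alpha_i$ are the depth and opacity of the $i$-th surfel intersected along the ray, ordered front to back. A surface criterion that keys on a single dominant Gaussian can inherit the depth bias described above; the rectified criterion in the paper instead accounts for all intersecting Gaussians along the ray.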
Item: Automatic Reconstruction of Woven Cloth from a Single Close-up Image (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Wu, Chenghao; Khattar, Apoorv; Zhu, Junqiu; Pettifer, Steve; Yan, Lingqi; Montazeri, Zahra; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Digital replication of woven fabrics presents significant challenges across a variety of sectors, from online retail to entertainment industries. To address this, we introduce an inverse rendering pipeline designed to estimate the pattern, geometry, and appearance parameters of woven fabrics given a single close-up image as input. Our work is capable of simultaneously optimizing both discrete and continuous parameters without manual intervention. It outputs a wide array of parameters, recovering discrete elements such as the weave pattern and ply and fiber counts using simulated annealing. It also recovers continuous parameters such as reflection and transmission components, aligning them with the target appearance through differentiable rendering. For irregularities caused by deformation and flyaways, we use 2D Gaussians to approximate them as a post-processing step. Our work does not pursue a perfect match of all fine details; rather, it targets an automatic, end-to-end reconstruction pipeline that is robust to slight camera rotations and room lighting conditions and runs within an acceptable time (15 minutes on CPU), unlike previous works, which are either expensive, require manual intervention, assume a given pattern, geometry, or appearance, or strictly control camera and lighting conditions.
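The discrete search can be pictured with a generic simulated-annealing loop. The state, energy, and neighborhood below are placeholders (for instance, a binary warp/weft pattern matrix scored by image difference after rendering), not the paper's actual objective.

```python
import math
import random

def simulated_annealing(init_state, energy, neighbor,
                        t0=1.0, t_min=1e-3, cooling=0.95, iters=100):
    """Generic simulated annealing over discrete states.
    `energy` scores a state (lower is better); `neighbor` proposes a
    random discrete mutation, e.g. flipping one weave-pattern bit."""
    state, e = init_state, energy(init_state)
    best, best_e = state, e
    t = t0
    while t > t_min:
        for _ in range(iters):
            cand = neighbor(state)
            ce = energy(cand)
            # Always accept downhill moves; accept uphill moves with
            # Boltzmann probability so the search can escape local minima.
            if ce < e or random.random() < math.exp((e - ce) / t):
                state, e = cand, ce
                if e < best_e:
                    best, best_e = state, e
        t *= cooling
    return best, best_e
```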
Item: Projective Displacement Mapping for Ray Traced Editable Surfaces (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Hoetzlein, Rama; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Displacement mapping is an important tool for modeling detailed geometric features. We explore the problem of authoring complex surfaces while ray tracing interactively. Current techniques for ray tracing displaced surfaces rely on acceleration structures that require dynamic rebuilding when edited. These techniques are typically used for massive static scenes or the compression of detailed source assets. Our interest lies in modeling and look development of artistic features with real-time ray tracing. We introduce projective displacement mapping as a direct sampling method combined with a hardware BVH. Quality and performance are improved over existing methods with smoothed displaced normals, thin-feature sampling, tight prism bounds and ray bilinear-patch intersections.

Item: Geometric Integration for Neural Control Variates (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Meister, Daniel; Harada, Takahiro; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Control variates are a variance-reduction technique for Monte Carlo integration. The principle is to approximate the integrand by a function that can be integrated analytically, and to integrate with the Monte Carlo method only the residual difference between the integrand and the approximation, obtaining an unbiased estimate. Neural networks are universal approximators that could potentially be used as a control variate. However, the challenge lies in the analytic integration, which is not possible in general. In this manuscript, we study one of the simplest neural network models, the multilayer perceptron (MLP) with continuous piecewise-linear activation functions, and its possible analytic integration. We propose an integration method based on integration-domain subdivision, employing techniques from computational geometry to solve this problem in 2D. We demonstrate that an MLP can be used as a control variate in combination with our integration method, showing applications in light transport simulation.
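The estimator behind the first two sentences can be written explicitly (standard control-variate form; the notation is ours): with $g$ an analytically integrable approximation of the integrand $f$, here an MLP with piecewise-linear activations, and samples $x_k \sim p$,

$$F = \int f(x)\,\mathrm{d}x \;\approx\; \underbrace{\int g(x)\,\mathrm{d}x}_{\text{analytic}} \;+\; \frac{1}{N}\sum_{k=1}^{N}\frac{f(x_k)-g(x_k)}{p(x_k)},$$

which is unbiased for any $g$ whose integral is known exactly, with variance shrinking as $g$ approaches $f$. For a continuous piecewise-linear MLP in 2D, the domain subdivides into polygonal regions on which $g$ is affine, so each analytic term reduces to integrating an affine function over a polygon.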
Item: Uncertainty-Aware Adjustment via Learnable Coefficients for Detailed 3D Reconstruction of Clothed Humans from Single Images (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Yang, Yadan; Li, Yunze; Ying, Fangli; Phaphuangwittayakul, Aniwat; Dhuny, Riyad; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Although single-image 3D human reconstruction has made significant progress in recent years, few of the current state-of-the-art methods can accurately restore the appearance and geometric details of loose clothing. To achieve high-quality reconstruction of a human body wearing loose clothing, we propose a learnable dynamic adjustment framework that integrates side-view features and the uncertainty of the parametric human body model to adaptively regulate its reliability based on the clothing type. Specifically, we first adopt a Vision Transformer as an encoder to capture the features of the input image, and then employ SMPL-X to decouple the side-view body features. Secondly, to reduce the limitations imposed by the regularization of the parametric model, particularly for loose garments, we introduce a learnable coefficient that reduces the reliance on SMPL-X. This strategy effectively accommodates the large deformations caused by loose clothing, thereby accurately expressing the posture and clothing in the image. To evaluate its effectiveness, we validate our method on the public CLOTH4D and CAPE datasets, and the experimental results demonstrate better performance compared to existing approaches. The code is available at https://github.com/yyd0613/CoRe-Human.

Item: StyleMM: Stylized 3D Morphable Face Model via Text Driven Aligned Image Translation (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Lee, Seungmi; Yun, Kwan; Noh, Junyong; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
We introduce StyleMM, a novel framework that can construct a stylized 3D Morphable Model (3DMM) based on user-defined text descriptions specifying a target style. Building upon a pre-trained mesh deformation network and a texture generator for original 3DMM-based realistic human faces, our approach fine-tunes these models using stylized facial images generated via text-guided image-to-image (i2i) translation with a diffusion model; these images serve as stylization targets for the rendered mesh. To prevent undesired changes in identity, facial alignment, or expressions during i2i translation, we introduce a stylization method that explicitly preserves the facial attributes of the source image. By maintaining these critical attributes during image stylization, the proposed approach ensures consistent 3D style transfer across the 3DMM parameter space through image-based training. Once trained, StyleMM enables feed-forward generation of stylized face meshes with explicit control over shape, expression, and texture parameters, producing meshes with consistent vertex connectivity and animatability. Quantitative and qualitative evaluations demonstrate that our approach outperforms state-of-the-art methods in terms of identity-level facial diversity and stylization capability. The code and videos are available at kwanyun.github.io/stylemm_page.

Item: Single-Line Drawing Vectorization (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Magne, Tanguy; Sorkine-Hornung, Olga; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Vectorizing line drawings is a repetitive yet necessary task that professional creatives must perform to obtain an easily editable and scalable digital representation of a raster sketch. State-of-the-art automatic methods in this domain can create series of curves that closely fit the appearance of the drawing. However, they often neglect the line parameterization; thus, their vector representation cannot be edited naturally by following the drawing order. We present a novel method for single-line drawing vectorization that addresses this issue. Single-line drawings consist of a single stroke, where the line can intersect itself multiple times, making the drawing order non-trivial to recover. Our method fits a single parametric curve, represented as a Bézier spline, to approximate the stroke in the input raster image. To this end, we produce a graph representation of the input and employ geometric priors and a specially trained neural network to correctly capture and classify curve intersections and their traversal configuration. Our method is easily extended to drawings containing multiple strokes while preserving their integrity and order. We compare our vectorized results with the work of several artists, showing that our stroke order is similar to the one artists employ naturally. Our vectorization method achieves state-of-the-art results in terms of similarity with the original drawing and quality of the vectorization on a benchmark of single-line drawings. Our method's results can be refined interactively, making it easy to integrate into professional workflows. Our code and results are available at https://github.com/tanguymagne/SLD-Vectorization.
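As a reference point for the curve-fitting step, a least-squares fit of one cubic Bézier segment to ordered stroke points looks as follows; a spline fit chains such segments with continuity constraints at the joins. This is a textbook sketch under a chord-length parameterization, not the paper's solver.

```python
import numpy as np

def fit_cubic_bezier(pts):
    """Least-squares fit of one cubic Bezier segment to ordered 2D points,
    keeping the first and last points as fixed endpoints P0 and P3.
    B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3."""
    pts = np.asarray(pts, dtype=float)
    p0, p3 = pts[0], pts[-1]
    # Chord-length parameterization in [0, 1].
    d = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(d)]) / d.sum()
    # Bernstein basis for the two free control points P1, P2.
    b1 = 3 * (1 - t) ** 2 * t
    b2 = 3 * (1 - t) * t ** 2
    # Residual after subtracting the fixed-endpoint contributions.
    r = pts - np.outer((1 - t) ** 3, p0) - np.outer(t ** 3, p3)
    A = np.stack([b1, b2], axis=1)              # (n, 2) design matrix
    sol, *_ = np.linalg.lstsq(A, r, rcond=None)
    p1, p2 = sol                                # rows are P1 and P2
    return np.stack([p0, p1, p2, p3])
```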
Item: DAATSim: Depth-Aware Atmospheric Turbulence Simulation for Fast Image Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Saha, Ripon Kumar; Zhang, Yufan; Ye, Jinwei; Jayasuriya, Suren; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Simulating the effects of atmospheric turbulence for imaging systems operating over long distances is a significant challenge for optical and computer graphics models. Physically-based ray tracing over kilometers of distance is difficult due to the need to define a spatio-temporal volume of varying refractive index. Even if such a volume can be defined, Monte Carlo rendering approximations for light refraction through the environment would not yield the real-time solutions needed for video game engines or online dataset augmentation for machine learning. While existing simulators based on procedurally-generated noise or textures have been proposed in these settings, they often neglect the significant impact of scene depth, leading to unrealistic degradations for scenes with substantial foreground-background separation. This paper introduces a novel, physically-based atmospheric turbulence simulator that explicitly models depth-dependent effects while rendering frames at interactive/near real-time (>10 FPS) rates for image resolutions up to 1024×1024 (real-time 35 FPS at 256×256 resolution with depth, or 512×512 at 33 FPS without depth). Our hybrid approach combines spatially-varying wavefront aberrations, modeled with Zernike polynomials, with pixel-wise depth modulation of both blur (via Point Spread Function interpolation) and geometric distortion or tilt. Our approach includes a novel fusion technique that integrates the complementary strengths of leading monocular depth estimators to generate metrically accurate depth maps with enhanced edge fidelity. DAATSim is implemented efficiently on GPUs using PyTorch, incorporating optimizations like mixed-precision computation and caching. We present quantitative and qualitative validation demonstrating the simulator's physical plausibility for generating turbulent video. DAATSim is made publicly available and open-source to the community: https://github.com/Riponcs/DAATSim.
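A toy version of depth-modulated blur conveys the mechanism: each pixel is blurred with a kernel blended between a near-field and a far-field PSF according to its normalized depth. DAATSim's actual pipeline (Zernike wavefront aberrations, tilt fields, GPU batching) is far richer; names and kernels here are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import fftconvolve

def depth_modulated_blur(img, depth, psf_near, psf_far, n_bins=8):
    """Blur a grayscale image (H, W) with a per-pixel PSF obtained by
    blending near-field and far-field kernels by normalized depth.
    Blurring per depth bin and compositing the results is a cheap
    stand-in for a truly spatially varying convolution."""
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-8)  # depth in [0, 1]
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    out = np.zeros_like(img)
    for i in range(n_bins):
        w = 0.5 * (edges[i] + edges[i + 1])             # bin-center depth
        psf = (1.0 - w) * psf_near + w * psf_far
        psf /= psf.sum()                                # conserve energy
        hi = edges[i + 1] if i < n_bins - 1 else 1.0 + 1e-9
        mask = (d >= edges[i]) & (d < hi)
        out[mask] = fftconvolve(img, psf, mode="same")[mask]
    return out
```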
Item: LayoutRectifier: An Optimization-based Post-processing for Graphic Design Layout Generation (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Shen, I-Chao; Shamir, Ariel; Igarashi, Takeo; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Recent deep learning methods can generate diverse graphic design layouts efficiently. However, these methods often create layouts with flaws, such as misalignment, unwanted overlaps, and unsatisfied containment. To tackle this issue, we propose an optimization-based method called LayoutRectifier, which gracefully rectifies auto-generated graphic design layouts to reduce these flaws while minimizing deviation from the generated layout. The core of our method is a two-stage optimization. First, we utilize grid systems, which professional designers commonly use to organize elements, to mitigate misalignments through discrete search. Second, we introduce a novel box containment function designed to adjust the positions and sizes of the layout elements, preventing unwanted overlapping and promoting desired containment. We evaluate our method on content-agnostic and content-aware layout generation tasks and achieve better-quality layouts that are more suitable for downstream graphic design tasks. Our method complements learning-based layout generation methods and does not require additional training.
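The abstract does not spell out the box containment function, but a hinge-style penalty of the following flavor illustrates what such a term can look like: zero when a child box lies inside its parent (with an optional margin), and growing smoothly with the violation so an optimizer can follow it.

```python
def containment_penalty(inner, outer, margin=0.0):
    """Hinge-style containment energy for axis-aligned boxes given as
    (x0, y0, x1, y1). Zero iff `inner` lies inside `outer` shrunk by
    `margin`; otherwise grows quadratically with the violation, which
    keeps the energy smooth for gradient-based layout refinement."""
    ix0, iy0, ix1, iy1 = inner
    ox0, oy0, ox1, oy1 = outer
    v = [
        max(0.0, (ox0 + margin) - ix0),   # left overflow
        max(0.0, (oy0 + margin) - iy0),   # top overflow
        max(0.0, ix1 - (ox1 - margin)),   # right overflow
        max(0.0, iy1 - (oy1 - margin)),   # bottom overflow
    ]
    return sum(t * t for t in v)
```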
Item: EmoDiffGes: Emotion-Aware Co-Speech Holistic Gesture Generation with Progressive Synergistic Diffusion (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Li, Xinru; Lin, Jingzhong; Zhang, Bohao; Qi, Yuanyuan; Wang, Changbo; He, Gaoqi; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Co-speech gesture generation, driven by emotional expression and synergistic bodily movements, is essential for applications such as virtual avatars and human-robot interaction. Existing co-speech gesture generation methods face two fundamental limitations: (1) producing inexpressive gestures because they ignore the temporal evolution of emotion; (2) generating incoherent and unnatural motions as a result of either holistic body oversimplification or independent part modeling. To address these limitations, we propose EmoDiffGes, a diffusion-based framework grounded in embodied emotion theory, unifying dynamic emotion conditioning and part-aware synergistic modeling. Specifically, a Dynamic Emotion-Alignment Module (DEAM) is first applied to extract dynamic emotional cues and inject emotion guidance into the generation process. Then, a Progressive Synergistic Gesture Generator (PSGG) iteratively refines region-specific latent codes while maintaining full-body coordination, leveraging a Body Region Prior for part-specific encoding and Progressive Inter-Region Synergistic Flow for global motion coherence. Extensive experiments validate the effectiveness of our method, showcasing its potential for generating expressive, coordinated, and emotionally grounded human gestures.

Item: RT-HDIST: Ray-Tracing Core-based Hausdorff Distance Computation (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Kim, YoungWoo; Lee, Jaehong; Kim, Duksu; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
The Hausdorff distance is a fundamental metric with widespread applications across various fields. However, its computation remains expensive, especially for large-scale datasets. Targeting the exact point-to-point Hausdorff distance on point sets, we present RT-HDIST, the first Hausdorff distance algorithm accelerated by ray-tracing cores (RT-cores). By reformulating the Hausdorff distance problem as a series of nearest-neighbor searches and introducing a novel quantized voxel-index space, RT-HDIST achieves significant reductions in computational overhead while maintaining exact results. Extensive benchmarks demonstrate up to a two-order-of-magnitude speedup over prior state-of-the-art methods, underscoring RT-HDIST's potential for real-time and large-scale applications.
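The reformulation the abstract describes is easy to state: the directed distance h(A, B) = max over a in A of min over b in B of ||a - b|| is a batch of nearest-neighbor queries, and H(A, B) = max(h(A, B), h(B, A)). A CPU sketch with a k-d tree standing in for the RT-core search:

```python
import numpy as np
from scipy.spatial import cKDTree

def hausdorff(A, B):
    """Exact point-to-point Hausdorff distance between point sets A and B
    (arrays of shape (n, 3)). Each directed term is a nearest-neighbor
    search; RT-HDIST maps these queries onto ray-tracing cores instead."""
    dAB, _ = cKDTree(B).query(A)   # for every a in A, distance to nearest b
    dBA, _ = cKDTree(A).query(B)   # for every b in B, distance to nearest a
    return max(dAB.max(), dBA.max())
```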
Item: LTC-IR: Multiview Edge-Aware Inverse Rendering with Linearly Transformed Cosines (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Park, Dabeen; Park, Junsuh; Son, Jooeun; Lee, Seungyong; Lee, Joo Ho; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Decomposing environmental lighting and materials is challenging because they are tightly intertwined and integrated over the hemisphere. To decouple them precisely, the lighting representation must capture general image features such as object boundaries or texture contrast, called light edges, which are often neglected in existing inverse rendering methods. In this paper, we propose an inverse rendering method that efficiently captures light edges. We introduce a triangle mesh-based light representation that can express light edges by aligning triangle edges with them. We exploit linearly transformed cosines as BRDF approximations to efficiently compute environmental lighting with our light representation. Our edge-aware inverse rendering precisely decouples the distributions of reflectance and lighting through differentiable rendering by jointly reconstructing light edges and estimating the BRDF parameters. Our experiments, including various material/scene settings and ablation studies, demonstrate the reconstruction performance and computational efficiency of our method.

Item: IPFNet: Implicit Primitive Fitting for Robust Point Cloud Segmentation (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Zhou, Shengdi; Zan, Xiaoqiang; Zhou, Bin; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
The segmentation and fitting of geometric primitives from point clouds is a widely adopted approach for modelling the underlying geometric structure of objects in reverse engineering and numerous graphics applications. Existing methods either overlook the role of geometric information in assisting segmentation or incorporate reconstruction losses without leveraging modern neural implicit field representations, leading to limited robustness against noise and weak expressive power in reconstruction. We propose a point cloud segmentation and fitting framework based on neural implicit representations, fully leveraging the expressive power and robustness of neural implicit fields. The key idea is the unification of geometric representation within a neural implicit field framework, enabling seamless integration of a geometric loss for improved performance. In contrast to previous approaches that focus solely on clustering in the feature embedding space, our method enhances instance segmentation through semantic-aware point embeddings and simultaneously improves semantic predictions via instance-level feature fusion. Furthermore, we incorporate 3D-specific cues such as spatial dimensions and geometric connectivity, which are uniquely informative in the 3D domain. Extensive experiments and comparisons against previous methods demonstrate the robustness and superiority of our approach.

Item: G-SplatGAN: Disentangled 3D Gaussian Generation for Complex Shapes via Multi-Scale Patch Discriminators (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Li, Jiaqi; Dang, Haochuan; Zhou, Zhi; Zhu, Junke; Huang, Zhangjin; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Generating 3D objects with complex topologies from monocular images remains a challenge in computer graphics, due to the difficulty of modeling varying 3D shapes with disentangled, steerable geometry and visual attributes. NeRF-based methods suffer from slow volumetric rendering and limited structural controllability; recent advances in 3D Gaussian Splatting provide a more efficient alternative, but generative modeling with separate control over structure and appearance remains underexplored. In this paper, we propose G-SplatGAN, a novel 3D-aware generation framework that combines the rendering efficiency of 3D Gaussian Splatting with disentangled latent modeling. Starting from a shared Gaussian template, our method uses dual modulation branches to modulate geometry and appearance from independent latent codes, enabling precise shape manipulation and controllable generation. We adopt a progressive adversarial training scheme with multi-scale and patch-based discriminators to capture both global structure and local detail. Our model requires no 3D supervision and is trained on monocular images with known camera poses, reducing data reliance while supporting real-image inversion through a geometry-aware encoder. Experiments show that G-SplatGAN achieves superior performance in rendering speed, controllability and image fidelity, offering a compelling solution for controllable 3D generation using Gaussian representations.

Item: Real-Time Per-Garment Virtual Try-On with Temporal Consistency for Loose-Fitting Garments (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Wu, Zaiqiang; Shen, I-Chao; Igarashi, Takeo; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Per-garment virtual try-on methods collect garment-specific datasets and train networks tailored to each garment to achieve superior results. However, these approaches often struggle with loose-fitting garments due to two key limitations: (1) they rely on human body semantic maps to align garments with the body, but these maps become unreliable when body contours are obscured by loose-fitting garments, resulting in degraded outcomes; (2) they train garment synthesis networks on a per-frame basis without utilizing temporal information, leading to noticeable jittering artifacts. To address the first limitation, we propose a two-stage approach for robust semantic map estimation. First, we extract a garment-invariant representation from the raw input image. This representation is then passed through an auxiliary network to estimate the semantic map, enhancing the robustness of semantic map estimation under loose-fitting garments during garment-specific dataset generation. To address the second limitation, we introduce a recurrent garment synthesis framework that incorporates temporal dependencies to improve frame-to-frame coherence while maintaining real-time performance. We conducted qualitative and quantitative evaluations demonstrating that our method outperforms existing approaches in both image quality and temporal coherence. Ablation studies further validate the effectiveness of the garment-invariant representation and the recurrent synthesis framework.

Item: Accelerating Signed Distance Functions (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Hubert-Brierre, Pierre; Guérin, Eric; Peytavie, Adrien; Galin, Eric; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Processing, and particularly visualizing, implicit surfaces remains computationally intensive when dealing with complex objects built from construction trees. We introduce optimization nodes that reduce the computational cost of evaluating the field function for hierarchical construction trees, while preserving the Lipschitz or conservative properties of the function. Our goal is to propose acceleration nodes directly embedded in the construction tree, avoiding external, accompanying data structures such as octrees. We present proxy and continuous level-of-detail nodes that reduce the overall evaluation cost, along with a normal warping technique that enhances surface details with negligible computational overhead. Our approach is compatible with existing algorithms that aim at reducing the number of function calls. We validate our methods by measuring timings as well as the average cost of traversing the tree and evaluating the signed distance field at a given point in space. Our method speeds up signed distance field evaluation by up to three orders of magnitude, and applies both to ray-surface intersection computation in sphere tracing applications and to polygonization algorithms.
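Evaluation cost dominates here because sphere tracing queries the field once per step along every ray. A minimal loop, assuming a 1-Lipschitz signed distance function f, shows why cheaper tree traversal pays off directly:

```python
import numpy as np

def sphere_trace(f, origin, direction, t_max=100.0, eps=1e-4, max_steps=256):
    """March along origin + t * direction, stepping by the SDF value
    (safe because a signed distance function is 1-Lipschitz: no surface
    can be closer than f(p)). Returns the hit distance t, or None."""
    t = 0.0
    for _ in range(max_steps):
        d = f(origin + t * direction)    # one field evaluation per step
        if d < eps:                      # close enough: report a hit
            return t
        t += d                           # largest provably safe step
        if t > t_max:
            break
    return None

# Example: unit sphere at the origin, ray starting at z = -3 looking down +z.
hit = sphere_trace(lambda p: np.linalg.norm(p) - 1.0,
                   np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
# hit is approximately 2.0
```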
Item: FAHNet: Accurate and Robust Normal Estimation for Point Clouds via Frequency-Aware Hierarchical Geometry (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Wang, Chengwei; Wu, Wenming; Fei, Yue; Zhang, Gaofeng; Zheng, Liping; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Point cloud normal estimation underpins many 3D vision and graphics applications. Precise normal estimation in regions of sharp curvature and high-frequency variation remains a major bottleneck; existing learning-based methods still struggle to isolate fine geometric details under noise and uneven sampling. We present FAHNet, a novel frequency-aware hierarchical network that tackles precisely these challenges. Our Frequency-Aware Hierarchical Geometry (FAHG) feature extraction module selectively amplifies and merges cross-scale cues, ensuring that both fine-grained local features and sharp structures are faithfully represented. Crucially, a dedicated frequency-aware geometry enhancement (FA) branch intensifies sensitivity to abrupt normal transitions and sharp features, preventing the common over-smoothing limitation. Extensive experiments on synthetic benchmarks (PCPNet, FamousShape) and real-world scans (SceneNN) demonstrate that FAHNet outperforms state-of-the-art approaches in normal estimation accuracy. Ablation studies further quantify the contribution of each component, and downstream surface reconstruction results validate the practical impact of our design.