PG2025 Conference Papers, Posters, and Demos
Browsing PG2025 Conference Papers, Posters, and Demos by Subject "Artificial intelligence"
Item: Distance-Aware Tri-Perspective View for Efficient 3D Perception in Autonomous Driving (The Eurographics Association, 2025)
Authors: Tang, Yutao; Zhao, Jigang; Qin, Zhengrui; Qiu, Rui; Zhao, Lingying; Ren, Jie; Chen, Guangxi
Editors: Christie, Marc; Han, Ping-Hsuan; Lin, Shih-Syun; Pietroni, Nico; Schneider, Teseo; Tsai, Hsin-Ruey; Wang, Yu-Shuen; Zhang, Eugene
Abstract: Three-dimensional environmental perception remains a critical bottleneck in autonomous driving, where existing vision-based dense representations face an intractable trade-off between spatial resolution and computational complexity. Current methods, including Bird's Eye View (BEV) and Tri-Perspective View (TPV), apply uniform perception precision across all spatial regions, disregarding the fundamental safety principle that near-field objects demand high-precision detection for collision avoidance while distant objects permit lower initial accuracy. This uniform treatment squanders computational resources and constrains real-time deployment. We introduce Distance-Aware Tri-Perspective View (DA-TPV), a novel framework that allocates computational resources in proportion to operational risk. DA-TPV employs a hierarchical dual-plane architecture for each viewing direction: low-resolution planes capture global scene context while high-resolution planes deliver fine-grained perception within safety-critical reaction zones. Through distance-adaptive feature fusion, our method dynamically concentrates processing power where it most directly impacts vehicle safety. Extensive experiments on nuScenes demonstrate that DA-TPV matches or exceeds single high-resolution TPV performance while reducing memory consumption by 26.3% and achieving real-time inference. This work establishes distance-aware perception as a practical paradigm for deploying sophisticated three-dimensional understanding within automotive computational constraints.
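The distance-adaptive feature fusion described in the abstract can be sketched as follows: a minimal NumPy illustration, assuming the low-resolution plane has already been upsampled to the high-resolution grid, with the blending weight favouring the high-resolution plane inside a safety-critical radius around the ego vehicle. All parameter names and values (`threshold`, `extent`, `sharpness`) are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def distance_adaptive_fusion(low_res_plane, high_res_plane,
                             threshold=20.0, extent=51.2, sharpness=0.5):
    """Blend two same-shape TPV planes of shape (H, W, C).

    Hypothetical sketch: the fusion weight is ~1 for cells within
    `threshold` metres of the ego vehicle (plane centre) and falls off
    smoothly beyond it, so near-field features come from the
    high-resolution plane and far-field features from the cheap
    low-resolution one.
    """
    h, w, _ = high_res_plane.shape
    # Metric coordinates of each grid cell; ego vehicle at the centre.
    ys = np.linspace(-extent, extent, h)
    xs = np.linspace(-extent, extent, w)
    dist = np.sqrt(ys[:, None] ** 2 + xs[None, :] ** 2)
    # Sigmoid falloff: ~1 near the ego vehicle, ~0 far away.
    w_high = 1.0 / (1.0 + np.exp(sharpness * (dist - threshold)))
    w_high = w_high[..., None]  # broadcast over the channel dimension
    return w_high * high_res_plane + (1.0 - w_high) * low_res_plane
```

A smooth sigmoid (rather than a hard mask) keeps the fused feature map differentiable, which would let the cutover radius be tuned or learned end to end.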
Code is available at https://github.com/yytang2012/DA-TPVFormer.

Item: Region-Adaptive Low-Light Image Enhancement with Light Effect Suppression and Detail Preservation (The Eurographics Association, 2025)
Authors: Luo, Liheng; Xie, Wantong; Xia, Xiushan; Li, Zerui; Zhao, Yunbo
Editors: Christie, Marc; Han, Ping-Hsuan; Lin, Shih-Syun; Pietroni, Nico; Schneider, Teseo; Tsai, Hsin-Ruey; Wang, Yu-Shuen; Zhang, Eugene
Abstract: Low-light image enhancement seeks to improve the visual quality of images captured under poor illumination, yet existing methods often struggle with unnatural artifacts, overexposure, or detail loss, particularly in challenging real-world scenarios like underground coal mines. We propose a novel unsupervised region-adaptive framework that integrates light effect suppression and detail preservation to address these issues. Leveraging Retinex theory, our approach decomposes images into illumination and reflectance components, employing a region segmentation module to distinguish dark and bright areas for targeted enhancement. A lightweight denoising network mitigates noise, while an adaptive illumination enhancer and light effect suppressor collaboratively optimize illumination to ensure natural appearance and correct visual imbalances. A composite loss function balances brightness enhancement, structural integrity, and artifact suppression across regions. Extensive experiments on the LOL-v2, LSRW, and our private datasets demonstrate superior performance. On our dataset, for instance, the method improves BRISQUE by 3.26%, NIQE by 0.24%, and PIQE by 11.22% over state-of-the-art methods, producing visually pleasing results with enhanced brightness, reduced artifacts, and preserved textures that make it well suited for real-world applications.