43-Issue 6
Browsing 43-Issue 6 by Issue Date
Now showing 1 - 20 of 27
Item DeforestVis: Behaviour Analysis of Machine Learning Models with Surrogate Decision Stumps (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Chatzimparmpas, Angelos; Martins, Rafael M.; Telea, Alexandru C.; Kerren, Andreas; Alliez, Pierre; Wimmer, Michael
As the complexity of machine learning (ML) models increases and their application in different (and critical) domains grows, there is a strong demand for more interpretable and trustworthy ML. A direct, model‐agnostic way to interpret such models is to train surrogate models—such as rule sets and decision trees—that sufficiently approximate the original ones while being simpler and easier to explain. Yet, rule sets can become very lengthy, with many if–else statements, and decision tree depth grows rapidly when accurately emulating complex ML models. In such cases, both approaches can fail to meet their core goal—providing users with model interpretability. To tackle this, we propose DeforestVis, a visual analytics tool that summarizes the behaviour of complex ML models by providing surrogate decision stumps (one‐level decision trees) generated with the Adaptive Boosting (AdaBoost) technique. DeforestVis helps users explore the complexity‐versus‐fidelity trade‐off by incrementally generating more stumps, creating attribute‐based explanations with weighted stumps to justify decision making, and analysing the impact of rule overriding on training instance allocation between one or more stumps. An independent test set allows users to monitor the effectiveness of manual rule changes and form hypotheses based on case‐by‐case analyses.
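The surrogate‐stump idea can be illustrated in a few lines. The following toy sketch (ours, not the authors' tool) fits a single one‐level decision stump to the outputs of a stand‐in "complex" model; the data and function names are illustrative assumptions only:

```python
# Minimal sketch of a surrogate decision stump, the building block that
# DeforestVis generates via AdaBoost. Not the paper's code; data is a toy
# stand-in for a complex model's predictions on a single feature.

def fit_stump(xs, ys):
    """Find the threshold/polarity on a 1-D feature that best matches ys."""
    best = None
    for t in sorted(set(xs)):
        for polarity in (1, -1):
            preds = [polarity if x >= t else -polarity for x in xs]
            err = sum(p != y for p, y in zip(preds, ys))
            if best is None or err < best[0]:
                best = (err, t, polarity)
    return best  # (misclassifications, threshold, polarity)

# A stand-in "complex model": positive iff x > 0.35 (unknown to the stump).
xs = [i / 10 for i in range(10)]
ys = [1 if x > 0.35 else -1 for x in xs]

err, threshold, polarity = fit_stump(xs, ys)
print(err, threshold, polarity)  # -> 0 0.4 1 (a perfect stump exists here)
```

AdaBoost repeats this fit on reweighted data, yielding the weighted set of stumps that the tool visualizes.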
We show the applicability and usefulness of DeforestVis with two use cases and expert interviews with data analysts and model developers.

Item PhysOM: Physarum polycephalum Oriented Microstructures (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Garnier, David‐Henri; Schmidt, M. P.; Rohmer, Damien; Alliez, Pierre; Wimmer, Michael
Biological shapes possess fascinating properties and behaviours that are the result of emergent mechanisms: they can evolve over time and dynamically adapt to changes in their environment, while also exhibiting interesting mechanical properties and aesthetic appeal. In this work, we bring and extend an existing biologically inspired model of Physarum polycephalum, aka slime mould, to the field of computer graphics, in order to design porous organic‐like microstructures that resemble natural foam‐like cells or filament‐like patterns with variable local properties. In contrast to approaches based on static global optimization, which provide only limited expressivity over the result, our method allows precise control over the local orientation of 3D patterns, relative cell extension and precise infill of shapes with well‐defined boundaries. To this end, we extend the classical agent‐based model for Physarum to fill an arbitrary domain with local anisotropic behaviour. We further provide a detailed analysis of the model parameters, contributing to the understanding of the system's behaviour. The method is fast, parallelizable, scalable to large volumes and compatible with user interaction, allowing a designer to guide the structure, erase parts and observe its evolution in real‐time.
Overall, our method provides a versatile and efficient means of generating intricate organic microstructures with potential applications in fields such as additive manufacturing, design, and biological representation and engineering.

Item Evaluating Graph Layout Algorithms: A Systematic Review of Methods and Best Practices (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Di Bartolomeo, Sara; Crnovrsanin, Tarik; Saffo, David; Puerta, Eduardo; Wilson, Connor; Dunne, Cody; Alliez, Pierre; Wimmer, Michael
Evaluations—encompassing computational evaluations, benchmarks and user studies—are essential tools for validating the performance and applicability of graph and network layout algorithms (also known as graph drawing). These evaluations not only offer significant insights into an algorithm's performance and capabilities, but also help the reader determine whether the algorithm is suitable for a specific purpose, such as handling graphs with a high volume of nodes or dense graphs. Unfortunately, there is no standard approach for evaluating layout algorithms. Prior work presents a 'Wild West' of diverse benchmark datasets and data characteristics, as well as varied evaluation metrics and ways of reporting results. It is often difficult to compare layout algorithms without first implementing them and then running one's own evaluation. In this systematic review, we delve into the myriad methodologies employed to conduct evaluations—the techniques used, the outcomes reported and the pros and cons of choosing one approach over another. Our examination extends beyond computational evaluations to encompass user‐centric evaluations, thus presenting a comprehensive understanding of algorithm validation. This systematic review—and its accompanying website—guides readers through evaluation types, the types of results reported, and the available benchmark datasets and their data characteristics.
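One layout‐quality metric commonly reported in such computational evaluations is the number of edge crossings. A minimal sketch (our illustration, not tied to any surveyed paper) using the standard segment‐orientation test:

```python
# Hedged sketch: counting edge crossings, a metric frequently reported in
# computational evaluations of graph layouts. Uses the standard
# orientation test for proper segment intersection.

def orient(p, q, r):
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(a, b, c, d):
    """Proper intersection of segments ab and cd (shared endpoints ignored)."""
    if len({a, b, c, d}) < 4:
        return False
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def edge_crossings(pos, edges):
    """pos: node -> (x, y); edges: list of (u, v) pairs."""
    count = 0
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            (u1, v1), (u2, v2) = edges[i], edges[j]
            if segments_cross(pos[u1], pos[v1], pos[u2], pos[v2]):
                count += 1
    return count

# Two diagonals of a unit square cross exactly once:
pos = {0: (0, 0), 1: (1, 1), 2: (1, 0), 3: (0, 1)}
print(edge_crossings(pos, [(0, 1), (2, 3)]))  # -> 1
```

Benchmarks typically report such counts alongside stress, node overlap and runtime, which is exactly the heterogeneity of metrics the review catalogues.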
Our objective is to provide a valuable resource for readers to understand and effectively apply various evaluation methods for graph layout algorithms. A free copy of this paper and all supplemental material is available at , and the categorized papers are accessible on our website at .

Item Real‐Time Polygonal Lighting of Iridescence Effect using Precomputed Monomial‐Gaussians (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Liu, Zhengze; Huo, Yuchi; Yang, Yinhui; Chen, Jie; Wang, Rui; Alliez, Pierre; Wimmer, Michael
The real world is full of phenomena, such as iridescence on thin films and metal oxide layers, that are only explicable by wave optics. Existing research can reproduce such effects with simple point lights or low‐frequency environmental lighting. However, it remains difficult to efficiently render these effects when near‐field, high‐frequency area lights are involved. This paper presents a high‐fidelity, real‐time rendering algorithm for the iridescence effect under polygonal lights. We introduce a novel set of spherical functions, Monomial‐Gaussians, to accurately fit iridescent materials' reflectance. With a precomputed lookup table, the Monomial‐Gaussians are easily integrated over spherical polygons in linear time. Importance sampling of Monomial‐Gaussians is also supported to efficiently reduce Monte‐Carlo error. Our approach produces accurate renderings of the iridescence effect while preserving high frame rates.

Item Learned Inference of Annual Ring Pattern of Solid Wood (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Larsson, Maria; Ijiri, Takashi; Shen, I‐Chao; Yoshida, Hironori; Shamir, Ariel; Igarashi, Takeo; Alliez, Pierre; Wimmer, Michael
We propose a method for inferring the internal anisotropic volumetric texture of a given wood block from annotated photographs of its external surfaces.
The global structure of the annual ring pattern is represented using a continuous spatial scalar field referred to as the growth time field (GTF). First, we train a generic neural model that can represent various GTFs using procedurally generated training data. Next, we fit the generic model to the GTF of a given wood block based on surface annotations. Finally, we convert the GTF to an annual ring field (ARF) revealing the layered pattern and apply neural style transfer to render orientation‐dependent small‐scale features and colors on a cut surface. We show rendered results of various physically cut real wood samples. Our method has physical and virtual applications, such as cut previews before subtractively fabricating solid wood artifacts and simulating object breaking.

Item Directional Texture Editing for 3D Models (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Liu, Shengqi; Chen, Zhuo; Gao, Jingnan; Yan, Yichao; Zhu, Wenhan; Lyu, Jiangjing; Yang, Xiaokang; Alliez, Pierre; Wimmer, Michael
Texture editing is a crucial task in 3D modelling that allows users to automatically manipulate the surface materials of 3D models. However, the inherent complexity of 3D models and the ambiguity of text descriptions make this task challenging. To tackle this challenge, we propose ITEM3D, a Texture Editing Model designed for automatic object editing according to text instructions. Leveraging diffusion models and differentiable rendering, ITEM3D takes rendered images as the bridge between text and 3D representation and further optimizes the disentangled texture and environment map. Previous methods adopted an absolute editing direction, namely score distillation sampling (SDS), as the optimization objective, which unfortunately results in noisy appearances and text inconsistencies.
To solve the problem caused by ambiguous text, we introduce a relative editing direction, an optimization objective defined by the noise difference between the source and target texts, to resolve the semantic ambiguity between the texts and images. Additionally, we gradually adjust the direction during optimization to further address unexpected deviation in the texture domain. Qualitative and quantitative experiments show that our ITEM3D outperforms state‐of‐the‐art methods on various 3D objects. We also perform text‐guided relighting to show explicit control over lighting. Our project page: .

Item Mix‐Max: A Content‐Aware Operator for Real‐Time Texture Transitions (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Fournier, Romain; Sauvage, Basile; Alliez, Pierre; Wimmer, Michael
Mixing textures is a basic and ubiquitous operation in data‐driven algorithms for real‐time texture generation and rendering. It is usually performed either by linear blending or by cutting. We propose a new mixing operator that encompasses and extends both, creating more complex transitions that adapt to the textures' contents. Our mixing operator takes as input two or more textures along with two or more priority maps, which encode how the texture patterns should interact. The resulting mixed texture is defined pixel‐wise by selecting the maximum of the priorities. We show that it integrates smoothly into two widespread applications: transition between two different textures, and texture synthesis that mixes pieces of the same texture. We provide constant‐time and parallel evaluation of the resulting mix over square footprints of MIP‐maps, making our operator suitable for real‐time rendering.
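The pixel‐wise mixing rule can be sketched directly. This is our reading of the operator, not the authors' implementation; arrays here are plain nested lists standing in for textures:

```python
# Sketch of max-priority mixing: at each pixel, output the texel of the
# texture whose priority map value is highest. With smooth priority maps
# this yields content-aware transitions rather than a straight linear blend.
# Illustrative assumption, not the paper's code.

def mix_max(textures, priorities):
    """textures, priorities: lists of equal-size 2-D arrays (lists of rows)."""
    h, w = len(textures[0]), len(textures[0][0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            k = max(range(len(textures)), key=lambda i: priorities[i][y][x])
            out[y][x] = textures[k][y][x]
    return out

tex_a = [[10, 10], [10, 10]]
tex_b = [[20, 20], [20, 20]]
pri_a = [[0.9, 0.2], [0.6, 0.1]]
pri_b = [[0.1, 0.8], [0.4, 0.9]]
print(mix_max([tex_a, tex_b], [pri_a, pri_b]))  # -> [[10, 20], [10, 20]]
```

Because the winner is chosen per pixel from the priority maps, the cut between textures follows the maps' content rather than a fixed blend mask.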
We also develop a micro‐priority model, inspired by micro‐geometry models in rendering, which represents sub‐pixel priorities by a statistical distribution and allows for tuning between sharp cuts and smooth blends.

Item Hierarchical Spherical Cross‐Parameterization for Deforming Characters (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Cao, Lizhou; Peng, Chao; Alliez, Pierre; Wimmer, Michael
The demand for immersive technology and realistic virtual environments has created a need for automated solutions to generate characters with morphological variations. However, existing approaches either rely on manual labour or oversimplify the problem by limiting it to static meshes or deformation transfers without shape morphing. In this paper, we propose a new cross‐parameterization approach that semi‐automates the generation of morphologically diverse characters with synthesized articulations and animations. The main contribution of this work is that our approach parameterizes deforming characters into a novel hierarchical multi‐sphere domain, while considering the attributes of mesh topology, deformation and animation. With such a multi‐sphere domain, our approach minimizes parametric distortion rates, enhances the bijectivity of the parameterization and aligns deforming feature correspondences. The alignment process we propose allows users to focus only on major joint pairs, which is much simpler and more intuitive than existing alignment solutions that involve a manual process of identifying feature points on mesh surfaces.
Compared to recent works, our approach achieves high‐quality results in the applications of 3D morphing, texture transfer, character synthesis and deformation transfer.

Item Deep SVBRDF Acquisition and Modelling: A Survey (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Kavoosighafi, Behnaz; Hajisharif, Saghi; Miandji, Ehsan; Baravdish, Gabriel; Cao, Wen; Unger, Jonas; Alliez, Pierre; Wimmer, Michael
Hand in hand with the rapid development of machine learning, deep learning and generative AI algorithms and architectures, the graphics community has seen a remarkable evolution of novel techniques for material and appearance capture. Typically, these machine‐learning‐driven methods and technologies, in contrast to traditional techniques, rely on only a single or very few input images, while enabling the recovery of detailed, high‐quality measurements of bi‐directional reflectance distribution functions, as well as the corresponding spatially varying material properties, also known as Spatially Varying Bi‐directional Reflectance Distribution Functions (SVBRDFs). Learning‐based approaches for appearance capture will play a key role in the development of new technologies with significant impact on virtually all domains of graphics. Therefore, to facilitate future research, this State‐of‐the‐Art Report (STAR) presents an in‐depth overview of the state of the art in machine‐learning‐driven material capture in general, and focuses on SVBRDF acquisition in particular, due to its importance in accurately modelling the complex light interaction properties of real‐world materials. The overview includes a categorization of current methods along with a summary of each technique, and an evaluation of their functionalities, their complexity in terms of acquisition requirements, computational aspects and usability constraints.
The STAR concludes by looking forward and summarizing open challenges in research and development toward predictive and general appearance capture in this field. A complete list of the methods and papers reviewed in this survey is available at .

Item Optimizing Surface Voxelization for Triangular Meshes with Equidistant Scanlines and Gap Detection (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Delgado Díez, S.; Cerrada Somolinos, C.; Gómez Palomo, S. R.; Alliez, Pierre; Wimmer, Michael
This paper presents an efficient algorithm for voxelizing the surface of triangular meshes in a single compute pass. The algorithm uses parallel equidistant lines to traverse the interior of triangles, minimizing costly memory operations and avoiding visiting the same voxels multiple times. By detecting and visiting only the necessary voxels in each line operation, the proposed method achieves better performance. The method incorporates a gap detection step, targeting areas where scanline‐based voxelization methods might fail; by selectively addressing these gaps, it attains superior results. Additionally, the algorithm is written entirely in a single GLSL compute shader, which makes it highly portable and vendor‐independent. Its simplicity also makes it easy to adapt and extend for various applications. The paper compares the results of this algorithm with other modern methods, comprehensively comparing time performance and resource usage. Additionally, we introduce a novel metric, the 'Slope Consistency Value', which quantifies the impact of triangle orientation on voxelization accuracy for scanline‐based approaches.
The results show that the proposed solution outperforms existing modern ones, especially in densely populated scenes with homogeneous triangle sizes and at higher resolutions.

Item Deep and Fast Approximate Order Independent Transparency (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Tsopouridis, Grigoris; Vasilakis, Andreas A.; Fudos, Ioannis; Alliez, Pierre; Wimmer, Michael
We present a machine learning approach for efficiently computing order‐independent transparency (OIT) by deploying a lightweight neural network implemented fully in shaders. Our method is fast; requires a small constant amount of memory (depending only on the screen resolution, not on the number of triangles or transparent layers); is more accurate than previous approximate methods; works for every scene without setup; and is portable to all platforms, running even on commodity GPUs. Our method requires a rendering pass to extract all features that are subsequently used to predict the overall OIT pixel colour with a pre‐trained neural network. We provide a comparative experimental evaluation and the shader source code of all methods for reproduction of the experiments.

Item Artistic Style Transfer Based on Attention with Knowledge Distillation (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Al‐Mekhlafi, Hanadi; Liu, Shiguang; Alliez, Pierre; Wimmer, Michael
Artistic style transfer involves adapting an input image to reflect the style of a reference image while maintaining its original content. This technique, now a prominent focus due to its prospective use in creative fields like digital art and graphic design, typically applies normalization techniques and attention mechanisms. While these methods yield decent results, they often fall short by distorting content image details and introducing style artefacts.
In this paper, we introduce a novel approach that synergizes adaptive instance normalization (AdaIN), attention mechanisms, knowledge distillation (KD) and strategically placed internal layers, with new enhancements designed to preserve content details and provide nuanced control over the style transfer process. We introduce a Detail Enhancement Module to amplify high‐frequency details in the content image, enhancing edge and texture preservation. A Multi‐scale Strategy is implemented to ensure uniform style application across various detail levels, leading to more coherent stylization. The Content Feature Refinement process refines content features, sharpening and emphasizing details to preserve structural and textural integrity. AdaIN's distinctive ability to efficiently capture style statistics is exploited in our approach, coupled with attention mechanisms' inherent ability to conserve content information. We supplement this blend with KD to enhance model accuracy and efficiency. Additionally, the introduction of internal layers acts as a conduit to further improve the style transfer process, increasing the transfer level of features and fostering better stylized results. The cornerstone of our technique lies in preserving the content structure amidst complex style transfers.
Experimental results affirm the superior performance of our method over existing techniques in both quantitative and qualitative evaluations.

Item VolTeMorph: Real‐time, Controllable and Generalizable Animation of Volumetric Representations (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Garbin, Stephan J.; Kowalski, Marek; Estellers, Virginia; Szymanowicz, Stanislaw; Rezaeifar, Shideh; Shen, Jingjing; Johnson, Matthew A.; Valentin, Julien; Alliez, Pierre; Wimmer, Michael
The recent increase in popularity of volumetric representations for scene reconstruction and novel view synthesis has put renewed focus on animating volumetric content at high visual quality and in real‐time. While implicit deformation methods based on learned functions can produce impressive results, they are 'black boxes' to artists and content creators, they require large amounts of training data to generalize meaningfully, and they do not produce realistic extrapolations outside of this data. In this work, we solve these issues by introducing a volume deformation method that is real‐time even for complex deformations, easy to edit with off‐the‐shelf software, and able to extrapolate convincingly. To demonstrate the versatility of our method, we apply it in two scenarios: physics‐based object deformation and telepresence, where avatars are controlled using blendshapes. We also perform thorough experiments showing that our method compares favourably to both volumetric approaches combined with implicit deformation and methods based on mesh deformation.

Item A Hierarchical Architecture for Neural Materials (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Xue, Bowen; Zhao, Shuang; Jensen, Henrik Wann; Montazeri, Zahra; Alliez, Pierre; Wimmer, Michael
Neural reflectance models are capable of reproducing the spatially‐varying appearance of many real‐world materials at different scales.
Unfortunately, existing techniques such as NeuMIP have difficulty handling materials with strong shadowing effects or detailed specular highlights. In this paper, we introduce a neural appearance model that offers a new level of accuracy. Central to our model is an inception‐based core network structure that captures material appearances at multiple scales using parallel‐operating kernels and ensures multi‐stage features through specialized convolution layers. Furthermore, we encode the inputs into frequency space, introduce a gradient‐based loss, and employ it adaptively according to the progress of the learning phase. We demonstrate the effectiveness of our method using a variety of synthetic and real examples.

Item Correction to Real‐Time Neural Rendering of Dynamic Light Fields (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Alliez, Pierre; Wimmer, Michael

Item Infinite 3D Landmarks: Improving Continuous 2D Facial Landmark Detection (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Chandran, P.; Zoss, G.; Gotardo, P.; Bradley, D.; Alliez, Pierre; Wimmer, Michael
In this paper, we examine three important issues in the practical use of state‐of‐the‐art facial landmark detectors and show how a combination of specific architectural modifications can directly improve their accuracy and temporal stability. First, many facial landmark detectors require a face normalization step as a pre‐process, often accomplished by a separately trained neural network that crops and resizes the face in the input image. There is no guarantee that this pre‐trained network performs optimal face normalization for the task of landmark detection. Thus, we instead analyse the use of a spatial transformer network that is trained alongside the landmark detector in an unsupervised manner, jointly learning an optimal face normalization and landmark detection with a single neural network.
Second, we show that modifying the output head of the landmark predictor to infer landmarks in a canonical 3D space, rather than directly in 2D, can further improve accuracy. To convert the predicted 3D landmarks into screen space, we additionally predict the camera intrinsics and head pose from the input image. As a side benefit, this allows us to predict the 3D face shape from a given image using only 2D landmarks as supervision, which is useful in determining landmark visibility, among other things. Third, when training a landmark detector on multiple datasets at the same time, annotation inconsistencies across datasets force the network to produce a sub‐optimal average. We propose to add a semantic correction network to address this issue. This additional lightweight neural network is trained alongside the landmark detector, without requiring any additional supervision. While the insights of this paper can be applied to most common landmark detectors, we specifically target a recently proposed continuous 2D landmark detector to demonstrate how each of our additions leads to meaningful improvements over the state‐of‐the‐art on standard benchmarks.

Item TraM‐NeRF: Tracing Mirror and Near‐Perfect Specular Reflections Through Neural Radiance Fields (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Holland, Leif Van; Bliersbach, Ruben; Müller, Jan U.; Stotko, Patrick; Klein, Reinhard; Alliez, Pierre; Wimmer, Michael
Implicit representations like neural radiance fields (NeRF) have shown impressive results for photorealistic rendering of complex scenes with fine details. However, ideal or near‐perfectly specular reflecting objects such as mirrors, which are often encountered in various indoor scenes, impose ambiguities and inconsistencies in the representation of the reconstructed scene, leading to severe artifacts in the synthesized renderings.
In this paper, we present a novel reflection tracing method tailored to the volume rendering involved in NeRF that takes these mirror‐like objects into account while avoiding the cost of straightforward but expensive extensions through standard path tracing. By explicitly modelling the reflection behaviour using physically plausible materials and estimating the reflected radiance with Monte‐Carlo methods within the volume rendering formulation, we derive efficient strategies for importance sampling and transmittance computation along rays from only a few samples. We show that our novel method enables the training of consistent representations of such challenging scenes and achieves superior results in comparison to previous state‐of‐the‐art approaches.

Item Time‐varying Extremum Graphs (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Das, Somenath; Sridharamurthy, Raghavendra; Natarajan, Vijay; Alliez, Pierre; Wimmer, Michael
We introduce the time‐varying extremum graph (TVEG), a topological structure to support visualization and analysis of a time‐varying scalar field. The extremum graph is a sub‐structure of the Morse–Smale complex. It captures the adjacency relationship between cells in the Morse decomposition of a scalar field. We define the TVEG as a time‐varying extension of the extremum graph and demonstrate how it captures salient feature tracks within a dynamic scalar field. We formulate the construction of the TVEG as an optimization problem and describe an algorithm for computing the graph.
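The tracking idea can be illustrated on a toy 1-D field. This sketch is our simplification, not the paper's optimization‐based algorithm: extract local maxima per time step, then link maxima in consecutive steps by spatial proximity.

```python
# Toy illustration of feature tracking behind a time-varying extremum
# graph (not the paper's algorithm): find local maxima per time step of a
# 1-D scalar field, then link maxima in consecutive steps by proximity.

def local_maxima(field):
    """Indices of strict interior local maxima of a 1-D scalar field."""
    return [i for i in range(1, len(field) - 1)
            if field[i] > field[i - 1] and field[i] > field[i + 1]]

def track(fields, radius=1):
    """Return edges (t, i) -> (t+1, j) between nearby maxima."""
    edges = []
    for t in range(len(fields) - 1):
        cur, nxt = local_maxima(fields[t]), local_maxima(fields[t + 1])
        for i in cur:
            for j in nxt:
                if abs(i - j) <= radius:
                    edges.append(((t, i), (t + 1, j)))
    return edges

# A single peak drifting right by one sample per time step:
fields = [
    [0, 3, 0, 0, 0],
    [0, 0, 3, 0, 0],
    [0, 0, 0, 3, 0],
]
print(track(fields))  # -> [((0, 1), (1, 2)), ((1, 2), (2, 3))]
```

A maximum with no outgoing edge corresponds to a deletion event, one with no incoming edge to a generation event, and many‐to‐one links to merges, which is the vocabulary of topological events explored with the TVEG.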
We also demonstrate the capabilities of the TVEG towards identification and exploration of topological events such as deletion, generation, split and merge within a dynamic scalar field via comprehensive case studies, including a viscous fingers dataset and a 3D von Kármán vortex street dataset.

Item EBPVis: Visual Analytics of Economic Behavior Patterns in a Virtual Experimental Environment (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Liu, Yuhua; Ma, Yuming; Shi, Qing; Wen, Jin; Zheng, Wanjun; Yue, Xuanwu; Ye, Hang; Chen, Wei; Meng, Yuwei; Zhou, Zhiguang; Alliez, Pierre; Wimmer, Michael
Experimental economics is an important branch of economics that studies human behaviours in a controlled laboratory setting or out in the field. Scientific experiments are conducted in experimental economics to collect the decisions people make in specific circumstances and to verify economic theories. As a significant pair of variables in the virtual experimental environment, decisions and outcomes change with the subjective factors of participants and objective circumstances, making it a difficult task to capture human behaviour patterns and establish correlations to verify economic theories. In this paper, we present a visual analytics system, EBPVis, which enables economists to visually explore human behaviour patterns and faithfully verify economic theories, e.g. the vicious cycle of poverty and the poverty trap. We utilize a Doc2Vec model to transform the economic behaviours of participants into a vectorized space according to their sequential decisions, where frequent sequences can be easily perceived and extracted to represent human behaviour patterns. To explore the correlation between decisions and outcomes, an Outcome View is designed to display the outcome variables for behaviour patterns.
We also provide a Comparison View to support efficient comparison between multiple behaviour patterns by revealing their differences in terms of decision combinations and time‐varying profits. Moreover, an Individual View is designed to illustrate the outcome accumulation and behaviour patterns of individual subjects. Case studies, expert feedback and user studies based on a real‐world dataset have demonstrated the effectiveness and practicability of EBPVis in the representation of economic behaviour patterns and the certification of economic theories.

Item Evaluation in Neural Style Transfer: A Review (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Ioannou, Eleftherios; Maddock, Steve; Alliez, Pierre; Wimmer, Michael
The field of neural style transfer (NST) has witnessed remarkable progress in the past few years, with approaches able to synthesize artistic and photorealistic images and videos of exceptional quality. To evaluate such results, a diverse landscape of evaluation methods and metrics is used, including authors' opinions based on side‐by‐side comparisons, human evaluation studies that quantify the subjective judgements of participants, and a multitude of quantitative computational metrics that objectively assess the different aspects of an algorithm's performance. However, there is no consensus regarding the most suitable and effective evaluation procedure that can guarantee the reliability of the results. In this review, we provide an in‐depth analysis of existing evaluation techniques, identify the inconsistencies and limitations of current evaluation methods, and give recommendations for standardized evaluation practices. We believe that the development of a robust evaluation framework will not only enable more meaningful and fairer comparisons among NST methods but will also enhance the comprehension and interpretation of research findings in the field.
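One family of quantitative computational metrics used in NST evaluation compares Gram‐matrix statistics of feature maps, a measure of style similarity popularized by Gatys et al. A pure‐Python sketch on tiny stand‐in "feature maps" (our illustration; real evaluations use deep network activations):

```python
# Hedged sketch of a Gram-matrix style distance, one quantitative metric
# family used in NST evaluation. Features here are toy flat channel lists
# standing in for CNN feature maps.

def gram(features):
    """features: list of C channels, each a flat list of H*W activations."""
    n = len(features[0])
    return [[sum(a * b for a, b in zip(fi, fj)) / n for fj in features]
            for fi in features]

def style_distance(f1, f2):
    """Sum of squared differences between the two Gram matrices."""
    g1, g2 = gram(f1), gram(f2)
    return sum((a - b) ** 2
               for r1, r2 in zip(g1, g2) for a, b in zip(r1, r2))

stylized = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
style    = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
print(style_distance(stylized, style))  # identical statistics -> 0.0
```

As the review notes, such metrics capture only one aspect of quality, which is why they are typically reported alongside human evaluation studies.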