Browsing by Author "Mudur, Sudhir"
Item
Knowledge-based Discovery of Transportation Object Properties by Fusing Multi-modal GIS Data (The Eurographics Association, 2018)
Authors: Maroun, Pedro Eid; Mudur, Sudhir; Popa, Tiberiu
Editors: Tam, Gary K. L. and Vidal, Franck
3D models of transportation objects such as roads, bridges, and underpasses are required in many domains, including military training and land development. While remotely sensed images and LiDAR data can be used to create approximate 3D representations, detailed 3D representations are difficult to create automatically. Instead, interactive tools are used, with rather laborious effort. For example, the top commercial interactive model generator we tried required 94 parameters in all for different bridge types. In this paper, we take a different path. We automatically derive these parameter values from GIS (Geographic Information Systems) data, which normally contains detailed information about these objects, but often only implicitly. The framework presented here transforms GIS data into a knowledge base consisting of assertions. Spatial/numeric relations are handled through plug-ins called property extractors, whose results are added to the knowledge base and used by a reasoning engine to infer object properties. A number of properties have to be extracted from images and therefore depend on the accuracy of computer vision methods. While a comprehensive property extractor mechanism is still work in progress, a prototype implementation illustrates our framework for bridges using real-world GIS data. To the best of our knowledge, our framework is the first to integrate knowledge inference and uncertainty for extracting landscape object properties by fusing facts from multi-modal GIS data sources. (A minimal illustrative sketch of this assertion/plug-in pattern appears after the listings below.)

Item
The Problem of Entangled Material Properties in SVBRDF Recovery (The Eurographics Association, 2020)
Authors: Saryazdi, Soroush; Murphy, Christian; Mudur, Sudhir
Editors: Klein, Reinhard and Rushmeier, Holly
SVBRDF (spatially varying bidirectional reflectance distribution function) recovery is concerned with deriving the material properties of an object from one or more images. This is particularly challenging when the images are casual rather than calibrated captures, which makes the problem highly under-specified, since an object can look quite different from different viewing angles and light directions. Yet many solutions have been attempted under varying assumptions, and the most promising to date are those that use supervised deep learning techniques. The network is first trained on a large number of synthetically created images of surfaces, usually planar, with known material property values, and is then asked to predict the properties for image(s) of a new object. While the results are impressive when shown as renders of the input object using the recovered material properties, the accuracy of the recovered properties themselves is a problem: material properties get entangled, specifically the diffuse and specular reflectance behaviors. Such inaccuracies would hinder various downstream applications that use these properties. In this position paper, we present this property entanglement problem. First, we demonstrate the problem through various property map outputs obtained by running a state-of-the-art deep learning solution. Next we analyse the present solutions, and argue that the main reason for this entanglement is the way the loss function is defined when training the network. Lastly, we propose potential directions that could be pursued to alleviate this problem.
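To make the entanglement concrete, here is a toy calculation (our own illustration, not from the paper; the simple Lambert-plus-Blinn-Phong shading, the parameter values, and the function names are all assumptions): two different splits of diffuse versus specular reflectance produce nearly the same rendered pixel under a single view and light, so a loss defined only on rendered images barely separates them, even though the recovered property maps differ substantially.

# Toy illustration (hypothetical, not the paper's model): with a loss defined only
# on the rendered image, different splits of diffuse vs. specular energy can give
# nearly the same pixel value under one view/light, so a render-only loss does not
# penalise entangling the two maps.
import numpy as np

def shade(diffuse, specular, roughness, n_dot_l, n_dot_h):
    """Very simple Lambert + Blinn-Phong shading of a single pixel."""
    shininess = 2.0 / max(roughness ** 2, 1e-4) - 2.0
    return diffuse * n_dot_l + specular * (n_dot_h ** shininess) * n_dot_l

# Two different material explanations of the same observation (values are made up).
mat_a = dict(diffuse=0.50, specular=0.10, roughness=0.6)    # "ground truth"
mat_b = dict(diffuse=0.5281, specular=0.00, roughness=0.6)  # specular baked into diffuse

n_dot_l, n_dot_h = 0.8, 0.7  # one fixed view/light configuration
render_a = shade(mat_a["diffuse"], mat_a["specular"], mat_a["roughness"], n_dot_l, n_dot_h)
render_b = shade(mat_b["diffuse"], mat_b["specular"], mat_b["roughness"], n_dot_l, n_dot_h)

render_loss = (render_a - render_b) ** 2                   # what render-only training sees
map_loss = sum((mat_a[k] - mat_b[k]) ** 2 for k in mat_a)  # error in the recovered maps

print(f"render loss: {render_loss:.2e}  vs  map loss: {map_loss:.2e}")

Under a single observation the render loss is vanishingly small while the per-map error is orders of magnitude larger, which gives one intuition for why supervising only on renders can leave the diffuse and specular behaviors entangled.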
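For the first listing above, the sketch below is a minimal, hypothetical illustration of the assertion/plug-in pattern its abstract describes; the class, predicate, and rule names are ours and do not come from the paper. Explicit GIS facts are stored as assertions, a plug-in property extractor derives a spatial/numeric relation and asserts it back into the knowledge base, and a simple rule then infers a higher-level object property.

# Hypothetical sketch of the assertion/plug-in idea; names and the rule are ours.
import math

class KnowledgeBase:
    """A knowledge base as a flat set of (subject, predicate, value) assertions."""
    def __init__(self):
        self.assertions = set()
    def add(self, subject, predicate, value):
        self.assertions.add((subject, predicate, value))
    def query(self, subject, predicate):
        return [v for s, p, v in self.assertions if s == subject and p == predicate]

def span_length_extractor(kb, bridge_id):
    """Plug-in style property extractor: derive a numeric relation (span length)
    from explicit GIS endpoint coordinates and assert it back into the KB."""
    (x1, y1), (x2, y2) = kb.query(bridge_id, "endpoint")
    kb.add(bridge_id, "span_length_m", round(math.dist((x1, y1), (x2, y2)), 1))

def classify_bridge(kb, bridge_id):
    """Toy reasoning rule: infer a higher-level property from asserted facts."""
    span = kb.query(bridge_id, "span_length_m")[0]
    kb.add(bridge_id, "size_class", "long_span" if span > 150 else "short_span")

kb = KnowledgeBase()
kb.add("bridge_42", "endpoint", (0.0, 0.0))      # explicit facts from GIS layers
kb.add("bridge_42", "endpoint", (160.0, 120.0))
span_length_extractor(kb, "bridge_42")           # plug-in adds a derived assertion
classify_bridge(kb, "bridge_42")                 # reasoning step uses it
print(kb.query("bridge_42", "size_class"))       # -> ['long_span']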