Visual Modeling: Unifying Graphics and Vision
dc.contributor.author | Terzopoulos, Demetri | en_US |
dc.contributor.editor | Mike Chantler | en_US |
dc.date.accessioned | 2016-02-11T13:30:53Z | |
dc.date.available | 2016-02-11T13:30:53Z | |
dc.date.issued | 2005 | en_US |
dc.description.abstract | The computer vision and computer graphics fields have been developing largely independently since their genesis in the early 1960s. However, there is now a significant amount of exciting research at the intersection of graphics and vision, and it is bringing the two disciplines closer together. Since the mid-1980s, my visual modeling research has taken a unified approach to vision and graphics, treating them formally as mutually converse problems. In this talk, I will first review early work on deformable models for image synthesis and analysis and show how these physics-based models ushered in a new paradigm known as image-based modeling. I will also present our recent work on image-based rendering, which again spans vision and graphics. Even more provocative opportunities for unifying graphics and vision are motivated by sophisticated, biology-based models: Within an artificial life paradigm, we have developed comprehensive computational models of humans and lower animals that take into consideration the relevant anatomy, biomechanics, and cognitive science. Central to these models is their ethological constituent, which is driven by active vision within the dynamic virtual environment. In particular, our current work on virtual humans focuses on the lifelike animation of visually perceptive, autonomous pedestrians in urban environments through the integration of (reactive) behavioral and (deliberative) cognitive components. Our work also furthers the cause of exploiting visually and behaviorally realistic virtual worlds for the development and testing of machine vision systems. To this end, I will demonstrate a surveillance system in a virtual train station populated by our autonomous virtual pedestrians. The system features a sensor network of readily reconfigurable active virtual cameras that generate synthetic video feeds emulating those generated by real surveillance cameras monitoring public spaces. Such research would be more or less infeasible in the real world in view of the effort and cost of deploying, modifying, and experimenting with an appropriately extensive camera network in a public space the size of a train station. | en_US |
dc.description.sectionheaders | Keynote 1 | en_US |
dc.description.seriesinformation | Vision, Video, and Graphics (2005) | en_US |
dc.identifier.doi | 10.2312/vvg.20051001 | en_US |
dc.identifier.isbn | 3-905673-57-6 | en_US |
dc.identifier.pages | 9-9 | en_US |
dc.identifier.uri | https://doi.org/10.2312/vvg.20051001 | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | Visual Modeling: Unifying Graphics and Vision | en_US |