A simple programming tool can build a model of the scene in an ordinary two-dimensional photograph and insert a realistic-looking synthetic object into it. Unlike other augmented reality programs, it doesn't use tags, props or laser scanners to capture the scene's geometry; it needs only a few user-drawn annotations, from which it accounts for lighting and depth. The result is an augmented scene with correct perspective and shading, so realistic that testers could not reliably tell an original photo from a modified one.
With just a single image and some annotation by a user, the program creates a physical model of a scene, as demonstrated in the video below.
Kevin Karsch, Varsha Hedau, David Forsyth and Derek Hoiem at the University of Illinois at Urbana-Champaign developed a new image composition algorithm to generate an accurate lighting model. It uses the estimated geometry to build on existing light-estimation methods, and it can work with any type of rendering software, the researchers explain. The algorithm first recovers the scene's rough geometry and depth, then determines how much of the scene's illumination is light reflected off surfaces (governed by their albedo) and how much comes directly from light fixtures. This yields light parameters that can be applied to an inserted object. The team developed algorithms both for interior lights and for external light sources, typically shafts of sunlight.
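To make the final compositing step concrete, here is a minimal sketch (not the authors' code) of the additive "differential rendering" blend that insertion methods of this kind typically use once geometry, albedo and light parameters have been estimated. The function and array names, and the toy inputs, are assumptions for illustration only; in practice the two renderings would come from a physically based renderer driven by the estimated scene model.

```python
# Hypothetical sketch of a differential-rendering composite.
import numpy as np

def composite(photo, render_with_obj, render_empty, obj_mask):
    """Blend a rendered synthetic object into an original photograph.

    photo           : H x W x 3 original photo, floats in [0, 1]
    render_with_obj : rendering of the modeled scene *with* the object
    render_empty    : rendering of the modeled scene *without* the object
    obj_mask        : H x W x 1 mask, 1 where the object covers a pixel
    """
    # Where the object is visible, take the rendered object directly.
    # Elsewhere, add the difference between the two renders to the photo,
    # which transfers the object's shadows and interreflections.
    delta = render_with_obj - render_empty
    out = obj_mask * render_with_obj + (1.0 - obj_mask) * (photo + delta)
    return np.clip(out, 0.0, 1.0)

if __name__ == "__main__":
    h, w = 4, 4  # tiny dummy arrays just to show the call shape
    photo = np.full((h, w, 3), 0.5)
    render_empty = np.full((h, w, 3), 0.45)
    render_with_obj = render_empty.copy()
    render_with_obj[1:3, 1:3] = 0.8      # the "object"
    obj_mask = np.zeros((h, w, 1))
    obj_mask[1:3, 1:3] = 1.0
    print(composite(photo, render_with_obj, render_empty, obj_mask).shape)
```

The appeal of this style of compositing is that only the pixels the object touches, plus the lighting changes it causes (shadows, reflected color), are altered; the rest of the original photograph passes through untouched.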
To test how well it worked, Karsch et al. showed study participants a series of images: some with no synthetic objects, and some with synthetic objects inserted in one of three ways: with an existing light-estimation method, with their new algorithm using a simplified lighting model, or with their new algorithm using its full lighting model. The subjects had computer science or graphics backgrounds.
“Surprisingly, subjects tended to do a worse job identifying the real picture as the study progressed,” the authors explain in a paper describing their method. “These results indicate that people are not good at differentiating real from synthetic photographs, and that our method is state of the art.”
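As a rough illustration of how results from such a forced-choice study could be tallied per insertion method, consider the sketch below; the trial data and condition names are made up for the example, not taken from the paper.

```python
# Hypothetical tally of how often subjects correctly identify the real photo,
# broken down by which method produced the synthetic competitor image.
from collections import defaultdict

# Each trial: (insertion_method, subject_picked_the_real_photo)
trials = [
    ("baseline_light_estimation", True),
    ("new_method_simple_lights", False),
    ("new_method_full_lights", False),
    ("baseline_light_estimation", True),
    ("new_method_full_lights", True),
]

correct = defaultdict(int)
total = defaultdict(int)
for method, picked_real in trials:
    total[method] += 1
    correct[method] += picked_real

for method in total:
    rate = correct[method] / total[method]
    print(f"{method}: real photo identified {rate:.0%} of the time")
```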
The method could be used in video games, movies, home decorating and other applications. The work is slated to be presented at SIGGRAPH Asia 2011.