Called ‘Interactive Dynamic Video’ (IDV), this technology makes it possible for humans (and even cartoon characters) to physically interact with the objects in photos and videos. In other words, you can poke, prod and push the things you see in the videos, MIT News reports. Rather than relying on highly expensive cutting-edge equipment, the MIT researchers used algorithms and existing camera technology to build IDV, which detects the tiny, almost invisible vibrations of an object and turns them into video simulations that users can interact with virtually.

“This technique lets us capture the physical behavior of objects, which gives us a way to play with them in virtual space,” Abe Davis, a CSAIL PhD student, said. “By making videos interactive, we can predict how objects will respond to unknown forces and explore new ways to engage with videos.”

IDV has many possible uses, from filmmakers producing new kinds of visual effects to architects determining whether buildings are structurally sound, said Davis.

Conventionally, researchers use computer graphics and 3D models of objects to build interactive simulations; IDV instead works from video itself. MIT News reports that the team studied video clips to find “vibration modes” at different frequencies, each representing a different way that an object can move, which then allowed them to foresee how the object would react in new situations.

“Computer graphics allows us to use 3-D models to build interactive simulations, but the techniques can be complicated,” said Doug James, a professor of computer science at Stanford University who was not involved in the research. “Davis and his colleagues have provided a simple and clever way to extract a useful dynamics model from very tiny vibrations in video, and shown how to use it to animate an image.”

Davis demonstrated IDV on a number of objects, including bridges, a jungle gym, and a ukulele. With a few clicks of the mouse, he showed how he could push and pull an object, bend it, and move it in different directions. He even demonstrated how he could make his own hand appear to move the leaves of a bush.

“If you want to model how an object behaves and responds to different forces, we show that you can observe the object respond to existing forces and assume that it will respond in a consistent way to new ones,” said Davis, who also found that the technique even works on some existing videos on YouTube.
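The core idea of identifying an object’s vibration modes from ordinary video and reusing them to predict its response to new forces can be illustrated in simplified form. The sketch below is purely illustrative rather than the CSAIL team’s actual implementation: it assumes a one-dimensional motion signal has already been tracked from the video, picks out the dominant frequencies with a Fourier transform, and treats each one as a damped oscillator when a hypothetical user “pushes” the object. All function names and parameters are made up for this example.

import numpy as np

def dominant_modes(motion_signal, fps, n_modes=3):
    # Find the strongest frequencies ("vibration modes") in a 1-D motion
    # signal tracked from video at `fps` frames per second.
    signal = motion_signal - np.mean(motion_signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Take the largest spectral peaks, skipping the DC component at index 0.
    top = np.argsort(np.abs(spectrum[1:]))[-n_modes:] + 1
    return [(freqs[i], 2 * np.abs(spectrum[i]) / len(signal), np.angle(spectrum[i]))
            for i in top]

def simulate_response(modes, push, duration, fps, damping=0.8):
    # Predict displacement after a user "push", assuming each mode rings as a
    # damped sinusoid whose strength scales with the push.
    t = np.arange(0, duration, 1.0 / fps)
    response = np.zeros_like(t)
    for freq, amplitude, phase in modes:
        response += push * amplitude * np.exp(-damping * t) * np.cos(2 * np.pi * freq * t + phase)
    return response

# Toy example: a motion signal containing 2 Hz and 5 Hz vibrations, roughly
# what might be tracked from a swaying bush or a plucked ukulele string.
fps = 30
t = np.arange(0, 10, 1.0 / fps)
motion = 1.0 * np.sin(2 * np.pi * 2 * t) + 0.4 * np.sin(2 * np.pi * 5 * t)
modes = dominant_modes(motion, fps)
predicted = simulate_response(modes, push=0.5, duration=3, fps=fps)

The damped-oscillator assumption here simply encodes the article’s point: an object that has been observed responding to existing forces can be assumed to respond in a consistent way to new ones.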
Currently, users interact with the technology by pointing and clicking on a computer. However, if the system were paired with virtual reality tools, it would allow viewers to touch the items they are looking at, MIT News reports.

“The ability to put real-world objects into virtual models is valuable for not just the obvious entertainment applications, but also for being able to test the stress in a safe virtual environment, in a way that doesn’t harm the real-world counterpart,” Davis said.

Besides entertainment and engineering, Davis says he is also eager to see other applications emerge, from creating new forms of virtual reality to studying sports film. “When you look at VR companies like Oculus, they are often simulating virtual objects in real spaces,” he says. “This sort of work turns that on its head, allowing us to see how far we can go in terms of capturing and manipulating real objects in virtual space.”