CVPR 2011: Google's Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths

Back in March this year, Google launched new features on the YouTube Video Editor, including stabilization for shaky videos, with the ability to preview them in real-time.

"Casually shot videos captured by handheld or mobile cameras suffer from significant amount of shake. Existing in-camera stabilization methods dampen high-frequency jitter but don't suppress low-frequency movements and bounces, such as those observed in videos captured by a walking person. On the other hand, most professionally shot videos usually consist of carefully designed camera configurations, using specialized equipment such as tripods or camera dollies, and employ ease-in and ease-out for transitions. Our goal was to devise a completely automatic method for converting casual shaky footage into more pleasant and professional looking videos," Google explained.

"Our technique mimics the cinematographic principles outlined above by automatically determining the best camera path using a robust optimization technique. The original, shaky camera path is divided into a set of segments, each approximated by either a constant, linear or parabolic motion. Our optimization finds the best of all possible partitions using a computationally efficient and stable algorithm," Google said.
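The optimization described above can be sketched as a linear program: penalize the L1 norms of the first, second, and third differences of the smoothed path (encouraging constant, linear, and parabolic segments respectively), while constraining the smoothed path to stay near the original one so the virtual crop window remains inside the frame. The sketch below is an illustrative 1-D simplification, not Google's implementation — the function name, weights, and the simple per-frame proximity bound are assumptions standing in for the paper's full crop-window constraints.

```python
import numpy as np
from scipy.optimize import linprog

def l1_smooth_path(c, bound=20.0, w1=10.0, w2=1.0, w3=100.0):
    """Illustrative L1-optimal smoothing of a 1-D camera path.

    Minimizes w1*|D p|_1 + w2*|D^2 p|_1 + w3*|D^3 p|_1 subject to
    |p_t - c_t| <= bound (a stand-in for the crop-window constraint),
    by rewriting each absolute value with a nonnegative slack variable.
    """
    c = np.asarray(c, dtype=float)
    T = len(c)
    # Finite-difference operators of orders 1, 2, 3.
    D = [np.diff(np.eye(T), n=k, axis=0) for k in (1, 2, 3)]
    weights = (w1, w2, w3)
    n_slack = sum(d.shape[0] for d in D)
    N = T + n_slack  # variables: path p, then slack vectors e1, e2, e3

    # Objective: only the slack variables carry weight.
    cost = np.zeros(N)
    ofs = T
    for d, w in zip(D, weights):
        cost[ofs:ofs + d.shape[0]] = w
        ofs += d.shape[0]

    # |D^k p| <= e_k  becomes  D^k p - e_k <= 0  and  -D^k p - e_k <= 0.
    rows, rhs = [], []
    slack_ofs = 0
    for d in D:
        m = d.shape[0]
        E = np.zeros((m, n_slack))
        E[:, slack_ofs:slack_ofs + m] = np.eye(m)
        rows.append(np.hstack([d, -E]))
        rows.append(np.hstack([-d, -E]))
        rhs.extend([np.zeros(m), np.zeros(m)])
        slack_ofs += m
    A_ub = np.vstack(rows)
    b_ub = np.concatenate(rhs)

    # Keep the smoothed path near the shaky one; slacks are nonnegative.
    bounds = [(ci - bound, ci + bound) for ci in c] + [(0, None)] * n_slack
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:T]
```

Applied to a jittery pan, the recovered path hugs the underlying motion while its higher-order differences collapse toward zero, which is exactly what makes the result look tripod- or dolly-like.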

"To achieve real-time performance on the web, we distribute the computation across multiple machines in the cloud. This enables us to provide users with a real-time preview and interactive control of the stabilized result," Google added.

You can watch the video demonstration embedded below, which shows how to use this feature in the YouTube Editor:

The core technology behind this feature is detailed in this paper, which will be presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011) in Colorado Springs the week of June 20th.

[Source: Google Research]