Disney researchers have come up with a way to automatically edit together footage from multiple social cameras. The team created an algorithm that takes in multiple video feeds and outputs a “single video of user-specified length that cuts between the multiple feeds automatically,” according to their research paper.
Researchers Ido Arev, Hyun Soo Park, Yaser Sheikh, Jessica Hodgins and Ariel Shamir said their algorithm uses “existing cinematic rules.”
By determining the center of attention of the cameras (and, by extension, their operators), the algorithm decides where each cut should go. The center of attention is key: it is derived from the orientation of each person involved in a group activity and the footage they capture. The aggregate of the resulting viewing angles is referred to as “gaze concurrence” or “3D joint attention,” which produces a metric for the “spatial and temporal location of the important ‘content’ of the activity.”
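The paper itself does not spell out its formulation here, but the idea of recovering a shared point of attention from several camera viewing directions can be sketched as a standard least-squares ray-intersection problem. In the hypothetical sketch below, each camera is reduced to a position and a unit viewing direction, and the estimated “joint attention” point is the one minimizing the total squared perpendicular distance to all the viewing rays; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def joint_attention_point(positions, directions):
    """Estimate a shared center of attention from several camera rays.

    Each camera is modeled as a ray: an origin and a unit viewing
    direction. The returned 3D point minimizes the sum of squared
    perpendicular distances to all rays (a least-squares solve).
    This is an illustrative sketch, not the paper's actual method.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(positions, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)          # normalize viewing direction
        P = np.eye(3) - np.outer(d, d)     # projector onto plane perpendicular to d
        A += P
        b += P @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)

# Two cameras on different sides, both looking toward the origin:
cams = [np.array([5.0, 0.0, 0.0]), np.array([0.0, 5.0, 0.0])]
dirs = [np.array([-1.0, 0.0, 0.0]), np.array([0.0, -1.0, 0.0])]
print(joint_attention_point(cams, dirs))
```

With the two example cameras aimed at the same spot, the solve recovers that spot; in a real system the per-frame attention estimates would then feed the cut-selection logic described above.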