Novel Motion Anchoring Strategies for Wavelet-based Highly Scalable Video Compression

 
 
Springer (publisher)
  • 1st edition
  • Published 13 April 2018
  • Book | Hardcover | 208 pages
978-981-10-8224-5 (ISBN)
 
A key element of any modern video codec is the efficient exploitation of temporal redundancy via motion-compensated prediction. This book describes a novel paradigm for representing and employing motion information in a video compression system that has several advantages over existing approaches. Traditionally, motion is estimated, modelled, and coded as a vector field at the target frame it predicts. While this "prediction-centric" approach is convenient, the fact that the motion is "attached" to a specific target frame means that it cannot easily be re-purposed to predict or synthesize other frames, which severely hampers temporal scalability. In light of this, the book explores the possibility of anchoring motion at reference frames instead.

Key to the success of the proposed "reference-based" anchoring schemes is high-quality motion inference, which is enabled by the use of a more "physical" motion representation than the traditionally employed "block" motion fields. The resulting compression system supports computationally efficient, high-quality temporal motion inference, and requires half as many coded motion fields as conventional codecs. Furthermore, "features" beyond compressibility, including high scalability, accessibility, and "intrinsic" framerate upsampling, can be seamlessly supported. These features are becoming ever more relevant as the way video is consumed continues to shift from the traditional broadcast scenario to interactive browsing of video content over heterogeneous networks.

This book is of interest to researchers and professionals working in multimedia signal processing, in particular those interested in next-generation video compression. Two comprehensive background chapters on scalable video compression and temporal frame interpolation make the book accessible to students and newcomers to the field.
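The intuition behind reference-anchored motion inference, and why it halves the number of coded fields, can be sketched in a few lines. This is an illustrative toy under a constant-velocity assumption, not the book's hierarchical anchoring schemes (which additionally handle motion discontinuities and occlusions); the function name and shapes are invented for the example:

```python
import numpy as np

def infer_intermediate_motion(m_ref, t):
    """Given a dense motion field `m_ref` anchored at a reference frame
    and pointing one frame interval ahead, infer the motion from that
    same reference frame to an intermediate time t in (0, 1), assuming
    constant velocity along each trajectory.

    m_ref: (H, W, 2) array of (dy, dx) displacements.
    Returns the scaled field, still anchored at the reference frame.
    """
    return t * m_ref

# One coded field anchored at frame 0 (pointing at frame 2) serves the
# in-between frame as well: the motion to frame 1 is simply half of it,
# so no second motion field needs to be coded.
m_0_to_2 = np.ones((4, 4, 2))            # toy field: uniform (1, 1) motion
m_0_to_1 = infer_intermediate_motion(m_0_to_2, 0.5)
```

A prediction-centric codec would instead code a separate field anchored at each target frame, which is exactly what this inference avoids.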
1st ed. 2018
  • English
  • Singapore
  • For higher education | For professional and research use
  • XXIII, 182 pp., 92 illustrations (62 in colour, 30 in black and white)
  • Height: 244 mm | Width: 164 mm | Depth: 19 mm
  • Weight: 472 g
978-981-10-8224-5 (9789811082245)
DOI: 10.1007/978-981-10-8225-2
Dominic Ruefenacht received his B.Sc. and M.Sc. in Communication Systems with a specialization in 'Signals, Images and Interfaces' from the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, in 2009 and 2011, respectively. He was an exchange student at the University of Waterloo, Ontario, Canada, and did his Master's thesis at Philips Consumer Lifestyle in Eindhoven, the Netherlands. He obtained his Ph.D. degree from UNSW Sydney, Australia, in 2017, where he investigated "Novel Motion Anchoring Strategies for Wavelet-based Highly Scalable Video Compression". From 2011 to 2013, he was with the Image and Visual Representation Group (IVRG) at EPFL as a research engineer, working on computational photography problems with an emphasis on near-infrared imaging. He currently holds a post-doctoral position at UNSW Sydney, working on next-generation video compression systems. His research interests are in computational photography and highly scalable and accessible video compression, with a focus on temporal scalability.
  • Introduction
  • Scalable Image and Video Compression
  • Temporal Frame Interpolation (TFI)
  • Motion-Discontinuity-Aided Motion Field Operations
  • Bidirectional Hierarchical Anchoring (BIHA) of Motion
  • Forward-Only Hierarchical Anchoring (FOHA) of Motion
  • Base-Anchored Motion (BAM)
  • Conclusions and Future Directions
This thesis explores motion anchoring strategies that represent a fundamental change to the way motion is employed in a video compression system: from a "prediction-centric" point of view to a "physical" representation of the underlying motion of the scene. The proposed "reference-based" motion anchorings support computationally efficient, high-quality temporal motion inference, and require half as many coded motion fields as conventional codecs. This raises the prospect of achieving lower motion bitrates than the most advanced conventional techniques, while providing more temporally consistent and meaningful motion. The availability of temporally consistent motion facilitates the efficient deployment of highly scalable video compression systems based on temporal lifting, where the feedback loop used in traditional codecs is replaced by a feedforward transform. The novel motion anchoring paradigm proposed in this thesis is well adapted to seamlessly supporting "features" beyond compressibility, including high scalability, accessibility, and "intrinsic" frame upsampling. These features are becoming ever more relevant as the way video is consumed continues to shift from the traditional broadcast scenario, with predefined network and decoder constraints, to interactive browsing of video content over heterogeneous networks.
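The feedforward temporal lifting transform mentioned above can be sketched, in heavily simplified form, as a Haar predict/update pair over consecutive frames. The sketch below omits motion compensation entirely (the book's schemes warp frames along inferred motion fields before the predict and update steps); it only illustrates why a lifting structure needs no feedback loop and still reconstructs the input exactly:

```python
import numpy as np

def haar_lift(f_even, f_odd):
    """One feedforward temporal lifting step (Haar, motion omitted).
    Predict the odd frame from the even one, then update the even
    frame so the low-pass band carries the temporal average."""
    h = f_odd - f_even          # predict step: temporal high-pass band
    l = f_even + 0.5 * h        # update step: temporal low-pass band
    return l, h

def haar_unlift(l, h):
    """Perfect-reconstruction inverse: undo the update, then the predict."""
    f_even = l - 0.5 * h
    f_odd = h + f_even
    return f_even, f_odd
```

Because both steps are open-loop, a decoder can truncate or rescale the subbands for scalability and still invert whatever it receives, which is the structural advantage over closed-loop prediction.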
