Philipp Schaber

Office: A5, B 217

Phone: +49 621 181-2615

Fax: +49 621 181-2601

Email: schaber@informatik.uni-mannheim.de


Interests

  • Multimedia Technology
  • Image and Video Processing
  • Digital Watermarking
  • Camcorder Copies of Videos
  • Spontaneous Virtual Networks

Projects

Publications

2017

2015

  • Philipp Schaber, Sally Dong, Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. Modeling Temporal Effects in Re-captured Video. ACM, New York, NY, 2015.
    The re-capturing of video content poses significant challenges to algorithms in the fields of video forensics, watermarking, and near-duplicate detection. Using a camera to record a video from a display introduces a variety of artifacts, such as geometric distortions, luminance transformations, and temporal aliasing. A deep understanding of the causes and effects of such phenomena is required for their simulation, and for making the affected algorithms more robust. In this paper, we provide a detailed model of the temporal effects in re-captured video. Such effects typically result in the re-captured frames being a blend of the original video's source frames, where the specific blend ratios are difficult to predict. Our proposed parametric model captures the temporal artifacts introduced by interactions between the video renderer, display device, and camera. The validity of our model is demonstrated through experiments with real re-captured videos containing specially marked frames.
  • Philipp Schaber, Stephan Kopf, Sina Wetzel, Tyler Ballast, Christoph Wesch and Wolfgang Effelsberg. CamMark: Analyzing, Modeling, and Simulating Artifacts in Camcorder Copies. ACM Transactions on Multimedia Computing, Communications, and Applications: TOMCCAP, 11, 10, Article 42, 1-23, 2015.
    To support the development of any system that includes the generation and evaluation of camcorder copies, as well as to provide a common benchmark for robustness against camcorder copies, we present a tool to simulate digital video re-acquisition using a digital video camera. By resampling each video frame, we simulate the typical artifacts occurring in a camcorder copy: geometric modifications (aspect ratio changes, cropping, perspective and lens distortion), temporal sampling artifacts (due to different frame rates, shutter speeds, rolling shutters, or playback), spatial and color subsampling (rescaling, filtering, Bayer color filter array), and processing steps (automatic gain control, automatic white balance). We also support the simulation of camera movement (e.g., a hand-held camera) and background insertion. Furthermore, we allow for an easy setup and calibration of all the simulated artifacts, using sample/reference pairs of images and videos. Specifically temporal subsampling effects are analyzed in detail to create realistic frame blending artifacts in the simulated copies. We carefully evaluated our entire camcorder simulation system and found that the models we developed describe and match the real artifacts quite well.
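The temporal sampling effects analyzed in the two papers above amount to a simple geometry: an unsynchronized camera exposure usually straddles a source-frame switch on the display, so each re-captured frame is a blend of two source frames. The sketch below is an illustrative reconstruction, not the published model: the function names, the flat pixel-list frame format, and the assumptions of a full-frame exposure with at most one source-frame switch are mine; the actual model also covers shutter speed, rolling shutter, and playback effects.

```python
def camera_frame_blend(src_fps, cam_fps, cam_index):
    """Map one camera frame onto a pair of source-frame indices and a
    blend ratio, assuming the exposure spans one full camera frame
    period and at most one source-frame switch falls inside it."""
    t0 = cam_index / cam_fps            # exposure start (seconds)
    t1 = (cam_index + 1) / cam_fps      # exposure end
    f0 = int(t0 * src_fps)              # source frame visible at t0
    f1 = int(t1 * src_fps)              # source frame visible at t1
    if f0 == f1:
        return f0, f0, 1.0              # no switch: a single source frame
    switch = (f0 + 1) / src_fps         # instant the display switched
    alpha = (switch - t0) / (t1 - t0)   # exposure fraction showing f0
    return f0, f1, alpha

def blend_frames(frame_a, frame_b, alpha):
    """Blend two frames (flat lists of 8-bit values) with ratio alpha."""
    return [max(0, min(255, round(alpha * a + (1.0 - alpha) * b)))
            for a, b in zip(frame_a, frame_b)]
```

For example, re-capturing a 24 fps video with a 30 fps camera leaves some camera frames covering a single source frame and others blending two adjacent ones, which is why the blend ratios vary from frame to frame and are hard to predict in real copies.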

2014

  • Philipp Schaber, Stephan Kopf, Christoph Wesch and Wolfgang Effelsberg. CamMark: A Camcorder Copy Simulation as Watermarking Benchmark for Digital Video. ACM, New York, NY, 2014.
    In 1998, Petitcolas et al. proposed StirMark as a benchmark for image watermarking schemes. The main idea was to introduce a re-sampling process that mimics the analog process of printing and scanning a watermarked image. For digital video, the corresponding concept is a camcorder copy, where a video displayed on a screen is (digitally) recorded using a video camera. As most commercial video streaming systems (VOD, IPTV) and offline distribution (Blu-ray, HDDs for cinemas) are strongly protected by means of DRM, filming a display is actually a relevant use case and a requirement for robust video watermarking systems to survive. We therefore present a tool to simulate content re-acquisition with a camcorder. Our goal is to support watermark development by enabling automated test cases for such camcorder copy attacks, as well as to provide a benchmark for robust video watermarking. Manually creating camcorder copies is a cumbersome process, and even more problematic, it is hardly reproducible with the same setup. By re-sampling each video frame, we simulate the typical artifacts of a camcorder copy: geometric modifications (aspect ratio changes, cropping, perspective and lens distortion), temporal modifications (unsynchronized frame rates and the resulting frame blending), sub-sampling (rescaling, filtering, Bayer color array filter), and histogram changes (AGC, AWB). We also support simulating camera movement (e.g., a hand-held camera) and background insertion.
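The geometric modifications listed above (perspective distortion in particular) are conventionally expressed as a 3x3 homography that each pixel coordinate is mapped through during re-sampling. The snippet below only sketches that coordinate mapping; the matrix layout and function name are assumptions of mine, and a real resampler would additionally inverse-warp whole frames and interpolate between pixels.

```python
def apply_homography(h, x, y):
    """Map a pixel coordinate through a 3x3 homography `h` (row-major
    nested lists), the standard model for perspective distortion."""
    xh = h[0][0] * x + h[0][1] * y + h[0][2]
    yh = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return (xh / w, yh / w)   # divide out the projective scale
```

With an identity matrix the mapping is a no-op; a pure translation shifts every coordinate; non-zero entries in the bottom row produce the keystone effect typical of an off-axis camera filming a screen.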

2013

  • Torben Dittrich, Stephan Kopf, Philipp Schaber, Benjamin Guthier and Wolfgang Effelsberg. Saliency Detection for Stereoscopic Video. ACM, New York, NY, 2013.
  • Jakob Huber, Stephan Kopf and Philipp Schaber. Analyse von Bildmerkmalen zur Identifikation wichtiger Bildregionen (Analysis of Image Features for the Identification of Important Image Regions). Technical Report 13-004, Mannheim, 2013.
    Reliable detection of important image regions is the basis for many image-processing methods, for example image compression, methods for adapting image resolution, or the embedding of digital watermarks into images. A system was developed that identifies feature points in images and uses them to identify important image regions. The feature points are computed using the SURF method (Speeded Up Robust Features). In a second step, the detected features are assigned to individual image regions. The quality of the determined regions as well as the runtime behavior of the different methods are analyzed on a large image database.
  • Stephan Kopf, Benjamin Guthier, Philipp Schaber, Torben Dittrich and Wolfgang Effelsberg. Analysis of Disparity Maps for Detecting Saliency in Stereoscopic Video. Technical Report 13-003, Mannheim, 2013.
    We present a system for automatically detecting salient image regions in stereoscopic videos. This report extends our previous system and provides additional details about its implementation. Our proposed algorithm considers information based on three dimensions: salient colors in individual frames, salient information derived from camera and object motion, and depth saliency. These three components are dynamically combined into one final saliency map based on the reliability of the individual saliency detectors. Such a combination allows using more efficient algorithms even if the quality of one detector degrades. For example, we use a computationally efficient stereo correspondence algorithm that might cause noisy disparity maps for certain scenarios. In this case, however, a more reliable saliency detection algorithm such as the image saliency is preferred. To evaluate the quality of the saliency detection, we created modified versions of stereoscopic videos with the non-salient regions blurred. Having users rate the quality of these videos, the results show that most users do not detect the blurred regions and that the automatic saliency detection is very reliable.
  • Philipp Schaber and Wolfgang Effelsberg. A Component System for Spontaneous Virtual Networks. IEEE, Piscataway, NJ, 2013.
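The disparity-map report above combines color, motion, and depth saliency into one final map, weighting each detector by its reliability so a noisy detector (e.g., a degraded disparity map) is down-weighted. A minimal sketch of such a reliability-weighted combination is shown below; the flat-list map format, the function name, and the normalization scheme are assumptions for illustration, and the actual reliability measures are defined in the report.

```python
def combine_saliency(maps, reliabilities):
    """Combine per-detector saliency maps (flat lists of equal length)
    into one map, weighting each detector by its reliability score.
    Weights are normalized to sum to one."""
    total = sum(reliabilities)
    if total == 0:
        raise ValueError("at least one detector must have reliability > 0")
    weights = [r / total for r in reliabilities]
    combined = [0.0] * len(maps[0])
    for saliency_map, w in zip(maps, weights):
        for i, value in enumerate(saliency_map):
            combined[i] += w * value   # weighted per-pixel accumulation
    return combined
```

Because the weights renormalize dynamically, the scheme degrades gracefully: when one detector's reliability drops toward zero, the final map is driven almost entirely by the remaining detectors.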

2010