Benjamin Guthier

High Dynamic Range Video

Title: Dr. rer. nat.

Office: B 219

Phone: +49 621 181-2605

Email: guthier(at)informatik.uni-mannheim.de

Interests

  • image processing
  • pattern recognition
  • video analysis
  • HDR video

Projects

High Dynamic Range Video

Curriculum vitae

Benjamin Guthier received his PhD in computer science from the University of Mannheim, Germany, in 2012, where his main research focus was the real-time creation of high dynamic range video. Since 2012, he has been working as a post-doctoral researcher at the University of Mannheim, including a one-year stay at the University of Ottawa, Canada, as a guest researcher. His current research interests include image and video processing, smart cities, and the extraction of affect information from text.

Publications

2015

  • Benjamin Guthier, Rana Abaalkhail, Rajwa Alharthi and Abdulmotaleb El Saddik. The Affect-Aware City. IEEE, Piscataway, NJ, 2015.
  • Stephan Kopf, Daniel Schön, Benjamin Guthier, Roman Rietsche and Wolfgang Effelsberg. A Real-Time Feedback System for Presentation Skills. Association for the Advancement of Computing in Education (AACE), Waynesville, NC, 2015.
  • Philipp Schaber, Sally Dong, Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. Modeling Temporal Effects in Re-Captured Video. ACM, New York, NY, 2015.
    The re-capturing of video content poses significant challenges to algorithms in the fields of video forensics, watermarking, and near-duplicate detection. Using a camera to record a video from a display introduces a variety of artifacts, such as geometric distortions, luminance transformations, and temporal aliasing. A deep understanding of the causes and effects of such phenomena is required for their simulation, and for making the affected algorithms more robust. In this paper, we provide a detailed model of the temporal effects in re-captured video. Such effects typically result in the re-captured frames being a blend of the original video's source frames, where the specific blend ratios are difficult to predict. Our proposed parametric model captures the temporal artifacts introduced by interactions between the video renderer, display device, and camera. The validity of our model is demonstrated through experiments with real re-captured videos containing specially marked frames.
  • Jun Yang, Benjamin Guthier and Abdulmotaleb El Saddik. Estimating Two-Dimensional Blood Flow Velocities from Videos. IEEE, Piscataway, NJ, 2015.
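The temporal model of re-captured video described above treats each re-captured frame as a blend of consecutive source frames. A minimal sketch of that idea, assuming a linear display and camera response and representing frames as flat lists of intensities; the helper names and parameters are illustrative, not the paper's exact model:

```python
def blend_recaptured_frame(frame_a, frame_b, alpha):
    """Model a re-captured frame as a convex blend of two consecutive
    source frames. alpha is the fraction of the camera's exposure window
    during which frame_a was shown on the display."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return [alpha * a + (1.0 - alpha) * b for a, b in zip(frame_a, frame_b)]


def exposure_overlap(exp_start, exp_end, frame_a_end):
    """Fraction of the exposure interval [exp_start, exp_end] during which
    frame_a was still on screen (frame_b is displayed afterwards)."""
    duration = exp_end - exp_start
    overlap = min(frame_a_end, exp_end) - exp_start
    return max(0.0, min(1.0, overlap / duration))
```

For example, an exposure running from t = 0 to t = 10 ms across a frame change at t = 4 ms gives alpha = 0.4, i.e. a 40/60 blend of the two source frames.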

2013

  • Torben Dittrich, Stephan Kopf, Philipp Schaber, Benjamin Guthier and Wolfgang Effelsberg. Saliency Detection for Stereoscopic Video. ACM, New York, NY, 2013.
  • Benjamin Guthier, Kalun Ho, Stephan Kopf and Wolfgang Effelsberg. Determining Exposure Values from HDR Histograms for Smartphone Photography. ACM, New York, NY, 2013.
  • Benjamin Guthier, Johannes Kiess, Stephan Kopf and Wolfgang Effelsberg. Seam Carving for Stereoscopic Video. IEEE, Piscataway, NJ, 2013.
  • Benjamin Guthier, Johannes Kiess, Stephan Kopf and Wolfgang Effelsberg. Stereoscopic Seam Carving With Temporal Consistency. Technical Report 13-002, Mannheim, 2013.
    In this paper, we present a novel technique for seam carving of stereoscopic video. It removes seams of pixels in areas that are most likely not noticed by the viewer. When applying seam carving to stereoscopic video rather than monoscopic still images, new challenges arise. The detected seams must be consistent between the left and the right view, so that no depth information is destroyed. When removing seams in two consecutive frames, temporal consistency between the removed seams must be established to avoid flicker in the resulting video. By making certain assumptions, the available depth information can be harnessed to improve the quality achieved by seam carving. Assuming that closer pixels are more important, the algorithm can focus on removing distant pixels first. Furthermore, we assume that coherent pixels belonging to the same object have similar depth. By avoiding cuts through edges in the depth map, we can thus avoid cutting through object boundaries.
  • Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. Algorithms for a Real-Time HDR Video System. Pattern Recognition Letters, 34(9), 25-33, 2013.
  • Stephan Kopf, Benjamin Guthier, Philipp Schaber, Torben Dittrich and Wolfgang Effelsberg. Analysis of Disparity Maps for Detecting Saliency in Stereoscopic Video. Technical Report 13-003, Mannheim, 2013.
    We present a system for automatically detecting salient image regions in stereoscopic videos. This report extends our previous system and provides additional details about its implementation. Our proposed algorithm considers information based on three dimensions: salient colors in individual frames, salient information derived from camera and object motion, and depth saliency. These three components are dynamically combined into one final saliency map based on the reliability of the individual saliency detectors. Such a combination allows using more efficient algorithms even if the quality of one detector degrades. For example, we use a computationally efficient stereo correspondence algorithm that might cause noisy disparity maps for certain scenarios. In this case, however, a more reliable saliency detection algorithm such as the image saliency is preferred. To evaluate the quality of the saliency detection, we created modified versions of stereoscopic videos with the non-salient regions blurred. We had users rate the quality of these videos; the results show that most users do not detect the blurred regions and that the automatic saliency detection is very reliable.
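At the core of the seam carving papers above is a dynamic-programming search for the lowest-energy connected seam. A minimal monoscopic sketch of that step, assuming the energy map is precomputed (e.g. a gradient magnitude, optionally biased by depth); the stereoscopic and temporal consistency constraints from the papers are omitted:

```python
def min_vertical_seam(energy):
    """Return the column index of the minimum-energy 8-connected vertical
    seam for each row, via dynamic programming over cumulative costs."""
    rows, cols = len(energy), len(energy[0])
    cost = [list(energy[0])]
    for r in range(1, rows):
        prev = cost[-1]
        cost.append([
            # A seam pixel may come from the upper-left, upper, or upper-right.
            energy[r][c] + min(prev[max(c - 1, 0):min(c + 2, cols)])
            for c in range(cols)
        ])
    # Backtrack from the cheapest bottom-row cell upwards.
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        candidates = range(max(c - 1, 0), min(c + 2, cols))
        seam.append(min(candidates, key=lambda cc: cost[r][cc]))
    seam.reverse()
    return seam
```

Removing the returned pixel in each row shrinks the frame by one column; for stereoscopic video, the papers additionally require the left and right seams to be consistent and the seams of consecutive frames to vary smoothly.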

2012

  • Benjamin Guthier. Real-Time Algorithms for High Dynamic Range Video. Mannheim, Germany, 2012.
    A recurring problem in capturing video is the scene having a range of brightness values that exceeds the capabilities of the capturing device. An example would be a video camera in a bright outside area, directed at the entrance of a building. Because of the potentially big brightness difference, it may not be possible to capture details of the inside of the building and the outside simultaneously using just one shutter speed setting. This results in under- and overexposed pixels in the video footage. The approach we follow in this thesis to overcome this problem is temporal exposure bracketing, i.e., using a set of images captured in quick sequence at different shutter settings. Each image then captures one facet of the scene's brightness range. When fused together, a high dynamic range (HDR) video frame is created that reveals details in dark and bright regions simultaneously.
    The process of creating a frame in an HDR video can be thought of as a pipeline where the output of each step is the input to the subsequent one. It begins by capturing a set of regular images using varying shutter speeds. Next, the images are aligned with respect to each other to compensate for camera and scene motion during capture. The aligned images are then merged together to create a single HDR frame containing accurate brightness values of the entire scene. As a last step, the HDR frame is tone mapped in order to be displayable on a regular screen with a lower dynamic range. This thesis covers algorithms for these steps that allow the creation of HDR video in real-time. When creating videos instead of still images, the focus lies on high capturing and processing speed and on assuring temporal consistency between the video frames. In order to achieve this goal, we take advantage of the knowledge gained from the processing of previous frames in the video. This work addresses the following aspects in particular.
    The image size parameters for the set of base images are chosen such that as little image data as possible is captured. We make use of the fact that it is not always necessary to capture full size images when only small portions of the scene require HDR. Avoiding redundancy in the image material is an obvious approach to reducing the overall time taken to generate a frame. With the aid of the previous frames, we calculate brightness statistics of the scene. The exposure values are chosen such that frequently occurring brightness values are well-exposed in at least one of the images in the sequence.
    The base images from which the HDR frame is created are captured in quick succession. The effects of intermediate camera motion are thus less intense than in the still image case, and a comparably simpler camera motion model can be used. At the same time, however, there is much less time available to estimate motion. For this reason, we use a fast heuristic that makes use of the motion information obtained in previous frames. It is robust to the large brightness difference between the images of an exposure sequence.
    The range of luminance values of an HDR frame must be tone mapped to the displayable range of the output device. Most available tone mapping operators are designed for still images and scale the dynamic range of each frame independently. In situations where the scene's brightness statistics change quickly, these operators produce visible image flicker. We have developed an algorithm that detects such situations in an HDR video. Based on this detection, a temporal stability criterion for the tone mapping parameters then prevents image flicker.
    All methods for capture, creation and display of HDR video introduced in this work have been fully implemented, tested and integrated into a running HDR video system. The algorithms were analyzed for parallelizability and, if applicable, adjusted and implemented on a high-performance graphics chip.
  • Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. A Real-Time System for Capturing HDR Videos. ACM, New York, NY, 2012.
  • Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. Optimal Shutter Speed Sequences for Real-time HDR Video. IEEE, Piscataway, NJ, 2012.
  • Benjamin Guthier, Stephan Kopf, Matthias Wichtlhuber and Wolfgang Effelsberg. Parallel Algorithms for Histogram-based Image Registration. IEEE, Piscataway, NJ, 2012.
  • Johannes Kiess, Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. Seam Crop: Changing the Size and Aspect Ratio of Videos. ACM, New York, NY, 2012.
  • Johannes Kiess, Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. SeamCrop for Image Retargeting. Proceedings of SPIE. SPIE, Bellingham, Wash., 2012.
  • Tonio Triebel, Max Lehn, Robert Rehner, Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. Generation of Synthetic Workloads for Multiplayer Online Gaming Benchmarks. IEEE, Piscataway, NJ, 2012.
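The fusion step of the HDR pipeline described in the thesis abstract above can be sketched as a weighted average of per-exposure radiance estimates. This sketch assumes a linear camera response and a simple hat-shaped well-exposedness weight; the actual system's weighting and calibration are more involved:

```python
def merge_hdr(exposures, shutter_times):
    """Merge bracketed 8-bit LDR exposures (flat pixel lists) into one HDR
    radiance estimate per pixel. Each exposure contributes pixel/shutter_time,
    weighted by how well-exposed the pixel is."""
    def weight(p):
        # Hat function: 0 at the extremes (under-/overexposed), 1 at mid-gray.
        return min(p, 255 - p) / 127.5

    hdr = []
    for i in range(len(exposures[0])):
        num = den = 0.0
        for img, t in zip(exposures, shutter_times):
            w = weight(img[i])
            num += w * (img[i] / t)
            den += w
        # Fall back to the longest exposure if all weights vanish.
        hdr.append(num / den if den > 0 else exposures[-1][i] / shutter_times[-1])
    return hdr
```

A pixel measured as 100 at shutter 1.0 and 200 at shutter 2.0 yields a radiance estimate of 100 from both exposures, so the merged value is 100 regardless of the weights.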

2011

  • Benjamin Guthier, Stephan Kopf, Marc Eble and Wolfgang Effelsberg. Flicker Reduction in Tone Mapped High Dynamic Range Video. Proceedings of SPIE. SPIE, Bellingham, Wash., 2011.
  • Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. Optimal Shutter Speed Sequences for Real-Time HDR Video. 2011.
    A technique to create High Dynamic Range (HDR) video frames is to capture Low Dynamic Range (LDR) images at varying shutter speeds. These are then merged into a single image covering the entire brightness range of the scene. While shutter speeds are often chosen to vary by a constant factor, we propose an adaptive approach. The scene's histogram, together with functions judging the contribution of an LDR exposure to the HDR result, is used to compute a sequence of shutter speeds. This sequence allows for the estimation of the scene's radiance map with a high degree of accuracy. We show that, in comparison to the traditional approach, our algorithm achieves a higher quality of the HDR image for the same number of captured LDR exposures. Our algorithm is suited for creating HDR videos of scenes with varying brightness conditions in real time, from which applications like video surveillance benefit.
  • Stephan Kopf, Benjamin Guthier, Dirk Farin and Jungong Han. Analysis and Retargeting of Ball Sports Video. IEEE, Piscataway, NJ, 2011.
  • Stephan Kopf, Thomas Haenselmann, Johannes Kiess, Benjamin Guthier and Wolfgang Effelsberg. Algorithms for Video Retargeting. Multimedia Tools and Applications, 51(19), 819-861, 2011.
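The adaptive shutter-speed selection described in the abstract above can be sketched as a greedy loop over the scene histogram: repeatedly expose the most frequent not-yet-covered radiance bin to mid-gray, then mark every bin that this shutter renders well-exposed. The mid-gray target and the well-exposedness interval below are assumptions, not the paper's exact contribution functions:

```python
def shutter_sequence(histogram, bin_radiances, n_exposures, target=118.0):
    """Greedily pick shutter speeds from a scene histogram. histogram[i]
    counts pixels whose radiance falls into bin i; bin_radiances[i] is that
    bin's representative radiance. Returns at most n_exposures shutters."""
    remaining = list(histogram)
    shutters = []
    for _ in range(n_exposures):
        if max(remaining) == 0:
            break  # every brightness value is already well-exposed somewhere
        peak = max(range(len(remaining)), key=lambda i: remaining[i])
        t = target / bin_radiances[peak]  # shutter mapping the peak to mid-gray
        shutters.append(t)
        for i, radiance in enumerate(bin_radiances):
            pixel = radiance * t  # predicted value under a linear response
            if 20.0 <= pixel <= 235.0:  # assumed well-exposedness interval
                remaining[i] = 0
    return shutters
```

Unlike a fixed-ratio bracketing sequence, the shutters adapt to the histogram: a bimodal scene (e.g. a dark interior plus a bright window) gets one exposure per mode instead of several wasted on empty brightness ranges.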

2007

  • Tonio Triebel, Benjamin Guthier and Wolfgang Effelsberg. Skype4Games. ACM Conference Proceedings Series. ACM Press, New York, NY, 2007.