Benjamin Guthier
High Dynamic Range Video
Title: Dr. rer. nat.
Office: B 219
Phone: +49 621 181-2605
Interests
- image processing
- pattern recognition
- video analysis
- HDR video
Projects
High Dynamic Range Video
Curriculum vitae
Benjamin Guthier received his PhD in computer science from the University of Mannheim, Germany, in 2012, where his main research focus was the creation of high dynamic range video in real time. Since 2012, he has been working as a postdoctoral researcher at the University of Mannheim, including a one-year stay as a guest researcher at the University of Ottawa, Canada. His current research interests include image and video processing, smart cities, and the extraction of information on affect from text.
Publications
2018
- Rana Abaalkhail, Benjamin Guthier, Rajwa Alharthi and Abdulmotaleb El Saddik. 2018. Survey on ontologies for affective states and their influences. Semantic Web, 9, 18, 441-458.
2017
- Rajwa Alharthi, Benjamin Guthier, Camille Guertin and Abdulmotaleb El Saddik. 2017. A dataset for psychological human needs detection from social networks. IEEE Access, 5, 16, 9109-9117.
- Stephan Kopf, Mariia Zrianina, Benjamin Guthier, Lydia Weiland, Philipp Schaber, Simone Paolo Ponzetto and Wolfgang Effelsberg. 2017. Enhancing bag of visual words with color information for iconic image classification. CSREA Press, Athens, GA.
2016
- Benjamin Guthier, Ralf Dörner and Hector P. Martinez. 2016. Affective computing in games. In: Entertainment Computing and Serious Games: International GI-Dagstuhl Seminar 15283, Dagstuhl Castle, Germany, July 5-10, 2015, Revised Selected Papers. Springer, Cham, 402-441.
- Florian Mehm and Benjamin Guthier. 2016. Content and content production. In: Serious Games: Foundations, Concepts and Practice. Springer, Cham, 107-125.
- Daniel Schön, Stephan Kopf, Melanie Klinger and Benjamin Guthier. 2016. New scenarios for audience response systems in university lectures. Curran, Red Hook, NY.
2015
- Benjamin Guthier, Rana Abaalkhail, Rajwa Alharthi and Abdulmotaleb El Saddik. 2015. The Affect-Aware City. IEEE, Piscataway, NJ.
- Stephan Kopf, Daniel Schön, Benjamin Guthier, Roman Rietsche and Wolfgang Effelsberg. 2015. A real-time feedback system for presentation skills. Association for the Advancement of Computing in Education (AACE), Waynesville, NC.
- Philipp Schaber, Sally Dong, Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. 2015. Modeling temporal effects in re-captured video. ACM, New York, NY.
The re-capturing of video content poses significant challenges to algorithms in the fields of video forensics, watermarking, and near-duplicate detection. Using a camera to record a video from a display introduces a variety of artifacts, such as geometric distortions, luminance transformations, and temporal aliasing. A deep understanding of the causes and effects of such phenomena is required for their simulation, and for making the affected algorithms more robust. In this paper, we provide a detailed model of the temporal effects in re-captured video. Such effects typically result in the re-captured frames being a blend of the original video's source frames, where the specific blend ratios are difficult to predict. Our proposed parametric model captures the temporal artifacts introduced by interactions between the video renderer, display device, and camera. The validity of our model is demonstrated through experiments with real re-captured videos containing specially marked frames. (A small illustrative sketch of this frame-blending effect follows the 2015 list below.)
- Jun Yang, Benjamin Guthier and Abdulmotaleb El Saddik. 2015. Estimating two-dimensional blood flow velocities from videos. IEEE, Piscataway, NJ.
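To give a concrete flavor of the frame-blending effect modeled in the re-captured video paper above, here is a minimal Python sketch. It is not the paper's parametric model; the uniform integration window, the linear blending, and all names are assumptions made for illustration. It estimates how much each source frame of a video contributes to one camera frame when the camera's exposure window is not aligned with the display's frame changes.

    import numpy as np

    def blend_ratios(video_fps, camera_exposure, exposure_start):
        """Approximate contribution of each source frame to one re-captured frame.

        Assumes each video frame is shown for 1/video_fps seconds and that the camera
        integrates light uniformly over [exposure_start, exposure_start + camera_exposure].
        Returns {source frame index: blend ratio}; the ratios sum to 1.
        """
        t0, t1 = exposure_start, exposure_start + camera_exposure
        frame_duration = 1.0 / video_fps
        first = int(np.floor(t0 / frame_duration))
        last = int(np.floor((t1 - 1e-12) / frame_duration))
        ratios = {}
        for k in range(first, last + 1):
            # Overlap of the camera's integration window with frame k's display interval.
            overlap = min(t1, (k + 1) * frame_duration) - max(t0, k * frame_duration)
            ratios[k] = max(overlap, 0.0) / camera_exposure
        return ratios

    # Example: 25 fps content re-captured with a 1/50 s exposure starting mid-frame.
    print(blend_ratios(video_fps=25, camera_exposure=1 / 50, exposure_start=0.035))

In this toy setting the re-captured frame is simply a weighted average of source frames 0 and 1 with ratios 0.25 and 0.75; the model in the paper additionally accounts for the behavior of the video renderer and the display device.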
2014
- Benjamin Guthier, Rajwa Alharthi, Rana Abaalkhail and Abdulmotaleb El Saddik. 2014. Detection and Visualization of Emotions in an Affect-Aware City. ACM, New York, NY.
- Benjamin Guthier, Stephan Kopf, Matthias Wichtlhuber and Wolfgang Effelsberg. 2014. Parallel Implementation of a Real-Time High Dynamic Range Video System. Integrated Computer-Aided Engineering: ICAE, 21, 63, 189-202.
- Johannes Kiess, Daniel Gritzner, Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. 2014. GPU Video Retargeting with Parallelized SeamCrop. ACM, New York, NY.
- Stephan Kopf, Benjamin Guthier, Christopher Hipp, Johannes Kiess and Wolfgang Effelsberg. 2014. Warping-based video retargeting for stereoscopic video. IEEE, Piscataway, NJ.
- Hao Kuang, Benjamin Guthier, Mukesh Saini, Dwarikanath Mahapatra and Abdulmotaleb El Saddik. 2014. A Real-Time Smart Assistant for Video Surveillance Through Handheld Devices. ACM, New York, NY.
- Max Lehn, Tonio Triebel, Robert Rehner, Benjamin Guthier, Stephan Kopf, Alejandro Buchmann and Wolfgang Effelsberg. 2014. On Synthetic Workloads for Multiplayer Online Games: A Methodology for Generating Representative Shooter Game Workloads. Multimedia Systems, 20, 53, 609-620.
2013
- Torben Dittrich, Stephan Kopf, Philipp Schaber, Benjamin Guthier and Wolfgang Effelsberg. 2013. Saliency Detection for Stereoscopic Video. ACM, New York, NY.
- Benjamin Guthier, Kalun Ho, Stephan Kopf and Wolfgang Effelsberg. 2013. Determining Exposure Values from HDR Histograms for Smartphone Photography. ACM, New York, NY.
- Benjamin Guthier, Johannes Kiess, Stephan Kopf and Wolfgang Effelsberg. 2013. Seam Carving for Stereoscopic Video. IEEE, Piscataway, NJ.
- Benjamin Guthier, Johannes Kiess, Stephan Kopf and Wolfgang Effelsberg. 2013. Stereoscopic Seam Carving With Temporal Consistency. Technical Report 13-002, Mannheim.
In this paper, we present a novel technique for seam carving of stereoscopic video. It removes seams of pixels in areas that are most likely not noticed by the viewer. When applying seam carving to stereoscopic video rather than monoscopic still images, new challenges arise. The detected seams must be consistent between the left and the right view, so that no depth information is destroyed. When removing seams in two consecutive frames, temporal consistency between the removed seams must be established to avoid flicker in the resulting video. By making certain assumptions, the available depth information can be harnessed to improve the quality achieved by seam carving. Assuming that closer pixels are more important, the algorithm can focus on removing distant pixels first. Furthermore, we assume that coherent pixels belonging to the same object have similar depth. By avoiding cutting through edges in the depth map, we can thus avoid cutting through object boundaries. (A minimal sketch of a depth-weighted seam energy appears after the 2013 list below.)
- Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. 2013. Algorithms for a Real-Time HDR Video System. Pattern Recognition Letters, 34, 9, 25-33.
- Stephan Kopf, Benjamin Guthier, Philipp Schaber, Torben Dittrich and Wolfgang Effelsberg. 2013. Analysis of Disparity Maps for Detecting Saliency in Stereoscopic Video. Technical Report 13-003, Mannheim.
We present a system for automatically detecting salient image regions in stereoscopic videos. This report extends our previous system and provides additional details about its implementation. Our proposed algorithm considers information based on three dimensions: salient colors in individual frames, salient information derived from camera and object motion, and depth saliency. These three components are dynamically combined into one final saliency map based on the reliability of the individual saliency detectors. Such a combination allows using more efficient algorithms even if the quality of one detector degrades. For example, we use a computationally efficient stereo correspondence algorithm that might cause noisy disparity maps for certain scenarios. In this case, however, a more reliable saliency detection algorithm such as the image saliency is preferred. To evaluate the quality of the saliency detection, we created modified versions of stereoscopic videos with the non-salient regions blurred. Having users rate the quality of these videos, the results show that most users do not detect the blurred regions and that the automatic saliency detection is very reliable. (A small sketch of such a reliability-weighted combination of saliency maps also follows the 2013 list below.)
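The depth-aware part of the stereoscopic seam carving report above can be illustrated with a minimal energy-function sketch. This is a rough illustration under assumptions, not the report's exact formulation; the weights alpha and beta, the gradient-magnitude image term, and the normalization are placeholders. Closer pixels and pixels on depth edges get higher energy, so seams prefer distant, depth-homogeneous regions.

    import numpy as np

    def seam_energy(gray, disparity, alpha=1.0, beta=2.0):
        """Per-pixel seam carving energy biased by depth information.

        gray:      2D grayscale frame
        disparity: 2D disparity map; larger values mean closer to the camera
        """
        gy, gx = np.gradient(gray.astype(np.float64))
        dy, dx = np.gradient(disparity.astype(np.float64))
        image_term = np.hypot(gx, gy)                                  # ordinary image gradient energy
        depth_term = (disparity - disparity.min()) / (np.ptp(disparity) + 1e-6)
        edge_term = np.hypot(dx, dy)                                   # penalize cutting through depth edges
        return image_term + alpha * depth_term + beta * edge_term

A standard dynamic-programming seam search over this energy, applied identically to the left and right view, would then keep the removed seams consistent between the two views.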
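Likewise, the reliability-weighted combination of saliency maps described in the disparity-map report can be sketched as a simple per-frame weighted average; the names and the linear weighting are assumptions for illustration, and the report's actual combination rule may differ.

    import numpy as np

    def combine_saliency(maps, reliabilities):
        """Fuse saliency maps (same shape, values in [0, 1]) into one map.

        maps:          list of 2D arrays, e.g. color, motion and depth saliency
        reliabilities: one non-negative score per map; a detector that is currently
                       unreliable (e.g. a noisy disparity map) gets a low score
        """
        weights = np.asarray(reliabilities, dtype=np.float64)
        weights = weights / (weights.sum() + 1e-12)
        combined = np.zeros_like(maps[0], dtype=np.float64)
        for w, m in zip(weights, maps):
            combined += w * m
        return np.clip(combined, 0.0, 1.0)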
2012
- Benjamin Guthier. 2012. Real-Time Algorithms for High Dynamic Range Video. Mannheim, Germany.
A recurring problem in capturing video is the scene having a range of brightness values that exceeds the capabilities of the capturing device. An example would be a video camera in a bright outside area, directed at the entrance of a building. Because of the potentially large brightness difference, it may not be possible to capture details of the inside of the building and the outside simultaneously using just one shutter speed setting. This results in under- and overexposed pixels in the video footage.
The approach we follow in this thesis to overcome this problem is temporal exposure bracketing, i.e., using a set of images captured in quick sequence at different shutter settings. Each image then captures one facet of the scene's brightness range. When fused together, a high dynamic range (HDR) video frame is created that reveals details in dark and bright regions simultaneously.
The process of creating a frame in an HDR video can be thought of as a pipeline where the output of each step is the input to the subsequent one. It begins by capturing a set of regular images using varying shutter speeds. Next, the images are aligned with respect to each other to compensate for camera and scene motion during capture. The aligned images are then merged together to create a single HDR frame containing accurate brightness values of the entire scene. As a last step, the HDR frame is tone mapped in order to be displayable on a regular screen with a lower dynamic range.
This thesis covers algorithms for these steps that allow the creation of HDR video in real time. When creating videos instead of still images, the focus lies on high capturing and processing speed and on assuring temporal consistency between the video frames. In order to achieve this goal, we take advantage of the knowledge gained from the processing of previous frames in the video.
This work addresses the following aspects in particular. The image size parameters for the set of base images are chosen such that as little image data as possible is captured. We make use of the fact that it is not always necessary to capture full size images when only small portions of the scene require HDR. Avoiding redundancy in the image material is an obvious approach to reducing the overall time taken to generate a frame. With the aid of the previous frames, we calculate brightness statistics of the scene. The exposure values are chosen such that frequently occurring brightness values are well-exposed in at least one of the images in the sequence.
The base images from which the HDR frame is created are captured in quick succession. The effects of intermediate camera motion are thus less intense than in the still image case, and a comparably simpler camera motion model can be used. At the same time, however, there is much less time available to estimate motion. For this reason, we use a fast heuristic that makes use of the motion information obtained in previous frames. It is robust to the large brightness difference between the images of an exposure sequence.
The range of luminance values of an HDR frame must be tone mapped to the displayable range of the output device. Most available tone mapping operators are designed for still images and scale the dynamic range of each frame independently. In situations where the scene's brightness statistics change quickly, these operators produce visible image flicker. We have developed an algorithm that detects such situations in an HDR video. Based on this detection, a temporal stability criterion for the tone mapping parameters then prevents image flicker.
All methods for capture, creation and display of HDR video introduced in this work have been fully implemented, tested and integrated into a running HDR video system. The algorithms were analyzed for parallelizability and, if applicable, adjusted and implemented on a high-performance graphics chip. (A compact sketch of the exposure-merging step of such a pipeline follows the 2012 list below.)
- Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. 2012. A Real-Time System for Capturing HDR Videos. ACM, New York, NY.
- Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. 2012. Optimal Shutter Speed Sequences for Real-time HDR Video. IEEE, Piscataway, NJ.
- Benjamin Guthier, Stephan Kopf, Matthias Wichtlhuber and Wolfgang Effelsberg. 2012. Parallel Algorithms for Histogram-based Image Registration. IEEE, Piscataway, NJ.
- Johannes Kiess, Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. 2012. Seam Crop: Changing the Size and Aspect Ratio of Videos. ACM, New York, NY.
- Johannes Kiess, Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. 2012. SeamCrop for image retargeting. Proceedings of SPIE. SPIE, Bellingham, Wash.
- Tonio Triebel, Max Lehn, Robert Rehner, Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. 2012. Generation of Synthetic Workloads for Multiplayer Online Gaming Benchmarks. IEEE, Piscataway, NJ.
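As a companion to the pipeline described in the dissertation abstract above, here is a compact sketch of the exposure-merging step. It is a simplified illustration only: it assumes a linear camera response and a hat-shaped weighting function, and it omits the alignment and tone mapping stages as well as all real-time aspects covered in the thesis.

    import numpy as np

    def merge_exposures(images, exposure_times):
        """Merge differently exposed LDR images (uint8, same shape) into an HDR radiance map."""
        acc = None
        weight_sum = None
        for img, t in zip(images, exposure_times):
            z = img.astype(np.float64) / 255.0
            w = 1.0 - np.abs(2.0 * z - 1.0)   # hat weight: under-/overexposed pixels count little
            radiance = z / t                  # linear camera response assumption
            if acc is None:
                acc = np.zeros_like(radiance)
                weight_sum = np.zeros_like(radiance)
            acc += w * radiance
            weight_sum += w
        return acc / (weight_sum + 1e-8)

In the real-time system described in the thesis, the exposure times themselves are derived from the previous frame's brightness statistics, and the merged frame is subsequently tone mapped under a temporal stability criterion to avoid flicker.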
2011
- Benjamin Guthier, Stephan Kopf, Marc Eble and Wolfgang Effelsberg. 2011. Flicker Reduction in Tone Mapped High Dynamic Range Video. Proceedings of SPIE. SPIE, Bellingham, Wash.
- Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. 2011. Optimal Shutter Speed Sequences for Real-Time HDR Video.
A technique to create High Dynamic Range (HDR) video frames is to capture Low Dynamic Range (LDR) images at varying shutter speeds. They are then merged into a single image covering the entire brightness range of the scene. While shutter speeds are often chosen to vary by a constant factor, we propose an adaptive approach. The scene's histogram together with functions judging the contribution of an LDR exposure to the HDR result are used to compute a sequence of shutter speeds. This sequence allows for the estimation of the scene's radiance map with a high degree of accuracy. We show that, in comparison to the traditional approach, our algorithm achieves a higher quality of the HDR image for the same number of captured LDR exposures. Our algorithm is suited for creating HDR videos of scenes with varying brightness conditions in real time, which applications like video surveillance benefit from. (A brief sketch of histogram-driven shutter speed selection follows the 2011 list below.)
- Stephan Kopf, Benjamin Guthier, Dirk Farin and Jungong Han. 2011. Analysis and Retargeting of Ball Sports Video. IEEE, Piscataway, NJ.
- Stephan Kopf, Thomas Haenselmann, Johannes Kiess, Benjamin Guthier and Wolfgang Effelsberg. 2011. Algorithms for video retargeting. Multimedia Tools and Applications, 51, 19, 819-861.
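A toy version of the histogram-driven shutter speed selection from the 2011 paper above might look as follows; the greedy coverage loop, the hat-shaped contribution function, and all parameter names are assumptions made for illustration, and the paper's actual optimization differs in detail. Shutter speeds are picked so that frequently occurring scene luminance values are well exposed in at least one LDR image.

    import numpy as np

    def choose_shutter_speeds(hist, bin_log_lum, candidates, n_exposures):
        """Greedily pick shutter speeds that cover the scene's luminance histogram.

        hist:        histogram of log scene luminance (e.g. from the previous HDR frame)
        bin_log_lum: log luminance at the center of each histogram bin
        candidates:  candidate shutter speeds in seconds
        """
        def contribution(shutter):
            # How well each luminance bin would be exposed at this shutter speed:
            # a hat function peaking where the (linear) sensor records mid-gray.
            z = np.clip(np.exp(bin_log_lum) * shutter, 0.0, 1.0)
            return 1.0 - np.abs(2.0 * z - 1.0)

        covered = np.zeros_like(hist, dtype=np.float64)
        chosen = []
        for _ in range(n_exposures):
            # Pick the candidate that adds the most coverage of poorly exposed, frequent bins.
            gains = [np.sum(np.maximum(contribution(s) - covered, 0.0) * hist)
                     for s in candidates]
            best = candidates[int(np.argmax(gains))]
            chosen.append(best)
            covered = np.maximum(covered, contribution(best))
        return chosen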
2010
- Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. 2010. Histogram-based Image Registration for Real-time High Dynamic Range Videos. IEEE, Piscataway, NJ.
- Johannes Kiess, Stephan Kopf, Benjamin Guthier and Wolfgang Effelsberg. 2010. Seam Carving with Improved Edge Preservation. Proceedings of SPIE. SPIE, Bellingham, Wash.
- Tonio Triebel, Sascha Schnaufer, Benjamin Guthier, Hendrik Lemelson, Gregor Schiele and Wolfgang Effelsberg. 2010. REWARD: A Real World Achievement and Record Database. IEEE, Piscataway, NJ.
2009
- Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. 2009. High-Resolution Inline Video-AOI for Printed Circuit Assemblies. Proceedings of SPIE. SPIE, Bellingham, Wash.
- Stephan Kopf, Benjamin Guthier, Hendrik Lemelson and Wolfgang Effelsberg. 2009. Adaptation of web pages and images for mobile applications. Proceedings of SPIE. SPIE, Bellingham, Wash.
- Tonio Triebel, Benjamin Guthier, Thomas Plotkowiak and Wolfgang Effelsberg. 2009. Peer-to-peer voice communications for massively multiplayer online games. IEEE, Piscataway, NJ.
2008
- Benjamin Guthier, Stephan Kopf and Wolfgang Effelsberg. 2008. Capturing High Dynamic Range Images with Partial Re-exposures. IEEE Service Center, Piscataway, NJ.
2007
- Tonio Triebel, Benjamin Guthier and Wolfgang Effelsberg. 2007. Skype4Games. ACM Conference Proceedings Series. ACM Press, New York, NY.