Jonas Unger is an associate professor and head of the Computer Graphics and Image Processing Group and the Visual Computing Laboratory at the Division for Media and Information Technology (MIT), Department of Science and Technology (ITN), Linköping University.
Contact (full address):
Office: Kopparhammaren G520
Phone: +46 11 363436
2016-05-23: SIGRAD 2016 - We are giving two research overview presentations on Real-time tone mapping and compressive image reconstruction.
2016-05-17: Our paper Time-offset Conversations on a Life-Sized Automultiscopic Projector Array has been accepted for publication at the CVPR workshop Computational Cameras and Displays 2016.
2016-04-21: Norrköpings fond för forskning och utveckling will fund our new project Digitala verktyg för en levande historia (Digital Tools for a Living History), which will run 2016–2017.
2016-04-21: We have two talks on BRDF editing and HDR-video encoding accepted to SIGGRAPH 2016.
2016-04-14: Our new book on HDR-video is out. We have contributed with two chapters.

Recent activities:

High Dynamic Range Video - From acquisition to display and applications

(The new book is out)

Differential appearance editing for measured BRDFs

(SIGGRAPH 2016 Talk)

Luma HDRv: an open source HDR video codec optimized by large-scale testing

(SIGGRAPH 2016 Talk)

The HDR-Video Pipeline

(Eurographics 2016)

Real-time noise-aware tone mapping

(SIGGRAPH Asia 2015)

Pseudo-marginal metropolis light transport

(SIGGRAPH Asia 2015)

Adaptive dualISO HDR-reconstruction


Compressive image reconstruction in reduced union of sub-spaces

(Eurographics 2015)

Download open source code and open data from our projects here:

(Perceptual encoding of HDR video - open source)

(Capturing reality for computer graphics applications, SIGGRAPH Asia '15 course material)

(Depends: Workflow Management Software for Visual Effects Production)

(LiU HDRv Repository - HDR-video and Image based lighting)


My research interests lie at the intersection of computer graphics/vision and image processing, with applications in e.g. high dynamic range (HDR) imaging, light field imaging, tone mapping, appearance capture, material modeling, photo-realistic image synthesis, and medical/scientific visualization of volumetric 3D data.

Scene capture for photo-realistic image synthesis and augmented reality

In this project, we are developing new algorithms, interactive methods, and a systems pipeline for computer graphics applications in product visualization and augmented reality, where it is a requirement that the rendered images exhibit the quality and accuracy needed to make them comparable to a photograph of the same scene. The goal in these applications is to render virtual objects so that they can be seamlessly placed into background photographs or digitized models of real scenes. Our tools and algorithms make it possible to build digital models of real scenes where geometry, lighting, and material properties are modeled based on accurate measurements. This enables the appearance of virtual objects to be simulated so that they can be placed into real scenes and appear as if they were actually there. We call such a model a Virtual Photo Set (VPS).
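
The final compositing step of placing a rendered object into a real photograph is commonly done with differential rendering. As a minimal sketch (illustrative only, not the project's actual pipeline), the idea is: render the local scene with and without the virtual object, add the difference (shadows, interreflections) to the background photograph, and take object pixels directly from the render:

```python
import numpy as np

def differential_composite(background, render_with, render_without, mask):
    """Composite a rendered virtual object into a background photograph.

    Classic differential rendering: local-scene pixels receive the
    difference between the renders with and without the object (shadows,
    interreflections); pixels covered by the object come straight from
    the render. All images are float arrays in linear radiance with the
    same shape; `mask` is True where the virtual object is visible.
    """
    delta = render_with - render_without             # change in light transport
    composite = background + delta                   # add shadows/interreflections
    composite = np.where(mask, render_with, composite)  # object pixels
    return np.clip(composite, 0.0, None)             # no negative radiance
```

Here all arrays are assumed to be in the same linear radiometric space, which is exactly what calibrated HDR capture of the real scene provides.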

Related publications:

Capturing reality for computer graphics applications

(SIGGRAPH Asia 2015 courses)

Photorealistic rendering of mixed reality scenes

(Eurographics 2015 STAR)

Depends: Workflow Management Software for Visual Effects Production

(DigiPro 2014)

Spatially Varying Image Based Lighting using HDR-video

(C&G 2013)

Temporally and Spatially Varying Image Based Lighting using HDR-video

(EUSIPCO 2013)

Next Generation Image Based Lighting

(SIGGRAPH 2011 Talk)

Free form incident light fields

(EGSR 2008)

Spatially Varying Image Based Lighting by Light Probe Sequences, Capture, Processing and Rendering

(Visual Computer 2007)

Capturing and rendering with incident light fields

(EGSR 2003)

LiU HDRv Repository - HDR-video and Image based lighting


Image reconstruction and HDR-video capture

In this project, we are developing new cameras and algorithms for high dynamic range (HDR) video capture. The main theoretical result is a new framework for statistically based image reconstruction using input from multiple image sensors with different characteristics, e.g. different resolution, filters, or spectral response. Based on our framework, we have developed a number of algorithms for different cameras and sensor setups. One of the key contributions is that our algorithms perform the different steps in the "traditional" imaging pipeline (demosaicing, resampling, denoising, and image reconstruction) in a single, unified step instead of as a sequence of operations. This makes them both more accurate and easier to parallelize. The output pixels are reconstructed as a Maximum Likelihood estimate taking into account the heterogeneous sensor noise using adaptive filters. Based on our algorithms, we have developed multi-sensor HDR-video cameras, sequential-exposure HDR-video cameras, and methods for HDR-video capture using off-the-shelf consumer cameras.
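
The core of a Maximum Likelihood reconstruction from heterogeneous readouts can be illustrated per pixel: each readout gives a radiance estimate whose variance depends on its exposure time, gain, and noise, and the ML estimate under a Gaussian approximation is the inverse-variance weighted mean. The sketch below is a simplified, illustrative noise model (names and constants are assumptions), not the full framework with adaptive spatial filters:

```python
import numpy as np

def hdr_ml_reconstruct(readouts, times, gains, read_noise, full_well=1.0):
    """Per-pixel Maximum Likelihood HDR estimate from multiple sensor
    readouts with heterogeneous noise (simplified illustration).

    readouts   : list of arrays, raw values normalized to [0, full_well]
    times      : per-readout exposure times
    gains      : per-readout gains
    read_noise : per-readout read-noise standard deviations
    """
    num = np.zeros_like(readouts[0], dtype=np.float64)
    den = np.zeros_like(readouts[0], dtype=np.float64)
    for y, t, g, s in zip(readouts, times, gains, read_noise):
        radiance = y / (t * g)                       # per-readout radiance estimate
        # shot (Poisson) + read (Gaussian) noise, propagated to radiance units
        var = (np.maximum(y, 1e-6) + s ** 2) / (t * g) ** 2
        valid = y < 0.995 * full_well                # reject saturated samples
        w = valid / var                              # inverse-variance weights
        num += w * radiance
        den += w
    return num / np.maximum(den, 1e-12)              # weighted ML estimate
```

Longer exposures automatically dominate in dark regions (lower relative noise) while saturated samples are excluded, which is the essence of the unified reconstruction step.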

Related publications:

Adaptive dualISO HDR-reconstruction


HDR reconstruction for alternating gain (ISO) sensor readout

(Eurographics 2014)

A Unified Framework for Multi-Sensor HDR Video Reconstruction

(Signal Processing : Image Communications 2014)

Unified HDR-reconstruction from raw CFA data

(ICCP 2013)

High Dynamic Range Video for Photometric Measurement of Illumination

(Electronic Imaging 2007)

An Optical System for Single-Image Environment Maps

(SIGGRAPH 2007 Poster)

A real-time light probe

(Eurographics '04 short paper)

CENIIT - High Dynamic Range Video with Applications

Tone mapping and HDR-video compression

An important problem in HDR imaging and video is to map the dynamic range of the HDR image/video to the (usually) much smaller dynamic range of the display device, taking into account the display characteristics (e.g. black level and peak luminance), the noise properties of the input video, and the ambient lighting in the viewing environment. While an HDR image captured in a high-contrast real scene often exhibits a dynamic range on the order of 5 to 8 log10 units, a conventional display system is limited to a dynamic range on the order of 2 to 4 log10 units. The mapping of pixel values from an HDR image or video sequence to the display system is called tone mapping, and is carried out using a tone mapping operator (TMO). In this project, we are developing new tone mapping algorithms and methods for HDR-video compression.
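
To illustrate the dynamic-range compression step itself, here is the standard global photographic (Reinhard-style) operator. This is a textbook TMO shown only as a baseline sketch; the noise-aware and display-adaptive operators developed in this project are considerably more involved:

```python
import numpy as np

def reinhard_tmo(luminance, key=0.18, white=None):
    """Global photographic tone mapping of HDR luminance to [0, 1].

    key   : target mid-grey of the mapped image
    white : smallest luminance mapped to pure white (defaults to the max)
    """
    eps = 1e-6
    # log-average luminance characterizes overall scene brightness
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    l = key * luminance / log_avg                    # scale scene to mid-grey
    if white is None:
        white = l.max()
    # compressive curve: linear in the shadows, rolls off highlights
    return l * (1.0 + l / white ** 2) / (1.0 + l)
```

A scene spanning 5+ log10 units of luminance is mapped monotonically into the 2-or-so log10 units a conventional display can reproduce.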

Related publications:

Real-time noise-aware tone mapping

(SIGGRAPH Asia 2015)

Perceptually based parameter adjustments for video processing operations

(SIGGRAPH 2014 Talk)

Evaluation of Tone Mapping Operators for HDR Video

(Computer Graphics Forum 2013)


Survey and Evaluation of Tone Mapping Operators for HDR Video

(SIGGRAPH 2013 Talk)

Perceptual encoding of HDR video (open source)

Light field displays and automultiscopic viewing

An automultiscopic display is a display system that offers viewing of 3D images from arbitrary positions, for arbitrarily many users, without 3D glasses. Currently, we are seeing the advent of technology (processing power, bandwidth, and display hardware) which will enable the development of high-quality automultiscopic display systems. This development will fundamentally change our notion of a display, how we use it, and how we develop content for it. In this project we have developed a rendering system for real-time playback of video recorded using 30 video cameras on an automultiscopic light field display using 216 light projectors. The projector array light field display was built at the University of Southern California Institute for Creative Technologies.

Related publications:

An Auto-Multiscopic Projector Array for Interactive Digital Humans

(SIGGRAPH 2015 E-Tech)

Creating a life-sized automultiscopic Morgan Spurlock for CNN's "Inside Man"

(SIGGRAPH 2014 Talk)

Learning based sparse representations for visual data compression and imaging

In this project, we are developing new representations, basis functions, for compression and efficient representation of visual data such as light fields, video sequences, and images. Previous methods have used analytical basis functions such as spherical harmonics, Fourier bases, and wavelets to represent visual data in a compact form. In this project we take a learning based approach, and train basis functions to adapt to the input data. Our learning based basis representations admit a very high sparsity, which enables data compression with real-time reconstruction and the application of compressed sensing methods for image and light field reconstruction. In our work, we have developed methods for compression of surface light fields encoding the full, pre-computed global illumination solution in complex scenes, for real-time product visualization using our GPU implementation. We have also developed compressed sensing algorithms, which for example allow for high quality light field rendering/capture using only 4% of the original data.
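
The sparse-reconstruction step behind such learned representations can be sketched with Orthogonal Matching Pursuit: given a (learned) dictionary, a signal is approximated as a combination of only a few atoms. This is a generic, minimal illustration of sparse coding, not the project's specific training or reconstruction algorithm:

```python
import numpy as np

def omp(D, x, sparsity):
    """Orthogonal Matching Pursuit: approximate x as a sparse linear
    combination of dictionary atoms (columns of D).

    D        : dictionary, shape (signal_dim, num_atoms), unit-norm columns
    x        : signal to approximate, shape (signal_dim,)
    sparsity : maximum number of atoms to use
    """
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(sparsity):
        # greedily pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit of x on the selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs
```

With a dictionary trained on light field or surface light field data, a handful of coefficients per block can replace the full signal, which is what enables both compression with real-time reconstruction and compressed-sensing capture from a few percent of the samples.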

Related publications:

Compressive image reconstruction in reduced union of sub-spaces

(Eurographics 2015)

Learning Based Compression of Surface Light Fields for Real-time Rendering of Global Illumination Scenes

(SIGGRAPH Asia '13 technical brief)

Learning Based Compression for Real-Time Rendering of Surface Light Fields

(SIGGRAPH 2013 poster)

Geometry Independent Surface Light Fields for Real Time Rendering of Precomputed Global Illumination

(SIGRAD 2011)

Material capture and modeling

Material properties such as reflectance, color and textures play a key role in the visual appearance of objects. In photo-realistic and physically based image synthesis, we simulate how light interacts with the surfaces and the materials in a virtual scene. In this project, we are developing methods and equipment for measuring material properties on everyday surfaces, and mathematical models that can accurately describe these properties using models that can be efficiently used in photo-realistic image synthesis.
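
As a concrete example of the kind of analytic model fitted to measured reflectance data, here is an isotropic microfacet BRDF with the GGX distribution and Schlick's Fresnel approximation. This is a standard textbook model shown for illustration only, not the specific model proposed in the publications below; all dot products are assumed positive (light and viewer above the surface):

```python
import numpy as np

def microfacet_brdf(n_dot_l, n_dot_v, n_dot_h, v_dot_h, alpha, f0):
    """Isotropic microfacet specular BRDF (GGX distribution).

    alpha : surface roughness (0 = mirror, 1 = very rough)
    f0    : Fresnel reflectance at normal incidence
    """
    a2 = alpha * alpha
    # GGX normal distribution: concentration of microfacet normals around h
    d = a2 / (np.pi * (n_dot_h ** 2 * (a2 - 1.0) + 1.0) ** 2)
    # Schlick Fresnel: reflectance grows toward grazing angles
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
    # Smith-style shadowing/masking approximation
    k = alpha / 2.0
    g = (n_dot_l / (n_dot_l * (1.0 - k) + k)) * \
        (n_dot_v / (n_dot_v * (1.0 - k) + k))
    return d * f * g / (4.0 * n_dot_l * n_dot_v)
```

Fitting the few parameters of such a model (here alpha and f0) to dense goniometric measurements is what makes measured appearance usable in a production renderer.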

Related publications:

BRDF Models for Accurate and Efficient Rendering of Glossy Surfaces

(ACM TOG, 2012)

S(wi)ss: A flexible and robust sub-surface scattering shader.

(SIGRAD 2014)

A versatile material reflectance measurement system for use in production

(SIGRAD 2011)

Performance relighting and reflectance transformation with time-multiplexed illumination


GPU techniques, efficient rendering and volume visualization

Efficient processing and rendering algorithms are important aspects in computer graphics and computational imaging. Most of our algorithms are designed to allow for parallel computations and efficient GPU implementations. In the projects described below, we have developed algorithms for real-time ray-tracing, interactive visualization of volumetric 3D data from the medical domain, and 3D grids of spherical harmonics basis functions for efficient storage and representation of spatially varying real world lighting.
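
Representing spatially varying lighting as a 3D grid of spherical harmonics coefficients boils down to projecting the incident radiance at each grid cell onto the SH basis. The sketch below shows this projection for the first two SH bands with a simple Monte Carlo sum; it is an illustrative minimal version, not the project's GPU implementation:

```python
import numpy as np

def sh_basis_l1(d):
    """Real spherical harmonics basis, bands l=0 and l=1 (4 coefficients),
    evaluated for a unit direction d = (x, y, z)."""
    x, y, z = d
    return np.array([
        0.282095,        # l=0, m=0  (constant)
        0.488603 * y,    # l=1, m=-1
        0.488603 * z,    # l=1, m=0
        0.488603 * x,    # l=1, m=1
    ])

def project_radiance(samples):
    """Monte Carlo projection of incident radiance onto SH coefficients.

    samples : list of (direction, radiance) pairs, with directions drawn
              uniformly over the sphere (solid angle 4*pi).
    """
    coeffs = np.zeros(4)
    for d, radiance in samples:
        coeffs += radiance * sh_basis_l1(d)
    return coeffs * (4.0 * np.pi / len(samples))
```

Storing a handful of coefficients per cell instead of a full environment map is what makes a dense grid of light probes tractable on the GPU, at the cost of capturing only the low-frequency part of the lighting.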

Related publications:

Real-time video based lighting using GPU raytracing

(EUSIPCO 2014)

Real-time image based lighting with streaming HDR light probe sequences

(SIGRAD 2012)

Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering

(TVCG 2011)

Estimation and Modeling of Actual Numerical Errors in Volume Rendering

(EuroVis 2010)
Light Probe Sequence Resampling for Realtime Incident Light Field Rendering

(SCCG 2009)

Contact Information:

  Address: Jonas Unger
Department of Science and Technology
Linköping University
SE-601 74 Norrköping
Office: Kopparhammaren G520
Phone: +46 (0)11 363436


Jonas Unger 2016