
Dynamic Range

The Real World

Before we can talk about the dynamic range of an image, we must first review some basic notions of photography.

The colors of the real world are not organized in an 8-bit range with values between 0 and 255 per channel, as in your computer's video card. In the real world everything is electromagnetic radiation, characterized by a wavelength and an intensity. This intensity can be measured with various quantities: in terms of energy it is expressed in joules, but the radiance, more pertinent for our purpose, is expressed in watts / (m² · steradian). The branch of science measuring this radiation is called radiometry, and it covers measurements over the whole spectrum.

In the world of photography, we only take into account the visible part of the spectrum, in other words what the eye can see. And the eye has its limits: it can only see part of the spectrum, from around 380 nm to about 830 nm; that is what we call the visible spectrum. The "International Commission on Illumination" (CIE) created the V(λ) curve ("vee-lambda") that takes the eye's limitations into account. This curve allows us to convert a radiometric value into an "eye" equivalent value.
Example: the radiance becomes the luminance (expressed in candela / m²). The branch of science measuring the luminance is called photometry (i.e. radiometry limited to the part of the spectrum visible to the eye).
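Concretely, the conversion from radiance to luminance weights the spectral radiance by the V(λ) curve (683 lm/W being the standard luminous efficacy of the eye at its 555 nm sensitivity peak):

 Lv = 683 × ∫ V(λ) · Le(λ) dλ    (integral taken over 380-830 nm)

where Le(λ) is the spectral radiance in watts / (m² · steradian · nm) and Lv is the luminance in candela / m².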

Why am I talking about this?

Here is a little table:

 Type of light         Luminance (cd/m²)
 Light from a star     0.001
 Light from the moon   0.1
 Inside a house        100
 Light from the sun    100,000

This table illustrates the enormous variations in light intensity we come across in the real world: between the light from a star and the light from the sun there is a 1 to 100 million ratio in luminance. If we measure the average grey value of a star we get around 0.001 cd/m²; doing the same for the sun gives about 100,000 cd/m². The variations are huge.

In our computers, at the present time, video cards can only render variations ranging from 0 to 255 per color channel. We are very, very far from covering that range.

But let's look at the situation in more detail and focus on image capture.


Image Capture

When we take a digital picture, the camera's sensor measures the photons hitting its surface. The sensor accumulates the photons' energy during the whole exposure time. At the end of the exposure, an analog/digital (A/D) converter transforms this energy value into a numerical value.

 Luminance => photon energy measured during exposure => A/D converter => numerical value 

A few remarks here (a code sketch after this list illustrates them):

  • The energy measured by the sensor is not of infinite precision: it is quantized on 10 bits most of the time, 12 bits at best. We then obtain 1024 or 4096 possible values.
    • Let's take a 10-bit sensor: 1024 values max. And let's assume that the transformation curve is linear (it is not, but that does not matter for this example). We want to correctly photograph the inside of a house with a window in the frame and bright sunshine outside:
    • The darkest part of the image will have a luminance value of 1 cd/m². Because I want details in the shadows, I take this value as a reference.
    • The brightest part of the image I can then measure with my sensor will be 1024 cd/m² (because of its 10-bit latitude). The photons coming from the sun, with values around 100,000 cd/m², will be way over the saturation threshold of the sensor. As a result, the area around the sun in the picture will be blown out.
  • The A/D converter. Let's hope this converter has at least the same accuracy as the sensor's measurement. Imagine a 10-bit measurement going through an 8-bit converter! The opposite is actually more frequent: we often see 12-bit converters paired with 10-bit sensors. The converter then invents two bits, and it is often within these two bits that we find all the beauty of the noise that plagues digital photography.
  • The exposure window is set by the shutter speed/aperture combination:
    • If you underexpose, the sensor no longer measures energy values over its full dynamic range, but only between, let's say, 0 and 128. You then generate more noise if you want to stretch this dynamic back over the full histogram range.
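Here is a minimal Python sketch of this capture pipeline, with hypothetical calibration numbers (a linear 10-bit sensor where 1 cd/m² maps to one count):

 import numpy as np
 
 FULL_SCALE = 1024  # 10-bit sensor: 1024 possible values
 scene = np.array([1.0, 100.0, 1023.0, 100_000.0])  # luminances in cd/m²
 
 # Linear response, hypothetical calibration: 1 cd/m² -> 1 count.
 raw = np.clip(np.round(scene), 0, FULL_SCALE - 1)
 print(raw)  # [1. 100. 1023. 1023.] -> the sun is clipped (blown out)
 
 # Underexposure: the scene only fills counts 0..128; stretching it back
 # over the full range amplifies quantization and readout noise.
 underexposed = np.clip(np.round(scene / 800.0), 0, 128)
 stretched = underexposed * (FULL_SCALE / 128.0)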

To summarize: capturing an image means converting real-world light intensities into numerical values. This process has its limitations and can only capture a small part of the real dynamic range.


Houston, we have a problem: I think we cannot measure all the nuances we can see!




The Dynamic Range of an Image

The dynamic range of an image is the capacity of this image to reproduce the intensity scales we can observe in the real world.

We often express the dynamic range in "stops". It is easily calculated by taking the ratio between the brightest and the darkest intensity values, then taking its base-2 logarithm.

 Example: JPEG file
   Brightest pixel: 255
   Darkest pixel: 0
    => 256 possible values
   stops = log( 256 ) / log( 2 ) = 8

These are the 8 bits that a JPEG image can reproduce.
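In formula form, and applied to the luminance table above (this is where the "26 stops" figure used below comes from):

 stops = log( Lmax / Lmin ) / log( 2 )
 Real world: log( 100,000 / 0.001 ) / log( 2 ) = log( 10^8 ) / log( 2 ) ≈ 26.6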

A few examples of dynamic range scales:

  • In digital photography:
    • JPEG file: 256:1 ratio, giving 8 stops
    • RAW file: generally 10 bits, giving 10 stops
    • HDR file: this depends on the file itself, but can go way over 15 or 20 stops.
  • In cinema, we talk about exposure latitude, which is the camera's or the film's dynamic range:
    • Standard video camera: 5.5 stops (45:1 ratio)
    • "Standard" negative film: 7 stops (128:1 ratio)
    • "Extended latitude" negative film: 11 stops (2048:1 ratio)
  • For computer screens, the contrast ratio corresponds to the display's dynamic range:
    • LCD technology: 9 stops (500:1 ratio)
    • SED technology: 16 stops (100,000:1 ratio)

Let’s compare to the values we find in the real world:

  • The real world: a total range of 100,000,000:1, corresponding to about 26 stops
  • A standard digital file (JPEG or RAW): 10 stops maximum


Houston, we have a problem: our JPEG or RAW file is incapable of representing reality!




LDR Limitations

As mentioned in the previous section, the classical image representation (known as LDR, or Low Dynamic Range) cannot represent reality. This part illustrates what happens with LDR images when trying to render high-dynamic panoramas. We use as an example a panorama containing high exposure differences, shown below. (NB: all the following statements assume that the camera was shooting in automatic exposure mode.)


[Image: Ref pano.png]

As we can see, image no. 3 faces the sun: this direction is very bright, so the camera uses a short exposure time. Conversely, picture 1 faces the ground, a much darker direction, resulting in a longer exposure time for picture 1 than for picture 3. So what happens when we try to apply a color correction to this pano?

To simplify things, let's consider only 3 images of the pano:

  • A picture facing the ground (long exposure time): picture no. 1

[Image: Picture 1]

  • A picture facing half the ground, half the sun (medium exposure time): picture no. 5

[Image: Picture 5]

  • A picture facing the sun (short exposure time): picture no. 3

[Image: Picture 3]

Every image has its own color space: since they were shot in automatic exposure mode, the camera picked the most suitable exposure time for each one. The exposure time was thus quite short for picture 3, whereas it was longer for picture 1. The idea of color correction in a panorama is to bring every individual image-related color space into a global color space corresponding to the whole panorama. As far as exposure time is concerned, this simply amounts to applying a different scaling factor to each image composing the pano (a minimal code sketch follows the figure below). Bringing every image into the global panorama space expands the histogram of picture 3, while the histogram of picture 1 shrinks.

[Image: Step1.jpg]
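A minimal sketch of this exposure normalization, assuming linear pixel values and known exposure times (the times below are hypothetical):

 import numpy as np
 
 # Hypothetical exposure times for the three pictures (seconds).
 exposures = {"picture1": 1 / 30, "picture5": 1 / 250, "picture3": 1 / 2000}
 reference = exposures["picture5"]  # chosen global panorama space
 
 def to_global_space(pixels: np.ndarray, exposure: float) -> np.ndarray:
     """Scale linear pixel values into the reference exposure space."""
     return pixels * (reference / exposure)
 
 # picture 3 (short exposure) is scaled up by 8x -> its histogram expands;
 # picture 1 (long exposure) is scaled down by ~0.12x -> its histogram shrinks.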

What happens then? Let's remember that the histogram of an LDR image is composed of a fixed number of bins (generally 256). To make things simpler, we consider a 15-bin histogram. All bins have the same width within the histogram; however, this width can be chosen globally. To understand this principle, we render the pano in three different ways, as shown on the scheme below:

[Image: Step2.jpg]

  • Narrow bins: render interval no. 1:

The bins are small, which means we get nice details in the darker areas (no information is lost). However, since we have a limited number of bins, the information in the brightest areas gets burnt. This is the result we expect when setting picture 1 as the reference picture and only correcting the 5 others. The result is shown below:

[Image: Img Over.jpg]

  • Medium-sized bins: render interval no. 2:

The bins are medium-sized: fewer bins are dedicated to the darkest areas (the histogram of picture 1 is compressed), but less information is burnt in the brightest areas. This is the result we expect when setting picture 5 as the reference and correcting all the others.

[Image: Img Mid.jpg]

  • Large bins: render interval no. 3:

The bins are large, which means we still have bins covering the bright areas of the histogram, so almost nothing is burnt. However, the histograms of pictures 1 and 5 are compressed in the darkest areas, resulting in a significant loss of detail.

[Image: Under.jpg]

Basically, when facing a case with high exposure differences, the user needs to make a choice:

  • either get details in the darkest areas while burning the bright ones,
  • or keep details in the bright areas but lose details in the darkest ones.

If you don't want either of the choices described above, other techniques are available, as explained in the sections below.
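To make the trade-off concrete, here is a minimal sketch (linear values, hypothetical numbers) quantizing the same scene with three different render intervals:

 import numpy as np
 
 # Hypothetical scene radiances covering a wide range (arbitrary units).
 scene = np.array([0.02, 0.5, 3.0, 40.0, 900.0])
 
 def render(scene: np.ndarray, white_point: float, bins: int = 15) -> np.ndarray:
     """Quantize into equal-width bins; everything above white_point burns."""
     clipped = np.clip(scene / white_point, 0.0, 1.0)
     return np.round(clipped * (bins - 1))
 
 print(render(scene, white_point=4.0))     # narrow bins: shadow detail, highlights burnt
 print(render(scene, white_point=50.0))    # medium bins: a compromise
 print(render(scene, white_point=1000.0))  # large bins: nothing burnt, shadows crushed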




The HDR Adventure

Definitions

No faithful photograph possible, no adequate file format, etc. This is where HDR comes in.

HDR stands for High Dynamic Range, and its goal is to allow full rendering of the range of intensities visible in the real world. HDR is often opposed to LDR (Low Dynamic Range), which designates a low-dynamic image. We must be careful when using the HDR term, as it has several meanings:

  • an image with an extended dynamic range (often labeled HDRI, for High Dynamic Range Image, but the I is often omitted),
  • the name of the file format used by the Radiance application, which was the first format to support a high dynamic range (.HDR files),
  • the set of techniques for capturing and transforming the real-world dynamic.

In photography, the final goal of all this technical stuff is to produce nice pictures. And often, a picture is nice when it has lots of detail in both the highlights and the shadows. (Note: many artists will easily demonstrate that this sentence is entirely false, but that is outside the scope of our current purpose: we would enter the subjective realm and the philosophical definition of beauty.)

In short: to achieve this goal, we must first be capable of capturing the entire dynamic of the scene we want to photograph, with a sensor whose dynamic range is very limited. Various techniques were invented to achieve this; the most used is to take several pictures of the same scene using different exposure values (bracketing). By combining these exposures we can recompute the whole dynamic range of the scene. Autopano Pro can manage this process, as it assembles pictures not only geometrically but also in color space, by combining the exposure values of the source pictures.
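A minimal sketch of such an exposure merge, assuming linearized images and known exposure times (real tools additionally recover the camera response curve, e.g. with the Debevec-Malik method):

 import numpy as np
 
 def merge_exposures(images: list[np.ndarray], times: list[float]) -> np.ndarray:
     """Merge bracketed linear images (values in [0, 1]) into an HDR radiance map."""
     num = np.zeros_like(images[0], dtype=np.float64)
     den = np.zeros_like(images[0], dtype=np.float64)
     for img, t in zip(images, times):
         # Trust mid-tones most: a hat weight, low near 0 (noise)
         # and near 1 (saturation).
         w = 1.0 - np.abs(2.0 * img - 1.0)
         num += w * img / t  # each exposure's estimate of scene radiance
         den += w
     return num / np.maximum(den, 1e-6)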

Examples

For example, if we take two JPEG files bracketed at +2 EV and -2 EV, we can rebuild an HDR file with a dynamic range far greater than the 8 stops of a basic JPEG file. In general, we can easily reach 10 to 12 stops, and more.

[Image: Manuel-cc-augmentation-dynamique2.png]

[Image: Manuel-cc-augmentation-dynamique1.png]

When analyzing the histograms, we can see that matching the colors of the two pictures transforms the resulting histogram. The histogram of the second image becomes narrower (it shrinks), yet it still contains its 8 bits of resolution, assuming we are working with JPEG source images. The global quality of the panorama is therefore greater: the step between two levels of the histogram becomes smaller, so the dynamic range is widened. The histogram of the final panorama is no longer strictly contained between 0 and 255, but extends beyond those limits.

HDR files built from 4, 5 or more images and having a 16-stop dynamic range can easily be found on the Web.

Use

Voilà! We have our HDR file, and it represents the real light measurements of our scene. The file was produced with Autopano Pro or with other HDR creation tools like Adobe Photoshop CS2.

But what are we going to do with it? Our printer is an 8-bit-per-channel printer, our monitor is an 8-bit-per-channel monitor, our video card is an 8-bit-per-channel card, and our file is much wider than that. We can neither display nor print our HDR file, because its dynamic is much higher than that of all the hardware we use.


Houston, we have a problem: there is nothing we can do with our HDR file!




HDR to LDR

How to process our HDR file?

Simple: we must bring its dynamic down to something more acceptable (i.e. bring it back within the standard 8 bits per channel supported by the hardware). This is what we do using tone-mapping algorithms.

A tone mapper is an algorithm that narrows the dynamic of an image.

Tone mapper examples:

  • Levels: this is the most basic tone mapper (see the sketch after this list).
 Everything below the black point is 0,
 everything above the white point is 255,
 everything in between is interpolated linearly between 0 and 255.
  • Some real tone mappers:

 RH2 or RH4, found in Autopano Pro (temporarily removed from version 2),
 tone-mapping software like Photomatix or FDRtools.
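A minimal sketch of the Levels tone mapper described above, plus a simple global operator (Reinhard's L / (1 + L) curve) as an illustration of what a "real" tone mapper does; this is not the actual RH2/RH4 algorithm:

 import numpy as np
 
 def levels(hdr: np.ndarray, black: float, white: float) -> np.ndarray:
     """The basic Levels tone mapper: clamp, then map linearly to 0..255."""
     t = np.clip((hdr - black) / (white - black), 0.0, 1.0)
     return np.round(t * 255).astype(np.uint8)
 
 def reinhard(hdr: np.ndarray) -> np.ndarray:
     """A simple global operator: compresses highlights as L / (1 + L)."""
     return np.round(hdr / (1.0 + hdr) * 255).astype(np.uint8)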

But why do all this, only to come back to 8 bits in the end?

Well, that’s a valid question!

Let's recap: we had two correctly exposed JPEGs. From those JPEGs we created an HDR file with a high dynamic range. Then we transformed this HDR file into another 8-bit JPEG using a tone mapper. Could we have saved ourselves some steps and obtained the final file directly from the two 8-bit JPEGs?

Well, we could have: this technique also exists and is called "contrast blending". And you have probably already used it without knowing it: in a cathedral, shooting the inside with a spot measurement on a wall and a second one on the stained glass, then masking the two shots in Photoshop, does the trick. The resulting image looks like an HDR picture. The problem with this method is expressed by this rule:
"a pixel is brighter if the object it represents received more light"

We would want this realistic approach to hold true in the image. The problem with contrast blending is that it does not guarantee this rule. The HDR technique respects the logical order of things, which dictates that an object in the light is brighter than an object in the shadow.
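A minimal sketch of contrast blending, assuming two aligned 8-bit exposures and a hand-made mask (all names here are hypothetical); note that nothing in this blend enforces the rule above:

 import numpy as np
 
 def contrast_blend(dark_exposure: np.ndarray,
                    bright_exposure: np.ndarray,
                    mask: np.ndarray) -> np.ndarray:
     """Blend two 8-bit exposures with a mask in [0, 1] (1 = take the dark exposure).
 
     Unlike a true HDR merge, the output is not proportional to scene
     radiance: a sunlit region taken from the dark exposure can end up
     darker than a shaded region taken from the bright exposure.
     """
     blended = mask * dark_exposure + (1.0 - mask) * bright_exposure
     return blended.astype(np.uint8)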



