Now we are going to take the original image and blur it.

The blur we are going to apply is a very simple and fast two-pass Gaussian filter: a convolution of the image with a 10x1 vertical filter followed by a 1x10 horizontal filter. No thresholding or other complex math is done, just convolution like you might read about in an introductory graphics textbook.
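If you want to try this yourself, here is a minimal sketch of that two-pass blur in Python/NumPy. The 10-tap kernel length matches the filter sizes above, but the sigma value, the 'reflect' border handling, and the use of scipy.ndimage.convolve1d are just convenient assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_kernel(taps=10, sigma=2.0):
    """Normalized 1-D Gaussian kernel (sigma is an illustrative choice)."""
    x = np.arange(taps) - (taps - 1) / 2.0
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()

def separable_blur(img, taps=10, sigma=2.0):
    """Vertical 10x1 pass followed by a horizontal 1x10 pass."""
    img = np.asarray(img, dtype=np.float64)
    k = gaussian_kernel(taps, sigma)
    out = convolve1d(img, k, axis=0, mode='reflect')   # vertical pass
    out = convolve1d(out, k, axis=1, mode='reflect')   # horizontal pass
    return out
```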

Here is the image again (this is an overhead view of lights in downtown Los Angeles). This is just a plain old 8-bit per color image; it is not High Dynamic Range or anything:

[Images: Original | Normal blur | Blur in Linear Floating Point | Blur with rolloff]

The Normal Blur is what most software will do when you blur the image. You should be able to see that it gets darker, and many of the smaller lights disappear. The fact that small dots disappear when you blur leads many people to think that replicating depth of field or focus effects requires complex math: thresholding, comparisons, or color corrections before and after the blur.
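Here is a tiny worked example of why the normal blur goes dark, assuming the standard sRGB formulas (the particular conversion routine doesn't matter for the point): averaging two 8-bit codes is not the same as averaging the amounts of light they represent.

```python
def srgb_to_linear(c):      # c in 0..1, standard sRGB decode
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):      # standard sRGB encode
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0                       # two neighbouring pixels
naive = (black + white) / 2                   # blur the codes: sRGB 0.5 (~128/255)
correct = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)
print(srgb_to_linear(naive))                  # ~0.21 -- only about 21% of the light
print(correct)                                # ~0.74 (~188/255), the correct average
```

The blur spreads every bright pixel over its dim neighbours, and this underestimate happens at each of them, which is why the small lights fade out.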

The Linear image is the image converted from sRGB to linear, blurred with exactly the same algorithm, and then converted back. Notice that the luminance is the same and it looks a lot more like an out-of-focus version of the first one. Now all we have to do is model the lens more accurately in order to get realistic blur; we don't need to mess with the colors!
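In code, the whole linear path is just a decode, the same blur, and an encode. This sketch reuses the separable_blur function from the earlier snippet and the standard sRGB formulas; whatever conversion the paper describes would slot into the same two places.

```python
import numpy as np

def srgb_to_linear_image(img_8bit):
    """Decode an 8-bit sRGB image to linear floating point."""
    c = np.asarray(img_8bit, dtype=np.float64) / 255.0
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb_image(lin):
    """Encode linear floating point back to an 8-bit sRGB image."""
    lin = np.clip(lin, 0.0, 1.0)
    c = np.where(lin <= 0.0031308, 12.92 * lin, 1.055 * lin ** (1 / 2.4) - 0.055)
    return np.rint(c * 255.0).astype(np.uint8)

def blur_in_linear(img_8bit):
    """Decode, run exactly the same two-pass blur, re-encode."""
    return linear_to_srgb_image(separable_blur(srgb_to_linear_image(img_8bit)))
```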

In the last image the conversion between sRGB and linear is done through the to_byte_compressed function described in the paper. This makes the brightest values in the file turn into linear values greater than 1 (and thus they blur out into bright circles). This does a pretty good job of picking out the brighter lights, even though the original image was clipped. It shows that we can use High Dynamic Range methods with 8-bit files just by altering the conversion. My algorithm can do conversions where the floating point numbers go outside the 0 to 1 range; any function that has a positive derivative everywhere is supported.
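Here is a rough illustration of what such a compressed decode can look like. This is not the actual to_byte_compressed curve from the paper; the knee at code ~204 and the top value of 4.0 are made-up numbers, and a simple straight ramp stands in for the real rolloff. The only property that matters is the one stated above: the curve is strictly increasing (positive derivative everywhere), so it can be inverted to get back to 8 bits.

```python
import numpy as np

KNEE = 0.8          # assumed: codes above 0.8*255 ~ 204 decode past linear 1.0
MAX_LINEAR = 4.0    # assumed: code 255 decodes to 4x the normal white level

def compressed_to_linear(code):
    """Decode 8-bit codes (0..255) so the top of the range exceeds linear 1.0."""
    c = np.asarray(code, dtype=np.float64) / 255.0
    srgb = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    knee_lin = ((KNEE + 0.055) / 1.055) ** 2.4      # linear value at the knee
    # a straight ramp above the knee keeps the derivative positive everywhere
    ramp = knee_lin + (MAX_LINEAR - knee_lin) * (c - KNEE) / (1.0 - KNEE)
    return np.where(c <= KNEE, srgb, ramp)

# compressed_to_linear(128) ~ 0.22 (ordinary sRGB), compressed_to_linear(255) = 4.0
```

With a curve like this, the clipped highlights decode to several times the normal white level, so after the blur they still come back as bright discs instead of fading to grey.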
