using the eLin color model in floating point apps
admin, updated 2005-05-05 14:58:53 UTC

Much of my writing on the benefits of working linear is tied to the eLin documentation. It occurred to me that it might be helpful to describe these advantages in more general terms, and to provide equivalents of the eLin color pipeline in two popular floating point compositing apps.

First, let's get some terms straight. A lot of people use the term linear to describe images that look correct on their displays without any color correction. In visual effects circles we sometimes hear about converting Cineon images from log space to "linear" so they "look right."

When I use the term linear, I am talking about something else. I am talking about a linear measure of light values. I freely intermingle terms like photometrically linear, radiometrically linear, scene-referred values, gamma 1.0, and just plain old linear when describing the color space in which pixel values equate to light intensities.

If you were to display such an image on a standard computer monitor without any correction, it would look very dark. The best way to visualize this is to think about the "middle gray" card you bought when you took your first photography class. It appears to be a value midway between black and white, both to our eyes and in our correctly-exposed shots, and yet it is described as being 18% gray.

We'd want images of this card to appear at or near 50% on our display. But in scene-referred values, an object that is 18% reflective should have pixel values of 18%, or 0.180 on a scale of 0–1.
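The arithmetic behind the comparison below is just a power function. Here's a quick Python sketch of it; the 2.2 exponent is the nominal display gamma discussed later in this article:

```python
# Scene-referred (linear) value of an 18% gray card.
linear_gray = 0.18

# Encode for a nominal gamma 2.2 display: raise to the power 1/2.2.
display_gray = linear_gray ** (1.0 / 2.2)

print(round(display_gray, 3))  # 0.459 -- roughly the half-way gray we expect to see
```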

Virtual Graycard Comparotron2000™:

Linear image with no LUT (card = 0.18, or 18%)

Image with a 2.2 LUT applied (card = 0.46, or 46%)
If your digital camera didn't introduce a gamma 2.2 (or thereabouts) characteristic into the JPEGs it shoots, they'd look like the linear example above. The images where the card "looks right" are variously described as perceptually encoded, gamma encoded, or they may even be identified as gamma 2.2 encoded, or having a gamma 2.2 characteristic curve. A specific variant of gamma 2.2 goes by the name sRGB. In an attempt to create a catchy (and catch-all) term, the eLin documentation refers to these color spaces collectively as vid, since NTSC video has a gamma of 2.2 (kindasorta), and since these images "look right" on a video monitor.
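For reference, the sRGB variant is not a pure power function: it has a short linear segment near black and an offset 2.4-exponent curve above it, which together approximate an overall gamma of roughly 2.2. A minimal Python sketch comparing the two encodings:

```python
def encode_gamma22(x):
    """Pure power-law encoding with a 2.2 exponent."""
    return x ** (1.0 / 2.2)

def encode_srgb(x):
    """Piecewise sRGB encoding: linear toe near black, offset power curve above."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055

for v in (0.18, 0.5, 1.0):
    print(v, round(encode_gamma22(v), 4), round(encode_srgb(v), 4))
# 0.18 comes out at ~0.459 with a pure 2.2 curve and ~0.461 with sRGB -- close, but not identical.
```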

Many of the image processing operations we use behave differently depending on the gamma of the image they are performed on. If you gamma an image dark, blur it, and gamma it back up (apply the inverse, 1/gamma), you get a different result than if you simply blur the image.
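If you want to see this for yourself, here is a rough numpy sketch using a synthetic one-dimensional edge and a crude box blur; the 2.2 exponent is just a stand-in for whatever gamma is in play:

```python
import numpy as np

def box_blur(values, radius=2):
    """Crude box blur: average each sample with its neighbours."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(values, kernel, mode="same")

# A hard edge between a dark and a bright region, in linear light.
edge = np.array([0.02] * 10 + [0.9] * 10)

gamma = 2.2

# Blur performed directly on the linear values.
blurred_linear = box_blur(edge)

# Gamma the image dark, blur, then gamma it back up (the inverse, 1/gamma).
blurred_at_other_gamma = box_blur(edge ** gamma) ** (1.0 / gamma)

# The two results disagree, most visibly across the edge transition.
print(np.max(np.abs(blurred_linear - blurred_at_other_gamma)))
```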

When you convert an image to linear space, your subsequent image processing operations better match real-world physical properties of light. If you are accustomed to processing perceptually encoded images, you will probably find that switching to g1.0 processing will make your familiar effects look more organic (with a few notable exceptions to be covered in a later article).
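In practice, working this way just means bracketing your operations with a decode to linear and a re-encode for display. A minimal sketch, assuming for simplicity that vid means a plain 2.2 gamma encoding (substitute whatever transform your footage actually uses):

```python
import numpy as np

GAMMA = 2.2  # stand-in for the actual encoding of your source images

def vid_to_linear(img):
    """Decode display-referred (vid) values to scene-linear light."""
    return np.asarray(img, dtype=np.float64) ** GAMMA

def linear_to_vid(img):
    """Re-encode linear light values for display."""
    return np.clip(img, 0.0, None) ** (1.0 / GAMMA)

# Decode, process on light values, re-encode for viewing.
vid_frame = np.array([0.10, 0.46, 0.90])
linear_frame = vid_to_linear(vid_frame)
processed = linear_frame * 0.5            # a one-stop exposure pull, done on linear light
back_to_vid = linear_to_vid(processed)
```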