At the time of writing this article, our digital photography world is going through a change. A big change. Camera makers are finally starting to apply their noise reduction algorithms with some innovation, driven by the need to produce better high ISO images than ever. Sensor makers are trying (and are encouraged) to stuff more and more pixels into their image sensors to drive sales, which reduces pixel size and thus makes each pixel less sensitive. The less sensitive pixels are, the less evenly light fills neighboring pixels. Thus, there will be more photon shot noise.
Note: this is not the place to explain what photon shot noise is; you can understand this article even without completely understanding it. Just keep in mind that smaller pixels produce more random noise, which has to do with the physics of light rather than the image sensor itself.
The problem with every RGB noise reduction algorithm we covered in the previous article is the same. They all work on RGB data, which cannot distinguish between color noise and grain noise. Therefore, RGB algorithms can only address noise as R noise, G noise and B noise. This approach becomes even harder when the noise is very high, because it is difficult to distinguish between edges and noise.
Imagine what would happen if you could separate the data into luminance and color, having the luminance of the image on one side and the color of the image on the other.
Yes, you’d be able to reduce noise separately on each image, removing the color noise from the color image and the grain noise from the luminance image. Afterwards, you’d combine the two back into a single, noise-free image.
It is of course not that simple and there are many difficulties. But the principle is to separate the color noise from the grain noise so that the color noise is easier to reduce. Color noise is much more disturbing to us than plain monochromatic noise, so you can see why this method is important to camera makers.
How do they do it?
Some camera makers convert the data somewhere along the pipeline from RGB to YIQ, YUV, LAB or a similar color space. This is usually done to make color adjustments. But it also gives the camera makers an opportunity to reduce noise, as the data is already converted.
All of those color spaces separate chroma (color) from luma (luminance), so half of the work has already been done for camera makers. Now it’s up to the algorithm to do its part and reduce noise. But, as it turns out, it is much more difficult to reduce noise in any one of these color spaces. The chroma image, which contains only the color data, is constructed from differences of levels (see image below). The edges and details in the chroma data are very weak and thus difficult for the algorithm to detect. If the filter cannot detect edges successfully, the color of the image will flow over the edges and into nearby areas. There are solutions to keep the edges safe; for example, one might use the edges from the luma image to detect chroma edges.
Luminance data (luma) – strong edges
Color data (chroma) – weak edges
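To make the "weak chroma edges" point concrete, here is a minimal sketch using the ITU-R BT.601 RGB-to-YCbCr conversion (one common luma/chroma split; cameras may use YUV, YIQ or LAB instead, but the principle is the same). A pure brightness edge produces a strong step in the luma plane and no step at all in the chroma planes:

```python
import numpy as np

# BT.601 conversion: luma is a weighted sum of R, G, B;
# chroma (Cb, Cr) is built from differences of levels.
def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b   # luma: weighted sum
    cb = 0.564 * (b - y)                     # chroma: difference signals
    cr = 0.713 * (r - y)
    return np.stack([y, cb, cr], axis=-1)

# A brightness edge: a dark gray column next to a light gray column.
patch = np.zeros((2, 2, 3))
patch[:, 0] = [0.2, 0.2, 0.2]   # dark side
patch[:, 1] = [0.8, 0.8, 0.8]   # bright side

ycc = rgb_to_ycbcr(patch)
luma_step   = abs(ycc[0, 1, 0] - ycc[0, 0, 0])   # strong step (0.6)
chroma_step = abs(ycc[0, 1, 1] - ycc[0, 0, 1])   # zero: gray has no chroma
```

An edge-detecting filter running only on Cb/Cr would see nothing here, which is why borrowing edge information from the luma plane helps.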
Chroma reduction in shadows
Some camera makers use the luma chroma suppression stage in the pipeline to reduce chroma (color) from shadow areas in an arbitrary way. This approach often does not use a filter; it could be based on a color transformation in the shadows that removes chroma completely. For instance, Nikon used this approach with the Nikon D200. The images below, from the Nikon D200 and the Canon EOS 350D, demonstrate it. Compare the two 1600 ISO images and note the lack of color in the dark part of the Nikon D200 image (the left part).
|Nikon D200 @ 1600ISO||Canon Rebel xt @ 1600ISO|
|Nikon D200 @ 100ISO||Canon Rebel xt @ 100ISO|
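A shadow chroma suppression stage of this kind could be as simple as scaling the chroma channels down wherever luma is low. The sketch below is purely illustrative; the threshold and ramp values are made up and are not taken from any real camera:

```python
import numpy as np

# Hypothetical shadow chroma suppression: attenuate Cb/Cr where luma is
# below a threshold, fading in smoothly so there is no visible cutoff line.
def suppress_shadow_chroma(ycc, threshold=0.15, ramp=0.10):
    y = ycc[..., 0]
    # gain is 0 in deep shadows, 1 above threshold+ramp, linear in between
    gain = np.clip((y - threshold) / ramp, 0.0, 1.0)
    out = ycc.copy()
    out[..., 1] *= gain   # Cb
    out[..., 2] *= gain   # Cr
    return out

# A deep-shadow pixel keeps its luma but loses its (noisy) color entirely,
# while a bright pixel with the same chroma values is left untouched.
dark   = np.array([[[0.05, 0.20, -0.15]]])   # (y, cb, cr)
bright = np.array([[[0.60, 0.20, -0.15]]])
```

Because no filtering is involved, this stage is extremely cheap, which fits the cost constraints of an in-camera pipeline; the price is the visibly desaturated shadows seen in the D200 example above.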
Modern noise reduction algorithms
Noise reduction methods that operate on luma/chroma-separated color models such as YUV, YIQ or LAB are what could be called modern noise reduction algorithms. These algorithms are not yet implemented in most digital cameras out there, but they are expected to heavily penetrate the market in DSLRs as well as point-and-shoot cameras. Adobe Camera Raw has been using this kind of method for a while now, and other camera makers have already produced cameras based on the same principle, although the implementations are not quite the same.
The basic principle is to treat luma noise in the luma layer differently from chroma noise in the chroma layer. By doing so, one can apply aggressive edge-preserving chroma noise reduction on the chroma layers (A and B in the case of LAB) and gentle edge-preserving noise reduction on the luma layer. As you know, the luma layer is where the strong edges are, the ones that keep the image sharp. The result should be a color-noise-free image with some monochromatic noise and good, strong edges.
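The split strategy can be sketched in a few lines. For brevity this uses a plain 1D box blur as a stand-in for a real edge-preserving filter, with a large radius on the chroma planes and a small radius on the luma plane; radii and signal sizes are illustrative:

```python
import numpy as np

def box_blur_1d(plane, radius):
    """Simple box filter; a placeholder for a real edge-preserving filter."""
    if radius == 0:
        return plane.copy()
    k = 2 * radius + 1
    padded = np.pad(plane, radius, mode='edge')
    return np.convolve(padded, np.ones(k) / k, mode='valid')

def denoise_luma_chroma(y, cb, cr, luma_radius=1, chroma_radius=4):
    return (box_blur_1d(y,  luma_radius),     # gentle on luma: keep edges
            box_blur_1d(cb, chroma_radius),   # aggressive on chroma
            box_blur_1d(cr, chroma_radius))   # aggressive on chroma

# White noise is smoothed far more in the chroma planes than in luma.
noise = np.random.default_rng(0).normal(0.0, 0.1, 128)
y_out, cb_out, cr_out = denoise_luma_chroma(noise, noise.copy(), noise.copy())
```

The asymmetry is the whole point: residual monochromatic noise in luma is tolerable, while the heavily smoothed chroma planes lose their (objectionable) color noise without costing apparent sharpness.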
The ‘Luma + Chroma’ image below illustrates just that. As you can see, there is some faint color noise and some very big color stains. This is the aggressive chroma noise reduction filter working on the chroma layer. Note that we used the same images from the RGB noise reduction article, but they are now displayed as L, A and B instead of R, G and B so you can understand how this would work. Of course, there are much better algorithms than the one we used for this task.
|Original||All RGB||Ch RGB||Luma + Chroma|
As always, nothing is perfect; this method has its problems. Because we are working on the color data, filters of this kind influence the color of the image, often altering or reducing its saturation. There are other problems that might occur with this type of algorithm. It is quite difficult to treat color edge noise, so some color edge noise can remain even after this type of algorithm has been applied. Similar problems can occur where neighboring areas confuse the algorithm, making it think it has discovered an edge. You can see this happening in the Nikon D200 example below. The Nikon D200 is one of the first cameras utilizing an approach based on this method, though not quite identical to it. Other times, as you can also see from the Adobe Camera Raw image below, the algorithm reduces all the chroma from a particular area where it shouldn’t.
Nikon D200 1600ISO examples; the arrows point to where you can see some color noise ‘escaping’ the chroma noise reduction filter. This is most likely to happen at edges and in areas that confuse the algorithm.
Luminance Smoothing (luma): 40
Color Noise Reduction (chroma): 0
Luminance Smoothing (luma): 40
Color Noise Reduction (chroma): 50
Adobe Camera Raw 3.3 example, on the left an image with Luminance Smoothing (luma) set to 40 and Color Noise Reduction (chroma) set to 0 (this does NOT mean it’s turned off completely). On the right, the same image with Color Noise Reduction (chroma) set to 50.
The arrows point to where the algorithm has failed, reducing the chroma completely below the eyebrows or smearing chroma data across the edge between the lips, turning the edge red (the A channel, which runs from green to red, was smeared).
Chroma with no edge / bad edge preserving
In computer software, where there are plenty of resources and time, some chroma noise reduction algorithms I’ve seen are not edge-preserving or do not perform very well. If the user applies too much chroma noise reduction, color will leak over the edges and even into nearby areas. The image below illustrates how too much non-edge-preserving chroma reduction would look on our image. So far I’ve seen this happen with Apple Aperture and Adobe Photoshop’s Reduce Noise filter.
|No edge/bad edge chroma filter||Edge preserving chroma filter|
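The difference between the two filters above can be demonstrated on a 1D chroma signal with a hard color edge. A plain box blur averages straight across the edge and lets color leak into the neighboring area, while a simple range-weighted filter (a 1D bilateral-style sketch; the parameters here are made up) only averages pixels of similar value and keeps the edge intact:

```python
import numpy as np

def box_filter(x, radius=3):
    """No edge preservation: averages blindly across the window."""
    k = 2 * radius + 1
    return np.convolve(np.pad(x, radius, mode='edge'), np.ones(k) / k,
                       mode='valid')

def bilateral_1d(x, radius=3, sigma_r=0.1):
    """Edge preserving: weights neighbors by how similar their values are."""
    out = np.empty_like(x)
    padded = np.pad(x, radius, mode='edge')
    for i in range(len(x)):
        window = padded[i:i + 2 * radius + 1]
        w = np.exp(-((window - x[i]) ** 2) / (2 * sigma_r ** 2))
        out[i] = np.sum(w * window) / np.sum(w)
    return out

# A hard chroma edge, e.g. a red region meeting a green region.
chroma = np.concatenate([np.full(20, -0.5), np.full(20, 0.5)])
blurred   = box_filter(chroma)      # color leaks across the edge
preserved = bilateral_1d(chroma)    # edge stays sharp
```

Right at the edge (index 19), the box-filtered value has drifted far from −0.5 toward 0, which is exactly the color bleeding visible in the left image above, while the range-weighted result stays at −0.5.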
Making it in the real world
Camera makers, with their need to design a pipeline that is fast (5 fps and up) as well as economical (in terms of watts and cost), can’t really afford to use modern noise reduction algorithms as we’ve described them. I’m afraid I can’t disclose exactly how camera makers are producing modern noise reduction results, but I can say that the same principle of YUV, YIQ or LAB noise reduction is being used in a unique way, very similar to what was shown here.
At the time of writing this article, several camera makers are already producing cameras with their own proprietary modern noise reduction algorithms: Nikon (DSLRs only), Panasonic with their Venus III pipeline, and Fujifilm. We expect more to follow in the coming years.
Note: all algorithm examples are illustrations of noise reduction algorithms a camera maker might use. These illustrations are not taken from real cameras or based on actual camera algorithms.