LabVIEW


Color Calibration across multiple cameras

Hi,

 

Has anyone ever tried to do color calibration across multiple cameras against known values?

 

My situation is this:

 

1. Four identical setups with the same make and model 4k color line camera and LED lights 

2. All four cameras have been white balanced according to the manufacturer's specs

3. On running the same product through all the different lines we noticed slight variations in color from one system to the next

4. So we built a calibration fixture with known color targets (RGB values provided by the manufacturer of the color card). The colors are a known red, green, blue, yellow, black, and white

5. Next thought was: okay, this should be easy. I have an image of a known color, so draw an ROI inside the known color, extract the average value, and compare that to the known value to determine the ratio offsets for each layer.

6. To me that logic seemed solid, but when I try to do that I end up with images that are less calibrated than before, because I start to "over-saturate" certain layers (i.e. push values over 255) in certain colors, so I am not producing the results I really need.
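A minimal NumPy sketch of the per-channel ratio approach from steps 5 and 6, showing why it clips. All values here are hypothetical stand-ins, not measurements from the actual system:

```python
import numpy as np

# Hypothetical reference: the card's known white patch, and the average
# RGB one camera actually measured inside the ROI.
known_white = np.array([235.0, 235.0, 235.0])
measured_white = np.array([210.0, 225.0, 240.0])

# Per-channel gain ratios, as in step 5.
gains = known_white / measured_white  # roughly [1.12, 1.04, 0.98]

# Apply the gains to a pixel that is already bright in one channel:
pixel = np.array([240.0, 100.0, 50.0])
corrected = pixel * gains             # red channel lands near 269

# In an 8-bit image this clips to 255 -- the "over-saturation" from
# step 6. Information above 255 is destroyed, not corrected.
clipped = np.clip(corrected, 0, 255).astype(np.uint8)
```

The gain itself is fine; the failure mode is that an 8-bit container has no headroom, so any gain above 1.0 can push already-bright pixels out of range.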

 

Can someone explain to me where my logic is flawed? I have a decent background in machine vision, but color is not something I have done a whole lot with. Anything I have ever done with color was always "good enough" with the default white balance from the camera manufacturer.

 

Sharing code on this will be a little difficult, but I can share snippets if needed. Mostly I am looking for help with the design framework more than the actual "nuts and bolts" of the coding.

 

Please let me know if you have any questions and I will do my best to answer them. Thanks for your help in advance.

Message 1 of 7

It's been a long time since I did an application for this, and I did not develop the math behind it (smarter people than me at the customer did), but color calibration is a bit of magic.

But are you by any chance trying to do the color calibration directly in RGB color space? That is not advisable, as RGB space has some non-linear aspects as far as color representation is concerned. If I remember correctly, we did a translation to CIELab, then applied the color calibration routines (which were some matrix calculations) that the customer had developed, and then converted everything back to RGB for display. I did part of the conversions by integrating C routines from an open source project. We also controlled the cameras directly to acquire raw images, which then got processed by calling dcraw.
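To illustrate the "get out of gamma-encoded RGB first" point: a minimal sketch of the standard sRGB (D65) to XYZ conversion, with the matrix constants from the sRGB specification. Calibration math tends to behave better in this linear space than on gamma-encoded 8-bit values:

```python
import numpy as np

# Standard linear-RGB -> XYZ matrix for sRGB primaries and D65 white.
M_RGB2XYZ = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])

def srgb_to_xyz(rgb8):
    """Convert 8-bit sRGB values to XYZ: undo the gamma, then apply the matrix."""
    c = np.asarray(rgb8, dtype=np.float64) / 255.0
    # Inverse sRGB transfer function (piecewise: linear toe, 2.4 power curve).
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    return lin @ M_RGB2XYZ.T

# Pure white maps to (approximately) the D65 white point [0.9505, 1.0, 1.0888].
white_xyz = srgb_to_xyz([255, 255, 255])
```

From XYZ one can go on to CIELab if perceptual distances are wanted; the XYZ step is common to both.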

You also need to normalize the data at some point to avoid over-saturation of one or more of the color channels when you apply the color correction.
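One simple form that normalization could take (a sketch with made-up gains, not the routine from the actual application): instead of clipping corrected values at 255, rescale the whole image so the hottest corrected value maps to 255. That preserves the channel ratios, i.e. the color, at the cost of a small global exposure change:

```python
import numpy as np

# A toy 8-bit image and hypothetical per-channel gains from a calibration step.
img = np.array([[[240, 100, 50], [30, 200, 220]]], dtype=np.float64)
gains = np.array([1.12, 1.04, 0.98])

corrected = img * gains

# Renormalize instead of clipping: map the peak back into range.
peak = corrected.max()
if peak > 255.0:
    corrected *= 255.0 / peak

out = np.rint(corrected).astype(np.uint8)
```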

Also, all this trouble only has any value if you have a completely controlled environment in which you take both the calibration and product snapshots. Ambient light of any form that can reach the object being photographed will almost certainly change the color values the camera sees. The customer used special boxes in which the camera was mounted, which had a calibrated light source and specially treated inside walls, and could be completely closed for the actual image acquisition.

You also need to account for the light source changing its spectrum, both over the short term due to warm-up and over longer periods due to aging.

This means that you need a warm-up time before doing calibration and measurements, and that you need to redo the calibration at certain intervals, even with an LED source. Other light sources will require even more frequent recalibration, as they tend to age quicker.

Rolf Kalbermatter
My Blog
Message 2 of 7

Hi,

 

Thanks for the response. I had a feeling once I got into it more that it was going to be more "magic" than anything. I had been trying to do it in RGB mostly, but I have tried it in just about every format LabVIEW has to offer as well. I will focus on CIELab now though.

 

The system is well protected against ambient light. The lens is at f/16 and six LED bar lights illuminate the application. I 100% agree the system is not a one-and-done calibration; that is one of the reasons we are developing this, so we can redo the calibration as needed.

 

So LabVIEW has no built-in functionality at all to perform a color balance then? I know programs like Photoshop do, so I was hoping there was a hidden function somewhere that I could make work here.

Message 3 of 7

Costello,

 

As far as I have seen, there is not an option for an automatic color balance. Can you tell me what effect the color has on your final goal or application? Knowing this will help us investigate what the best workaround for your situation might be.

Message 4 of 7

Rey,

 

The goal is to have highly accurate color images. We are replacing a manual visual check with an automated one, and I need to be able to pull out features of the object based on color.

 

While to the human eye the machines all appear to produce similar images, the color groups I use to separate features are working really well on some of the lines but not all of them. The colors are off just slightly enough that they fall outside my groupings. The feature we are looking for has a narrow band of color, so I do not want to just widen the groups to include more colors, because then I could produce false reads at other times.
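One way this kind of narrow-band grouping is often expressed, once values are in a perceptual space like CIELab, is a single color-difference tolerance (CIE76 delta-E, the Euclidean distance in Lab) instead of three per-channel RGB ranges. A minimal sketch with entirely hypothetical Lab values and tolerance:

```python
import numpy as np

def delta_e76(a, b):
    """CIE76 color difference: Euclidean distance in Lab space."""
    return float(np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)))

# Hypothetical Lab values: the reference color of the feature of interest,
# a pixel from a well-calibrated line, and one from a line that drifted.
feature_lab = np.array([55.0, 40.0, 20.0])
pixel_ok    = np.array([56.0, 41.5, 19.0])
pixel_drift = np.array([55.5, 48.0, 27.0])

TOLERANCE = 5.0  # assumed acceptance radius, tuned per application
in_group_ok = delta_e76(feature_lab, pixel_ok) <= TOLERANCE
in_group_drift = delta_e76(feature_lab, pixel_drift) <= TOLERANCE
```

The appeal is that the tolerance is one perceptually meaningful number, so it degrades more gracefully under small calibration drift than an RGB box does.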

 

Message 5 of 7

Turns out I remembered something wrong: we did not use CIELab but rather the XYZ color space for this. I can't really post the code here, but basically we used the XYZ color values from an IT8 color card. Scanning/photographing that card, we extracted the mean RGB values of each color spot. This RGB3 was converted to RGB9, which is a combination of the original RGB values, their squares, and the products of each with one of the others. This was then matrix-multiplied with its transposed matrix, and then matrix-multiplied with the XYZ reference values for the IT8 colors. The resulting array was transposed again and used as the calibration matrix.
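For what it's worth, that transpose/multiply chain reads like a normal-equations least-squares fit of a 9x3 calibration matrix; that interpretation is mine, not confirmed by the original code. A minimal NumPy sketch, with synthetic patch data standing in for the IT8 measurements and references:

```python
import numpy as np

def rgb9(rgb):
    """Expand RGB to the 9-term basis: R, G, B, R^2, G^2, B^2, RG, RB, GB."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([r, g, b, r * r, g * g, b * b, r * g, r * b, g * b], axis=-1)

# Synthetic stand-ins: mean RGB of 24 color patches (normalized 0..1) and
# XYZ references generated from a made-up ground-truth matrix.
rng = np.random.default_rng(0)
patch_rgb = rng.uniform(0.05, 0.95, size=(24, 3))
true_M = rng.normal(size=(9, 3))
patch_xyz = rgb9(patch_rgb) @ true_M

# Least-squares fit of the 9x3 calibration matrix (what the normal-equations
# transpose/multiply chain computes, done here via np.linalg.lstsq).
M, *_ = np.linalg.lstsq(rgb9(patch_rgb), patch_xyz, rcond=None)

# Applying the calibration to image pixels: expand to RGB9, multiply by M.
pixels = rng.uniform(0.0, 1.0, size=(100, 3))
xyz = rgb9(pixels) @ M
```

With noiseless synthetic data the fit recovers the ground-truth matrix; with real patch measurements it gives the best-fit calibration in the least-squares sense.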

 

The RGB values from the real image were then likewise expanded into RGB9, transposed, matrix-multiplied with the above calibration matrix, and transposed again to get the XYZ values for the spots to be measured in the picture. That was it, as the customer wanted XYZ values for the final measurements.

 

Basically everything is more or less turned into a linear algebra problem, which I have to admit wasn't my strongest point in mathematics. :) Without someone who understands the mathematics involved really well, I'm afraid it won't be possible to do this right.

 

And you most likely want to use RAW images from the camera, so that you can convert them with dcraw or similar into color images with known white reference values and so on. Otherwise you are pretty much at the mercy of the color transformation and image correction algorithms that your camera manufacturer put into their devices.

Rolf Kalbermatter
My Blog
Message 6 of 7

There are COTS color sensors that do not rely on cameras; would this be an option?

Keyence and Micro Epsilon come to mind.

 

-AK2DM

~~~~~~~~~~~~~~~~~~~~~~~~~~
"It's the questions that drive us."
~~~~~~~~~~~~~~~~~~~~~~~~~~
Message 7 of 7