Machine Vision


Need help with auto-exposure time regarding histogram/threshold.

Hello,

 

For research, I need to program an IDS uEye UI-1485LE camera to automatically adjust exposure times for taking snapshots. The purpose is image calibration, so that the images I take are neither too dim nor too washed out.

 

The object of interest is not moving; however, the intensity of our light source may change. So some snapshots might be too washed out, while others might be too dim. The way to fix this is to automatically adjust the exposure time, so we can get an optimal image that is neither too washed out nor too dim.

 

However, my knowledge of histograms and image thresholding is really limited. I've spent hours reading tutorials and examples here on the forum and elsewhere on Google, but I'm still stuck on solving the problem.

 

What I'm able to do now:

1. Capture an image (8-bit) and manually adjust the exposure time.

 

What I still need to solve:

2. Process the image via its histogram and analyze the image quality (whether it is too washed out or too dim).

 

My current proposal: Use a simple feedback loop to automatically adjust exposure time based on image quality.

My idea is as follows (my flow chart is also attached):

 

A. Start (with an initialized reference exposure time)

while camera exposure time is not optimal
{
   B. Capture image with camera
   C. Process and analyze histogram
         if image is approximately optimal, then stop (camera is calibrated)
         else if image is too bright, then decrement camera exposure time
         else (image is too dim), increment camera exposure time
}
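To make the structure concrete, here is a rough text sketch of that loop. It is Python purely for illustration (my real implementation would be LabVIEW VIs), and simulated_capture, image_is_too_bright, and image_is_too_dim are invented placeholders; the two brightness checks are exactly the part I don't know how to write yet.

import numpy as np

# Sketch of the proposed feedback loop. simulated_capture(), image_is_too_bright()
# and image_is_too_dim() are invented placeholders, not uEye or IMAQ calls.

def simulated_capture(exposure_ms, scene_reflectance=0.4):
    """Stand-in for a camera snapshot: brightness grows with exposure time."""
    mean_level = scene_reflectance * exposure_ms * 25.0
    img = np.random.normal(mean_level, 10.0, size=(480, 640))
    return np.clip(img, 0, 255).astype(np.uint8)

def image_is_too_bright(img, hi=200):
    return np.median(img) > hi            # crude placeholder criterion

def image_is_too_dim(img, lo=50):
    return np.median(img) < lo            # crude placeholder criterion

exposure_ms = 2.0                          # A. initial reference exposure time
for _ in range(50):                        # give up if it never settles
    img = simulated_capture(exposure_ms)   # B. capture image with camera
    if image_is_too_bright(img):           # C. analyze brightness and adjust
        exposure_ms *= 0.8                 #    too bright: decrement exposure
    elif image_is_too_dim(img):
        exposure_ms *= 1.25                #    too dim: increment exposure
    else:
        print(f"calibrated at {exposure_ms:.2f} ms")
        break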

 

My limited knowledge of histograms is preventing me from being able to determine programmatically when the image is too bright or too dim.

 

Any suggestions, advice, links, or feedback that can help is greatly appreciated.

 

Thanks for your attention and time,

John Yeah

 

Posted the other unfinished post by accident.

 

Certified LabVIEW Associate Developer

Message 1 of 13

Just an add-on question:

 

Assuming that we have an image, how would I process it to acquire a histogram (I know I can use the Vision Assistant and other IMAQ VIs to achieve this)? And how would I quantify the values in the histogram so I can determine whether the image is too bright or too dim?

 

Thanks,

John

Message 2 of 13

Hi John,

You can monitor how the brightness, contrast, and gamma settings for your image affect your histogram by using a Vision Assistant in a while loop. My particular example uses the Vision Assistant, but you can also do this with the Vision Development Module.

 

You'll have to slightly alter the Vision Assistant to pull the image from a different file path, but my code basically consists of five steps.

 

[Two screenshots of the example attached.]
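For reference, here is a minimal text sketch of the same idea, using Pillow and NumPy purely for illustration (the actual example is a LabVIEW VI using the Vision Assistant, and the file path below is just a placeholder):

import numpy as np
from PIL import Image

# Placeholder path -- point this at whatever snapshot the camera just saved.
IMAGE_PATH = "snapshot.png"

# Load the image as 8-bit grayscale and compute its 256-bin histogram:
# hist[i] is the number of pixels at intensity i, from 0 (black) to 255 (white).
img = np.asarray(Image.open(IMAGE_PATH).convert("L"))
hist, _ = np.histogram(img, bins=256, range=(0, 256))

print("mean intensity:", img.mean())
print("fully saturated pixels (255):", hist[255])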

 

Hope this helps!

Tejinder Gill
National Instruments
Applications Engineer
Visit ni.com/gettingstarted for step-by-step help in setting up your system.
Message 3 of 13

Thanks Tejinder,

 

However, I think you might have misunderstood my question. Your code allows me to manually monitor the histogram in a while loop while I alter the brightness, but I was wondering how I would compare histograms instead (i.e., a numeric comparison like 10 > 7 to see if the image is too dim).

 

So what I'm aiming to do, in essence, in terms of your code, is to automatically change the brightness (based on feedback) by comparing the histogram (or perhaps other image properties that describe how dim or bright the resulting image is). If this snapshot is too dim, make it brighter for the next snapshot automatically. If this snapshot is too bright, make it dimmer for the next snapshot automatically. However, I'm looking for an automatic way for the system to "determine" whether the image is too dim or too bright based on the histogram (or if there are better ways to do it, please let me know).

 

Any ideas?

 

Thanks for your feedback, really appreciate it.

John

Message 4 of 13

I did this on a project once upon a time.

 

I integrated the histogram to get a curve that showed how many pixels were at or below each brightness level. I decided that for my application a good target is to have 90 percent of the pixels at or below the 90 percent brightness level. I would figure out the brightness level for 90 percent of the pixels (the curve intercept), then calculate the scaling factor to bring that to 90 percent of full brightness. If 90 percent of the pixels were at 100 percent of the brightness, I would just halve the shutter or gain. Otherwise, I would multiply the shutter or gain by the scaling factor.

 

You don't want to target 100 percent brightness, because a single bright pixel in an otherwise dark image will throw you for a loop.

 

If you want to do this, here are some instructions:

 

1. Determine how many pixels make up 90 percent of your image (or whatever percentage you pick).

 

2. Loop through the histogram and sum the values. On each iteration, check whether the running total is greater than the 90 percent count (from step 1). If so, stop and record the iteration index, which is equal to the brightness level.

 

3. Determine what 90 percent brightness is (about 230 for an 8-bit image) and calculate the scaling factor: divide 90 percent brightness by the brightness measured in step 2.

 

4. Adjust the shutter or gain using the scaling factor. Repeat until the image is within a satisfactory range. A rough text sketch of these steps follows below.
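Something like this, in plain Python with NumPy purely for illustration (in LabVIEW it would be a For Loop over the array returned by the histogram VI; the exposure_scale name and the 0.90 targets are just the ones from my application):

import numpy as np

TARGET_FRACTION = 0.90            # 90 percent of the pixels...
TARGET_LEVEL = 0.90 * 255         # ...at or below 90 percent brightness (~230)

def exposure_scale(hist):
    """hist: 256-element histogram of an 8-bit image; returns a shutter/gain factor."""
    hist = np.asarray(hist)
    limit = TARGET_FRACTION * hist.sum()        # step 1: pixel count target
    running = 0
    for level, count in enumerate(hist):        # step 2: integrate the histogram
        running += count
        if running >= limit:
            break
    if level >= 255:                            # image saturated: just halve it
        return 0.5
    return TARGET_LEVEL / max(level, 1)         # step 3: scaling factor

# step 4: new_shutter = old_shutter * exposure_scale(hist); repeat until it settles.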

 

Bruce

Bruce Ammons
Ammons Engineering
Message 5 of 13

Thanks for the feedback Bruce,

 

Yeah, I understand that I don't want to target 100 percent brightness, but how would I implement this histogram integration technique (quantifying the image parameter)? Do you mind going into more depth and detail, and can you please provide an example?

If there's a book or something I can refer to, that would be great, too.

 

Thanks for your time,

John

Message 6 of 13

I guess, like I mentioned earlier, my biggest problem is that I don't know how to "handle" the histogram, so I can't compare histograms quantitatively.

Message 7 of 13

So here's my updated flow chart after incorporating Bruce's idea.

 

The problem is, for the second step of "Process histogram," how would I integrate the histogram (sum the values of each pixel)? Is there a built-in LabVIEW VI that does this? Or is there a way to take the histogram generated by IMAQ Histogram and do such processing?

 

And the optimal "brightness" range right now is just a rough range (probably too narrow); I just wrote it there for reference.

 

Thanks,

John

 

Message 8 of 13

The histogram is just a graph that shows how many pixels your image has at each intensity level. It is just an array of 256 values (for U8 grayscale) indexed from black (0) to white (255). Each value of the array is the number of pixels at that intensity.

 

Use a for loop with auto indexing to loop through the values in the histogram.  Use a shift register that is initialized to zero, then add the histogram value each time through the loop.  After adding, compare it to your limit.  If you have reached your limit, output the index value and stop the for loop.
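In text form, that loop looks something like this (Python with NumPy purely for illustration; the random image is just a stand-in so the snippet runs on its own, and the running variable plays the role of the shift register):

import numpy as np

img = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)   # stand-in image
hist, _ = np.histogram(img, bins=256, range=(0, 256))

limit = 0.90 * img.size        # e.g. 90 percent of all pixels
running = 0                    # the "shift register", initialized to zero
for intensity, count in enumerate(hist):
    running += count           # add the histogram value each time through the loop
    if running >= limit:       # limit reached: this index is the brightness level
        break
print("90 percent of the pixels are at or below intensity", intensity)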

 

Bruce

Bruce Ammons
Ammons Engineering
Message 9 of 13

Thanks Bruce,

 

I realized that after I posted. That's why I updated my flow chart in that way. Thanks so much. I'm going to start implementing it and see if it works.

Your help is really much appreciated!

 

John

Message 10 of 13