Machine Vision


Measure highlights in image

Solved!

Hi everyone.

I have been presented with a challenge.

 

Our mechanical team has built a machine that randomly erodes the surface of some test specimens.

 

This is an image of a test specimen captured at the beginning of a test:

245.png


As you can see, four white lines are present in the image; they are caused by reflections from the lamps.

And this is an image of a similar test specimen at the end of a test.

452.png

 

We have already found a satisfying way of measuring the area of the erosion 🙂 and it seems to be robust for all possible test specimen colors 🙂


Now they have decided that they want the machine to measure (online) the length of the white reflection lines, as for some materials the erosion only causes the surface to become matted, and does not necessarily produce the dark eroded area seen above.


An example of such a use case can be seen below:
120.png

 

I have tried multiple approaches without much success.
Ideally I would like to isolate the highlighted lines, so that the system could identify nearly the same lines as the ones hand-painted below:

120 hand.png

 

Methods tried:

1)
IMAQ Find Edge, to get the infinite lines that best describe the highlighted reflection lines.
Then, using a binning method, the line is segmented to where there is enough light in the image.
This method is, however, very sensitive to the horizontal variation of the lighting, and will typically fail to find either the endpoints or the matted mid-section correctly.

2)
Tuning IMAQ Detect Lines gives me this:
Untitled.png

Which is not exactly what I want: I had hoped that the lines left of the eroded area would also be included, and that the lines found above and below the highlights would be excluded. But getting closer.

3)
High-pass filtering the image to enhance the edges and suppress the effects of the varying light quality.
The built-in methods had a tendency to enhance the erosion edge as much as or more than the highlights, so I tried building my own kernel to enhance only horizontal features, but even with a 5x5 kernel the best result I got was:
hihg pass filtered.png

Which did not exactly let me separate the highlights from the eroded area.
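For reference, a horizontal-only enhancement kernel of the kind described (my own sketch in NumPy/SciPy, not the poster's actual 5x5 kernel) could look like this — positive weights on the centre row, negative above and below, zero-sum so flat regions and slow vertical lighting gradients are suppressed:

```python
import numpy as np
from scipy.ndimage import convolve

# Zero-sum 5x5 kernel: responds strongly to a thin bright horizontal line,
# weakly to flat regions and slowly varying illumination.
kernel = np.array([
    [-1, -1, -1, -1, -1],
    [-1, -1, -1, -1, -1],
    [ 4,  4,  4,  4,  4],
    [-1, -1, -1, -1, -1],
    [-1, -1, -1, -1, -1],
], dtype=float) / 5.0

def enhance_horizontal(img):
    """Convolve a grayscale image (2-D array) with the horizontal-line kernel."""
    return convolve(img.astype(float), kernel, mode="nearest")
```

A 1-pixel-wide horizontal line scores about 4x its brightness, while a uniform patch scores 0; a broad erosion boundary falls somewhere in between, which matches the difficulty described above.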

4)
Finally, I tried a cross-correlation with an image of a horizontal line of small width.
cross correl.png

 

Again, this enhanced the eroded area more than the line feature I was interested in.
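As a sketch of that cross-correlation approach (my own NumPy/SciPy approximation; the template shape and size are assumptions, not the actual one used), a zero-mean line template keeps flat regions near zero so only line-like structure stands out:

```python
import numpy as np
from scipy.signal import correlate2d

def line_template(width=15, height=5):
    """Zero-mean template of a thin bright horizontal line.

    Subtracting the mean makes uniform regions score ~0, so only
    structure resembling the template produces a strong response."""
    t = np.zeros((height, width))
    t[height // 2, :] = 1.0
    return t - t.mean()

def correlate_with_line(img, width=15):
    """Cross-correlate a grayscale image with the horizontal-line template."""
    return correlate2d(img.astype(float), line_template(width), mode="same")
```

Even with a zero-mean template, any high-contrast boundary (such as the erosion edge) still correlates, which is consistent with the problem reported here.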

If you have any idea that I can use to enhance the highlighted lines so that their length can be measured, I would really appreciate it.
I have tried adjusting gamma, contrast, and brightness to make the "white" lines stand out more, but due to the light variation this also makes the center of the image very bright before the lines are enhanced at the ends.

 

Thank you

/ZcuBa

Engineer, M.Sc. Autonomous Systems, Automation and Control of non-linear systems
Project Engineer @ R&D A/S
www.rdas.dk
Message 1 of 11

Your fixturing will need to be absolutely solid for this to work.

 

I have included a simple example which extracts the green color channel from the before and after images and produces an absolute difference. It would be better to extract and recombine all the channels. Be sure to cast the images as I16 before performing the ABS/DIFF.
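In NumPy terms (a sketch of the idea, not the actual VI), the signed cast before the subtraction matters because 8-bit unsigned subtraction wraps around:

```python
import numpy as np

def abs_diff(before, after):
    """Absolute difference of two single-channel 8-bit images.

    Cast to a signed type first (the I16 cast mentioned above):
    uint8 arithmetic would wrap, e.g. 5 - 10 == 251."""
    return np.abs(after.astype(np.int16) - before.astype(np.int16)).astype(np.uint8)
```

Applying this per channel and recombining the results corresponds to the "extract and recombine all the channels" suggestion.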

Machine Vision, Robotics, Embedded Systems, Surveillance

www.movimed.com - Custom Imaging Solutions
Message 2 of 11

Hi MoviJOHN

Thanks for the suggestion; using the first frames to create a "background" and then subtracting the background to detect changes is a good idea.

And indeed it works for my simulated data 🙂

 

Unfortunately it does not work in the practical scenario, as the erosion machine works by rotating the specimen through a chamber with water droplets.

Hence the camera captures an object travelling at 180 m/s (while it travels towards the camera), and fixturing is not an option 🙁.

The result is unfortunately that the speed, combined with the variance in the trigger system (~10 µs), causes the specimen and lights to vary in their relative positions from frame to frame.

 

While I do have markers on the specimen compartments, which I use to homogenize rotation and scaling on all images before analysis, the location and angle of the reflection lines in the images will vary as a function of the trigger variance and of the vibrations of the machine.

Kudos is given for thinking of something I had not.



Message 3 of 11

Hi ZcuBa

 

Have you considered using splines to estimate the lines?

 

I have done something similar for lane detection, where you also have a lot of lines in an image but, in that case, want to focus on the largest two.

 

Try to take a look at this article:

http://www.med.unc.edu/bric/ideagroup/Publications/publications/articles/WangYue_PRL2000.pdf

 

I think that you would find a lot of the published articles about lane detection interesting for your application here.

 

 

 

If you were to follow the image difference approach proposed before, you can still do this even though the images are not aligned from the beginning. You could fairly easily get the position of the specimen in both images and align them to each other (position- and rotation-wise), and afterwards subtract the two images. You just need to find some features that are the same in both images, such as the left and right piece of the container, or fill the whole container with binary true and the black background with binary false; then you have two similar shapes that you can align.
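A sketch of that alignment step (my own NumPy/SciPy approximation, assuming a binary mask of the container is already available in each image): estimate position and rotation from image moments, then warp one image to match the other. Note `ndimage.rotate` rotates about the image centre, not the centroid, so this is only approximate for larger rotations:

```python
import numpy as np
from scipy import ndimage

def pose_from_mask(mask):
    """Centroid (row, col) and principal-axis angle of a binary mask."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    y, x = ys - cy, xs - cx
    # principal-axis orientation from second-order central moments
    angle = 0.5 * np.arctan2(2.0 * (x * y).mean(), (x * x).mean() - (y * y).mean())
    return (cy, cx), angle

def align_to(img, mask, ref_pose):
    """Rotate and shift img so the pose of its mask matches ref_pose."""
    (cy, cx), ang = pose_from_mask(mask)
    (rcy, rcx), rang = ref_pose
    out = ndimage.rotate(img, np.degrees(ang - rang), reshape=False, order=1)
    return ndimage.shift(out, (rcy - cy, rcx - cx), order=1)
```

After alignment the two images can be subtracted as in the earlier suggestion.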

 

 

 

Best Regards

Anders Rohde

Applications Engineer

National Instruments Denmark

Message 4 of 11

Hi A.Rohde
Thanks for the link.
I'll see if splines can do the trick for me 😉

Ah, but the lines that occur from reflections are not placed at the same location on the specimen if the specimen was not in exactly the same position.

Consider a car that drives through a tunnel:
the highlights that are reflections from the tunnel lights move backwards along the car as it moves forwards through the tunnel.


I basically have the same issue, as the specimen moves towards the camera, but the lights do not.

Message 5 of 11

Makes sense. See if you can get something out of the splines; they usually come in handy for me. They are really good when the surroundings are changing: they get dragged towards the edges like opposite poles of a magnet.

 

Otherwise, try to post some more images afterwards, and then we will see if we can find an algorithm for you.

 

 

Best Regards

Anders Rohde

Message 6 of 11

Please provide me with a before and after of the matting, and I'll send you another means of detection.

Machine Vision, Robotics, Embedded Systems, Surveillance

www.movimed.com - Custom Imaging Solutions
Message 7 of 11
Solution
Accepted by topic author ZcuBa

You can very easily find the top and the bottom line using an Advanced Edge Find function. From there, extract the pixel values from the line profile, and you should be able to detect any nonuniform pixel values.
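A rough NumPy sketch of that profile check (assuming the line endpoints are already known from the edge find; the function names and threshold are mine, not part of the original solution):

```python
import numpy as np

def line_profile(img, p0, p1, n=256):
    """Sample n pixel values along the segment from p0 to p1 (row, col)."""
    rows = np.linspace(p0[0], p1[0], n).round().astype(int)
    cols = np.linspace(p0[1], p1[1], n).round().astype(int)
    return img[rows, cols]

def bright_span(profile, thresh):
    """First/last sample above thresh: the extent of the visible highlight.

    Matted stretches show up as sub-threshold dips inside that span."""
    idx = np.nonzero(profile > thresh)[0]
    return (int(idx[0]), int(idx[-1])) if idx.size else None
```

The span length (last index minus first, scaled by the sample spacing) gives the line-length measurement the mechanical team asked for.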

 

Here is a screenshot of the result.

 

Good luck,

 

Dan

Message 8 of 11

MoviJOHN

Hi, we have created this reference specimen, which was partially protected (like the red one) to create an image with a known matted area.

During the test most of our current specimens are not matted first, and I have no other images on my laptop with matting, so it will have to do.
We do not expect well-defined vertical matting areas. 😉

The images are attached:
245.png is before the matting,

4160 is after.

Message 9 of 11

Thank you df86

I have not tried the Advanced Edge Find function, but the ones I tried would normally give false positives, as they find edges using derivatives, and derivatives other than the ones caused by the reflections may be present 😕

I have another method for finding the reflection lines, and the solution that seems to work for me is to use my method for finding the lines, combined with your idea of sampling the line for a 1D signal analysis.

My colleague just told me that he has created a non-causal signal reconstruction filter that can robustly determine the reflection lines in the 1D data.

Hence I have accepted your answer to the question 😄


The method I found for the line detection was as follows (as the reflection line is the highlight, it always contains the points with the highest illumination):

Sample n columns (n <= Width) with "get rowcol vi".

Create a black image of the same size as the input.

For each column:

    pick the 5% of the pixels with the highest intensity (luma channel)

    copy those pixels to the black image

The result is an image that contains the reflection lines for a column, if the reflection is present in that column, and otherwise some random points.


This greatly reduces the probability of a false positive, as edges that occur near shadows or erosions are suppressed before the line detection 😉
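In NumPy terms, the per-column step could be sketched like this (my translation of the steps above, not the actual VI; the function name is mine):

```python
import numpy as np

def keep_brightest_per_column(img, frac=0.05):
    """Copy only the brightest `frac` of pixels in each column to a black image.

    Columns crossed by a reflection line keep mostly line pixels;
    other columns contribute only scattered points, so a subsequent
    line fit sees far fewer false-positive edges."""
    h, w = img.shape
    k = max(1, int(round(h * frac)))  # pixels to keep per column (5% default)
    out = np.zeros_like(img)
    for col in range(w):
        column = img[:, col]
        top = np.argpartition(column, -k)[-k:]  # indices of the k brightest pixels
        out[top, col] = column[top]
    return out
```

Running a line-detection step on the resulting image then only has the highlight pixels (plus sparse noise) to work with.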

Message 10 of 11