07-30-2013 04:10 AM
Hi everyone.
I have been presented with a challenge.
Our mechanical team has built a machine that randomly erodes the surface of a test specimen.
This is an image of a test specimen captured at the beginning of a test:
As you can see, four white lines are present in the image; they are caused by reflections from the lamps.
And this is an image of a similar test specimen at the end of a test.
We have already found a satisfactory way of measuring the area of the erosion, and it seems to be robust for all possible test specimen colors.
Now they have decided that they want the machine to measure (online) the length of the white reflection lines, as for some materials the erosion only causes the surface to become matted and does not necessarily produce the dark eroded area seen above.
An example of such a use case can be seen below:
I have tried multiple approaches without the greatest success.
Ideally I would like to isolate the highlighted lines, so that the system could identify nearly the same lines as the ones that are hand-painted below.
Methods tried:
1)
IMAQ Find Edge, to get the infinite lines that best describe the highlighted reflection lines.
Then, using a binning method, the line is segmented to the regions where there is enough light in the image.
This method is, however, very sensitive to the horizontal variation of the lighting, and will typically fail to find the endpoints or the matted mid-section correctly.
2)
Tuning IMAQ Detect Lines gives me this:
This is not exactly what I want: I had hoped that the lines left of the eroded area would also be included, and that the lines found above and below the highlights would be excluded, but it is getting closer.
3)
High-pass filtering the image to enhance the edges and suppress the effects of the varying light quality.
The built-in methods had a tendency to enhance the erosion edge as much as or more than the highlights, so I tried building my own kernel to enhance only horizontal features, but even with a 5x5 kernel the best result I got was:
This did not exactly let me separate the highlights from the eroded area.
4)
Finally, I tried a cross-correlation with an image of a horizontal line of small width.
Again, this enhanced the eroded area more than the line feature I was interested in.
If you have any idea I can use to enhance the highlighted lines so that their length can be measured, I would really appreciate it.
I have tried adjusting gamma, contrast, and brightness to make the "white" lines stand out more, but due to the light variation this also makes the center of the image very bright before the lines are enhanced at the ends.
Thank you
/ZcuBa
07-30-2013 02:04 PM
Your fixturing will need to be absolutely solid for this to work.
I have included a simple example which extracts the green color channel from the before and after images and produces an absolute difference. It would be better to extract and recombine all the channels. Be sure to cast the images as I16 before performing the ABS/DIFF.
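For readers without the attached VI, a minimal NumPy sketch of the same idea (assuming the before/after frames are already loaded as 8-bit RGB arrays) might look like this; the signed 16-bit cast mirrors the I16 advice above so the subtraction cannot wrap around in unsigned arithmetic:

```python
import numpy as np

def green_channel_abs_diff(before, after):
    """Absolute difference of the green channel of two RGB uint8 frames.

    Cast to int16 first (analogous to IMAQ's I16 cast) so that the
    subtraction cannot wrap around in unsigned 8-bit arithmetic.
    """
    g_before = before[..., 1].astype(np.int16)
    g_after = after[..., 1].astype(np.int16)
    return np.abs(g_after - g_before).astype(np.uint8)

# Tiny synthetic example: one pixel darkens in the green channel.
before = np.zeros((2, 2, 3), dtype=np.uint8)
before[0, 0, 1] = 200
after = before.copy()
after[0, 0, 1] = 50
diff = green_channel_abs_diff(before, after)
print(diff[0, 0])  # 150
```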
07-31-2013 01:41 AM
Hi MoviJOHN
Thanks for the suggestion, using the first frames to create a "background" and then afterwards subtracting the background to detect changes is a good idea.
And indeed it works for my simulated data
Unfortunately it does not work in the practical scenario, as the erosion machine works by rotating the specimen through a chamber with water droplets.
Hence the camera captures an object travelling at 180 m/s (while it travels towards the camera), and fixturing is not an option.
The result is, unfortunately, that the speed combined with the variance in the trigger system (~10 µs) causes the specimen and lights to vary in their relative positions in each frame.
While I do have markers on the specimen compartments, which I use to homogenize rotation and scaling across all images before analysis, the location and angle of the reflection lines in the images will vary as a function of the trigger variance and of the vibrations of the machine.
Kudos is given for thinking of something I had not.
07-31-2013 05:01 AM - edited 07-31-2013 05:03 AM
Hi ZcuBa
Have you considered using splines to do the estimation of the lines?
I have done something similar for lane detection, where you also have a lot of lines in an image but want to focus on the largest two.
Try to take a look at this article:
http://www.med.unc.edu/bric/ideagroup/Publications/publications/articles/WangYue_PRL2000.pdf
I think you would find a lot of the published articles about lane detection interesting for your application here.
If you were to follow the image-difference approach proposed before, you can still do this even though the images are not aligned from the beginning: you could fairly easily get the position of the specimen in both images, align them to each other (position- and rotation-wise), and afterwards subtract the two images. You just need to find some features that are the same in both images, such as the left and right pieces of the container, or threshold the images so that the whole container is binary true against a binary false background; then you have two similar shapes that you can align.
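The alignment step described above can be sketched in NumPy (hypothetical helper names, not IMAQ calls): threshold both frames to a binary shape, then use the shape's centroid to estimate the translation between them before subtracting. A full solution would also estimate rotation, e.g. from second-order image moments, which is omitted here for brevity.

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of the True pixels in a binary mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def align_by_centroid(img, mask, ref_mask):
    """Shift img so its mask centroid matches the reference mask centroid."""
    dr = round(centroid(ref_mask)[0] - centroid(mask)[0])
    dc = round(centroid(ref_mask)[1] - centroid(mask)[1])
    return np.roll(np.roll(img, dr, axis=0), dc, axis=1)

# Synthetic example: the same bright square, shifted by (2, 3) pixels.
ref = np.zeros((20, 20), dtype=np.uint8)
ref[5:9, 5:9] = 255
moved = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)
aligned = align_by_centroid(moved, moved > 0, ref > 0)
print(np.abs(aligned.astype(int) - ref.astype(int)).sum())  # 0
```

After alignment, the difference image is near zero wherever nothing changed, so any residual highlights the altered region.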
Best Regards
Anders Rohde
Applications Engineer
National Instruments Denmark
07-31-2013 07:02 AM
Hi A.Rohde
Thanks for the link.
I'll see if splines can do the trick for me 😉
Ah, but the lines that occur from reflections are not placed at the same location on the specimen if the specimen is not in exactly the same position.
Consider a car that drives through a tunnel.
The highlights that are reflections from the tunnel lights move backwards along the car as it moves forwards through the tunnel.
I basically have the same issue, as the specimen moves towards the camera but the lights do not.
07-31-2013 08:17 AM
Makes sense. See if you can get something out of the splines; they usually come in handy for me. They are really good when the surroundings are changing: they will be dragged towards the edges like two opposite poles of a magnet.
Otherwise, try to post some more images afterwards, and then we will see if we can find an algorithm for you.
Best Regards
Anders Rohde
07-31-2013 01:31 PM
Please provide me with a before and after of the matting, and I'll send you another means of detection.
07-31-2013 09:27 PM
You can very easily find the top and bottom lines using the Advanced Edge Find function. From there, extract the pixel values from the line profile, and you should be able to detect any nonuniform pixel values.
There is a screen shot of the result.
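A hypothetical sketch of the profile idea in NumPy (the threshold choice is my own assumption, not from the screenshot): take the intensity profile along a detected line and flag samples that fall well below the profile's median, which would correspond to the matted, non-reflective stretches.

```python
import numpy as np

def nonuniform_spans(profile, drop_fraction=0.5):
    """Flag profile samples below drop_fraction * median intensity,
    i.e. candidate matted (non-reflective) sections along the line."""
    profile = np.asarray(profile, dtype=float)
    threshold = drop_fraction * np.median(profile)
    return profile < threshold

# A reflection line that goes dark in the middle (matted section).
profile = np.array([200, 210, 205, 40, 35, 50, 198, 202], dtype=float)
mask = nonuniform_spans(profile)
print(mask)  # [False False False  True  True  True False False]
```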
Good luck,
Dan
08-01-2013 01:21 AM
@ MoviJOHN
Hi, we have created this reference specimen, which was partially protected (like the red one),
to create an image with a known matted area.
During the test most of our current specimens are not matted first, and I have no other images on my laptop with matting, so it will have to do.
We do not expect well-defined vertical matting areas. 😉
Here the images are attached
245.png is before the matting.
4160 is after.
08-01-2013 01:40 AM
Thank you df86.
I have not tried the Advanced Edge Find function.
But the ones I tried would normally give false positives, as they find edges using derivatives, and other derivatives may be present besides the ones caused by reflections 😕
I have another method for finding the reflection lines, and the solution that seems to be working for me is to use my method for finding the lines, combined with your idea of sampling the line as a 1D signal analysis.
My colleague just told me that he has created a non-causal signal reconstruction filter that can determine the reflection lines robustly in the 1D data.
Hence I have accepted your answer to the question.
The method I found for the line detection was as follows (as the reflection line consists of the highlights, it always contains the points with the highest illumination):
1) sample n columns (n <= Width) with "get rowcol vi"
2) create a black image of the same size as the input
3) for each column:
pick the 5% of the pixels with the largest intensity (luma channel)
copy those pixels to the black image
The result is an image that contains the reflection lines in a column, if the reflection is present in that column, and otherwise some random points.
The result is that the probability of a false positive is greatly reduced, as edges that occur near shadows or erosions are suppressed before the line detection 😉
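For reference, the per-column top-5% step described above might be sketched like this in NumPy (assuming a grayscale/luma image, since I don't have the original VI):

```python
import numpy as np

def top_percentile_mask(luma, keep_fraction=0.05):
    """For each column, keep only the brightest keep_fraction of pixels.

    Returns an image that is black everywhere except the per-column
    top intensities, mimicking the copy-to-black-image step.
    """
    luma = np.asarray(luma)
    out = np.zeros_like(luma)
    # Per-column intensity threshold at the (1 - keep_fraction) quantile.
    thresholds = np.quantile(luma, 1.0 - keep_fraction, axis=0)
    mask = luma >= thresholds  # threshold broadcasts across each column
    out[mask] = luma[mask]
    return out

# Synthetic image: each column is an intensity ramp 0..99, so exactly
# the 5 brightest pixels per column should survive.
img = np.tile(np.arange(100, dtype=np.uint8)[:, None], (1, 4))
result = top_percentile_mask(img)
print(np.count_nonzero(result, axis=0))  # [5 5 5 5]
```

The surviving pixels can then be fed to a line fit or the 1D profile analysis described above.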