08-06-2018 02:36 PM
I am trying to quantify displacements (in the range of 0.1 to 0.5 mm) of different points of a circular object while it shrinks. Basically, the points closer to the edges will have larger displacements and the points near the center will have negligible displacement. I do this with the Particle Image Velocimetry (PIV) method: pictures of the object are taken at different times, then each frame is meshed and analyzed so the movement of the pixels can be quantified and converted back to physical lengths.
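For reference, this is roughly how I understand the PIV step, sketched here in Python with OpenCV rather than LabVIEW (the frame file names and the mm-per-pixel calibration are just placeholders): an interrogation window from the first frame is cross-correlated against a search region in the second frame, and the correlation peak gives the pixel shift.

```python
# Plain OpenCV illustration of one PIV interrogation window; not NI Vision.
import cv2

def window_displacement(frame_a, frame_b, y, x, win=32, search=8):
    """Estimate the (dy, dx) shift of one interrogation window between frames."""
    template = frame_a[y:y + win, x:x + win]
    region = frame_b[y - search:y + win + search, x - search:x + win + search]
    # Normalized cross-correlation; the peak location gives the most likely shift.
    scores = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, peak = cv2.minMaxLoc(scores)
    return peak[1] - search, peak[0] - search  # (dy, dx) in pixels

frame_a = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder names
frame_b = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
dy, dx = window_displacement(frame_a, frame_b, y=100, x=100)
mm_per_pixel = 0.02  # placeholder camera calibration value
print("displacement:", dy * mm_per_pixel, dx * mm_per_pixel, "mm")
```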
I am using the example "optical flow.vi" to do that, with the Lucas-Kanade algorithm. The subVI for the algorithm is protected, so I can't see exactly what it is doing. I searched for information on the algorithm and I see that one of its main assumptions is that the flow is constant within a small neighborhood of each pixel. Since my application does not involve an actual fluid flow, I am unsure whether this example program can work for my case. I have the following questions:
1. What exactly do the values of the red vectors calculated by the optical flow.vi example represent?
2. Can the Lucas-Kanade algorithm be used for my case, where I am trying to measure the total displacement of points inside a shrinking circle?
3. Can the magnitudes of the vectors be written out to a text file? (since the subVI is protected)
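For context, here is what I understand sparse Lucas-Kanade tracking to compute, sketched in Python with OpenCV rather than the protected NI subVI (frame file names are placeholders); the last part just writes the per-point displacement magnitudes to a text file, which is the kind of output I am after in question 3.

```python
# Sparse Lucas-Kanade tracking with OpenCV; an illustration, not the NI Vision code.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder names
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Pick corner-like points on the first frame to track.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=10)

# Lucas-Kanade assumes the motion is (nearly) constant within the small
# window around each tracked point, not that the scene is a fluid flow.
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None, winSize=(21, 21))

# Write each point's starting position and displacement magnitude (pixels).
with open("displacements.txt", "w") as f:
    for a, b, ok in zip(p0.reshape(-1, 2), p1.reshape(-1, 2), status.ravel()):
        if ok:
            mag = np.hypot(*(b - a))
            f.write(f"{a[0]:.1f} {a[1]:.1f} {mag:.3f}\n")
```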
08-07-2018 04:31 PM
Hi awayllace,
I looked over the example you mentioned, and the specific sub-VI you were concerned about. Generally, the best way to find out information about how a LabVIEW function works is to check out the help page for that function. Based on that information, here’s what I can tell you:
08-08-2018 12:39 PM
Thank you.
In general, do you have any other ideas on how to measure pixel displacement between two image frames using LabVIEW/Vision? Or how to find other examples that do that?
08-09-2018 10:23 AM
Hi awayllace,
Like so much in vision programming, it depends on the images you're working with. Do you have unique features you can track from one image to the next? If so, you might be able to do something like geometric matching to locate them in the images and compare their locations.
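As a rough illustration of that idea (using OpenCV template matching outside LabVIEW, not NI Vision's geometric matching tool; file names are placeholders), you would locate the same feature in each image and compare the match positions:

```python
# Locate one distinctive feature in each image and compare its position.
import cv2

template = cv2.imread("feature_template.png", cv2.IMREAD_GRAYSCALE)  # placeholder

def locate(image_path):
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(scores)
    return top_left, score  # top_left is (x, y) of the best match

pos_before, _ = locate("before.png")
pos_after, _ = locate("after.png")
print("pixel shift:", pos_after[0] - pos_before[0], pos_after[1] - pos_before[1])
```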
Do you have a strong contrast between the object and the background? If so, and if the object stays in the same physical location from one image to the next, you could measure where the edge of the object is in each image and compare those positions.
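A minimal sketch of that edge-based idea for a high-contrast, roughly circular object (again OpenCV rather than NI Vision, with placeholder file names and an Otsu threshold that assumes a clean object/background split) is to threshold each frame, find the outer contour, and compare the fitted radius:

```python
# Measure the object's outer radius in each frame and compare (OpenCV 4).
import cv2

def object_radius(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Otsu threshold separates the object from the background; adjust for your lighting.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    biggest = max(contours, key=cv2.contourArea)
    (_, _), radius = cv2.minEnclosingCircle(biggest)
    return radius

shrinkage_px = object_radius("before.png") - object_radius("after.png")
print(f"edge moved inward by about {shrinkage_px:.2f} pixels")
```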
Depending on the images you have, your lighting setup, the specific measurements you need, and a number of other factors, there may be different ways to achieve your goals. My recommendation would be to take some time with your images and play around in the Vision Assistant. See what you can do with the tools included; trying it yourself is the fastest way to learn.
If you run into questions, try calling into support or creating a thread here on the forums. We're here to help if you get stuck or need clarification.