01-17-2020 05:35 PM
Hi,
I am using the NI Vision Development Module with two USB 3.1 cameras to acquire and save pairs of images for stereovision calibration in separate software.
The cameras are: https://www.flir.com/products/blackfly-s-usb3/
I have the camera acquisitions running in the same loop, but there is a significant time delay between the paired images. This is noticeable when I try to run the images through my calibration software (bad calibration scores). I am assuming that this is because the cameras are acquiring images at slightly different times in LabVIEW. I need to get the delay between the images down to a few microseconds. Can I accomplish this with the full suite of VIs provided by the Vision Development Module, or do I need to employ hardware triggering as well (as outlined by FLIR's website)? Any feedback on the small code snippet I have attached is welcome. I have opted to use local variables instead of queues because I only want to save the newest image in the live camera feed. If there is a cleaner/better way to accomplish this I am open to suggestions.
Thanks
01-19-2020 08:24 AM
Your code doesn't look inherently problematic (at least, I didn't see anything that jumped out as a definite cause of synchronisation problems).
However, for sub-millisecond synchronisation you'll (almost?) certainly have to use hardware triggering. I don't think it's likely you'll manage it otherwise except by chance.
In terms of queues vs local variables, you should note that a queue prevents data loss (you already noted this, I suppose) but that if you allow loss, it's not certain that both cameras will lose the same number of frames or lose them at the same positions. Since you're reading them in your processing loop with no included information about the frame, you're reliant on both images coming from the same iteration for them to be even remotely synchronised. So I'd probably suggest a queue instead. If you want fewer data points, you could implement some decimation in either the producer (Acq) or consumer (Processing) loop before saving; see the sketch below.
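To make the pairing idea concrete, here is a rough text-language sketch (Python as a stand-in, since I can't paste a LabVIEW diagram here) of a producer/consumer arrangement where each queue element carries a frame index, so the consumer only saves pairs that came from the same iteration. The `grab_frame` and `save_pair` calls are placeholders for your IMAQdx grab and your save/decimation step, not real API names.

```python
# Two producer loops push (frame_index, timestamp, image) onto per-camera
# queues; the consumer only saves a pair when both elements carry the same
# frame index, so a dropped frame on one camera cannot silently misalign
# the stereo pairs.
import queue
import threading
import time

def producer(cam_id, out_q, grab_frame, stop):
    """Acquisition loop for one camera. grab_frame() stands in for the
    real camera grab (IMAQdx Grab in LabVIEW); stop is a threading.Event."""
    index = 0
    while not stop.is_set():
        image = grab_frame(cam_id)                     # placeholder grab call
        out_q.put((index, time.perf_counter(), image))
        index += 1

def consumer(q_left, q_right, save_pair, stop):
    """Pairs frames by index; skips ahead on whichever side has fallen behind."""
    left, right = q_left.get(), q_right.get()
    while not stop.is_set():
        if left[0] == right[0]:
            save_pair(left, right)                     # placeholder save/decimate step
            left, right = q_left.get(), q_right.get()
        elif left[0] < right[0]:
            left = q_left.get()                        # left camera is behind
        else:
            right = q_right.get()                      # right camera is behind

# Wiring: one queue per camera, one stop flag shared by all loops.
q_left, q_right = queue.Queue(), queue.Queue()
stop = threading.Event()
```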
01-19-2020 10:27 AM - edited 01-19-2020 10:29 AM
@cbutcher wrote:
However, for sub-millisecond synchronisation you'll (almost?) certainly have to use hardware triggering. I don't think it's likely you'll manage it otherwise except by chance.
Even millisecond accuracy is basically unreachable without hardware triggering. The Windows OS has absolutely no determinism guarantees, and while you can usually get the timing fairly consistent to within several milliseconds, there is no guarantee that it won't occasionally be delayed by a second or more if Windows decides that it has something more urgent to do. This is also true for all other desktop OSes, be it Mac OS X or Linux, although on Linux you can tweak things to some extent through command line options and kernel configuration options, if you dig deep enough and care to spend that time. But millisecond accuracy is only achievable on true real-time systems, and several-microsecond accuracy definitely only with hardware triggering and/or FPGA programming.
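If you want to convince yourself of this, a quick illustration (plain Python standard library, nothing camera-specific, purely a timing demo) is to measure how a nominally fixed software wait actually behaves on a desktop OS and look at the spread and the outliers:

```python
# Minimal sketch: measure how much a "fixed" 1 ms software wait actually
# varies on a desktop OS. The occasional large outliers are exactly why a
# software-timed loop cannot guarantee millisecond-level synchronisation.
import time

samples = []
for _ in range(1000):
    t0 = time.perf_counter()
    time.sleep(0.001)                              # request a 1 ms wait
    samples.append((time.perf_counter() - t0) * 1e3)

print(f"min {min(samples):.3f} ms, max {max(samples):.3f} ms, "
      f"mean {sum(samples) / len(samples):.3f} ms")
```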
01-20-2020 09:16 AM
All,