05-08-2009 01:29 PM
This is Windows. I doubt, however, that LabVIEW RT alone could reduce the latency.
Thanks!
05-08-2009 05:39 PM
I looked at your program and I think you may have one fundamental flaw. You are setting the time reference to GPS and then immediately using the clock. It takes on average at least 60 seconds for the 6682 to sync to GPS, and while going through that process you will see tons of jitter. If you open up the Properties dialog in MAX for the 6682 you can set the time reference to GPS, and that will force it to automatically start when the computer boots up. You shouldn't be setting the time reference to GPS in your application unless you are willing to wait for the offset from master to settle down.
-Josh
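The wait-for-settle step Josh describes can be sketched roughly as follows. This is a hedged illustration in Python, not LabVIEW: offset_from_master_ns() is a simulated stand-in for reading the board's offset-from-master, not a real niSync driver call.

```python
import itertools

_sim = itertools.count()

def offset_from_master_ns():
    # Simulated stand-in: pretend the offset shrinks as the 6682
    # disciplines itself to GPS. A real application would query the
    # corresponding niSync property instead.
    return 1_000_000 // (next(_sim) + 1)

def wait_for_sync(threshold_ns=10_000, max_polls=1000):
    # Poll until the offset from master settles below a threshold,
    # giving up after max_polls attempts.
    for _ in range(max_polls):
        if abs(offset_from_master_ns()) < threshold_ns:
            return True
    return False

print(wait_for_sync())  # True once the simulated offset settles
```

The point is simply to gate any use of the timestamps on the settling condition, rather than reading the clock immediately after selecting GPS as the time reference.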
05-08-2009 06:10 PM
Josh,
Thanks for the suggestion. The result is the same (i.e. the latency is the same),
regardless of the selected time reference. Besides, the jitter is one (transient)
problem; the latency is another, and it is completely unrelated to the transient
synchronization. I can easily wait 60 seconds for the GPS synchronization to
complete, yet that does not help one bit with the buffer overflow.
Tihomir
05-11-2009 05:08 PM
Tihomir,
How exactly are you measuring the latency of your timestamps?
05-11-2009 05:31 PM
05-12-2009 11:16 AM
Tihomir,
So is the "Diff Milliseconds" indicator showing the latency? I'm fairly confident that there isn't any latency (or at least a VERY minimal amount) for the GPS timestamps, and that the latency you're seeing is somewhere in software. There's really not going to be any way to completely eliminate that latency without moving to a Real-Time operating system. LabVIEW on Windows has a resolution of about 1kHz, while LabVIEW RT on certain platforms has a resolution of about 1MHz.
What you can do is utilize some sort of structure that tries to eliminate the buffer overflow. I'd suggest taking a look at some Producer/Consumer architectures, and specifically the Queue Basics.VI example in the LabVIEW example finder.
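The Producer/Consumer structure Justin suggests can be sketched like this. This is a minimal Python illustration of the pattern, not LabVIEW code; read_trigger_timestamps() is a placeholder for the niSync Read Trigger Time Stamp call, not a real driver API.

```python
import queue
import threading
import time

def read_trigger_timestamps():
    # Placeholder for "niSync Read Trigger Time Stamp": return
    # whatever timestamps are currently buffered (here, just one).
    return [time.time()]

def producer(q, stop, interval=0.01):
    # Tight loop: do nothing but drain the hardware buffer into the
    # queue, so the hardware buffer cannot overflow.
    while not stop.is_set():
        for ts in read_trigger_timestamps():
            q.put(ts)
        time.sleep(interval)

def consumer(q, stop, results):
    # Slower loop: file I/O and front-panel updates can lag here
    # without backing up the producer.
    while not stop.is_set() or not q.empty():
        try:
            results.append(q.get(timeout=0.1))
        except queue.Empty:
            pass

stop = threading.Event()
q = queue.Queue()
results = []
t1 = threading.Thread(target=producer, args=(q, stop))
t2 = threading.Thread(target=consumer, args=(q, stop, results))
t1.start()
t2.start()
time.sleep(0.2)
stop.set()
t1.join()
t2.join()
print(len(results) > 0)  # True: timestamps made it through the queue
```

In LabVIEW terms, the two functions correspond to two parallel while loops connected by a queue, as in the Queue Basics.VI example.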
05-12-2009 01:08 PM
Justin,
> So is the "Diff Milliseconds" indicator showing the latency?
Yes, that is the approximate duration of one iteration of the loop.
If one removes everything else from that loop, i.e. the writing to file and the
updating of the front panel indicators, one loop iteration still takes about
60 ms. Which means that it takes about 60 ms for the function
"niSync Read Trigger Time Stamp" to return a value.
> I'm fairly confident that there isn't any latency (or at least a VERY minimal amount) for the GPS timestamps
I will gladly agree with you once this has been shown / measured / demonstrated
reproducibly.
> and that the latency you're seeing is somewhere in software.
I think so, too. Specifically, I think the latency is exactly in the call of "niSync Read Trigger Time Stamp".
> There's really not going to be any way to completely eliminate that latency
> without moving to a Real-Time operating system. LabVIEW on Windows
> has a resolution of about 1kHz, while LabVIEW RT on certain platforms
> has a resolution of about 1MHz.
Can we be sure that LabVIEW RT will eliminate the latency? After all, even under
LabVIEW RT it takes finite time for code to execute. LabVIEW RT is deterministic
but hardly much faster than LabVIEW under Windows. Resolution and latency
are not the same thing, and higher resolution by no means guarantees a
shorter execution time for "niSync Read Trigger Time Stamp".
> What you can do is utilize some sort of structure that tries to eliminate the buffer
> overflow. I'd suggest taking a look at some Producer/Consumer architectures,
> and specifically the Queue Basics.VI example in the LabVIEW example finder.
I am unaware of any way of accessing the timestamp queue other than through
"niSync Read Trigger Time Stamp". Could you suggest a practical alternative to
"niSync Read Trigger Time Stamp"? Without such an alternative there will always
be a 60 ms latency.
Finally, as mentioned in previous posts, with the PXI-6682 installed, MAX allows one to create
a clock event and timestamp its edges. Even at a frequency of 1 kHz the timestamps
seem correct, with no signs of latency or buffer overflow, i.e. a solution exists under Windows.
I will greatly appreciate a suggestion on how to reproduce the same in LabVIEW.
Best regards,
Tihomir
05-13-2009 05:29 PM
Tihomir,
Would it be possible to remove everything else in your while loop except for the niSync Read Trigger Time Stamp function, and the logic to calculate the time it takes for that loop to run? In the most recent version of the program, you are also calling the niSync Get Location VI, as well as performing some file I/O.

Also, in reference to Josh's comment about taking Multiple timestamps, I noticed that you don't have anything wired into the "number of timestamps" input for the Read Trigger function. This makes it default to reading 1 timestamp at a time, and thus act similarly to using Single timestamp.

I'd also suggest, if you haven't tried it already, trying out Josh's advice to use a Property Node to get the Available Time Stamps property, and wire it into the "number of timestamps" input. This property can be found by creating a new niSync property node and going to Timing -> Time Stamps -> Available Time Stamps.
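The batching idea Justin describes (query how many timestamps are buffered, then read them all in one call) can be sketched as follows. This is an illustrative Python sketch only; available_timestamps() and read_timestamps(n) are hypothetical stand-ins for the niSync Available Time Stamps property and the Read Trigger Time Stamp (Multiple) call.

```python
# Pretend 250 timestamps have accumulated in the hardware buffer.
buffer = list(range(250))

def available_timestamps():
    # Stand-in for the "Available Time Stamps" property node.
    return len(buffer)

def read_timestamps(n):
    # Stand-in for "niSync Read Trigger Time Stamp (Multiple)":
    # remove and return the first n buffered timestamps.
    out = buffer[:n]
    del buffer[:n]
    return out

# One loop iteration: drain the whole backlog in a single call,
# instead of reading one timestamp per ~60 ms iteration.
n = available_timestamps()
batch = read_timestamps(n)
print(len(batch))              # 250: entire backlog cleared at once
print(available_timestamps())  # 0
```

With this structure, the per-call latency still exists, but each call clears the whole backlog, so the buffer no longer overflows.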
05-13-2009 07:51 PM
Justin,
Thanks for finding this error. Now the call to "niSync Read Trigger Timestamp (Multiple)"
reads all the timestamps available in the buffer. The duration of the simplified loop
(VI attached) now fluctuates around 86 ms.
Thanks,
Tihomir
05-14-2009 02:24 PM
Tihomir,
Did this fix your buffer overflow issues? The latency is still there, but my assumption was that the latency was only a problem because it was causing the buffer overflows.