Counter/Timer


Reading a timestamp from a PXI-6682 has a latency of about 60 ms. How can I reduce it?

This is on Windows. I doubt, however, that LVRT alone would reduce the latency.

 

Thanks!

Message 11 of 24

I looked at your program and I think it has one fundamental flaw.  You are setting the time reference to GPS and then immediately using the clock.  On average it takes at least 60 seconds for the 6682 to sync to GPS, and while that process is under way you will see tons of jitter.  If you open the Properties dialog for the 6682 in MAX, you can set the time reference to GPS there; that forces synchronization to start automatically when the computer boots up.  You shouldn't be setting the time reference to GPS in your application unless you are willing to wait for the offset from master to settle down.
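The waiting step Josh describes, letting the offset from master settle before trusting the clock, can be sketched as a polling loop. This is a hypothetical illustration in Python, not the niSync API: `read_offset_ns` is an assumed callable standing in for the board's offset-from-master reading.

```python
import time

def wait_for_gps_settle(read_offset_ns, threshold_ns=1000,
                        stable_reads=5, timeout_s=120, poll_interval_s=1.0):
    """Poll the offset from master until it stays below a threshold.

    read_offset_ns: callable returning the current offset in nanoseconds
    (a stand-in for the niSync offset-from-master property).
    Returns True once `stable_reads` consecutive readings are within
    `threshold_ns`, or False if `timeout_s` elapses first.
    """
    deadline = time.monotonic() + timeout_s
    consecutive = 0
    while time.monotonic() < deadline:
        if abs(read_offset_ns()) < threshold_ns:
            consecutive += 1
            if consecutive >= stable_reads:
                return True   # clock has settled; timestamps are trustworthy
        else:
            consecutive = 0   # a large excursion resets the count
        time.sleep(poll_interval_s)
    return False
```

Requiring several consecutive in-threshold readings, rather than a single one, avoids declaring the clock settled during a momentary dip in the offset.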

 

-Josh

Message 12 of 24

Josh,

 

Thanks for the suggestion. The result is the same (i.e. the latency is unchanged) regardless of the selected time reference. Besides, jitter is one (transient) problem; the latency is another, and it is completely unrelated to the transient synchronization. I can easily wait 60 seconds for the GPS synchronization to complete, yet that does not help one bit with the buffer overflow.

 

Tihomir 

Message 13 of 24

Tihomir,

 

How exactly are you measuring the latency of your timestamps?

Justin E
National Instruments R&D
Message 14 of 24
Justin,

I think it is the function that pulls the timestamps out of the timestamp buffer that has the latency. I assume there is no significant latency between the clock edge and the timestamp for that particular edge. The latency of reading the timestamps from their buffer is measured with LabVIEW's millisecond clock, as the VI submitted earlier shows.

Many thanks,

Tihomir  
Message 15 of 24

Tihomir,

 

So is the "Diff Milliseconds" indicator showing the latency? I'm fairly confident that there isn't any latency (or at most a very small amount) in the GPS timestamps themselves, and that the latency you're seeing is somewhere in software. There is really no way to eliminate that latency completely without moving to a real-time operating system. LabVIEW on Windows has a timing resolution of about 1 kHz, while LabVIEW RT on certain platforms has a resolution of about 1 MHz.

 

What you can do is use a structure that avoids the buffer overflow. I'd suggest taking a look at Producer/Consumer architectures, and specifically the Queue Basics.VI example in the LabVIEW Example Finder.
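As a rough sketch of the Producer/Consumer idea (Python standing in for the LabVIEW block diagram; `read_timestamps` is a hypothetical stand-in for the niSync read call): the fast loop only drains the hardware buffer into a queue, while the slow loop does file I/O and display off the producer's critical path.

```python
import queue
import threading

def producer(read_timestamps, q, stop):
    """Fast loop: only reads from the device buffer and enqueues."""
    while not stop.is_set():
        for ts in read_timestamps():   # drain whatever the device has buffered
            q.put(ts)

def consumer(q, stop, sink):
    """Slow loop: file writes and indicator updates happen here,
    so they cannot stall the producer and cause a buffer overflow."""
    while not stop.is_set() or not q.empty():
        try:
            sink(q.get(timeout=0.1))
        except queue.Empty:
            pass                       # no data yet; poll again
```

In LabVIEW terms the queue corresponds to a LabVIEW Queue, and the two functions correspond to two parallel while loops on the block diagram.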

Justin E
National Instruments R&D
Message 16 of 24

Justin,

 

> So is the  "Diff Milliseconds" indicator showing the latency?

 

Yes, that is the approximate duration of one iteration of the loop. If one removes everything else from that loop, i.e. removes writing to file and showing data in the front-panel indicators, one loop iteration still takes about 60 ms. Which means it takes about 60 ms for the function "niSync Read Trigger Time Stamp" to return a value.
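The measurement described above amounts to timing a single blocking call inside an otherwise empty loop. A minimal sketch in Python (`call` stands in for the niSync Read Trigger Time Stamp invocation; this is an illustration, not the actual VI):

```python
import time

def measure_call_latency(call, iterations=100):
    """Time a blocking call over many iterations.

    With nothing else in the loop, the mean loop duration approximates
    the call's own latency -- the same logic as the 'Diff Milliseconds'
    indicator in the attached VI. Returns the mean latency in ms.
    """
    samples_ms = []
    for _ in range(iterations):
        t0 = time.perf_counter()
        call()                         # the blocking read under test
        samples_ms.append((time.perf_counter() - t0) * 1000.0)
    return sum(samples_ms) / len(samples_ms)
```

Using `time.perf_counter` rather than a 1 ms tick counter gives sub-millisecond resolution, which helps separate the call's latency from the timer's own granularity.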

 

> I'm fairly confident that there isn't any latency (or at least a VERY minimal amount) for the GPS timestamps

 

I will gladly agree with you once this has been shown / measured / demonstrated reproducibly.

 

> and that the latency you're seeing is somewhere in software.

 

I think so, too. Specifically, I think the latency lies precisely in the call to "niSync Read Trigger Time Stamp".

 

> There's really not going to be any way to completely eliminate that latency

> without moving to a Real-Time operating system. LabVIEW on Windows

> has a resolution of about 1kHz, while LabVIEW RT on certain platforms

> has a resolution of about 1MHz. 

 

Can we be sure that LabVIEW RT will eliminate the latency? After all, even under LabVIEW RT it takes finite time for code to execute. LabVIEW RT is deterministic, but hardly much faster than LabVIEW under Windows. Resolution and latency are not the same thing, and higher resolution by no means guarantees a shorter execution time for "niSync Read Trigger Time Stamp".

 

> What you can do is utilize some sort of structure that tries to eliminate the buffer

> overflow. I'd suggest taking a look at some Producer/Consumer architectures,

> and specifically the Queue Basics.VI example in the LabVIEW example finder.

 

I am unaware of any way to access the timestamp queue other than through "niSync Read Trigger Time Stamp". Could you suggest a practical alternative to it? Without such an alternative there will always be a 60 ms latency.

 

Finally, as mentioned in previous posts, with the PXI-6682 installed, MAX allows one to create a clock event and timestamp its edges. Even at a frequency of 1 kHz the timestamps appear correct, with no signs of latency or buffer overflow, i.e. a solution exists under Windows. I would greatly appreciate a suggestion on how to reproduce the same in LabVIEW.

 

Best regards, 

 

Tihomir 

Message 17 of 24

Tihomir,

 

Would it be possible to remove everything from your while loop except the niSync Read Trigger Time Stamp function and the logic that calculates how long the loop takes to run? In the most recent version of the program you are also calling the niSync Get Location VI, as well as performing some file I/O. Also, in reference to Josh's comment about taking multiple timestamps: I noticed that you don't have anything wired into the "number of timestamps" input of the Read Trigger function. That makes it default to reading one timestamp at a time, and thus behave much like the single-timestamp version. If you haven't already, I'd also suggest trying Josh's advice: use a Property Node to get the Available Time Stamps property, and wire it into the "number of timestamps" input. This property can be found by creating a new niSync property node and going to Timing -> Time Stamps -> Available Time Stamps.
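The batching Justin describes, reading everything the buffer currently holds in one call rather than one timestamp per loop iteration, can be sketched as follows. This is a hypothetical Python illustration: `available_count` and `read_n` stand in for the Available Time Stamps property and the Read Trigger Time Stamp (Multiple) call.

```python
def drain_timestamps(available_count, read_n):
    """Read every timestamp currently buffered, in a single call.

    available_count: callable returning how many timestamps are buffered
    (stands in for the 'Available Time Stamps' property node).
    read_n: callable taking a count and returning that many timestamps
    (stands in for the multiple-timestamp read).
    """
    n = available_count()
    return read_n(n) if n > 0 else []
```

Because each read call carries a fixed per-call latency, fetching N buffered timestamps in one call instead of N calls keeps the loop from falling behind the trigger rate and overflowing the buffer.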

Justin E
National Instruments R&D
Message 18 of 24

Justin,

 

Thanks for finding this error. Now the call to "niSync Read Trigger Timestamp (Multiple)" reads all the timestamps available in the buffer. The duration of the simplified loop (VI attached) now fluctuates around 86 ms.

 

Thanks, 

 

Tihomir 

 

Message 19 of 24

Tihomir,

 

Did this fix your buffer overflow issues? The latency is still there, but my assumption was that the latency was only a problem because it was causing the buffer overflows.

Justin E
National Instruments R&D
Message 20 of 24