
DAQmx error with synchronized read and write using sample clock


Hi, 

 

My application requires me to write analog voltage values to a device and simultaneously read analog voltage values from 2 sensors at 30 Hz. I have an NI 9205 and an NI 9264 in a cDAQ-9174 chassis.

 

I used the AO sample clock to synchronize the analog input and analog output tasks. I need to continuously change the values I am writing to the analog output, so I've kept the DAQmx Write inside the while loop. However, when I measure the iteration time of the while loop, instead of getting a value near 33 ms I get a value near 100 ms. Additionally, after I run the program for some time, I get an error: "Error -200279 occurred at DAQmx Read (Analog 1D DBL NChan 1Samp).vi:2940001".

 

Can I get some suggestions on how to reduce the loop time to 33 ms, and resolve the error above? I have attached my VI and pictures.
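In case a text outline helps, here is roughly the structure of my VI, sketched with the Python nidaqmx API (channel and terminal names are placeholders for my actual wiring):

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    ao = nidaqmx.Task()
    ao.ao_channels.add_ao_voltage_chan("cDAQ1Mod2/ao0")      # NI 9264 (placeholder name)
    ao.timing.cfg_samp_clk_timing(30.0, sample_mode=AcquisitionType.CONTINUOUS)

    ai = nidaqmx.Task()
    ai.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0:1")    # NI 9205, 2 sensors (placeholder)
    # AI is slaved to the AO sample clock so both tasks share one timebase.
    ai.timing.cfg_samp_clk_timing(30.0, source="/cDAQ1/ao/SampleClock",
                                  sample_mode=AcquisitionType.CONTINUOUS)

    ao.write([0.0, 0.0], auto_start=True)    # 2-sample buffer; this write starts AO
    ai.start()

    for i in range(100):                     # stands in for the VI's while loop
        ao.write(0.1 * (i % 10))             # DAQmx Write: new AO value each iteration
        reading = ai.read(number_of_samples_per_channel=1)   # DAQmx Read (1 samp, N chan)

    ao.stop(); ai.stop()
    ao.close(); ai.close()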

 

Thanks in advance!

Attachments: synchIO.PNG, synchIOBlock_diagram2.PNG, synchIOBlock_diagram.PNG

Message 1 of 7

Hi,

 

I believe the reason the execution time of your loop is longer than expected is that you are doing some processing and writing the data to a file. The Write To Spreadsheet File function opens and closes the file on every iteration.

I would suggest adopting a producer-consumer architecture to avoid slowing down your acquisition: http://www.ni.com/white-paper/3023/en/
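If it helps, here is a minimal sketch of the pattern in plain Python (the DAQ read is faked with a placeholder so it runs anywhere): the producer loop only acquires and enqueues, and the consumer loop owns the file, so file I/O can never stall the acquisition.

    import queue
    import threading
    import time

    data_q = queue.Queue()
    stop = threading.Event()

    def producer():
        # Time-critical loop: acquire and enqueue, nothing else.
        while not stop.is_set():
            sample = time.time() % 1.0    # placeholder for a DAQmx Read
            data_q.put(sample)
            time.sleep(1 / 30)            # stands in for the 30 Hz hardware pacing

    def consumer():
        # Logging loop: the file is opened once and all writes happen here.
        with open("log.txt", "w") as f:
            while not stop.is_set() or not data_q.empty():
                try:
                    f.write(f"{data_q.get(timeout=0.5)}\n")
                except queue.Empty:
                    continue

    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    time.sleep(5)      # let it run for a bit
    stop.set()
    for t in threads:
        t.join()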

 

I hope this helps!

T. Le
Vision Product Support Engineer
National Instruments
Message 2 of 7

Hi, 

Thanks for the suggestion, but I get slow loop times even when I am not writing anything to a spreadsheet (the spreadsheet is written only when a control is true).

 

I'm puzzled why the loop times are not the same as the inverse of the sample rate.

 

Thanks,

 

Anish

Message 3 of 7

Hi Anish,

 

The sample rate that you define for your sample clock is hardware-timed: it is the rate at which samples move between the buffer of your DAQ device and the outside world. The rate of your loop is different: it is the rate at which your computer moves samples between the host PC and the buffer of your DAQ device.

The rate of your loop might not be as fast as your hardware sample rate due to limitations of your computer, but that does not change the hardware sample rate itself.
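To make the two rates concrete, here is a small sketch with the Python nidaqmx API (the device name is a placeholder): the sample clock is configured once in hardware, while the loop timing is measured in software and depends only on the PC.

    import time
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as ai:
        ai.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0")   # placeholder channel
        # Hardware-timed: the ADC converts at 30 S/s no matter what the PC does.
        ai.timing.cfg_samp_clk_timing(30.0, sample_mode=AcquisitionType.CONTINUOUS)
        ai.start()
        for _ in range(10):
            t0 = time.perf_counter()
            # Software-timed: this only moves one sample out of the device buffer;
            # how long it takes depends on the PC, not on the sample clock.
            ai.read(number_of_samples_per_channel=1)
            print(f"loop iteration: {(time.perf_counter() - t0) * 1e3:.1f} ms")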

 

I hope this helps!

T. Le
Vision Product Support Engineer
National Instruments
Message 4 of 7

Handful of thoughts:

 

1. I find it curious that the block diagram attempts to configure constant 30 Hz sampling for both AI and AO, but a query to the AI task reports back 50 Hz on the front panel screencap. 

 

2. Dunno if your AO device supports the "hw-timed single point" timing mode. If so, it might be a better choice for you. For that matter, it may turn out that software-timed AO is the better choice.

 

3. You may not actually be synced in *quite* the way you intend and expect. Your initial AO task DAQmx Write appears to auto-start AO, and it's undefined whether the AI task has started and is ready to respond to the AO sample clock by then. Inside the loop, the AO value you send to DAQmx Write and the AI value you query from DAQmx Read are not happening simultaneously out in the real world: the AO value will occur a short time in the future after DAQmx Write, while the AI value retrieved by DAQmx Read was converted a short time in the past. (See the sketch after this list for one way to enforce the start order.)

 

4. I agree with the earlier comment that you need to get the file writing out of this loop if you want fairly consistent loop timing.  I understand that it may not be your most immediate problem, but it very likely will become one.

 

5. Your 2-sample AO buffer appears chosen to minimize latency between DAQmx Write and actual D/A conversion.  However, it sets you up to be at much greater risk for an underflow error.  This is the classic tradeoff for buffered AO that can't be predefined -- it's always a compromise and can't really be avoided.  

   Underflow on a buffered AO task leads to an unrecoverable error.  I *think* (not 100% sure) that hw-timed single point mode would produce a recoverable error when you miss a clock cycle.  Not awesome but probably preferable.  Software timed AO would not produce errors, but would be trickier to reliably correlate to buffered AI.

 

Summary: I think the specific timing issue you brought up is merely an easily visible symptom, but it probably isn't the main problem you need to resolve.  You need the tasks to start in sync, you need to correlate your AO and AI data, you need low latency for changing AO signals, and you need something like a producer-consumer structure to move file access into an independent loop.
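To illustrate point 3, here's a minimal sketch of one way to enforce the start order, using the Python nidaqmx API (channel and terminal names are placeholders, not your exact setup): arm AI explicitly before starting AO, so AI is already waiting when the first AO sample clock edge arrives.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    ao = nidaqmx.Task()
    ao.ao_channels.add_ao_voltage_chan("cDAQ1Mod2/ao0")       # placeholder channel
    ao.timing.cfg_samp_clk_timing(30.0, sample_mode=AcquisitionType.CONTINUOUS)

    ai = nidaqmx.Task()
    ai.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0:1")     # placeholder channels
    ai.timing.cfg_samp_clk_timing(30.0, source="/cDAQ1/ao/SampleClock",
                                  sample_mode=AcquisitionType.CONTINUOUS)

    ao.write([0.0, 0.0], auto_start=False)   # preload the AO buffer, but do NOT start yet
    ai.start()    # arm AI first: it sits waiting for the AO sample clock
    ao.start()    # now the very first AO clock edge also clocks the first AI conversion

    # ... run the loop, then ...
    ao.stop(); ai.stop()
    ao.close(); ai.close()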

 

 

-Kevin P

Message 5 of 7

Hi Kevin,

 

1. That was because I was experimenting earlier with different clock rates (50 Hz in this case). In all cases, the loop iteration time was roughly three times the sample period: for 30 Hz, I get an iteration time of 100 ms, three times the expected 33 ms.

 

2. I'm using a USB cDAQ, so hardware-timed single point is not possible.

 

3. In your opinion, is there a recommended way to set up analog output when the output value keeps changing every iteration? How should I change my current code? Should I use DAQmx Write with 1 channel, 1 sample?

 

4. I can implement a producer-consumer paradigm to fix this issue.

 

5. Is there a way I can solve the underflow error?

 

Thanks!

Anish

Message 6 of 7
Solution accepted by topic author ashenoy

1. Can't say for sure, but I'm generally aware that USB-based devices often have quirks and limitations relating to latency (among other things). You might be running into such a limit.

 

2. Ok, gotcha

 

3. I've attached a commented-up small modification to the code you posted. It uses unbuffered, software-timed AO and services the AI buffer carefully to maintain correlation between AO and AI samples. (A rough text sketch of the same approach follows after this list.)

 

4. Ok, good. 

 

5. Using on-demand software timing for AO will prevent the possibility of an AO underflow error because there is no buffer to underflow.  The structure of the AI reads should help prevent AI overflow errors such as -200279.
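For anyone reading along without the attachment handy, here is a minimal text sketch of the same approach using the Python nidaqmx API (channel names and the setpoint function are placeholders; this approximates the attached VI rather than reproducing it): AI stays buffered and hardware-timed, AO is on-demand, and each iteration reads exactly one sample per channel so the AI buffer never piles up.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    def next_setpoint(i):
        # Stand-in for whatever logic determines the next AO value.
        return 0.1 * (i % 10)

    ai = nidaqmx.Task()
    ai.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0:1")    # placeholder channels
    ai.timing.cfg_samp_clk_timing(30.0, sample_mode=AcquisitionType.CONTINUOUS)

    ao = nidaqmx.Task()
    ao.ao_channels.add_ao_voltage_chan("cDAQ1Mod2/ao0")      # placeholder channel
    # No cfg_samp_clk_timing call on AO: it is on-demand (software-timed),
    # so there is no AO buffer and nothing that can underflow.

    ai.start()
    try:
        for i in range(300):
            ao.write(next_setpoint(i))   # D/A updates as soon as the write lands
            # Read exactly one sample per channel. The call blocks until the next
            # AI sample arrives, so the loop is paced by the hardware clock and
            # the AI buffer cannot back up into error -200279.
            data = ai.read(number_of_samples_per_channel=1)
            # ... hand `data` to the consumer loop / queue for file logging ...
    finally:
        ai.stop(); ai.close()
        ao.close()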

 

 

-Kevin P

 

Message 7 of 7