Multifunction DAQ


Issue with NI PCIe636x when DO task has less than 4 samples

Hi,

I've noticed an issue when using multiple tasks simultaneously (AO + DO + AI) on a PCIe-6363 (it also seems to affect the PCIe-6361). Typically, I want to run all three tasks at the same time. The AI task runs at a high sampling rate (e.g. 2 MHz), while the AO and DO tasks run at a slow rate (e.g. 10 Hz), so the AO and DO tasks have very few samples. In some cases the AO and DO tasks need just 2 samples. In this case, the AI task fails to start!

This was tested on Ubuntu 22.04, with the latest NI-DAQmx (24.8.0.49495) and python-nidaqmx v1.0.0.


nidaqmx.errors.DaqReadError: Onboard device memory overflow. Because of system and/or bus-bandwidth limitations, the driver could not read data from the device fast enough to keep up with the device throughput.
Reduce your sample rate. If your data transfer method is interrupts, try using DMA or USB Bulk. You can also use a product with more onboard memory or reduce the number of programs your computer is executing concurrently.

Property: DAQmx_Read_RelativeTo
Requested Value: DAQmx_Val_CurrReadPos

Property: DAQmx_Read_Offset
Requested Value: 0

Task Name: AI

Status Code: -200361


I've found that there are essentially two ways to work around it: make the DO task contain more samples (at least 4), or wait at least 15 ms after writing the DO data before starting the AI task. Eventually, I picked the first workaround. Still, is such behaviour expected? Is it a sign of a bug in the NI-DAQmx layer?

 

I've attached a small Python script that reproduces the issue. On my systems, running it fails most of the time. If you change line 40 from "= 2" to "= 4", it works fine!

(Note: this is the smallest code I could manage to write that reproduces the issue; it's not directly representative of how our actual software works.)
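For readers without the attachment, here is a hypothetical sketch of what such a repro can look like. This is not the actual attached script; the device name "Dev1", the channel choices, and the read size are assumptions based on the description above.

```python
# Hypothetical repro sketch -- not the actual forum attachment.
# Assumes an X-series device named "Dev1"; requires NI hardware to run.
AI_RATE = 2_000_000   # AI sample clock (Hz)
SLOW_RATE = 10        # DO sample clock (Hz)
DO_SAMPLES_N = 2      # per the report: 2 fails, 4 works

def main():
    import nidaqmx  # imported here so the sketch loads without the driver
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task("DO") as do, nidaqmx.Task("AI") as ai:
        do.do_channels.add_do_chan("Dev1/port0/line0")
        do.timing.cfg_samp_clk_timing(
            SLOW_RATE, sample_mode=AcquisitionType.CONTINUOUS)
        do.write([bool(i % 2) for i in range(DO_SAMPLES_N)])

        ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        ai.timing.cfg_samp_clk_timing(
            AI_RATE, sample_mode=AcquisitionType.CONTINUOUS)

        do.start()
        ai.start()
        # With DO_SAMPLES_N == 2, this read reportedly fails ~20 ms in
        # with error -200361; with 4 samples it succeeds.
        ai.read(number_of_samples_per_channel=20_000)

# main() must be called on a machine with the NI device attached.
```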

Message 1 of 7

Note: I don't really know Python, though I can largely follow the DAQmx parts of your code.

 

This is very likely due to the heavy demand you put on bus access when you define a very small output buffer while *NOT* setting the DAQmx property for using onboard memory only.

 

The larger the buffer, the less frequently the driver needs to deliver data down to the device.  If you set the DAQmx property for onboard memory only, the driver delivers the data only once.

 

Another pretty big part of the problem may be the excessive driver interactions from the AI task.  You read only 20k samples at a time from a 2 MHz sampling task, making a 100 Hz rate of "interactions" between app and driver.  A much more typical rule of thumb is to start with something like 10 Hz, which would call for 200k samples per read.
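That sizing rule is plain arithmetic; a small hypothetical helper (not a DAQmx call) to pick the read size from a target interaction rate:

```python
def samples_per_read(sample_rate_hz: float, reads_per_second: float = 10.0) -> int:
    """Number of samples to request per read call so the app<->driver
    interaction rate stays near `reads_per_second`."""
    return int(sample_rate_hz / reads_per_second)

# e.g. a 2 MHz AI task read 10 times per second -> 200k samples per read
```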


-Kevin P

ALERT! LabVIEW's subscription-only policy coming to an end (finally!). Permanent license pricing remains WIP. Tread carefully.
Message 2 of 7

Thanks Kevin for your insight.

Concerning the short read: it was just due to the example code being made to run quickly. In our application we also use a ~10 Hz loop. However, here the error happens regardless of the read size. Actually, the read fails very shortly after being called, ~20 ms, independent of the sample size or buffer size.

 

The main odd behaviour here, though, is that if we send more data for the DO task, then it works fine. As a matter of fact, in other cases we have the DO task run as quickly as the AI task, and it works fine. It's only when the DO task has fewer than 4 samples that it fails!

 

I'm interested in your remark about using the onboard memory only. I'd like to explore this option and see if it has any effect. Could you give me a hint on how to activate that (or deactivate the nidaqmx buffer)?

 

Cheers,

Éric

Message 3 of 7

I can't explain why setting DO_SAMPLES_N to 4 instead of 2 works.

 

However, if sample_mode is CONTINUOUS_SAMPLES, NI-DAQmx uses this value to determine the buffer size (reference: nidaqmx 1.1.0 documentation). Setting the buffer to 4 or 2 shouldn't make any difference.

-------------------------------------------------------
Control Lead | Intelline Inc
Message 4 of 7

I'm interested in your remark about using the onboard memory only. I'd like to explore this option and see if it has any effect. Could you give me a hint on how to activate that (or deactivate the nidaqmx buffer)?

In LabVIEW, it's found deep down under a "DAQmx Channel property node".  The specific property is a boolean found at:

Analog Output --> General Properties --> Advanced --> Data Transfer and Memory --> Use Only Onboard Memory.

 

I don't know whether the DAQmx properties are categorized and organized similarly for Python.


-Kevin P

Message 5 of 7

I've found that to disable the NI-DAQmx buffer with Python, on a DO task, you have to write "do_chan.do_use_only_on_brd_mem = True".

It turned out that, indeed, it also solves the issue. So that's what we finally implemented 🙂
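For reference, a small hypothetical helper wrapping that step (the attribute name `do_use_only_on_brd_mem` comes from the nidaqmx `DOChannel` API; it is set per channel, before the task starts):

```python
def enable_onboard_only(do_task):
    """Ask NI-DAQmx to keep the DO pattern entirely in device (FIFO)
    memory, bypassing the host-side buffer."""
    for chan in do_task.do_channels:
        chan.do_use_only_on_brd_mem = True
```

Call it after creating the channels and before starting the task.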

 

Unfortunately, I didn't find documentation on when/why it is "safe" to use only the on-board memory. So I only use it when the number of DO samples is very small.

Do you know in which cases using only the on-board memory wouldn't work?

 

Message 6 of 7

Many devices have only a small amount of "onboard memory" available.  The spec sheets will show how much (though sometimes it's given the name FIFO).  Also, any devices I remember using had the further restriction that, when configured for exclusive use of on-board memory, you can't update your output data on the fly as the task runs.

 

So overall, on-board memory is most suitable for a short repeating pattern that doesn't change.   Your 6363 appears to have an onboard DO FIFO of 2047 samples.  I expect you'd get an error if you configured for on-board memory only and attempted to write too many samples to the task.
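A defensive check along those lines, assuming the 2047-sample DO FIFO depth quoted above (`fits_in_do_fifo` is a hypothetical helper, not part of DAQmx):

```python
DO_FIFO_DEPTH_6363 = 2047  # onboard DO FIFO depth quoted above

def fits_in_do_fifo(n_samples: int, fifo_depth: int = DO_FIFO_DEPTH_6363) -> bool:
    """True if a fixed DO pattern can live entirely in onboard memory,
    i.e. it should be safe to set do_use_only_on_brd_mem."""
    return 0 < n_samples <= fifo_depth
```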

 

 

-Kevin P

Message 7 of 7