04-03-2024 10:50 AM
Hello, I have an application that has been operating successfully for a few years - it uses a cDAQ chassis and runs on Windows. Intermittently we hit buffer overrun errors and the application errors out, and we would like to increase the buffer size to help avoid this. I'm hoping to get guidelines for setting "Samples per Channel" in the DAQmx Timing vi when in Continuous Samples mode to increase the buffer.
More details: The application runs on cDAQ-9172/9189 and similar, with the DAQmx Timing vi set up for Continuous Samples, but the "Samples per Channel" input is currently not wired. The "number of samples per channel" input of the DAQmx Read function is set as a function of the sample rate. This setup has proven robust, except in a few rare instances. The Read vi is in a loop that executes 5 times per second.
To give a healthy margin on the buffer, to account for the moments when Windows turns its attention away from LabVIEW, should I set "Samples per Channel" in the DAQmx Timing vi to roughly 5x the quantity expected per loop? Given that modern computers have plenty of memory, and that this computer is dedicated solely to this task (with Office etc. removed), is there a penalty in making the buffer too large?
04-03-2024 10:55 AM
Related discussion - https://forums.ni.com/t5/LabVIEW/Sample-rate-vs-samples-per-channel/td-p/1537786
04-03-2024 12:31 PM
First things first: no, there's really no penalty for setting a larger buffer size in your call to DAQmx Timing when doing Continuous Sampling. I frequently advise users to set it to something more like 5 seconds' worth, even when they're nominally reading the newly accumulated contents every 0.1 seconds. On modern PCs, I don't give a second thought to buffer memory usage of 10 MB or less.
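To see why a multi-second buffer is cheap, here's the arithmetic in Python. The sample rate and channel count are illustrative assumptions, not numbers from the original post:

```python
# Rough buffer-memory estimate for a continuous DAQmx task.
# sample_rate and num_channels are assumed values for illustration.
sample_rate = 10_000        # samples/s per channel (assumed)
num_channels = 8            # assumed channel count
bytes_per_sample = 8        # LabVIEW DAQmx Read returns DBLs
buffer_seconds = 5          # the ~5 s margin suggested above

buffer_samples_per_chan = sample_rate * buffer_seconds
buffer_bytes = buffer_samples_per_chan * num_channels * bytes_per_sample
print(f"{buffer_samples_per_chan} samples/channel, "
      f"{buffer_bytes / 1e6:.1f} MB total")
# -> 50000 samples/channel, 3.2 MB total
```

Even 8 channels at 10 kHz with a 5-second buffer is only a few MB, which is negligible on a dedicated modern PC.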
Second, how are you controlling your loop timing and doing your reads? One fairly common but subtle problem I've seen from users here is to *overconstrain* timing. For example, supposing you were sampling at 10 kHz, it'd be an overconstraint if you both:
- called DAQmx Read and requested 2000 samples
- ran some kind of 200 msec timer in the loop
You should do one or the other, NOT both at once. What can happen over long stretches of time is that the 200 msec timer will sometimes take 1-10 extra msec due to Windows OS reasons. Each time that happens, the 200 msec worth of samples you read will be the oldest 200 out of 201-210 msec worth in the buffer. Over time, that backlog can gradually accumulate to the point that you overflow the buffer.
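A toy simulation makes the backlog accumulation concrete. The numbers (10 kHz rate, 2000-sample reads, a 5% chance of the 200 ms timer running 1-10 ms late) are assumptions for illustration:

```python
import random

# Toy model of the overconstrained loop: a fixed-size read AND a 200 ms
# timer. The timer occasionally runs late, but the read size never grows,
# so unread samples pile up in the buffer.
random.seed(0)
rate = 10_000                 # samples/s (assumed)
samples_read_per_loop = 2_000 # fixed 200 ms worth per read (assumed)
backlog = 0                   # samples left unread in the buffer
for _ in range(10_000):       # roughly half an hour of 200 ms loops
    late_ms = random.randint(1, 10) if random.random() < 0.05 else 0
    loop_ms = 200 + late_ms
    backlog += rate * loop_ms // 1000 - samples_read_per_loop
print(f"backlog after 10k loops: {backlog} samples")
```

The backlog only ever grows, because a late iteration deposits extra samples that a fixed-size read never drains. Letting DAQmx Read alone pace the loop (no separate timer) makes each read consume whatever has accumulated, so the error stays bounded.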
If this doesn't make sense to you, post your code. (First save it back to a previous version, like 2019 or so.)
-Kevin P
04-03-2024 12:44 PM
My general rule of thumb for DAQ applications is to leave the Samples Per Channel input on the DAQmx Timing vi unwired and read 100 ms worth of data with each call of DAQmx Read. The loop with the DAQmx Read should do nothing but read from the DAQ, maybe with a chart. Otherwise, a queue is used to send the read data to another loop for processing and/or logging. This lets the DAQmx Read set the loop rate (no additional waits) and makes sure nothing else slows it down.
If logging to a TDMS file, use the DAQmx Configure Logging before starting the task and the logging just happens behind the scenes. This is more efficient than even using a Producer/Consumer.
I have yet to have any problems with this setup.
04-03-2024 09:14 PM
I usually use 4-8 seconds worth of buffer.
What crossrulz said works well 99% of the time. When you are pushing bus limits or high sample rates, there is one modification to make: the number of samples per read should be an even multiple of the disk sector size. Use the built-in logging features, but make that modification for Log and Read mode; Log Only mode requires it.
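A small helper shows what "even multiple of the disk sector size" works out to in samples. The 2-byte raw sample size and 512-byte sector size are common but assumed values; check your hardware and disk:

```python
def align_to_sector(samples: int, bytes_per_sample: int = 2,
                    sector_bytes: int = 512) -> int:
    # Round a read size up to a whole number of disk sectors.
    # bytes_per_sample=2 assumes raw 16-bit samples in the TDMS log;
    # sector_bytes=512 is typical but not universal (4096 on many disks).
    samples_per_sector = sector_bytes // bytes_per_sample
    sectors = -(-samples // samples_per_sector)  # ceiling division
    return sectors * samples_per_sector

print(align_to_sector(1000))  # 100 ms at 10 kHz -> 1024 samples
```

So a nominal 1000-sample (100 ms) read would be bumped to 1024 samples to stay sector-aligned under these assumptions.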