Multifunction DAQ


Re: Software in LabVIEW on Windows lags

Thank you for your reply 

 

What I don't understand is this: the data sheet for the NI 6070E states that the input buffer (FIFO) is 512 samples, yet in the software I am able to set the buffer size of the DAQs to more than 512 samples. See the circles labelled 'a' and 'c' in Figure 1. I always set the buffer size to 2 times the sampling rate I use, as shown by 'b'. So when I sample at 1000 Hz I set the buffer size to 2000 samples, which is greater than 512 samples.

Figure 2 shows part of the producer loop's code. There I read the samples from the DAQ devices and check for a buffer overflow and for the number of samples in the buffer (see the circles labelled Z1 and Y1). I always read out 1 second of data, so when I sample at 34 000 Hz the number of samples I read is 34 000.

On my front panel the number of samples in the buffer of each DAQ is displayed. When I sample at 34 000 Hz my buffer size is set to 68 000 samples, and I can see that each DAQ has about 1020 samples in its buffer at certain times (this value varies as I run the software). Again, I thought the maximum number of samples in the buffer was 512, as indicated in the spec sheet of the NI card, but clearly the buffer fills up with more samples before it is read.

 

THE MAIN QUESTION: I would like to know how to speed up my LabVIEW program so that it does not lag, as mentioned in my original question. Figure 3 shows a simple block diagram of my program. The software runs, but as time goes by it only shows events that occurred, say, 2 minutes ago, and the longer I run it the greater the lag becomes. I would like to increase the update rate of my program. If I don't use a producer/consumer loop I get a buffer overflow error. I have attached the GUI files; the dp_software GUI is the main VI and the other two are subVIs. I load a 3D model in my program that is not included, so just remove that part of the code if it cannot run. In this example I use two simulated DAQs. Regards
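A rough text-language analogy of the producer/consumer structure described above (Python threads and a queue; all names are hypothetical, not taken from the attached VIs). It only illustrates the shape of the design: the producer does nothing but read and enqueue, so the DAQ buffer cannot overflow, while any backlog accumulates in the queue instead.

```python
# Rough analogy of the producer/consumer structure; hypothetical names throughout.
import queue
import threading
import time

data_queue = queue.Queue()          # unbounded, like an unbounded LabVIEW queue

def acquire_one_second_of_samples():
    time.sleep(1.0)                 # stand-in for a 1 s DAQmx Read at 34 kS/s
    return list(range(34000))

def process_and_display(block):
    time.sleep(1.2)                 # stand-in for the GUI/processing work

def producer():
    while True:
        data_queue.put(acquire_one_second_of_samples())   # never blocks, so no DAQ overflow

def consumer():
    while True:
        process_and_display(data_queue.get())
        # If this loop is slower than the producer, the queue (and therefore the
        # display lag) grows, even though the acquisition itself keeps up.

threading.Thread(target=producer, daemon=True).start()
threading.Thread(target=consumer, daemon=True).start()
time.sleep(10)                      # let the sketch run briefly
```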

 

Feel free to SKYPE ME

 

Buffer_size.png

FIGURE 1

 

Producer loop.png

FIGURE 2

Software block diagram.png

FIGURE 3

Message 1 of 5

One classic reason for code to slow down more and more over time is growing an array inside a loop.  Something to look for is the "Build Array" primitive in a loop where the output goes to a right-hand shift register and one of the inputs comes from the corresponding left-hand shift register.
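Since the VIs themselves are graphical, a rough text-language sketch (Python, hypothetical names) of that anti-pattern may help: an array carried in a shift register and extended with Build Array every iteration, so each iteration re-copies everything acquired so far and the loop gets steadily slower.

```python
# Sketch of the ever-growing-array pattern; hypothetical names throughout.
import time

history = []                        # plays the role of the shift register's array
last = time.perf_counter()

for i in range(1, 5001):
    chunk = [0.0] * 100             # stand-in for one iteration's new samples
    history = history + chunk       # "Build Array": copies the whole array every time

    if i % 1000 == 0:
        now = time.perf_counter()
        print(f"iterations {i-999}-{i}: {now - last:.2f} s, {len(history)} elements held")
        last = now

# Each block of 1000 iterations takes longer than the previous one, because every
# append re-copies everything acquired so far; memory use and iteration time both
# grow without bound, so the loop falls further and further behind real time.
```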

 

The "Build Array" in your visible code doesn't appear to be used that way, but I *do* see a different possible problem.  The comments next to each of your DAQmx Reads suggest that you're wiring in a -1 value as the # of samples to read, meaning "read all available."  The problem is that each of the Read calls happens independently, such that the # of samples actually read from the 2 calls may be different.  If so, then the two 2D arrays will have a different # of columns, and they don't append nicely.  LabVIEW will churn its memory manager a bit to try to force things to work, but will either truncate the larger or add 0's to the smaller array.

 

One thing you should definitely do is make sure to read the same # of samples from the two tasks: certainly within each iteration, and quite possibly also from one iteration to the next (depending on the nature of the code that isn't visible).  If you only need to align them within each iteration, you could call one of the Reads with a -1 to read whatever's available, then determine the # you just read and request that same # explicitly from the other task.
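In text form, the idea looks roughly like the sketch below, using the nidaqmx Python API as a stand-in for the DAQmx VIs; device names, channel ranges, and the rate are placeholders, not taken from the attached code.

```python
# Hedged sketch: read all available samples from task A, then request exactly that
# many from task B, so both blocks have the same number of columns.
import time
import nidaqmx
from nidaqmx.constants import AcquisitionType, READ_ALL_AVAILABLE

with nidaqmx.Task() as task_a, nidaqmx.Task() as task_b:
    task_a.ai_channels.add_ai_voltage_chan("Dev1/ai0:15")
    task_b.ai_channels.add_ai_voltage_chan("Dev2/ai0:15")
    for t in (task_a, task_b):
        t.timing.cfg_samp_clk_timing(rate=1000.0,
                                     sample_mode=AcquisitionType.CONTINUOUS)
        t.start()

    for _ in range(10):
        time.sleep(1.0)                                    # let ~1 s of data accumulate
        data_a = task_a.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)
        n = len(data_a[0])                                 # samples per channel actually read
        if n == 0:
            continue
        data_b = task_b.read(number_of_samples_per_channel=n)
        # data_a and data_b now have matching column counts and can be combined
        # without padding or truncation.
```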

 

-Kevin P

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 2 of 5

Kevin is certainly onto something here:

 

From what I see, you always have the same sample rate for both tasks.  Why not create one task with 32 channels?  There is no reason you cannot combine channels from different devices in a single task; in fact, you can even use a RTSI cable to sync their convert clocks.
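A minimal text-form sketch of that suggestion (nidaqmx Python API as a stand-in for the DAQmx VIs; device names, channels, and rate are placeholders, and whether the driver can share the clocks automatically depends on the hardware and cabling, e.g. RTSI):

```python
# Hedged sketch: one AI task containing channels from both devices, so a single
# read returns one aligned 32-channel block.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:15")   # 16 channels from device 1
    task.ai_channels.add_ai_voltage_chan("Dev2/ai0:15")   # 16 channels from device 2
    task.timing.cfg_samp_clk_timing(rate=1000.0,
                                    sample_mode=AcquisitionType.CONTINUOUS)
    task.start()

    # One read per second: 32 channels x 1000 samples, no second task to wait on.
    data = task.read(number_of_samples_per_channel=1000)
```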

 

I think what you are seeing is a data flow effect: you read both property nodes, read 1 second of data for Task A (and wait for those samples to finish), then read Task B.  By then there is at least 1 second of data waiting for Task B (since Task A had to finish first), and the data from the property node is stale.  Moreover, some resources have to be released and swapped to switch from Task A to Task B (bus resource locks, DMA arbitration, etc.).  Combining the channels into one task will increase your performance.  Also, the DAQ reads are enough to throttle the loop so that it doesn't lock up a whole CPU; there is no reason to expressly release and re-acquire the thread, so get rid of the 0 ms wait. ;)

 

 


"Should be" isn't "Is" -Jay
Message 3 of 5

Thank you for your insight and help with the problem. I will try these techniques soon. Regarding the buffer size, can you offer any help on that question?

Message 4 of 5

Hi Michal,

 

This is Alexander Maul in Applications Engineering.  I have an answer to your buffering problem.

 

Although you can manually set the buffer size using the DAQmx Configure Input Buffer VI, this does not change the physical FIFO buffer size allocated on the card.  The DAQmx Configure Input Buffer VI is a manual way of overriding the buffer size already determined automatically by the DAQmx Timing VI.  Below is a screenshot illustrating a basic Continuous Buffered Acquisition (taken from the NI examples in Help):

ContinuousBufferedAcquisition.png
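In case the screenshot does not come through, a rough text-form sketch of the same idea (nidaqmx Python API as a stand-in for the DAQmx VIs; device name, channels, rate, and buffer sizes are placeholders): the timing call sizes the host-side buffer automatically, the explicit override (the text counterpart of the DAQmx Configure Input Buffer VI) only resizes that host buffer, and the onboard FIFO size is fixed by the hardware and can only be queried.

```python
# Hedged sketch: host-side DAQmx buffer vs. the card's onboard FIFO.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:15")
    task.timing.cfg_samp_clk_timing(rate=34000.0,
                                    sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=68000)    # sizes the host buffer

    task.in_stream.input_buf_size = 68000        # manual override of the host (PC memory) buffer
    task.start()

    print(task.in_stream.input_buf_size)         # host buffer, samples per channel
    print(task.in_stream.input_onbrd_buf_size)   # onboard FIFO, fixed by the hardware
    print(task.in_stream.avail_samp_per_chan)    # current host-buffer backlog ("buffer usage")

    data = task.read(number_of_samples_per_channel=34000)    # read 1 s of data
```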

When the buffer is manually set to a size greater than what the FIFO can handle (in this case 512 samples), an Overflow Error is produced.  Basically, this means that LabVIEW (specifically NI-DAQmx) is not able to retrieve the data from the FIFO fast enough before the next data set is available.  As a result, the data in the FIFO will be overwritten, which may be why LabVIEW appears to be 'lagging'.  To avoid this overflow condition, try to keep the buffer size below the specified FIFO limits.  I have included below a KnowledgeBase link on DAQmx Overflow/Overwrite errors, as well as an example that allows you to view the DAQmx buffer usage.

 

 

 

Understanding and Avoiding Overwrite and Overflow Errors with DAQmx: http://digital.ni.com/public.nsf/allkb/A224DA0551EEA073862574F60060AB6F

 

Read DAQmx Buffer Utilization: https://decibel.ni.com/content/docs/DOC-9887

Message 5 of 5