
queue impact on memory

When using queues to pass data between parallel loops, if the consumer loop iterates more slowly than the producer, the queue builds up.  Where does that data get stored?  DRAM?

 

Are there any references on avoiding this?  It can be a program execution KILLER.

Message 1 of 4

Yes, the queued data will reside in DRAM.

 

There are a variety of approaches you might take to deal with such a situation, depending on your app and your reasons for passing data via queue in the first place.

 

1.  Make sure you really want/need a queue.  I use queues when it's vital that the consumer receives each and every data point that the producer stashes away.  I frequently use notifiers instead when I merely want a user interface to be updated on demand with recent near-real-time data.  The notifier will NOT keep using up more and more DRAM; it's considered "lossy" because new data simply overwrites old data and the old data is discarded forever (see the sketch after point 2).

 

2. Even though the consumer iterates less frequently, on each iteration you could read from the queue multiple times, i.e., keep reading until it's empty (see the sketch below).
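
Since LabVIEW is graphical, here is a rough Python sketch of both ideas rather than anything LabVIEW-specific; the names Notifier, producer, consumer, and process are made up for illustration.  It shows a lossy, notifier-like slot where new data overwrites old, and a consumer that drains the queue completely each time it wakes up.

import queue
import threading
import time


class Notifier:
    """A lossy, notifier-like slot (hypothetical stand-in for a LabVIEW
    notifier): new data overwrites old, so memory use stays constant."""

    def __init__(self):
        self._lock = threading.Lock()
        self._latest = None

    def send(self, value):
        with self._lock:
            self._latest = value      # old value is discarded forever

    def read_latest(self):
        with self._lock:
            return self._latest


data_q = queue.Queue()                # lossless queue shared by the two loops


def process(batch):
    """Placeholder for the slow processing/display/archiving work."""
    print(f"processed {len(batch)} item(s)")


def producer():
    """Fast loop: enqueue one reading per iteration."""
    for i in range(1000):
        data_q.put(i)
        time.sleep(0.001)


def consumer():
    """Slow loop: on each iteration, drain everything currently queued
    so the backlog never carries over to the next iteration."""
    while True:
        batch = [data_q.get()]        # block until at least one item arrives
        while True:
            try:
                batch.append(data_q.get_nowait())
            except queue.Empty:
                break
        process(batch)
        time.sleep(0.05)


if __name__ == "__main__":
    threading.Thread(target=consumer, daemon=True).start()
    producer()                        # producer runs in the main thread here
    time.sleep(0.5)                   # let the consumer drain the tail end

The inner drain loop is the key point for item 2: however slowly the consumer iterates, the backlog is cleared on each pass instead of accumulating in memory.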

 

-Kevin P

Message 2 of 4

Yep, I need the Q.  I am collecting data from a DAQ, processing it, displaying it, and archiving the results.  If my DAQ loop runs too fast relative to the processing loop, there is trouble.

 

I cannot find any docs or example code that addresses this issue.

Message 3 of 4

Hello,

 

What you are running into is the typical tradeoff with a queue: if your processing-and-archiving loop takes longer per iteration than the DAQ loop, the queue backlog grows. The example that shows this tradeoff is called Queue Basics and can be found in the Example Finder. If you cannot get your processing time into an acceptable range, then archive the raw data first and perform the post-processing (and save the processed results) at another time. There are multiple solutions to this tradeoff, and only you can decide which is best for your application. For example, if you collect a finite amount of data and know it will not exceed your memory limitations, you may simply let the queue build up while the consumer loop catches up; you never exceed your memory, and that is acceptable.
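
To make the "archive first, post-process later" suggestion concrete, here is a minimal Python sketch of that split (again just an illustrative stand-in for the LabVIEW pattern; raw_q, daq_loop, archive_loop, and post_process are hypothetical names): the consumer only streams raw data to disk, so it can keep up with the producer, and the expensive analysis runs after acquisition ends.

import queue
import threading

raw_q = queue.Queue()


def daq_loop(samples):
    """Fast producer: push raw readings onto the queue as they arrive."""
    for s in samples:
        raw_q.put(s)
    raw_q.put(None)                   # sentinel: acquisition is finished


def archive_loop(path="raw_data.txt"):
    """Consumer kept deliberately cheap: just stream raw values to disk.
    The heavy analysis is deferred until after acquisition ends."""
    with open(path, "w") as f:
        while True:
            item = raw_q.get()
            if item is None:
                break
            f.write(f"{item}\n")      # quick write, no processing here


def post_process(path="raw_data.txt"):
    """Run after acquisition, when loop timing no longer matters."""
    with open(path) as f:
        values = [float(line) for line in f]
    return sum(values) / len(values)  # stand-in for the real analysis


if __name__ == "__main__":
    t = threading.Thread(target=archive_loop)
    t.start()
    daq_loop(range(10))               # pretend these are DAQ readings
    t.join()
    print(post_process())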

 

-Zach

Message 4 of 4