09-02-2009 10:58 AM
Hello all,
Just thought I would throw this out there to see if anyone had any thoughts. I currently have an instrument which acquires data at 200 kHz on 10 channels. In order to maintain the processing rate that we desire (1 Hz) and to do real-time analysis on the data collected, I have to push the data onto a queue which ships it from the acquisition loop to an analysis loop. The acquisition system runs on a PXI-8110 - a quad-core Intel Core 2 (2.26 GHz) with 2 GB of memory (when initially spec'ing this, I thought CPU time, not memory, would be the limiting factor). Generally, the routine runs fine - the acquisition and analysis loops both run at the 1 Hz rate that we want. However, under certain circumstances the acquisition loop can outrun the analysis loop, which increases the queue size, and hence the memory used by the queue, dramatically (one queue entry contains 2 MS of data), especially if the analysis continues to fall behind. So, related to this, I have a question and an observation:
Anyway, any thoughts on these two questions/observations would be appreciated. I haven't posted any of the code because it is quite large, but if you would like to see a snapshot, let me know and I will put it up.
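To make the structure concrete without posting everything, here is a rough Python sketch of the pattern (the names, rates, and block size are illustrative only - the real code is LabVIEW):

    import queue
    import threading
    import time

    data_q = queue.Queue()        # unbounded, like a queue obtained with no size limit
    BLOCK = [0.0] * 2_000_000     # stand-in for one entry: 2 MS (200 kHz x 10 ch x 1 s)

    def acquisition_loop():
        # Producer: one block per second (the 1 Hz rate). put() never blocks
        # on an unbounded queue, so a stalled consumer just makes it grow.
        for _ in range(10):
            data_q.put(list(BLOCK))
            time.sleep(1.0)

    def analysis_loop():
        # Consumer: if each analysis takes longer than 1 s, the backlog
        # (and the memory holding it) climbs until the producer stops.
        while True:
            block = data_q.get()
            time.sleep(1.5)       # pretend the analysis is slow
            print(f"analyzed a block; backlog = {data_q.qsize()}")

    threading.Thread(target=acquisition_loop).start()
    threading.Thread(target=analysis_loop, daemon=True).start()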
Cheers, Matt
09-02-2009 12:11 PM
09-03-2009 01:04 PM
Hello mtat76,
I tried to reproduce the behavior you are seeing with Queue Basics.vi from the Example Finder, but was unsuccessful. I set the loop rates for the producer (fast) and consumer (slow) to force the memory allocated by LabVIEW to increase for the growing queue, but as soon as I stop the VI (either with the stop button or with the abort button), the memory goes right back to what LabVIEW had allocated before ever running the VI. Could you reply with the exact steps I can take to replicate and observe the problem you saw with this example? Knowing your LabVIEW version may also be helpful.
It is important to note that when enqueueing elements at a faster rate than dequeueing them, it is expected that the memory allocated by LabVIEW for the queue will grow, so that it has somewhere to store the elements. If you were to switch the rates around and force the queue down to 0 or 1 elements, the memory allocated by the running VI would stay the same (not go back down), in anticipation of more elements being enqueued. However, stopping the VI (either with the stop button or the abort button) should force LabVIEW to let go of that memory and return to its initial state from before running the VI.
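To put rough numbers on that expected growth (a quick back-of-the-envelope sketch in Python; the 8-byte DBL element size is my assumption, not something from your post):

    # Backlog / memory model for a producer-consumer rate mismatch.
    # Assumes 1 entry = 2 MS of DBL data: 2e6 samples x 8 bytes = 16 MB.
    ENTRY_MB = 2_000_000 * 8 / 1e6

    def backlog_after(seconds, producer_hz=1.0, consumer_hz=0.8):
        # The queue grows at the difference of the two rates.
        return max(0.0, (producer_hz - consumer_hz) * seconds)

    for t in (10, 60, 300):
        n = backlog_after(t)
        print(f"after {t:3d} s: ~{n:5.1f} entries ~ {n * ENTRY_MB:6.0f} MB")

Even a modest 0.2 Hz deficit works out to roughly a gigabyte of backlog after five minutes, which is why the growth looks so dramatic on a 2 GB controller.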
Please post step-by-step instructions with the example VI that I can use to replicate the behavior, and I would be happy to take a look at it for you.
09-03-2009 01:31 PM
mtat76 wrote:...
it is quite large, but if you would like to see a snapshot, let me know and I will put it up.
Cheers, Matt
Years ago a fellow developer made the observation that "Queues are almost always almost full or almost empty."
So since yours is backing up, I would look at "cleaning the drain" to let the data flow faster. Could you post the code that is reading from the queue that is backing up? Maybe we can offer a suggestion to change your code so that it processes faster.
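One generic drain-cleaning trick, sketched in Python since I have not seen your code yet: pull everything that is already waiting and process it as one batch, so any fixed per-iteration overhead is paid once per batch instead of once per element.

    import queue

    def drain(q):
        # Grab every element currently waiting without blocking; returns a
        # (possibly empty) batch for a single analysis pass.
        batch = []
        while True:
            try:
                batch.append(q.get_nowait())
            except queue.Empty:
                return batch

Whether batching helps at all depends on where the analysis time actually goes, which is why seeing the code matters.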
No promises, just thinking out loud.
Ben
09-03-2009 01:33 PM
Hey Chris,
Thanks for taking a look at this. So, I have to apologize - I thought I was reproducing this behavior that I see on our PXI system, but it seems I am unable to do this on my desktop today. It does seem that I am continually grabbing more memory each time I start and stop my VI on our instrument controller, but maybe this is not due to the memory allocation associated with the queues? I will have to look a little closer at this.
Nathan, thanks for the link. So, the gist of this seems to be that a simple flush will not suffice. That is a bummer. I will have to work around this to avoid the unreasonable amount of memory that is rapidly consumed when the queue backs up. Luckily, in standard operation there is really no reason for the queue to back up, as we are not using any slow numerical routines (such as Levenberg-Marquardt).
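The workaround I have in mind is basically "throw the queue away and start over" - in LabVIEW terms, a Release Queue followed by a fresh Obtain Queue. A Python analogue of the idea (whether LabVIEW's allocator actually returns the memory at that point is exactly what I still need to verify):

    import queue

    def reset_queue():
        # Dropping every reference to the old queue lets its storage be
        # reclaimed, whereas flushing it apparently leaves the allocation
        # in place for future elements.
        return queue.Queue()

    data_q = queue.Queue()
    for _ in range(10):
        data_q.put([0.0] * 2_000_000)   # simulate a modest backlog
    data_q = reset_queue()              # old queue unreferenced -> reclaimable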
Thanks to both of you. Matt
09-03-2009 02:06 PM
Thanks, Ben. And congratulations on becoming a Knight; if I follow in your footsteps I only have about 8 more years. I wonder where we will all be then...
Anyway, cleaning the drain sounds like an appropriate metaphor. I will try to show you, but this is a fairly mature application, and it is difficult to extract the essence without handing over 50 VIs. Below is the path for analysis of the queued data. I feel almost sadistic giving you this because I am not sure what you can do with it. That being said, I believe the improvement can be made in the fitting of the frequency data. I am using an iterative, nonlinear routine.
And a thought just occurred to me - I just recently switched to LabVIEW 2009, so the loop in the second diagram could be parallelized, as none of the fits depends on the data from the previous one.
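In LabVIEW 2009 that should just be the For Loop's new iteration parallelism, but conceptually it is this (a Python sketch with a stand-in fit function, not our actual routine):

    from concurrent.futures import ProcessPoolExecutor

    def fit_one(record):
        # Stand-in for the iterative nonlinear fit of one frequency record.
        return sum(record) / len(record)

    def fit_all(records):
        # No fit depends on any other, so they can run on separate cores -
        # the same independence that lets the LabVIEW loop parallelize.
        with ProcessPoolExecutor() as pool:
            return list(pool.map(fit_one, records))

    if __name__ == "__main__":
        print(fit_all([[1.0, 2.0, 3.0]] * 8))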
Enjoy! Matt
09-03-2009 02:29 PM
Without thinking too much about this...
1) First, use Show Buffer Allocations and play "Whack-the-dots" to make sure the extra time is not due to having to duplicate buffers.
2) The"Clear as Mud" Thread indicated arrays that will not be modified will be passe "in-place" if the array is "wired through. I think that maybe the "Y' input may be getting copied.
3) The wire fork that feeds the "Y" array to both the sub-VI and Array Max & Min could force a buffer duplication (see the sketch just below for the general idea).
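The buffer story does not map one-to-one onto text languages, but the copy-versus-in-place idea looks like this (a NumPy sketch, purely as an analogy):

    import numpy as np

    y = np.zeros(2_000_000)   # one 2 MS record, ~16 MB of DBLs

    y2 = y * 2.0              # out-of-place: allocates a second 16 MB buffer,
                              # like a forked wire forcing LabVIEW to duplicate
                              # the array for one of the branches

    y *= 2.0                  # in-place: reuses the existing buffer, like an
                              # array wired straight through unmodified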
That's all I can say after a quick glance.
Ben