
how to estimate the available system memory

1) 

Sometimes I wonder how LV 2009 manages the remaining memory.

For example, I created a 5,000,000 × 5 2D DBL array.

That is just under 200 MB.

 

bytes per DBL    # data       # channels    total bytes      in MB
8                5,000,000    5             200,000,000      190.7349

 

Of course, my attachment contains other structures and an inserted array.

But that should not be so serious; it is only a few multiples of 200 MB.

The Profile Performance and Memory tool showed a maximum of only 400 MB during execution.

However, LV shows the error message "Not enough memory".

What do I have to do to use my remaining memory?

 

2) I want to acquire DAQ data at a 2 MHz sample rate on 5 channels for 10 s.

That means I have to manage 800 MB (for DBL) to display everything after array manipulation in real time.

In this case, which data type is more suitable?

If I use a waveform (which is like a cluster) in a producer routine, an extra routine is needed to extract the data array for array manipulation.

Do they (integer, floating point, waveform) have similar performance?
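For scale, here is a quick back-of-envelope sketch of the buffer sizes in Python (plain arithmetic; 2/4/8 bytes are the standard sizes of I16/SGL/DBL, and a waveform carries the same DBL Y array plus t0/dt, so its data payload matches the DBL row):

RATE, CHANNELS, SECONDS = 2_000_000, 5, 10
samples = RATE * CHANNELS * SECONDS           # 100,000,000 samples

for name, nbytes in [("I16 (raw counts)", 2), ("SGL", 4), ("DBL", 8)]:
    print(f"{name:16s}: {samples * nbytes / 1e6:,.0f} MB")
# I16 (raw counts):  200 MB
# SGL             :  400 MB
# DBL             :  800 MB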

 

 

Message 1 of 5

Hi labmaster:

 

You probably know it's not "good" to use Insert Into Array for large data arrays...

 

You create 2 arrays. Then you insert the smaller into the bigger one: LabVIEW needs to get an even bigger memory chunk to copy both arrays into. Then you need an additional memory chunk for the indicator (as this one has its own buffer...)!

 

So in total you need:
- 200 MB + 200 kB for creating the arrays
- 200.2 MB for the array created by Insert Into Array
- 200.2 MB for the indicator to display that array

 

So this small demo needs 600.6 MB of memory, in 3 chunks of ~200 MB!
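The same arithmetic as a plain Python sketch (no LabVIEW involved; the 200 kB figure assumes the inserted array is 5,000 × 5 DBL, so adjust to your actual sizes):

BYTES_PER_DBL = 8
big    = 5_000_000 * 5 * BYTES_PER_DBL   # 200.0 MB source array
small  =     5_000 * 5 * BYTES_PER_DBL   #   0.2 MB inserted array (assumed size)
result = big + small                     # copy made by Insert Into Array
indicator = result                       # the indicator keeps its own buffer

total = big + small + result + indicator
print(f"{total / 1e6:.1f} MB total, largest chunk {result / 1e6:.1f} MB")
# -> 600.6 MB total, largest chunk 200.2 MB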

 

Btw, I also get this "out of memory" error - but (as seen with execution highlighting) it's not the Insert Into Array that throws the error, it's the write to the indicator...

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 5

This tag cloud has links to threads on Performance and Memory.

 

I talked about how data is stored in this thread.

 

I don't think I tagged my recent suggestion that opened up new memory regions for me, so I'll outline the idea here and tag it this time.

 

LV wants to store data in contiguous memory. If it can't find a contiguous block, you get the "Not enough memory..." message.

 

So another technique for being able to use more memory leverages the idea of keeping the individual buffers small. So...

 

Instead of using a single array to store everything, store the data using queues. Set up a bunch of single-element queues and keep their refs in an array. When you want to save a new set of readings, put them in the next queue. When reading, process the data in each queue one at a time.

 

Since queue elements are stored in separate buffers, LV can squeeze in a bunch of small buffers and effectively use all the available memory.

 

So try using queues.
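There's no text syntax for LabVIEW diagrams, so here is a rough Python sketch of the idea only (the chunk size and names are made up; in LabVIEW the list below would be an array of single-element queue refnums):

import numpy as np

CHUNK_SAMPLES = 500_000        # keep each buffer small (~4 MB of DBL)
buffers = []                   # stands in for the array of queue refs

def save_readings(data):
    # Park one block of readings in its own independently allocated
    # buffer (the role a single-element queue plays in LabVIEW).
    buffers.append(np.asarray(data, dtype=np.float64))

def process_all(func):
    # Read the data back one buffer at a time -- never concatenated,
    # so no single huge contiguous block is ever requested.
    for chunk in buffers:
        func(chunk)

# Usage: 10 chunks here for a quick test; 200 chunks x 500k samples
# would hold the full 100 M samples while the largest single
# allocation stays around 4 MB instead of 800 MB.
for _ in range(10):
    save_readings(np.zeros(CHUNK_SAMPLES))
process_all(lambda c: c.mean())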

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation. LabVIEW Champion. Knight of NI and Prepper.
Message 3 of 5

Ben, thanks for the comments.

I know about your many great contributions to these forums.

 

Frankly, I have never had this trouble until now.

Can you explain in more detail?

Do you mean using a functional global (queue functions instead of a 1D or 2D array) to save or get the data temporarily?

 

What do you think about my other question, waveform (cluster) vs. DBL array?

I found that the most time-consuming part happened in the array-manipulation parts of the waveform.

(I.e., each channel has to be manipulated in the same way.)

However, I am reluctant to give up the valuable information in the waveform.

So, when using a DBL array, I have to write more code for TDMS because of the extra time channel.

 

labmaster.

Message 4 of 5

You may want to check out Managing Large Data Sets in LabVIEW.  It is old and does not mention some things such as the Data Value Reference or the Inplace Element Structure, but the concepts have not changed.

 

In your particular case, you should probably acquire your data in smaller chunks and stream to disk using a producer/consumer architecture (see the LabVIEW examples for details).  You should be able to achieve disk-speed-limited performance.  You can use the TDMS file as a buffer for display and decimate/display only those sections you want to look at.  The article above contains details of this process.
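For illustration only, a bare-bones producer/consumer sketch in Python (the acquisition is faked with random data and a flat binary file stands in for TDMS; in LabVIEW this would be DAQmx Read, a queue, and TDMS Write in two parallel loops):

import queue, threading
import numpy as np

CHUNK = 200_000                   # samples per channel per read
q = queue.Queue(maxsize=8)        # bounded queue caps memory use

def producer(n_chunks):
    for _ in range(n_chunks):
        q.put(np.random.rand(5, CHUNK))   # stands in for a DAQmx read
    q.put(None)                           # sentinel: acquisition finished

def consumer(path):
    with open(path, "wb") as f:
        while (chunk := q.get()) is not None:
            chunk.tofile(f)               # stream straight to disk

t = threading.Thread(target=producer, args=(10,))
t.start()
consumer("stream.bin")
t.join()

With chunks this size, the consumer only ever holds a few megabytes in memory, no matter how long the acquisition runs.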

 

Good luck.  Let us know if you need more help.

Message 5 of 5