memory de-allocation string arrays

I am working with tab-delimited ASCII string arrays and am having memory management issues. I was able to reduce the memory usage thanks to a suggestion here:

 

https://forums.ni.com/t5/LabVIEW/spreadsheet-to-array-memory-problems/td-p/1547726

 

This helped, but it raised another question that I have been unable to find an answer for. Since it is not directly related to the original question and searching turned up nothing on this topic, I decided to ask it in a new post.

 

When I allocate memory for arrays of doubles using Initialize Array and fill them with Replace Array Subset, etc., I can track memory usage going up as data is read, and then going up significantly more when Array To Spreadsheet String is used and the result is written to file. Once the run completes and I de-allocate by calling the "Request Deallocation" memory control function, LabVIEW's memory usage returns to the same size as when the VI was initially loaded (e.g., maximum usage might be 600 MB, and after deallocation LabVIEW's memory usage returns to ~130 MB).

 

When the only change to the code is to use strings in Initialize Array, everything proceeds similarly (albeit using more memory because of the string vs. double representation), but when the "Request Deallocation" memory control function is called, total memory usage only drops to a level about 1/3 below the maximum usage during the run, rather than to the level used when the VI was initially loaded (e.g., maximum usage might be about 900 MB, and after deallocation around 630 MB).

 

Can anyone enlighten me on why "request deallocation" works differently with string arrays vs double arrays?

Message 1 of 14

Hey belgron,

 

I'm unsure off the top of my head. To start out, what OS are you running?  Version of LabVIEW?

 

Regards,

Bobby Breyer
Applications Engineer
National Instruments
Message 2 of 14

We need to see some code. Is this in any way related to this discussion?

Message 3 of 14

belgron wrote:

Can anyone enlighten me on why "request deallocation" works differently with string arrays vs double arrays?


I would strongly suspect it is due to the different ways the two types are stored in memory. Double arrays are stored as a contiguous block of data: deallocate it and it is easy to free the entire block. String arrays are stored as an array of handles; the actual data is not contiguous and can be spread around the heap. When you request deallocation, only a fraction is freed immediately (probably the handle array itself). The rest is marked for release, but it is probably not worthwhile to actually free it unless that becomes necessary. Those blocks are presumably reclaimed later, or dumped with the entire VI data space when the time comes.
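(LabVIEW is graphical, so the two layouts can't be shown as LabVIEW code, but the same distinction exists in most languages. A minimal Python analogue, using the `array` module for the contiguous case — the byte counts are CPython-specific and purely illustrative:)

```python
import array

# Contiguous case: all five doubles live in one buffer inside the array
# object, so freeing the array releases the whole payload at once.
doubles = array.array('d', [1.0, 2.0, 3.0, 4.0, 5.0])
payload_bytes = len(doubles) * doubles.itemsize   # 5 elements * 8 bytes

# Handle case: the list stores references; each string is a separate heap
# allocation that can live anywhere, like an array of string handles.
strings = ["alpha", "beta", "gamma"]
distinct_objects = len({id(s) for s in strings})  # 3 independent allocations
```

Freeing `doubles` is one operation on one block; freeing `strings` means visiting every handle, which is consistent with only part of the memory coming back immediately.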

 

I am sure the name 'request' deallocation was carefully chosen. 

Message 4 of 14

Have you tried blanking out the array (writing an empty array to it) right before the request for deallocation? That may clear out the rest of the strings, though I have not tested it (see the document below):

Free memory in LabVIEW back to Operating System

https://decibel.ni.com/content/docs/DOC-3829

 

Message 5 of 14

At this point, I don't think it's related to that discussion. The attached image shows a doubles version and a string version. The doubles version returns to the original memory level; the string version does not.

 

Please excuse the code, I was playing around trying to test things out. This is more example code than it is something I'm trying to deploy.

Message 6 of 14

Unfortunately, we cannot *run* pictures. Can you attach the actual VI?

 

(I think the main problem is your use of several instances of Delete From Array in a loop. Try to find an "in place" solution; it will be much more efficient.)

 

What is the algorithm actually trying to do? It seems overly complicated.

 

(To round the result of Quotient & Remainder up to the next integer, simply take the sign of the remainder and add it to the integer quotient. Less code ;))
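(That rounding trick, written out as a Python sketch of the same quotient-and-remainder idea:)

```python
def ceil_div(n, d):
    """Round the integer quotient n // d up to the next integer by adding
    the sign of the (non-negative) remainder to the quotient."""
    q, r = divmod(n, d)              # integer quotient and remainder
    return q + (1 if r > 0 else 0)   # sign of remainder: 0 if it divides evenly
```

For example, splitting 12,000 rows into chunks of 500 gives `ceil_div(12000, 500)` = 24 chunks, while 12,001 rows needs `ceil_div(12001, 500)` = 25.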

Message 7 of 14

@belgron wrote:

I am working with tab-delimited ASCII string arrays and am having memory management issues. I was able to reduce the memory usage thanks to a suggestion here:

 

https://forums.ni.com/t5/LabVIEW/spreadsheet-to-array-memory-problems/td-p/1547726

 


The 1st question is of course: what's the memory issue? Do you have problems running a program? Does it not fit on an embedded system, or similar? Generally you shouldn't need to request deallocation or care about memory management, apart from not making lots of data copies and filling up memory.

 

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified LabVIEW Developer
Message 8 of 14

Altenbach, thank you for your response,

 

I was looking for an in-place way to avoid the Delete From Arrays, but didn't see an "elegant" solution (I assumed, perhaps incorrectly, that indexing element by element in an In Place Element structure would take a very long time with ~12,000 rows and ~1,000 columns).

 

What the algorithm is trying to accomplish is to convert, in essence, a TDMS file to an ASCII text file. Because the file is largish (often containing 12 million or more data points), I was trying to chunk the data to avoid having to load, transpose, convert to string, and save all of it in one fell swoop. I believe you are correct that the Delete From Arrays are causing the excess memory usage. The data, which is collected in the TDMS file as 1D arrays, needs to be in columns rather than rows when it's converted to a 2D array from channels (the reasons are a long story, but columns work better), so it's transposed. The data is essentially an XYYYYY graph, with a single x array and multiple y's. The other step is to place the x array in the first column.

 

Originally the code did not create the large string array and then replace chunks; it chunked the data and wrote each piece to the text file in turn on each loop iteration, to avoid creating the full massive array. For the purposes of the deallocation test that wasn't necessary, so I just created the one string and wrote it to a file.
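(The chunk-and-write-per-iteration approach described above might look like this as a Python sketch; the `x`/`ys` channel arrays, chunk size, and file name are all hypothetical stand-ins for the TDMS data:)

```python
def chunked_rows(x, ys, chunk_rows=1000):
    """Yield tab-delimited text one chunk of rows at a time: x in the
    first column, then one column per y channel, so the full string
    array never has to exist in memory at once."""
    n = len(x)
    for start in range(0, n, chunk_rows):
        lines = []
        for i in range(start, min(start + chunk_rows, n)):
            # build one row: x value first, then each channel's value
            lines.append("\t".join(str(v) for v in (x[i], *(y[i] for y in ys))))
        yield "\n".join(lines) + "\n"

# Writing the file then streams each chunk straight to disk:
#   with open("out.txt", "w") as f:
#       for chunk in chunked_rows(x_channel, y_channels):
#           f.write(chunk)
```

Only one chunk's worth of string data is alive at a time, which is the same memory-limiting idea as the original loop-and-write version.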

 

I've attached a VI that generates a random data set and then does the messy chunking/deleting. If you run the doubles version first (I think they might actually be singles), you'll notice LabVIEW memory usage before and after remains fairly consistent (maybe slightly larger). Then run the string version, and you should see a big chunk of memory added to LabVIEW that remains in use until the program exits.

 

I do understand that there are better ways of breaking up the data; this code was more an experiment to see what happens with memory usage when trying different methods of saving large text files. In that process I noticed the memory not being deallocated (which is what I am trying to bring attention to). It seems like Darin K. is probably spot on, but it would be nice to be able to reclaim that memory when I am forced to work with string/ASCII data files.

 

To answer Yamaeda's question, the memory issue is that with a couple of minor missteps, you can make LabVIEW use up 16 GB of RAM from an initial data set of ~100 MB in binary, with no real way to let the operating system reclaim it. I mentioned it in the other thread, but simply reading a 90 MB text file with Read Spreadsheet String and placing it in a string array indicator uses ~600 MB of memory.
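(That kind of blow-up is easy to reproduce in any language where each string is an individually allocated object. A rough Python illustration — the byte counts are CPython-specific, not LabVIEW's, but the effect is the same: per-string headers dwarf the actual character data:)

```python
import sys

# 100,000 short numeric strings, roughly what a parsed spreadsheet holds.
cells = [str(i % 1000) for i in range(100_000)]

payload = sum(len(s) for s in cells)                 # raw character data only
footprint = sum(sys.getsizeof(s) for s in cells) + sys.getsizeof(cells)
blowup = footprint / payload                         # overhead multiplier
```

With ~3 characters of payload per cell and tens of bytes of per-object overhead, the in-memory footprint is many times the size of the text file the strings came from.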

Message 9 of 14

belgron wrote:

I was looking for an in-place way to avoid the Delete From Arrays, but didn't see an "elegant" solution (I assumed, perhaps incorrectly, that indexing element by element in an In Place Element structure would take a very long time with ~12,000 rows and ~1,000 columns).


Delete From Array is an extremely wasteful operation and should never be used in a loop. Every Delete From Array call needs to delete the portion, shift all remaining elements down by the size of the deleted portion, and resize the array to the new size. If you use Delete From Array N times, the last element in the array has been touched and moved N times! In an in-place solution, the last element is touched only once: a difference of many orders of magnitude for large arrays. Remember that arrays need to be contiguous in memory.
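(The cost difference isn't expressible as LabVIEW text, but the same two shapes exist in Python, where `del list[i]` also shifts the tail of a contiguous array — a hedged sketch of both approaches on the same filtering task:)

```python
def remove_value_deleting(data, bad):
    # Mimics Delete From Array in a loop: every deletion shifts the whole
    # tail left by one element, so late elements are moved once per
    # deletion and the total work is O(N^2).
    data = list(data)
    i = 0
    while i < len(data):
        if data[i] == bad:
            del data[i]        # shifts data[i+1:] down by one
        else:
            i += 1
    return data

def remove_value_in_place(data, bad):
    # Single-pass compaction: every surviving element is written exactly
    # once, O(N), built in a single allocation.
    return [x for x in data if x != bad]
```

Both return the same result; only the amount of element shuffling differs, which is exactly the in-place argument above.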

 

Can you explain how you want to process the 2D array? What's up with the 10 strings that you are also converting to DBL (that seems useless)?

 

Whatever you are doing seems to use way too much code and is way too inefficient!

 

So, we have:

  1. an array with 10 strings (SA, SB, SC, SD, SE...)
  2. A 2D array with 12 columns and 500 rows A1, A2, A3...A12 /B1, B2, B3...B12/C1, C2, C3, ...C12/ etc.
  3. A 1D array with 12 elements (X1, .. X12)
  4. 1 & 3 get concatenated to form a 1D array of 22 elements.

How should all this be arranged in the single output array? Can you show me the pattern using the above nomenclature?

 

 

 

Message 10 of 14