LabVIEW


How does LabVIEW release memory with subVIs?

It is my understanding, from what I have read here, that LabVIEW will release the memory used by a subVI when it completes. I have created a data-mining tool that looks at previously recorded files and processes them via my subVI "Process Data." The code in there is rather complex, memory intensive, and company proprietary, so I can't post it. From there it outputs the processed data to the adjacent subVI, which writes it to a TDMS file.

 

Since I have about 1000 files to mine through, and each takes about 20-40 seconds, I also give the operator status: how much time remains, which file is being processed, and so on.

 

The issue I am having is that my program will run through about 100 or so iterations of this loop before it crashes due to low memory. After that I restart LabVIEW and I am good to go for another 100 or so iterations. Since those 100 or so iterations take about an hour, it is rather hard to troubleshoot, as I am usually not watching when it crashes.

 

I also really want this program to run overnight after I leave work for the day, so a single crash one hour into a 10-hour run is going to ruin that concept real fast. Does anyone have any insight as to where this is struggling? The high-level loop itself seems to be quite benign when it comes to memory use; I know that "Process Data" is the real memory pig. Is it correct to assume that its memory is deallocated each time it completes, or am I somehow incrementally building up a big block of memory somewhere in my loop with each iteration until finally memory is full?

Message 1 of 18

The memory is released once all of the calling VIs are released, so in practice the memory held by a subVI is released when the top-level VI closes. One thing that I've done when dealing with large data sets from files is to use the Request Deallocation function to release a subVI's data when I'm done with it. That might be appropriate for your file-processing VI.
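
In text-language terms (a rough C analogy, not LabVIEW internals; the buffer size is arbitrary), Request Deallocation is like explicitly freeing a subVI's scratch space once you're finished with it, instead of letting it live until the top-level program exits:

```c
#include <stdlib.h>

void process_file(void)
{
    /* arbitrary large work space, standing in for the subVI's data */
    double *scratch = malloc((size_t)16 * 1024 * 1024 * sizeof *scratch);
    if (scratch == NULL)
        return;
    /* ... load and crunch one file's data using scratch ... */
    free(scratch);   /* roughly what Request Deallocation asks for */
}
```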


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 2 of 18

OK, I modified my code so that at the tail end of the subVI that uses all the memory I call Request Deallocation. Time will tell if this solves the issue.

 

 

Message 3 of 18

I understand your reasons for not posting your processing VI, but without that, it's hard to tell you what's going wrong.

 

I'm not sure you want Request Deallocation in the processing subVI, since that VI will just acquire the memory again on the next call.  If you don't request deallocation, it will reuse the memory it allocated on the previous call.
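
To put that in text-language terms (a rough C analogy, not how LabVIEW literally works; names and sizes are invented), a subVI that keeps its data space between calls behaves like a function holding on to one static buffer:

```c
#include <stdlib.h>

#define BUF_ELEMS (1u << 20)   /* arbitrary illustrative size */

double *get_work_buffer(void)
{
    static double *buf = NULL;
    if (buf == NULL)
        buf = malloc(BUF_ELEMS * sizeof *buf);   /* first call only */
    return buf;   /* later calls reuse the same block, no new allocation */
}
```

Freeing the buffer after every call just forces a fresh allocation on the next one.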

 

I do wonder if the issue is memory fragmentation, not overall memory use.  Why do you dislike auto-indexing for loops?  It looks like you're just pulling elements off an array sequentially.  The use of "Insert Into Array" to build up an array of strings of Processed Files is not good.  You already know how large that array will be when finished, so pre-allocate that array.  As you have it now, every time you add an element to that array, LabVIEW moves that array to a new location where there's space for one more element, then frees the previous space.  While this isn't a large array (it's just pointers to strings), if you do this 1000 times, you may fragment your memory by leaving a lot of small, non-contiguous blocks around.  If your processing VI needs a large contiguous block, there may not be one available.
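
Again as a rough C analogy (LabVIEW manages this for you; names are invented and error checks are omitted), the difference between the two growth strategies looks like this:

```c
#include <stdlib.h>
#include <string.h>

#define N_FILES 1000   /* the final size, known up front */

/* Like Insert Into Array in a loop: every append may move the whole
   block to a new location and free the old one, churning the heap. */
char **grow_per_iteration(void)
{
    char **names = NULL;
    for (size_t i = 0; i < N_FILES; i++) {
        names = realloc(names, (i + 1) * sizeof *names);
        names[i] = strdup("placeholder_name.tdms");
    }
    return names;
}

/* Like Initialize Array before the loop plus Replace Array Subset
   inside it: one allocation of the known size, filled in place. */
char **preallocate_once(void)
{
    char **names = malloc(N_FILES * sizeof *names);
    for (size_t i = 0; i < N_FILES; i++)
        names[i] = strdup("placeholder_name.tdms");
    return names;
}
```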

Message 4 of 18

If your Processing VI makes data copies, or doesn't empty its data between loop iterations but keeps building up some array, you'll get exactly that behaviour.

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 5 of 18

I came in this morning and it looks as though my program is still suffering from this issue. It may be making it farther into the processing than it was before, but it is hard to tell.

 

What really bugs me is how it can run the memory-intensive loop about 150 or so times and then still, almost randomly, run out of memory somewhere and crap out. Is there any way to watch the memory allocation to find out what is building up? When I run the Task Manager, for example, I can see that as the program cycles through its different parts its memory use changes very noticeably along the way, as would be expected.

 

Going back to some of the suggestions made so far:

"I do wonder if the issue is memory fragmentation, not overall memory use.  Why do you dislike auto-indexing for loops?  It looks like you're just pulling elements off an array sequentially.  The use of "Insert Into Array" to build up an array of strings of Processed Files is not good.  You already know how large that array will be when finished, so pre-allocate that array.  As you have it now, every time you add an element to that array, LabVIEW moves that array to a new location where there's space for one more element, then frees the previous space.  While this isn't a large array (it's just pointers to strings), if you do this 1000 times, you may fragment your memory by leaving a lot of small, non-contiguous blocks around.  If your processing VI needs a large contiguous block, there may not be one available."

 

This could be an issue; in fact, I know quite a bit of my lower-level code has been written this way. As for why I do it that way, it is likely more a factor of me not being a programmer and only having learned LabVIEW in the past three months or so. Visually it makes more sense to do it the way I have been doing it than to trust the auto-indexing. Do you think I need to go back and rewrite all of my code to fix the issue, or is that block of memory released once the memory-intensive VI completes?

 

 

Yamaeda:

"

If your Processing VI makes data copies or doesn't empty the data in between loops, but keep building some array you'll get exactly that behaviour.

/Y"

 

If I were to go back and analyze my code, what would I be looking for to find this behavior? All of my loops seem to start in one place, process the data, and hand off to the next subVI in line. Other than my highest-level function, I really don't see any loop that appears to build up data between runs. The only array I am building that has the ability to get rather large is the string array showing my processed files. Is it conceivable that this array, which may hold 150 or so file names of maybe 50 or so characters each, could be what is causing the memory to run out? Would that require enough memory to cause issues?
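
(Back-of-the-envelope check: 150 names × 50 characters is only about 7,500 bytes of string data, plus a small per-string overhead, so that array alone seems far too small to exhaust memory on its own.)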

Message 6 of 18

Are the TDMS files being closed properly?

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper | LinkedIn Profile | YouTube Channel
Message 7 of 18

Hi Ben,

 

I went through my code and found one place where a TDMS file wasn't being closed properly. We will see if this corrects the issue.

 

Thanks,

 

Adam

Message 8 of 18

Yeah, not closing the files will force them to stay in memory, constantly reserving more memory, much like the things I mentioned above. Nice catch, Ben.
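
In C terms (just an analogy for the leak pattern; the file name is a placeholder), the bug looks like opening a file every iteration without the matching close:

```c
#include <stdio.h>

void process_all(int n_files)
{
    for (int i = 0; i < n_files; i++) {
        FILE *f = fopen("placeholder_output.tdms", "ab");
        if (f == NULL)
            continue;
        /* ... write one file's processed data ... */
        fclose(f);   /* the fix: without this, every handle and its
                        buffers stay alive until the process exits,
                        like TDMS Open without a matching TDMS Close */
    }
}
```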

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 9 of 18

Yes, I think that solved the issue. Thanks for the help; I have now run the program for a few hours with no issues.

Message 10 of 18