06-06-2018 09:36 AM
Hello!
I started to get mysterious error messages: first a dialog window pops up saying "Not enough memory to complete this operation". After clicking "OK", another dialog window appears:
"LabVIEW: Memory is full.
The top-level VI "MAIN.vi" was stopped at SubVI "Enqueue.vi" on the block diagram of "".
Refer to the VI Memory Usage topic in the LV Help for suggestions...blabla..."
I click OK again, and LabVIEW just crashes (LV 2017 SP1)!
I managed to pinpoint the part of the code that causes these crashes. It is a very simple Stream Channel node where I wire the "element valid?" input with either False or True. The running VIs stop with the same error messages if this input is set to True, but there is no crash in that case (the crash only happens when the "element valid?" input is False)!
Right now I cannot share my VIs, and I have not managed to reduce the code while still reproducing the problems and crashes. My first thought was that I might be using the Stream Channels in a wrong way in this problematic subVI: namely, I use a typedef cluster with a string and a Variant to transmit data between three parallel While loops. Loop1 sends a certain data cluster as a Variant to Loop2, and if certain conditions are met, Loop2 sends ANOTHER type of cluster inside the Variant to Loop3, but using the same Stream Channel type (the same string+Variant cluster typedef!).
The crashes/errors happen only when the Stream Channel Writer node is active in Loop2 (with either False or True wired to the "element valid?" input). OK, I thought it might not be allowed to use the very same Channel type in two places in the code, so I changed the Channel data type from the typedef cluster to the simple data type I need to send from Loop2 to Loop3 (no need for the string info here). Very strange, but now the code runs fine as long as the "element valid?" input is False for the Channel Writer in Loop2; when I set it to True, I get the same memory error messages and the code stops running!
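Since I cannot post the VIs, here is a rough text analogue of the pattern I am describing, just so the data flow is clear. This is NOT LabVIEW code, only a Python sketch where queues stand in for the Stream Channels; all names are made up for illustration:

import queue
import threading

# One shared message shape, like the typedef cluster: a label string plus a
# variant-like payload (any Python object stands in for the LabVIEW Variant).

stream_1_to_2 = queue.Queue()   # Loop1 -> Loop2
stream_2_to_3 = queue.Queue()   # Loop2 -> Loop3, SAME message shape (string + payload)

def loop1():
    # Loop1 wraps one kind of "cluster" into the payload and streams it to Loop2.
    for i in range(10):
        stream_1_to_2.put(("raw data", {"frame": i}))
    stream_1_to_2.put(("stop", None))

def loop2():
    # Loop2 forwards a DIFFERENT kind of "cluster" to Loop3, reusing the same message shape.
    while True:
        label, payload = stream_1_to_2.get()
        if label == "stop":
            stream_2_to_3.put(("stop", None))
            break
        if payload["frame"] % 2 == 0:                      # "if certain conditions are met"
            stream_2_to_3.put(("result", {"sum": payload["frame"] * 2}))

def loop3():
    while True:
        label, payload = stream_2_to_3.get()
        if label == "stop":
            break
        print(label, payload)

threads = [threading.Thread(target=f) for f in (loop1, loop2, loop3)]
for t in threads:
    t.start()
for t in threads:
    t.join()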
Honestly, I have no idea what is going on! I will do some further tests, and be back when I figure out more...
06-06-2018 09:48 AM - edited 06-06-2018 10:00 AM
It looks like I managed to fix the problem, but I have no idea why one approach works, and the other crashes the code??? 🙂
EDIT: OK, I have started to find some other things that might be causing these crashes, so it is entirely possible that Channels have nothing to do with it this time... testing further 😄
06-06-2018 11:16 AM
OK, still not clear. I replaced the Stream with a Tag, but the problem still exists. For some reason, as soon as I start to send a cluster (a 2D array of 512*256 I32, plus a few more numbers in a typedef cluster) via the Tag Writer, the memory consumption of the LabVIEW IDE goes up from 300 MB to several GB, and it crashes with a memory overload... First I thought the SQL node was fooling me, but it is now disabled...
06-06-2018 11:28 AM
And here is a crash movie, much fun! (nooot! 😄 )
06-06-2018 11:45 AM
Sorry I cannot help; Channels are Bob Schor's domain, but he probably won't be happy since you did not include your code. It seems pretty cool though: lasers, Raman, CCDs, etc.
I have not used Channels at all, but have noticed strange memory problems with queues where the memory can go out of control.
Can you run the "Profile Buffer Allocations" tool before the crash occurs? It could show you where the memory is growing and give you a hint.
mcduff
06-06-2018 11:50 AM
Now it is almost 7 pm, so I will continue the detective work tomorrow morning. By the way, I replaced the Tag channel with a Notifier, and there are no more crashes!!! RAM usage is constant, with no increase over a few minutes of running!
However, I WANT to use Channels, and I really hope the problem is with my code (most likely it is) and not a bug.
I cannot share the code, and I have not yet managed to reduce it while still reproducing the behavior. However, if a pro user asks me, I am happy to send it in private, or to NI...
06-06-2018 11:57 AM
He's not happy, but not because Blokk didn't include code, but because there's another LabVIEW Developer Who Knows What He's Doing who is also getting funny code with the current iteration of Channel Wires.
Almost all of my Channel Wire development has been in LabVIEW 2016, where I've not had any problems. I gave a 5-hour "seminar" that I called "Advanced LabVIEW Techniques" (I think) a few months ago to some of my colleagues and students, and sent my code around for their inspection. One of them had LabVIEW 2017 and tried to run my code -- he reported many problems, including lots of broken wires. I had been unable to safely install LabVIEW 2017, so I was unaware that there might be a "Version" problem, but was able to "borrow" a machine and confirm his findings. I filed a Support Ticket with NI, who confirmed there was an issue, and said a CAR would be filed.
I now have LabVIEW 2018 running in a VM. I have seen problems similar to what Blokk is reporting, and am busy trying to develop demonstration code that will illustrate the bug. Naturally, it fails most reliably when you have at least 40 VIs in your Project (i.e. I haven't yet been able to make a "simple" failure ...).
Bob "Channel" Schor
06-06-2018 01:14 PM - edited 06-06-2018 01:16 PM
I really like using Channels, so I really want to help those who are more skilled than me to figure out what's going on! If a "pro user" (LV Champions, etc.) sends me a private message, I am also happy to share the source code with them privately.
06-07-2018 04:16 AM - edited 06-07-2018 04:32 AM
The crazy thing is that even after I replaced the Tag Channel with a Notifier, the memory leak bug stayed. But it is not 100% reproducible, and I have not figured out what does or does not trigger it. When I run my code (from the LabVIEW IDE), the Windows Task Manager shows a more or less steady 300 MB memory usage for LabVIEW. Then, when I start this particular data transfer between two loops inside a single module, the RAM goes crazy and eventually crashes LabVIEW within a few iterations. The data transferred (whether with Tag/Stream/Notifier) is relatively small: 512*256 I32 plus a few doubles, etc. So this is clearly a bug, and not due to huge array/cluster usage!
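Just to put a number on "relatively small" (my own back-of-the-envelope estimate, ignoring the few extra doubles):

# 512*256 I32 values, 4 bytes each
elements = 512 * 256                 # 131072 values
payload_bytes = elements * 4         # 524288 bytes, i.e. about 0.5 MB per transfer
print(payload_bytes / 2**20)         # ~0.5 MiB, sent at 1 Hz

So even several extra copies per transfer should cost a few MB at most, nowhere near the multi-GB growth I see.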
The annoying part is that I do not get crashes 100% of the time. Sometimes the RAM goes up from the 300 MB baseline to 400-500 MB and stays there! No crash. But other times it just grows without limit, causing a crash within a few seconds (I transfer data between the loops at 1 Hz).
Today I opened a service request with NI (Service Request #7742765) and shared my zipped project with them. I hope they will figure out what is going on. I do not know whether this problem is caused by the fact that I use many Channels in the modules of my project, but if so, why does even an extra Notifier trigger the issue??? 🙂
EDIT: I realized I can actually reproduce the memory leak fairly easily, though not with 100% success: I just need to restart the LabVIEW IDE... No idea why the IDE sometimes manages to avoid the leak after a few restarts of the project or the IDE...
06-07-2018 06:39 AM - edited 06-07-2018 06:50 AM
I was curious what happens with an EXE, so I built one from my project, and I get the same memory leak bug.
EDIT: What is also strange: even though the Notifier starts the memory leak, I feel it is just a kind of "trigger", since the error message in the video below complains about the "Analysis module" VI. But this is pure guesswork.
Extra info: I simulate CCD capture in the "CCD module", which sends the data at a 1 Hz rate to the "Analysis module" using a Dynamic User Event. Then, after mimicking some calculation (just a sum over the 2D array for now) in the "Analysis module", it sends the raw 2D data and the "results" to the next module, the "File management and database" one, again using a User Event. Inside each module, I use a single Stream Channel to send the data from the Event loop (from the Dynamic User Event case) to the "State machine" loop for further handling (and finally sending it over to the next module via Generate User Event)...
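To make the module layout above a bit more concrete, here is a very rough Python analogue of the wiring (again NOT LabVIEW code; queues stand in for both the User Events and the internal Stream Channel, and all names are invented):

import queue
import threading
import time

ccd_to_analysis = queue.Queue()    # "Dynamic User Event": CCD module -> Analysis module
analysis_to_file = queue.Queue()   # "User Event": Analysis module -> File/database module
stop = threading.Event()

def ccd_module():
    # Simulated CCD capture at 1 Hz, firing the "new frame" event with a 512x256 I32 stand-in.
    frame = 0
    while not stop.is_set():
        data = [[frame] * 256 for _ in range(512)]
        ccd_to_analysis.put(("new frame", data))
        frame += 1
        time.sleep(1.0)

def analysis_module():
    # One Event loop and one State machine loop, joined by an internal "Stream Channel".
    internal_stream = queue.Queue()

    def event_loop():
        while not stop.is_set():
            try:
                _, data = ccd_to_analysis.get(timeout=0.5)
            except queue.Empty:
                continue
            internal_stream.put(data)                        # Stream Channel write

    def state_machine():
        while not stop.is_set():
            try:
                data = internal_stream.get(timeout=0.5)      # Stream Channel read
            except queue.Empty:
                continue
            result = sum(sum(row) for row in data)           # mimic the "sum over the 2D array"
            analysis_to_file.put(("results", data, result))  # "Generate User Event" to next module

    threading.Thread(target=event_loop, daemon=True).start()
    threading.Thread(target=state_machine, daemon=True).start()

def file_module():
    # "File management and database" module: just consumes the results here.
    while not stop.is_set():
        try:
            label, data, result = analysis_to_file.get(timeout=0.5)
        except queue.Empty:
            continue
        print(label, "sum =", result)

threading.Thread(target=ccd_module, daemon=True).start()
analysis_module()
threading.Thread(target=file_module, daemon=True).start()

time.sleep(5)       # let it run a few iterations, then stop everything
stop.set()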