
Network Stream (memory leak)

Solved!

Hi all, I was using Network Streams and found that they lead to memory leaks under certain circumstances. The procedure is as follows:

[reader.vi snippet]

[writer.vi snippet]

 

When I run Reader.vi first and then loop through Writer.vi to send data, the program's memory keeps increasing. I don't understand what the problem is, because Reader.vi calls "Destroy Stream Endpoint" after each read, yet the memory doesn't seem to be freed. Even when Reader.vi is stopped, the memory is still not freed, and it continues to increase when the VI is run again. I've uploaded the program as an attachment and hope someone can answer my question; it's driving me crazy.

Message 1 of 11

You increase the size of the array in each iteration. It's not a memory leak in the Network Streams; it's your program ...

 

Message 2 of 11

It seems that this is not the reason. The array is only there to observe the result of the data transfer, and the memory grows far beyond the size of that array. If you have time, you can try the program.

Message 3 of 11

Ah, OK - I had only looked at your VI. I'll test it out without the array.

Message 4 of 11

The way this program uses Network Streams is different from the official example, but I think it is logical to call the "Destroy Stream Endpoint" function immediately after each read.

Ideally, the program would release the memory immediately after each call to "Destroy Stream Endpoint", but in reality the memory grows on every iteration of the loop, and I don't know what's causing this.
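
For anyone who can't open the attachments, here is a rough text sketch of what the reader loop does. Since the real program is a LabVIEW block diagram, this is Python-style pseudocode: the ns_* functions are hypothetical stand-ins for the Network Streams VIs of the same names, and the endpoint URL is made up.

```python
# Pseudocode model of reader.vi; the ns_* stubs are hypothetical
# stand-ins for the LabVIEW Network Streams VIs, defined here only
# so the sketch runs as plain Python.

def ns_create_reader_endpoint(url):    # Create Network Stream Reader Endpoint
    return {"url": url}

def ns_read_single_element(endpoint):  # Read Single Element from Stream
    return 0.0

def ns_destroy_endpoint(endpoint):     # Destroy Stream Endpoint
    endpoint.clear()

def reader_loop(iterations=1000):
    results = []
    for _ in range(iterations):
        # A fresh endpoint is created, read once, and destroyed on
        # every iteration ...
        ep = ns_create_reader_endpoint("//localhost/reader")
        results.append(ns_read_single_element(ep))  # display array grows each loop
        # ... so each pass should, in principle, release everything it allocated.
        ns_destroy_endpoint(ep)
    return results
```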

Message 5 of 11
Solution accepted by topic author Wlison

Since my previous post I have now actually executed the program. I see no memory leak with LabVIEW 2021 SP1. Windows does some strange caching with memory, and you can see an increase in total memory usage or in the memory usage of the LabVIEW process, but that is often caused by the memory management and is not a memory leak.

I don't really understand these memory management details, but I observed that the total memory usage increases until the OS "thinks" it is enough, and then you get a stable situation.

 

Message 6 of 11

Thank you, the problem has been solved.

Message 7 of 11
Solution accepted by topic author Wlison

That is a very strange way (in my personal opinion) to manage Network Streams, destroying the Writer and re-constituting it for every single transmission.  I've mostly used Network Streams in LabVIEW Real-Time applications to create communication paths between a Host PC and a Target system running LabVIEW Real-Time, often running 4 Streams simultaneously (two Streams for Host->Target and Target->Host messages, and two more for Target data to Host).  The Target generally starts first (it "auto-runs" its routine) and is identified by its IP Address, so it goes into "Waiting for Connection" mode when it starts up, waiting for the four Create NS  Endpoints in parallel, with a timeout of 30 minutes (after which the RT system will exit and shut down, or reboot itself, as appropriate).  When the Host starts, it tries to connect to the four Target Endpoints, using the Target's known IP address, but in serial fashion, with a 15 second timeout.  A failure to connect, of course, will cause the entire Serial chain of Create NS Endpoints to fail, and if there are three failures, the Host clears the Error and exits saying "The Target is not responding".
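
A rough sketch of that startup handshake, for concreteness. The real logic lives in LabVIEW diagrams, so this is Python-style pseudocode: the stream names and the ns_create_endpoint callable are illustrative stand-ins, while the timeouts and retry counts are the ones described above.

```python
import concurrent.futures

STREAM_NAMES = ["host_to_target_msgs", "target_to_host_msgs",
                "target_data_a", "target_data_b"]   # illustrative names

def target_startup(ns_create_endpoint):
    # Target side: create all four endpoints in parallel, each waiting
    # up to 30 minutes for the Host before giving up.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(ns_create_endpoint, name, timeout_s=30 * 60)
                   for name in STREAM_NAMES]
        return [f.result() for f in futures]

def host_startup(ns_create_endpoint, target_ip):
    # Host side: connect to the Target's endpoints one after another,
    # 15 s timeout each; a failure anywhere aborts the whole serial
    # chain, and three failed rounds means the Target isn't there.
    for _attempt in range(3):
        try:
            return [ns_create_endpoint(f"//{target_ip}/{name}", timeout_s=15)
                    for name in STREAM_NAMES]
        except TimeoutError:
            continue    # clear the error and try again
    raise RuntimeError("The Target is not responding")
```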

 

Once the Host and Target are connected, the Streams "stay alive", sending messages back and forth, and with the Target streaming lots of data back to the Host for display and "streaming" to disk files.  At the end of the program, the Host sends a Message to the Target telling it to end.  The Target sends a note back to the Host saying "I'm shutting down the Network Streams".  The Target flushes all of its Writer Streams with a 1 second "Wait" condition, then destroys its Endpoints (in parallel).  The Host does the same, serially, first flushing its Writer endpoint (with a 1 second Wait), then destroying its Endpoints.
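
The shutdown sequence in the same pseudocode style, with ns_flush and ns_destroy standing in for the Flush Stream and Destroy Stream Endpoint VIs:

```python
def shutdown_streams(ns_flush, ns_destroy, writer_endpoints, all_endpoints):
    # Flush every Writer stream first, waiting up to 1 s for buffered
    # data to drain, and only then destroy the endpoints. Destroying a
    # writer without flushing risks dropping the last elements.
    for ep in writer_endpoints:
        ns_flush(ep, wait_s=1.0)   # the 1 second "Wait" condition
    for ep in all_endpoints:
        ns_destroy(ep)
```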

 

I've been using this multi-stream paradigm for LabVIEW Real-Time routines for more than a decade.  It has never let me down.  I'm currently using it with a myRIO as the Target, with an additional Target->Host data stream.  We are currently sending 16 channels of analog data collected at 240 Hz from the Target to the Host without any problem, despite the relatively limited memory and processor speed of the myRIO compared to the PXI systems based on Intel processors that could run Windows.
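
For scale, that data rate is tiny by network standards. Assuming 8-byte DBL samples (the sample width isn't stated above), the stream works out to about 30 KiB/s:

```python
channels = 16
rate_hz = 240
bytes_per_sample = 8   # assuming DBL samples; not stated above
print(channels * rate_hz * bytes_per_sample)   # 30720 bytes/s, ~30 KiB/s
```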

 

Bob Schor

Message 8 of 11
Solution accepted by topic author Wlison

@Bob_Schor wrote:

That is a very strange way (in my personal opinion) to manage Network Streams, destroying the Writer and re-constituting it for every single transmission. [...]


Yes, isn't that the whole idea behind a stream? To keep it active until you don't need it any more? I mean, you leave the faucet on so the water can stream into the glass until it's full, and then you stop it, not opening and closing it with every single drop of water until you get a full glass...

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 9 of 11

Sorry, I know this is a strange way of doing it, but it actually works fine. A program that does cause a memory leak would be written as follows:

[reader.vi snippet]

[writer.vi snippet]

My original program was written like that, and it was causing a memory leak, so I changed it to the following, which solved the problem:

[reader.vi snippet]

[writer.vi snippet]

I hope you can help me test this if you have the time. I don't know why just changing the name of the endpoint causes a memory leak.

Message 10 of 11