12-13-2018 04:19 PM
Is there any way to prevent LabVIEW from cleaning up references when a VI ends? For example, TCP connections that were created inside a listener loop VI, which then instantiates client instances that store the connection reference. I'm trying to avoid making the TCP server keep track of all incoming connections, so that the client resources can be cleaned up somewhere else in the code. However, if the server loop closes before the clients are cleaned up, all sorts of references get leaked, including the client class instances that are "owned" by the server loop VI, which wreaks havoc with the expected cleanup behavior.
If I ensure all clients are closed before closing the server then everything is hunky-dory. This approach works great in my simple testing app, where even the "remote" client connections are local to the app, but when connections are truly remote I won't necessarily be able to control how quickly they attempt to reconnect. If I go the route of the server keeping track of clients, then I also need the client to keep track of the server so it can notify the server that the client connection has been closed and should be released. If the server gets shut down before the clients, the server can close down all of its clients before ending the listener loop VI. However, I would REALLY like to avoid tying the lifetimes of the instances together so strongly.
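To sketch the shape of what I'm after in text (Python standing in for the block diagram; the names are just illustrative, and Python sockets don't have LabVIEW's hierarchy-based cleanup, which is exactly the behavior I'm fighting):

```python
# Illustrative only: the structure I want, with sockets standing in for TCP refnums.
import socket
import threading

def client_handler(conn: socket.socket):
    """Client instance: owns its connection and cleans it up itself."""
    try:
        while data := conn.recv(4096):
            conn.sendall(data)   # echo, just as a placeholder workload
    finally:
        conn.close()

def listener_loop(port: int, stop: threading.Event):
    """Server loop: accept connections and hand them off without tracking them."""
    with socket.socket() as listener:
        listener.bind(("", port))
        listener.listen()
        listener.settimeout(0.5)
        while not stop.is_set():
            try:
                conn, _ = listener.accept()
            except socket.timeout:
                continue
            threading.Thread(target=client_handler, args=(conn,), daemon=True).start()
        # In LabVIEW, once this loop's hierarchy goes idle, the connection
        # refnums handed to the clients above would be invalidated.
```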
12-13-2018 07:28 PM
As far as I am aware, LabVIEW cleans up references automatically when a VI hierarchy goes idle (as a result of the top-level VI ending). LabVIEW doesn't automatically clean up references created by a VI when that VI ends unless that VI was the top-level VI. The distinction matters if you have a nested call structure (which is typically the case) and references are created within that call stack. It also matters because there are ways of creating new VI hierarchies (such as certain async nodes). You can crudely think of all references created within a VI hierarchy as being "owned" by that hierarchy.
I don't believe it's possible to change this behavior - having said that, it's a lot like an application exiting, despite how it appears in the development environment.
However, it is possible to manage an orderly shutdown / application exit. Given that the term "client" is a bit overloaded as I read your post, perhaps you could attach your project/VIs so we could give you more targeted advice. I assume you are following something conceptually similar to the example given here: http://www.ni.com/white-paper/2710/en/
12-14-2018 02:52 AM - edited 12-14-2018 02:54 AM
It's not possible to change that for the built-in LabVIEW refnums. The underlying refnum system does support changing the cleanup mode for a refnum when it is created, and that is even (globally) configurable for VISA refnums, but this option is intentionally not exposed at the LabVIEW node interface for any of the LabVIEW refnums and is inaccessible for all but VISA refnums.
The reason is most likely that a consistent cleanup model is always better than one that can be misconfigured, even if it can be painful at times. Also consider the additional terminals that would be needed on many nodes to make this kind of option configurable (and no, a right-click option on the nodes would be orders of magnitude worse, as that hides functionality that directly changes the runtime behaviour of the node, and that is super-über-evil )
12-14-2018 03:36 AM
The only way to get similar behavior is to make a (possibly dynamically launched) running broker VI.
The broker provides references and keeps running. When the main VI wants a reference, it gets one from the broker. When it is done with the reference, the broker should be told to close it. If the main VI stops without closing it, the reference will keep on living... forever...
More or less intelligence can be put in the broker. It could be more of a connection manager...
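As a rough sketch of the idea (Python standing in for a continuously running broker VI; all names here are made up):

```python
# Hypothetical broker: a long-lived thread owns the references; callers ask it
# to open and release them, so a caller ending without releasing simply leaves
# the reference alive inside the broker.
import queue
import socket
import threading

class Broker(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)
        self.requests = queue.Queue()
        self._refs = {}      # id -> open connection
        self._next = 0

    def run(self):
        while True:
            op, arg, reply = self.requests.get()
            if op == "open":                      # arg = (host, port)
                self._next += 1
                self._refs[self._next] = socket.create_connection(arg)
                reply.put(self._next)
            elif op == "release":                 # arg = reference id
                self._refs.pop(arg).close()
                reply.put(None)

def call(broker: Broker, op: str, arg):
    """Helper for callers: send a request and wait for the broker's answer."""
    reply = queue.Queue()
    broker.requests.put((op, arg, reply))
    return reply.get()
```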
12-14-2018 08:05 AM
Why is it not possible to reply to multiple posts in a single reply? 😞
Thanks for the replies folks!
tyk007:
I'm doing quite a bit more than that reference design for TCP; I'm implementing a TCP library that more closely models the .NET networking setup, where a TCPListener (TCPServer in my case) runs and creates TCPClient instances. However, in my case I launch the server loop asynchronously in the background and use callbacks to handle new TCPClient instances (Connected callbacks). At that point the clients are maintained outside of the server instance, and I'd like flexibility on whether the clients get cleaned up first or the server gets cleaned up first.
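Roughly, the shape of it looks like this (sketched in Python since I can't post the diagrams; the names are illustrative, not the actual API):

```python
# Illustrative Python shape of the library (hypothetical names).
import socket
import threading

class TCPClient:
    """Wraps one accepted connection; cleaned up independently of the server."""
    def __init__(self, conn: socket.socket):
        self._conn = conn

    def close(self):
        self._conn.close()

class TCPServer:
    """Runs the listener loop in the background and fires a Connected callback."""
    def __init__(self, port: int, on_connected):
        self._on_connected = on_connected
        self._listener = socket.socket()
        self._listener.bind(("", port))
        self._listener.listen()

    def start(self):
        threading.Thread(target=self._accept_loop, daemon=True).start()

    def _accept_loop(self):
        try:
            while True:
                conn, _ = self._listener.accept()
                self._on_connected(TCPClient(conn))   # client now lives outside the server
        except OSError:
            pass                                      # listener was closed

    def close(self):
        self._listener.close()
```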
rolfk:
I'm not necessarily looking for a configuration terminal for this. If there's some "trick" to maintaining the reference - like maybe the memory management recognizes when a ref is put in a global or some such - then that would suffice as well. Looks like that won't be the case.
wiebe:
The broker won't work for TCP as-is, because it's the Wait on Listener function that creates the connection reference - my server IS the broker. The solution I mentioned, and that I'm trying to avoid, would be to have the server maintain instances of the clients (flesh out the client brokering) and then get them all cleaned up before returning from the server loop when the server's cleanup function is called. Doing that means I need to add the capability for the client to notify the server that it's being cleaned up, so that the server can discard the instance. Now I've got servers maintaining clients and clients maintaining servers, and because they're by-ref implementations I have to store them as bare objects and typecast to get around the circular references, which breaks typing, and and and... I did implement this exact system in a different TCP library I wrote; I'm just trying desperately to avoid the extra coupling and lifetime assumptions.
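That coupled version looks roughly like this (Python sketch, illustrative names only; in LabVIEW the back-reference is what forces the bare-object/typecast workaround):

```python
# The coupled version I'm trying to avoid, roughly.
import socket
import threading

class TrackedClient:
    def __init__(self, conn: socket.socket, release):
        self._conn = conn
        self._release = release      # back-reference to the server's release call

    def close(self):
        self._conn.close()
        self._release(self)          # client must notify the server

class TrackingServer:
    def __init__(self):
        self._clients = set()
        self._lock = threading.Lock()

    def register(self, conn: socket.socket) -> TrackedClient:
        client = TrackedClient(conn, self._release)
        with self._lock:
            self._clients.add(client)
        return client

    def _release(self, client: TrackedClient):
        with self._lock:
            self._clients.discard(client)

    def close(self):
        with self._lock:
            for client in list(self._clients):
                client._conn.close()   # server must close any remaining clients first
            self._clients.clear()
```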
12-14-2018 08:16 AM - edited 12-14-2018 08:19 AM
I think you are making this more complicated than it needs to be. An asynchronously started client does not have to notify your broker that it is finished in order for the broker to close the refnum. It can close the connection refnum itself anytime it wants and then simply quit.
The only thing you have to guarantee is that the VI hierarchy in which the broker runs is not shutting down prematurely. You can shut down the broker loop itself if you need to, to stop it from accepting connections, but the top-level VI in whose hierarchy the broker executes needs to stay alive until you no longer need the refnums (or they have all been closed).
If you start up your broker as a background task, it is its own top-level VI, so if you want connections to survive you have to somehow keep it alive, even though you may have stopped the listener loop from accepting new connections.
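Something along these lines, as a loose Python analogue (names are illustrative): stop accepting, but don't let the hosting context exit until every connection it handed out has been closed.

```python
# Rough analogue of "keep the owning hierarchy alive until the refs are closed".
import socket
import threading

class ConnectionHost:
    def __init__(self):
        self._open = set()
        self._lock = threading.Lock()
        self._idle = threading.Event()
        self._idle.set()

    def track(self, conn: socket.socket) -> socket.socket:
        """Hand out a connection and remember that it is still open."""
        with self._lock:
            self._open.add(conn)
            self._idle.clear()
        return conn

    def release(self, conn: socket.socket):
        """Called by whoever used the connection once they are done with it."""
        conn.close()
        with self._lock:
            self._open.discard(conn)
            if not self._open:
                self._idle.set()

    def wait_for_all_released(self, timeout=None) -> bool:
        # The hosting context blocks here instead of exiting, even if the
        # accept loop itself has already been stopped.
        return self._idle.wait(timeout)
```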
12-14-2018 08:18 AM
And for some more background (sorry, can't post the code):
The current implementation is a by-ref class implementation. When I call the Open VI for the server, it creates the listener, stores it in the class ref, and then asynchronously launches the listening loop. I'm currently closing the listen loop by closing the listener reference when the server's Close VI gets called. As long as I haven't cleaned up the server instance, I can open and close the listener many times - the idea being limiting connection counts or having a server only open for portions of an app.
Maintaining the ability to repeatedly open and close the listener now means I have to keep the async loop running at all times, implement an event system to signal it to start listening, stop, close, etc., and tie the background loop's lifetime to the lifetime of the server instance. Not a big deal; I just thought I had struck gold with how simple my new implementation is, but it appears I've only succeeded in shooting myself in the foot! Huzzah \o/
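The always-running loop plus command events would look something like this (Python sketch with made-up names, just to show the extra machinery involved):

```python
# Persistent background loop whose lifetime matches the server instance;
# listening is toggled by commands instead of starting/stopping the loop itself.
import queue
import socket
import threading

class PersistentServerLoop(threading.Thread):
    def __init__(self, port: int, on_connected):
        super().__init__(daemon=True)
        self.commands = queue.Queue()    # "listen", "stop", "close"
        self._port = port
        self._on_connected = on_connected

    def run(self):
        listener = None
        while True:
            # Block for a command while idle; otherwise just peek for one.
            cmd = self.commands.get() if listener is None else self._poll_cmd()
            if cmd == "listen" and listener is None:
                listener = socket.socket()
                listener.bind(("", self._port))
                listener.listen()
                listener.settimeout(0.2)
            elif cmd == "stop" and listener is not None:
                listener.close()
                listener = None
            elif cmd == "close":
                if listener:
                    listener.close()
                return
            if listener is not None:
                try:
                    conn, _ = listener.accept()
                    self._on_connected(conn)
                except socket.timeout:
                    pass

    def _poll_cmd(self):
        try:
            return self.commands.get_nowait()
        except queue.Empty:
            return None
```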
12-14-2018 08:36 AM - edited 12-14-2018 08:43 AM
@rolfk wrote:
I think you are making this more complicated than it needs to be. An asynchronously started client does not have to notify your broker that it is finished in order for the broker to close the refnum. It can close the connection refnum itself anytime it wants and then simply quit.
The only thing you have to guarantee is that the VI hierarchy in which the broker runs is not shutting down prematurely. You can shut down the broker loop itself if you need to, to stop it from accepting connections, but the top-level VI in whose hierarchy the broker executes needs to stay alive until you no longer need the refnums (or they have all been closed).
If you start up your broker as a background task, it is its own top-level VI, so if you want connections to survive you have to somehow keep it alive, even though you may have stopped the listener loop from accepting new connections.
Your broker is exactly the server instance I've already described as a solution. If the broker accepts calls to release a reference, that IS the client notifying the server that it's been closed. The main idea here is that a user of this library shouldn't have to implement any external glue logic to handshake the instance lifetimes with each other. They configure the server and start it, write a callback to handle connection state changes, and simply close the connection when they're done with it. That simplifies future workloads, results in a smaller API footprint and documentation, and encapsulates much, much more.
If the client doesn't notify the server that it's been closed and cleaned up, then the server will keep maintaining that reference forever. That won't work for systems that are up for months or more at a time (basically until I push a software update). The memory might never build up to even a noticeable amount, but good programmers should always clean up after themselves! And again, client instances are tied to a single connection, not the lifetime of the running application; clients can and will be closing before the application/server is closed. Imagine if a web server kept a list of past connection requests in memory for the entire lifetime of the service!
A possible alternative is for the server to poll the client instance to see if it's still active and clean it up when that check returns false. I refuse to implement polling behavior when an event/direct-call possibility exists; I'll take the extra bookkeeping overhead of an event/notifier/queue/VI ref over a timeout any day.
I think I've just had a daft stroke of genius... Instead of my server calling Wait on Listener and instantiating a client instance when it succeeds, why don't I instantiate the client first and have the Wait on Listener call happen INSIDE THE CLIENT?! It's conceptually backwards, but I think I'm in love with it. The server still maintains the lifetime of the listener instance, the single client that's always waiting on a TCP connection can close gracefully when the wait function errors out because the server closed the listener, and clients that succeed in connecting have their TCP connection ref created in a loop whose lifetime they maintain. Time for refactor #5340976!
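The inverted version would look roughly like this (Python sketch, names illustrative; in the real thing these are by-ref LabVIEW classes and async VI calls):

```python
# Inversion: the client does the waiting, so the connection it ends up with is
# created inside code whose lifetime the client itself owns.
import socket
import threading

class Client(threading.Thread):
    """Created *before* a connection exists; waits on the shared listener itself."""
    def __init__(self, listener: socket.socket, on_connected):
        super().__init__(daemon=True)
        self._listener = listener
        self._on_connected = on_connected
        self._conn = None

    def run(self):
        try:
            self._conn, _ = self._listener.accept()   # "Wait on Listener" inside the client
        except OSError:
            return                                    # server closed the listener: exit gracefully
        self._on_connected(self)                      # e.g. lets the server spawn the next waiter
        self._serve()

    def _serve(self):
        try:
            while data := self._conn.recv(4096):
                self._conn.sendall(data)              # placeholder workload
        finally:
            self.close()

    def close(self):
        if self._conn:
            self._conn.close()

class Server:
    """Owns only the listener; each accepted connection is owned by its Client."""
    def __init__(self, port: int):
        self._listener = socket.socket()
        self._listener.bind(("", port))
        self._listener.listen()

    def start(self):
        Client(self._listener, lambda _c: self.start()).start()   # always one client waiting

    def close(self):
        self._listener.close()
```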
12-14-2018 09:43 AM
Check that: I'll still spawn a new async VI to handle the Wait on Listener until it either succeeds or errors out due to the listener closing, but that VI can still live in the Server class.
I've started looking into getting permission to share utility libraries like these, though if you thought my server implementation was getting complex... This is all based on an abstract communication endpoint base class, so the same app code can run regardless of whether the connection is a standard TCP connection or a WebSocket connection (or even RS-232 serial?), and it's built on a stream serialization library I've already written to easily send Serializables over a network, to a file, to an in-memory string to do something with manually, or to save signed configuration/documentation to a JWT... Any code that gets posted, if possible, won't be just a handful of VIs.
12-14-2018 11:22 AM
Do you have any example code to post? I've done TCP Client/Servers and they aren't nearly as complicated as what your descriptions evoke. So I must not understand what you are trying to do.