LabVIEW


Circumvent reference cleanup when VI ends?

Solved!

My last post explains some of the complexity due to the abstractions, and also that I'm checking whether I can get permission to share the utility libraries I've written.

Message 11 of 24
(1,730 Views)

The abstracted stream concept may make this more complicated than needed, but so far, whenever I implemented network servers, I always had a listener loop that waited on incoming connections. In the old days, whenever Wait on Listener returned with an incoming connection, I put the connection refnum in a queue to be handled by a communication loop. Since asynchronous VIs became available (they didn't exist back in the '90s), I simply start an instance of a connection handler as an asynchronous VI, pass it the new connection refnum, and let it do its thing, which can be anything from a short-lived single command-response cycle (like a typical HTTP request) to a complete long-lived communication handler. Once it decides that it has finished, it simply closes the connection. The listener (broker) doesn't care about the connection anymore once it passes it off to the connection handler, so there is no need for the communication handler to notify the listener that it has finished.

If I wanted some overview of the current connections, I would rather implement a simple LabVIEW 2 style global that lets connection handlers register themselves when they start and deregister themselves when they close the connection.
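
For readers more comfortable with text than block diagrams, roughly the same pattern in a hypothetical Go sketch (the names and details are mine, not Rolf's actual VIs): an accept loop that hands each new connection to its own handler, plus a small registry standing in for the LabVIEW 2 style global.

```go
// Hypothetical sketch of the pattern described above, in Go rather than
// LabVIEW: the listener loop never tracks connections after handing them off.
package main

import (
	"fmt"
	"net"
	"sync"
)

// The registry plays the role of the LabVIEW 2 style global: handlers
// register when they start and deregister when they close their connection.
var (
	mu     sync.Mutex
	active = map[net.Conn]struct{}{}
)

func register(c net.Conn)   { mu.Lock(); active[c] = struct{}{}; mu.Unlock() }
func deregister(c net.Conn) { mu.Lock(); delete(active, c); mu.Unlock() }

// handle owns the connection's entire lifetime; the listener never sees it again.
func handle(c net.Conn) {
	register(c)
	defer func() { c.Close(); deregister(c) }()
	buf := make([]byte, 4096)
	for {
		n, err := c.Read(buf)
		if err != nil {
			return // closed or failed: the handler simply ends itself
		}
		c.Write(buf[:n]) // trivial echo in place of real command handling
	}
}

func main() {
	ln, err := net.Listen("tcp", ":5000")
	if err != nil {
		fmt.Println(err)
		return
	}
	for {
		conn, err := ln.Accept() // the Wait on Listener equivalent
		if err != nil {
			return // listener closed; running handlers are unaffected
		}
		go handle(conn) // the "start an asynchronous handler VI" equivalent
	}
}
```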

Rolf Kalbermatter
My Blog
Message 12 of 24
(1,715 Views)

I understand that; however, with that implementation, if you closed the listener down before closing the incoming connections, you'd leak references all over the place (or rather, LV would kindly leak everything on your behalf).

 

I'm going to have very disparate parts of the application managing different resources/connections/servers/etc., so I don't want to have to tie lifetimes together if I can manage it. Also, my applications are going to have both standard TCP servers for connections from other LV apps and a WebSocket server for connections from web apps... I don't want client code to have to know which server a connection came from, hence the abstractions.

Message 13 of 24
(1,706 Views)

A "leak" is when temporarily-needed resources are not released when no longer needed.  LabVIEW automatically releasing references acts to try and prevent leaks.

 

Also, the shutdown of the Listener process only closes the TCP Connections, not any other associated references.

Message 14 of 24
(1,697 Views)

Please excuse my ignorance with respect to WebSockets, as I have never used them so far. But your response does sound at least somewhat illogical. As Mr. Powell already pointed out, there is nothing leaky about the connection refnums being closed when the listener hierarchy in which they were created goes into idle mode. It is rather yanking the connection refnum out from under the feet of whoever is handling the connection. No leakage at all is happening: not in terms of resource management, as the resource is properly closed rather than leaked, nor in object abstraction terms, as the listener handler at some point instantiated the connection handler and so knows about it anyhow, even though it may have forgotten about it once it created it.

And while there might be some obscure need for the WebSocket client to inform its "creator" at the end that it has finished with its work, because of the underlying implementation you require for handling WebSockets, that does sound like a real leaky abstraction, with the client having to know for some reason whom it has to inform when it wants the connection to be closed. It would be much more logical to let that client handle the lifetime of the connection entirely itself, no matter whether it is a WebSocket, an RS-232 connection, or a "raw" TCP socket. The fact that LabVIEW can yank your TCP refnum away from under the client's feet is of course a concern, but there is nothing in the process of having to inform the listener "please shut down this connection for me" that will prevent this.

 

As such, your current design introduces a dependency of the client on the listener service that seems to serve no purpose other than making your class hierarchy more complex and in fact even circular. The listener needs to know about the client, as it creates (instantiates) it at some point, but the client also needs to know about the listener in order to tell it "please shut this connection down for me", even though it can perfectly well do that itself. Such back-and-forth dependencies are not a good thing in an object-oriented design, and in the case of LabVIEW they are a concern you should avoid at all costs, as the class loader starts to have problems when loading such a hierarchy. Under Windows it will normally work, but loading your object hierarchy will get slower and slower with every such back reference or circular reference. And don't try to build a Real-Time application if you have such a class hierarchy: it will almost certainly crash on startup for any class hierarchy that is not just a demonstration project but a real-world implementation!

Rolf Kalbermatter
My Blog
Message 15 of 24
(1,693 Views)

@drjdpowell wrote:

A "leak" is when temporarily-needed resources are not released when no longer needed.  LabVIEW automatically releasing references acts to try and prevent leaks.

 

Also, the shutdown of the Listener process only closes the TCP Connections, not any other associated references.


I'm using the term leak because that's what the trace toolkit tells me I've done; I know they aren't leaking but they aren't being cleaned up by me. Is there a better term for "I tried making sure the references were cleaned up but thanks to LabVIEW's implemented behavior I wasn't given the chance to because LabVIEW ripped the rug out from under my feet"? Perhaps "Reference Theft"?

 

As I've stated, though, thanks mainly to tyk indirectly reminding me that I can spawn a new top-level loop for each Wait on Listener execution, I've solved my problem and eliminated the shutdown dependencies between the server and clients. Thanks to this inception of ACBRs (we must async deeper!), my server code (request listening and client creation) and client code (connection status and data stream provision) are more encapsulated with regard to their lifetime operations, and most of the assumptions that would otherwise need to be passed on to the user are eliminated. For the cost of one more async call, intended to manage the lifetime of the incoming TCP connection and the associated client class instance, I can now do the things I'd like to do, e.g. close the listener after N connections are received because I want to limit the number of active connections (say, to 1), yet established connections will continue to live on. My implementation also avoids having to create a state machine that keeps the listener creation VI active and looping regardless of the state of the listener ref and that communicates with the rest of the server instance via some sort of event signalling. Instead, I can just close the listener ref and gracefully handle the Wait on Listener error to let that second async VI close. No muss, no fuss. State is much, much simpler as well, and I don't need additional variables to track the internal state of the server; I only need to assess whether the async call to create the listener succeeded, and that's handled internally to my server class.
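
The "accept N connections, then stop listening while established connections live on" behaviour is easy to picture outside of LabVIEW as well; a hypothetical Go sketch (reusing the handle function from the earlier sketch, not the poster's actual implementation) might look like this:

```go
// Hypothetical sketch: accept a fixed number of connections, then stop
// listening; handlers for already-accepted connections keep running.
func serveLimited(ln net.Listener, maxConns int) {
	for i := 0; i < maxConns; i++ {
		conn, err := ln.Accept()
		if err != nil {
			return // listener was closed early; just leave the accept loop
		}
		go handle(conn) // same per-connection handler as before
	}
	ln.Close() // stop accepting; existing connections are unaffected
}
```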

 

I was specifically trying to eliminate the circular dependency between the server and client instances, when in fact most of the proposed solutions were working against this instead of focusing on the lifetime of the reference, which was my question. Eliminating the dependencies was difficult since LabVIEW wasn't allowing me to easily control the lifetime of the TCP connection. I could put more effort into making the client class instances fail more gracefully, but then nearly EVERY client operation would need to monitor the state of the TCP connection and of the class instance's DVR more closely. Let's face it, LabVIEW doesn't exactly expose much functionality for TCP, and determining connection state feels more like a violent interrogation of bashing its reference's skull against the block diagram until it gives up the intel you know it should have. A dystopian process of constantly putting the client class instance through polygraph testing was extra overhead that I was also trying to avoid. Also, these client instances would get shared throughout a handful of dependent components, which would likely become more complex in order to handle those assumptions.

 

The internal implementation may be more complicated than the typical bit of LabVIEW [TCP] code, but the external API is small and simple, which is somewhat uncharacteristic for LabVIEW. Conveniently, this implementation also closely matches the .NET networking API, so hooray for being able to use that architecture across platforms instead of directly using .NET for the architecture I was pursuing.

 

On the surface, WebSockets function mostly like a standard TCP stream, with some initial handshaking over HTTP and an additional protocol layer / framing format, none of which typically concerns external code. Outside of the server, which establishes and handshakes the connections, external code will ideally have zero knowledge of the transport of the data it's streaming.
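
For reference, the HTTP part of that handshake is small; a hypothetical Go sketch of the server side (per RFC 6455, not taken from my implementation) is just a header exchange, after which the socket carries framed data like any other TCP stream:

```go
// Hypothetical sketch of the server side of the WebSocket opening handshake
// (RFC 6455). After the 101 response, the connection is just a framed TCP stream.
package main

import (
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

const wsGUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

// acceptKey derives the Sec-WebSocket-Accept value from the client's
// Sec-WebSocket-Key header, as defined by RFC 6455.
func acceptKey(clientKey string) string {
	h := sha1.Sum([]byte(clientKey + wsGUID))
	return base64.StdEncoding.EncodeToString(h[:])
}

func main() {
	// The server's reply to the client's Upgrade request; everything after
	// this is the WebSocket framing layer that external code never sees.
	fmt.Printf("HTTP/1.1 101 Switching Protocols\r\n"+
		"Upgrade: websocket\r\n"+
		"Connection: Upgrade\r\n"+
		"Sec-WebSocket-Accept: %s\r\n\r\n",
		acceptKey("dGhlIHNhbXBsZSBub25jZQ==")) // sample key from RFC 6455
}
```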

Message 16 of 24
(1,660 Views)

@DerrickB wrote:

@drjdpowell wrote:

A "leak" is when temporarily-needed resources are not released when no longer needed.  LabVIEW automatically releasing references acts to try and prevent leaks.

 

Also, the shutdown of the Listener process only closes the TCP Connections, not any other associated references.


I'm using the term leak because that's what the trace toolkit tells me I've done; I know they aren't leaking but they aren't being cleaned up by me. Is there a better term for "I tried making sure the references were cleaned up but thanks to LabVIEW's implemented behavior I wasn't given the chance to because LabVIEW ripped the rug out from under my feet"? Perhaps "Reference Theft"?


LabVIEW does give you a chance to clean up references, as you've found out yourself when implementing a proper design.

 

It's not "Reference Theft", it's simply a design choice that didn't suit your needs at a certain time. You'll probably will or have learned to live with it, as we all did.

 

The times I've run into this, I wasn't pleased either. But all strategies will have upsides and downsides. You found yourself on a downside of the strategy that LabVIEW uses very consistently.

 

Often this is in fact a very useful feature...

 


@DerrickB wrote:

As I've stated though, thanks mainly to tyk indirectly reminding me that I can spawn a new […] that's handled internal to my server class.


So, eventually, good design was the solution to what was just an inconvenience?

 

Again, I get the frustration. I wish LabVIEW had some more "persistent" reference options, as everything is killed when the creating hierarchy stops running. Everything except data. So despite all the fancy new things, I'm often forced to use old-fashioned shift registers (or feedback nodes), just so I can test subVIs after the main VI has stopped...

 

I don't always like it, but I'll manage...

Message 17 of 24
(1,644 Views)

Discussing code that you cannot show is difficult. I get the impression (possibly wrong) that you are overcomplicating things, and that I would find your "simpler" solution and API quite complicated.

 

One thing, though: a couple of times you have mentioned communication between processes ("some sort of event signalling") as being burdensome. If you work on an API that makes inter-process communication easy, you might find that adding network communication to it is also simple.

 

 

Message 18 of 24
(1,629 Views)

I wasn't trying to discuss code; I was trying to discuss working around LabVIEW's automatic reference cleanup. Additional questions were asked, and I tried to provide some explanation.

 

To continue that extra discussion, because it's more fun than just staring at wires and nodes all day, and writing this out gets me to reflect on some of these aspects...

 

I myself stated that the internal implementation is complicated. However, the implementation is simpler and more encapsulated than the suggested solution of creating a reference broker, which I previously had and was trying to move away from, as it was also adding unintended dependencies between components that I wanted to avoid. The external API is:

- Client: Connect, Add/Remove Status Callback, Get Stream, Is Connected, Close, Cleanup
- Stream (when used with the client/server): Read Data, Write Data, Close, Cleanup
- Server: Ctor, Open, Add/Remove Status Callback, Close, Cleanup

Plus, of course, some properties to manage things like buffering modes, timeouts, and other parameters to the networking functions, but nothing new. Not too bad, and the other value add is that anyone coming from another environment (especially any .NET language) will be familiar with the structure and functionality of this. (Not a targeted goal, but I was definitely stealing designs from .NET, so it's a natural side effect.) .NET simplifies networking by making the external API more elegant and internalizing some of the interaction with the underlying Windows socket API. That's the same goal here, along with interoperability with other components that will get used in tandem with the networking being discussed.
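
A rough, hypothetical sketch of that API surface in textual form (the method names come from the list above; the Go types are only stand-ins for the LVOOP classes, and a constructor function would stand in for "Ctor"):

```go
// Hypothetical sketch of the API surface described above; only the method
// names come from the post, everything else is assumed for illustration.
package netapi

import "time"

// StatusCallback is invoked when a connection's status changes.
type StatusCallback func(connected bool)

// Stream is the common read/write abstraction handed to consumer code.
type Stream interface {
	ReadData(maxBytes int, timeout time.Duration) ([]byte, error)
	WriteData(data []byte, timeout time.Duration) error
	Close() error
	Cleanup() error
}

// Client wraps a single connection and exposes it as a Stream.
type Client interface {
	Connect(address string) error
	AddStatusCallback(cb StatusCallback) (id int, err error)
	RemoveStatusCallback(id int) error
	GetStream() (Stream, error)
	IsConnected() bool
	Close() error
	Cleanup() error
}

// Server listens for incoming connections and reports status via callbacks.
type Server interface {
	Open(port int) error
	AddStatusCallback(cb StatusCallback) (id int, err error)
	RemoveStatusCallback(id int) error
	Close() error
	Cleanup() error
}
```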

 

It does have the added layer of the notion of a Stream, which is driving the entire design (providing multiple data I/O mechanisms that inherit from a common Stream base), so that most higher-level code doesn't have to differentiate between files, strings, network connections, and so on; joys found in most other environments. That's my goal. The Stream object is the number one factor in this whole design, and all of this internal complication exists so that, in the long run, I can pass that stream off to other code whose sole concern is the generation or consumption of data. No concerns with formatting in the case of serializable objects, no concern with file management or error handling for issues that the data component shouldn't be concerned with: it either successfully reads/writes data or it doesn't. The component that generated the stream and knows about its underlying mechanics can fret over the cause of the issues. Code internal to the TCPStream class can recognize that any error besides a timeout means it can clean up the connection, and nothing that uses the Stream has to know or care about that, other than that reading/writing data failed.
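
This is essentially the role io.Reader/io.Writer play in Go; a hypothetical TCPStream along those lines only distinguishes "timeout, the caller may retry" from "anything else, tear the connection down internally":

```go
// Hypothetical sketch: only the stream itself cares *why* an operation
// failed; callers just see that reading or writing succeeded or didn't.
package netapi

import (
	"net"
	"time"
)

type TCPStream struct {
	conn    net.Conn
	timeout time.Duration
}

// Read returns data or an error. On any error other than a timeout the
// stream cleans up the connection itself, so callers never manage it.
func (s *TCPStream) Read(p []byte) (int, error) {
	s.conn.SetReadDeadline(time.Now().Add(s.timeout))
	n, err := s.conn.Read(p)
	if err != nil {
		if ne, ok := err.(net.Error); ok && ne.Timeout() {
			return n, err // timeout: the caller may simply try again
		}
		s.conn.Close() // anything else: the stream tears itself down
	}
	return n, err
}
```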

 

As far as the IPC goes, it's not that it's necessarily burdensome, but I don't want an additional layer when there's already something that can suffice. I can close the TCP Listener ref, and the Wait on Listener function will immediately return an error. No state machine. No timeouts or polling. No additional IPC / handshaking. Adding additional layers of IPC when the close function can just close the listener reference is the more burdensome approach. Adding IPC for the sake of handshaking between server and client code is far more burdensome and adds coupling that I know I can avoid, albeit at the cost of dizzying most people interested in looking under the hood.
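
That "just close the listener, no polling" shutdown has a direct analogue outside LabVIEW too; in Go, for instance, closing the listener makes a blocked Accept return an error immediately (hypothetical sketch):

```go
// Hypothetical sketch: a blocked Accept unblocks with an error the moment
// another part of the program closes the listener; no polling, no signalling.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	ln, err := net.Listen("tcp", ":5000")
	if err != nil {
		fmt.Println(err)
		return
	}

	go func() {
		time.Sleep(2 * time.Second)
		ln.Close() // the "close the TCP Listener ref" equivalent
	}()

	_, err = ln.Accept()              // blocks until a connection or the close
	fmt.Println("accept ended:", err) // e.g. "use of closed network connection"
}
```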

Message 19 of 24
(1,620 Views)

The broker was suggested as (and is) an answer to "Circumvent reference cleanup when VI ends".

 

AFAIC, avoiding the need for this is better. But that's actually OT. ;-)

 


@DerrickB wrote:

 

To continue that extra discussion, because it's more fun than just staring at wires and nodes all day and writing this out gets me to reflect on some of the aspects...


That says it all. AFAIK, we're just debating technical things here. It could feel like attacks (back and forth), but that's just tone of voice getting lost in translation to text.

Message 20 of 24
(1,600 Views)