10-02-2019 08:02 AM
cRIO-9056, LVRT 2018, OPC UA Server with ~50 tags. My application requires dynamic reconfiguration of OPC tags, so the OPC UA server needs to be closed and reopened periodically. I am finding that I incur an additional memory loss each time the OPC UA server is cycled. I have meticulously closed all OPC UA folders and references prior to stopping the server, but it does not make any difference. Any thoughts would be appreciated, thanks.
10-02-2019 08:07 AM
I’m sorry to hear that. Do you think you could estimate how many bytes are lost per open/close cycle?
10-02-2019 09:14 AM
Also, how are you evaluating memory usage?
10-02-2019 09:16 AM - edited 10-02-2019 09:17 AM
In addition to the question already asked, how are you measuring the memory usage? The property nodes from the NI System Configuration API currently don't work properly on Linux Real-Time so just double-checking that you're looking at accurate numbers.
Memory Reporting Issue with NI Linux Real Time OS Target
EDIT: Drat, rtollert beat me to it.
10-06-2019 05:02 PM
Yes, I read the articles on cRIO Linux memory usage and have changed my monitoring code to use the recommended Linux shell command. In spite of this, I am seeing a 3-4% memory loss every time I close and reopen the OPC UA server.
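For reference, one way to read system memory from the shell on a Linux target is to query the kernel's own accounting in /proc/meminfo. (The exact command the linked article recommends may differ; this is just a sketch. MemAvailable is generally more meaningful than MemFree because it accounts for reclaimable page cache.)

```shell
# Print available system memory in kB, as the kernel estimates it.
# MemAvailable accounts for reclaimable cache, unlike MemFree alone.
awk '/^MemAvailable/ {print $2}' /proc/meminfo
```

Logging this value before and after each server open/close cycle gives a consistent baseline for estimating the per-cycle loss.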
10-06-2019 09:08 PM
I meant .3-.4%. Here is the test case I am using...
10-07-2019 09:43 AM
Since it looks like you've narrowed it down, I'd recommend opening a support ticket so that it can be brought to the attention of the OPC UA team. I'm admittedly not clear on proper usage of the OPC UA API.
Out of curiosity, what's the use case for regularly opening and closing servers as opposed to leaving one running?
10-07-2019 10:00 AM
I tried just stopping the server (which is required to modify folders and tags) and closing all the refnums, but I still had the memory leak. I thought maybe closing the server entirely would help, but it does not. BTW, I have created a ticket for this issue but thought in the meantime someone from this specific forum might have seen this before.
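One way to tell whether the leak is in the server process itself (rather than, say, kernel or cache accounting) is to watch the resident set size of the LabVIEW runtime across cycles. A hedged sketch, assuming the runtime process on NI Linux RT is named lvrt (find its PID with `pgrep -o lvrt`):

```shell
# Report the resident set size (VmRSS, in kB) of a process by PID.
# If VmRSS grows by a fixed amount per server open/close cycle,
# the leak is inside that process's address space.
rss_kb() {
    awk '/^VmRSS/ {print $2}' "/proc/$1/status"
}

# Example against the current shell's own process:
rss_kb $$
```

If VmRSS stays flat while system-wide free memory shrinks, the loss is likely elsewhere (e.g., kernel allocations), which would be useful detail for the support ticket.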