LabVIEW


Update: accessing multiple SV data items from DataSocket

Back in March I wrote in asking about accessing multiple SV items from LabVIEW on Linux via DataSocket. I need about a dozen values from an SV server. I'm trying to do this on Linux so that we can serve out the data without loading down the XP host, which is busy trying to talk to an old PLC, and so we can use Apache on Linux as the server, whose behavior we can control better than the NI server built into LabVIEW. We left it that I'd try looping over a subset of the variables I need and would report back with problems. Well, it turns out that getting 10 ints takes 15 seconds; I had been hoping for something like 200 ms. Maybe my trivial test VI is just wrong, wrong, wrong. Attached is my VI; any suggestions for speeding this up?

 

thanks

Alex

 

Message 1 of 7

I think the reason it's taking so long is that you're doing your DataSocket open and close inside the loop. Each time you open a connection there is a TCP handshaking process that adds a lot of overhead to each iteration. Try doing the open and close outside the loop instead, and perform only the read inside the loop. See if that gets you a better loop rate.
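If it helps to see the pattern outside of LabVIEW, here's a rough Python analogy. DataSocket itself is a LabVIEW API, so this is only an illustration of why a per-iteration open is expensive; the host, port, and the little echo server are made up for the sketch, and on localhost the handshake cost is much smaller than it would be over a real network path.

# Rough analogy only: why open/read/close placement matters.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007   # hypothetical local test server

def echo_server():
    """Tiny loopback server standing in for the shared-variable host."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            with conn:
                while data := conn.recv(1024):
                    conn.sendall(data)

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start

N = 10  # roughly the dozen items in the original post

# Slow pattern: open and close inside the loop (TCP handshake every pass).
t0 = time.perf_counter()
for _ in range(N):
    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(b"read\n")
        s.recv(1024)
print("open/close per read: %.4f s" % (time.perf_counter() - t0))

# Better pattern: open once, read many times, close once.
t0 = time.perf_counter()
with socket.create_connection((HOST, PORT)) as s:
    for _ in range(N):
        s.sendall(b"read\n")
        s.recv(1024)
print("open once, read many: %.4f s" % (time.perf_counter() - t0))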

Message 2 of 7

Sure, makes sense. I was juuust about to move the open to a previous sequence frame and got bogged down by:

 

1.) Put the opens in a loop, then write the handles returned to an array. Um, which loop structure loops over whatever's in the array? Do you really have to use the Array Size VI to initialize the loop counter, or is there a loop mechanism like "for each item in this list"? Got distracted by that one.

 

2.) During this fiddling I found out from the author of the SVs in question that the source of this data is an OPC server talking to a GE Fanuc PLC, trying to read 100+ variables over a serial connection as fast as it can, then writing out the shared variables and also writing the whole kit and caboodle, bundled into a cluster, to a 4-day circular buffer file. Right now that's about 250 ms per loop, and it grows with time, so probably a memory leak, which is ahem fixed by rebooting when it gets too bad. So he's worried about doing much more with this machine, and we both thought I'd better open and close each connection explicitly.

 

3.) I wonder if trying to do the connect is the trouble. With that in mind, is there something the OPC server could be pushing out to me? How about an OPC client running under Linux that can talk to the NI OPC server; know of any that work? I'm not an OPC person, but OpenOPC, a SourceForge Python kit, notes that the OPC protocol is not really all that well adhered to. This is a multicore box; maybe we could put the variable push on a separate processor?
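For what it's worth, from reading the OpenOPC docs its "gateway" mode looks roughly like the sketch below from the Linux side. I haven't tried it, and the host name, server ProgID, and tag names are only placeholders for whatever the real setup uses.

# Hedged sketch of OpenOPC gateway-mode usage, not verified against NI OPC.
import OpenOPC

# Gateway mode: the OpenOPC Gateway Service runs on the Windows box that
# hosts the OPC server, and this Linux client talks to it over the network.
opc = OpenOPC.open_client("xp-host.example.com")   # placeholder hostname
try:
    print(opc.servers())                            # list registered OPC servers
    opc.connect("National Instruments.NIOPCServers")  # assumed ProgID, check yours

    tags = ["Channel1.Device1.Tag%d" % i for i in range(1, 13)]  # placeholder tags
    # One read call for all ~12 items avoids a round trip per tag.
    for tag, value, quality, timestamp in opc.read(tags):
        print(tag, value, quality, timestamp)
finally:
    opc.close()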

 

I'll try the open-once-read-many approach, though, and see how much that helps. Point us to whatever else NI OPC could be doing, and we'll track that down too.

 

thanks for your attention

Alex

 

 

Message 3 of 7

Kyle -

From the last message:

1.) Never mind. Build arrays as in the simple examples. Got it.

 

Here's the attempt. DS items need an open for each read, I think. Once all the refnums have been returned, the first read takes ~0.4 ms (!) but subsequent reads time out. Tell me what's wrong.

 

Note that the DS reader examples do NOT do an open/close, and they fail! The notes claim that they work remotely, but they don't. Probably old examples that haven't been looked at in a while.

 

Still, the best option is probably to have that OPC server machine shoot me the whole data structure as it runs its logging routine.

 

Alex

 

Message 4 of 7

We need to get all of your reads to execute like that first one! Generally with DataSocket you only need to execute the open once, unless for some reason your open is changing parameters each time. You're not closing the connection inside the loop by any chance, are you? Additionally, if you're doing heavy communication with OPC servers, the LabVIEW Datalogging and Supervisory Control (DSC) module will likely be a good investment. If you post your most recent code, it might help determine where things are going wrong.

Message 5 of 7

Sorry, I didn't explain well enough how I know the times. Have a look at the VI: there is a control in the middle frame that lets the user choose the number of reads after the open and before the close. The timer checks delta-T when the read loop starts and again when the close starts. One pass is ~0.4 ms; two passes take ~1-6 seconds. The OPC server is really busy, as noted above. These times are a lot shorter than with an open/close cycle on each read, but those handles don't work on the second pass.

 

The OPC server machine does have a LabVIEW thread that gets the whole pile of SVs about every 250 ms, shoves the bunch into a cluster, and writes it to disk as a circular buffer roughly 3-4 days long, as noted above. We're going to open a regular TCP/IP socket pair and let that LV routine shove the cluster to me; we think that should reduce the latency.
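If it helps to picture it, the Linux end of that socket pair might look roughly like the sketch below in Python. This assumes the LabVIEW writer prepends a big-endian I32 byte count to each flattened cluster, which is only a guess on our part until we settle the framing; the port number is also a placeholder.

# Hedged sketch of the Linux-side receiver for the planned TCP push.
import socket
import struct

HOST, PORT = "0.0.0.0", 6341   # placeholder listen address and port

def recv_exact(conn, n):
    """Read exactly n bytes or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    with conn:
        while True:
            # Assumed framing: 4-byte big-endian length, then the payload.
            (length,) = struct.unpack(">i", recv_exact(conn, 4))
            payload = recv_exact(conn, length)
            # payload is the flattened cluster; unflattening it requires
            # knowing the cluster's type layout on the LabVIEW side.
            print("got %d bytes from %s" % (length, addr[0]))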

 

BUT STILL, if you have suggestions on the subsequent-read thing, let me know.

 

Alex

 

Message 6 of 7

I think that idea is definitely worth a try. Get back to me with the outcome of that test, and I'll see if I have any more suggestions.

Message 7 of 7