
Slow TCP duplex handling on remote connections

Not to sound arrogant here, but reading the posts can usually be a good thing?

 

How come I get good speed when the TCP read and write are in two separate loops in the same application? What magic happens within the context of one loop? Is Nagle's algorithm suddenly disabled in the context of two loops, and automagically present in one loop?

 

If Nagle's algorithm really is implemented in the NI TCP code, then where is the TCP Flush I can use to trigger an immediate send instead of waiting for more bytes to pile up in the send buffer?

 

My zip file is stripped down to a bare minimum, as I don't want forum members to have to plow through my code to debug it.

Try to run it on two separate computers, remember to change the IP and port. 😉

 

The InputStream encapsulates a TCP Read and the OutputStream a TCP Send. The Java IOStreams package is pure OOP elegance, and I want that in my LV code as well. But I might be wrong? 😉

 

http://java.sun.com/javase/6/docs/api/java/io/OutputStream.html
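Something like this minimal Java sketch is the pattern I mean (the host, port, and message are placeholders of mine, not from the real code):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Minimal use of the stream pattern: the socket's OutputStream wraps the
// TCP send and its InputStream wraps the TCP receive.
public class StreamClient {
    public static void main(String[] args) throws Exception {
        try (Socket sock = new Socket("192.168.1.10", 6340)) {  // placeholder address
            OutputStream out = sock.getOutputStream();
            InputStream in = sock.getInputStream();

            out.write("ping\n".getBytes("US-ASCII"));
            out.flush();  // flushes Java-side buffering; it does not bypass Nagle in the OS

            byte[] buf = new byte[64];
            int n = in.read(buf);  // blocks until the peer's reply arrives
            if (n > 0) {
                System.out.println(new String(buf, 0, n, "US-ASCII"));
            }
        }
    }
}
```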

 

Roger

 

Message 11 of 63

Nagle's algorithm is implemented at the operating-system level, as indicated by the link to the Microsoft documentation.  This is not an NI issue.

 

The reason the two-loop method works is that you're no longer forcing the write operation to wait on the read operation, which allows the Nagle algorithm to do its job.  In your code, each time through the loop you do one read and one write.  The operating system says "I don't have a full packet's worth of data to send; I'll wait 500ms to see if any more data needs to be sent."  That means the other end has to wait 500ms before its read operation completes.  Since the other end can't send new data until the read operation completes, it can only send data every 500ms.
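In code terms, the pattern being described is a lock-step exchange like the following Java sketch (the address and message framing are invented here, purely to illustrate): each iteration does one small write and then blocks on a read, so a send delay on either side stalls both sides.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// The lock-step pattern: one small write, then a blocking read, every
// iteration.  Each side's next send waits on the other side's (delayed)
// send, so the whole exchange crawls.
public class LockStepClient {
    public static void main(String[] args) throws Exception {
        try (Socket sock = new Socket("192.168.1.10", 6340)) {  // placeholder address
            OutputStream out = sock.getOutputStream();
            InputStream in = sock.getInputStream();
            byte[] buf = new byte[64];
            while (true) {
                out.write("req\n".getBytes("US-ASCII"));  // small write, held back by the OS
                int n = in.read(buf);                     // blocks until the peer's reply arrives
                if (n < 0) break;                         // peer closed the connection
            }
        }
    }
}
```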

 

When you separate the read and write into two loops, the data accumulates as fast as possible until the operating system has a full packet, at which point it sends all that data at once.  It doesn't have to wait for the read operation to complete.
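A rough Java analogue of that two-loop structure, for comparison (the address and framing are again invented): a writer thread drains a queue, so sends never wait on reads, just like two parallel loops sharing a queue in LabVIEW.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Two-loop pattern: the writer drains a queue on its own thread, so sends
// never block waiting for a read to finish on this end.
public class TwoLoopClient {
    public static void main(String[] args) throws Exception {
        Socket sock = new Socket("192.168.1.10", 6340);  // placeholder address
        BlockingQueue<byte[]> sendQ = new LinkedBlockingQueue<>();

        Thread writer = new Thread(() -> {
            try {
                OutputStream out = sock.getOutputStream();
                while (true) {
                    out.write(sendQ.take());  // outgoing data goes out as fast as it is produced
                }
            } catch (Exception e) {
                // socket closed or thread interrupted; writer loop ends
            }
        });
        writer.start();

        try (InputStream in = sock.getInputStream()) {
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) > 0) {  // reader loop runs independently of the writer
                sendQ.put(("got " + n + " bytes\n").getBytes("US-ASCII"));
            }
        }
    }
}
```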

Message 12 of 63

If what you say is true, then it's impossible to implement the following data-dependent communication sequence in a way that is both fast and clean: server TX -> client RX -> client TX -> server RX, over two different TCP connections?

 

Let me clarify the "bug": if there is a dependency on the TCP Write, the LV implementation should force a flush and/or enable TCP_NODELAY before continuing execution. Maybe there could even be a TCP Flush method, and/or a TCP_NODELAY flag to set with each TCP Send if desired? Something is definitely missing in the LV TCP code.
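What I am asking for already exists at the socket level. In Java, for example, it is a single call (address and payload are placeholders):

```java
import java.io.OutputStream;
import java.net.Socket;

// setTcpNoDelay(true) sets the TCP_NODELAY socket option in the OS, so each
// write() is handed to the network immediately instead of being coalesced
// by Nagle's algorithm.
public class NoDelayDemo {
    public static void main(String[] args) throws Exception {
        try (Socket sock = new Socket("192.168.1.10", 6340)) {  // placeholder address
            sock.setTcpNoDelay(true);  // disable Nagle for this connection
            OutputStream out = sock.getOutputStream();
            out.write("small message\n".getBytes("US-ASCII"));  // sent without the coalescing delay
        }
    }
}
```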

 

The client-server code I am working on in Java, Python, and C# screams in comparison with the LV TCP code, and it is clean too!

 

Roger

 

Message 13 of 63

There is nothing missing in the LabVIEW TCP code - it is a direct wrapper around calls to the operating system.   I don't suppose you can share a simple Java example that is implemented the same way but does not demonstrate the same behavior?

 

There's nothing wrong with two loops - in LabVIEW, that IS a clean implementation.  LabVIEW provides you tools, such as queues, to pass data between loops efficiently.  If you're going to insist that your LabVIEW code look just like your Java/Python/C# code in order to meet some preconceived idea of "clean", then you're going to miss out on LabVIEW's benefits.  That said, I'm certain that if you duplicate your LabVIEW code in one of those other languages, you'll see the same performance, because fundamentally you will have the same problem: the two sides are both sending each other small packets, then waiting for those packets to arrive before sending the next packet, and the operating system is inserting a delay to keep those small packets from going out immediately.  The problem goes away as soon as you separate the sending and receiving on one end (that is, you can fix the problem by changing only one end of the communication).

Message 14 of 63

Thank you for schooling me on how to create good, clean and maintainable LV code. I appreciate it. Seriously, I mean it!

 

In Java, C, and C# I can set TCP_NODELAY, and I can force flushes as well. I am not restricted from creating real OOP programs; no silly hacks and workarounds are needed to get decent RT performance.

 

Is it possible to have an NI engineer take a look at my issues with LV TCP, instead of forum trolls dismissing my claims?

 

Roger

 

Message 15 of 63

You can set TCP_NODELAY in your LabVIEW code.  A library to do this is attached to the article Do LabVIEW TCP Functions Use the Nagle Algorithm?

Message 16 of 63

Great! Thanks!

 

I'll look into it tonight.

 

Roger

 

Message 17 of 63

Hi Roger,

 

This fits the description of the Nagle algorithm...

 

That algorithm was put in place years ago, when Ethernet ran at a few megabits per second, and it served a very important purpose. Without it, small packets could flood a network, because the overhead required to move a packet was fixed; by packing more than one message into each packet, the network could be used more efficiently.
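The gist of the rule, sketched as code (this is only the idea from RFC 896, not what any real TCP stack literally does):

```java
// Sketch of Nagle's decision rule (RFC 896); real TCP stacks are
// considerably more involved than this.
public class NagleRule {
    // Send now only if a full segment is ready, or if nothing sent earlier
    // is still waiting for an ACK; otherwise keep buffering small pieces.
    static boolean canSendNow(int pendingBytes, int maxSegmentSize, boolean unackedInFlight) {
        return pendingBytes >= maxSegmentSize || !unackedInFlight;
    }
}
```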

 

If you chase down the links where we discussed this topic, you should eventually find threads where we found that by sending packets of "optimum size" (ones that satisfy Nagle by themselves), Nagle can be bypassed; it can also be shut off in the OS.
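A sketch of that "observe the restrictions" approach (Java here only for illustration; the message layout is invented): build the whole message up front and hand it to the socket in one write, instead of dribbling out many small writes that each get held back.

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.net.Socket;

// Assemble the complete message in memory, then send it with a single
// write, so at most one under-sized segment is ever waiting on Nagle.
public class BatchedSend {
    public static void main(String[] args) throws Exception {
        try (Socket sock = new Socket("192.168.1.10", 6340)) {  // placeholder address
            ByteArrayOutputStream msg = new ByteArrayOutputStream();
            msg.write("HDR".getBytes("US-ASCII"));  // invented header
            for (int i = 0; i < 100; i++) {
                msg.write(i);                        // payload built up in memory
            }
            OutputStream out = sock.getOutputStream();
            out.write(msg.toByteArray());            // one write instead of many tiny ones
        }
    }
}
```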

 

For my part, I will step out of this thread and let an NI applications engineer (AE) reply.

 

If you would like to fix this problem at the OS level before then, check out Nagle.

 

Always trying to help,

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation, LabVIEW Champion, Knight of NI
Message 18 of 63

Sorry for being a pain in the.... I take my LV G coding quite seriously and I want LV to be as great as other OOP languages. 🙂

 

I think I understand the motivation for Nagle's algorithm, but there must be a way of disabling such features when RT performance is desired?

 

Ben, your post came in just before I got a good response with a proposed TCP_NODELAY fix - maybe this sorts out the latency issues?

 

Roger

 

Message 19 of 63

RogerI wrote:

Sorry for being a pain in the.... I take my LV G coding quite seriously and I want LV to be as great as other OOP languages. 🙂

 

I think I understand the motivation for Nagle's algorithm, but there must be a way of disabling such features when RT performance is desired?

 

Ben, your post came in just before I got a good response with a proposed TCP_NODELAY fix - maybe this sorts out the latency issues?

 

Roger

 


Not to be picky, but you will never truly achieve RT performance if you are targeting a Windows PC. Windows is not deterministic and can therefore never be considered real-time. Out of curiosity, is there a reason you don't want to use a two-loop solution? Parallel processing is one of LabVIEW's great strengths.



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 20 of 63