Nagle Algorithm Affecting TCP Listener


I have two LabVIEW applications communicating with each other via TCP to stream data from a monitor to a receiver. Ideally, I would like my monitor, which is generating the data, to listen for a TCP connection and then send data via that connection when my receiver connects to it. When I structure my code in this manner, I find I get peak latencies of ~500ms (!!).

 

I came across the KnowledgeBase article Do LabVIEW TCP Functions Use the Nagle Algorithm?, and initially found no speed-up.

 

When I restructured my code so that the receiver acted as the listener and my monitor connected to the receiver's IP and port, the TCP No Delay function worked as advertised.

 

Does anyone have any experience with turning off the Nagle algorithm on a listener, e.g. like this?

 

[Image: Listener Snippet.png]

I would like my monitoring application to act like a passive monitor that other applications can connect to.
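Since LabVIEW snippets don't paste as text, here is the pattern I'm attempting sketched in Python; the port number, payload, and message count are placeholders, not anything from my project:

import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("", 6340))       # any interface, placeholder port
listener.listen(1)

conn, peer = listener.accept()  # blocks until the receiver connects
# Nagle has to be disabled on the accepted connection, not on the
# listening socket -- the listener itself never carries data.
conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

for i in range(10):
    conn.sendall(f"sample {i}\n".encode())  # small writes, sent immediately
conn.close()
listener.close()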

 

Message 1 of 4

Could you clarify what you're asking? In your case, you only care about disabling the Nagle algorithm on the side that sends the data (which, if I've understood correctly, is the same side that listens for, rather than initiates, the connection). Does the code in the image you posted work? If not, is there an error or other indication of the reason for failure?

Message 2 of 4

Sure Nathand. I would normally post code for this, but the small piece of code I wrote to illustrate the two implementations runs so fast in both cases that it doesn't reproduce the problem, and I would like to avoid posting the entire project...

 

There are two implementations I have tried.

 

Implementation #1 (this is similar to the code that is here)

 

The program receiving the data starts by creating a TCP Listener and then waits on that Listener until a TCP connection is made to it. The program sending the data uses TCP Open Connection, turns off the Nagle algorithm, and then starts streaming the data.

 

Pros: This implementation correctly turns off the Nagle algorithm.
Cons: The program sending the data must know the receiver's IP in order to connect.
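In text form, the sender side of this implementation corresponds roughly to the following Python sketch (the address and port are placeholders):

import socket

sender = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sender.connect(("192.168.1.50", 6340))  # must know the receiver's address
sender.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # Nagle off

for i in range(10):
    sender.sendall(f"sample {i}\n".encode())
sender.close()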

 

Implementation #2

 

The program sending the data starts by creating a TCP Listener and then waits on that Listener until a TCP connection is made to it. When that connection is made, I try to turn off the Nagle algorithm with the same code that is used here and shown in the image in the original post. The program receiving the data uses TCP Open Connection, and once the connection is established the sender starts streaming data to the receiver.

 

Pros: The sender doesn't require knowledge of the receiver's IP.
Cons: The code for turning off the Nagle algorithm doesn't seem to be working, as I still see latencies as high as 500ms.
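One way to see the latency is to timestamp arrivals on the receiving side; with Nagle active, the small writes tend to arrive coalesced in bursts instead of as a steady stream. A rough Python sketch of such a check (address, port, and buffer size are placeholders):

import socket
import time

rx = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
rx.connect(("192.168.1.10", 6340))  # the listening sender, placeholder address

last = time.monotonic()
while True:
    chunk = rx.recv(4096)
    if not chunk:
        break
    now = time.monotonic()
    print(f"{(now - last) * 1000:7.1f} ms  {len(chunk)} bytes")
    last = now
rx.close()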

Message 3 of 4
Solution
Accepted by Ben

I found the issue once I started to make a test project to illustrate the problem.

 

My receiver application was set up as a queued state machine, and there was a bug where occasionally it would not be waiting on the TCP Read. This looked like a bottleneck on the sending side. When I changed between the two implementations I removed the bug without realizing it, which made it look like the issue was with how I was disabling the Nagle algorithm.

 

It all appears to work fine now!
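For anyone who runs into the same symptom: the fix boils down to making sure something is always waiting on the TCP Read. A rough sketch of that pattern in Python, with a dedicated reader feeding a queue (mirroring a queued state machine); the names and address are placeholders:

import queue
import socket
import threading

data_q = queue.Queue()

def reader(conn):
    # This thread does nothing but block on the socket, so the read
    # is always serviced no matter what the state machine is doing.
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        data_q.put(chunk)

conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
conn.connect(("192.168.1.10", 6340))  # placeholder address
threading.Thread(target=reader, args=(conn,), daemon=True).start()

while True:
    chunk = data_q.get()  # the state machine consumes at its own pace
    # ... handle chunk here ...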

Message 4 of 4