05-28-2009 07:42 PM
Well, I know this is more of a general TCP question than a LabVIEW question, but I will ask it anyway, since there does not seem to be any good documentation on the web about TCP under Windows, particularly whatever Windows layer LabVIEW calls to do TCP.
Can someone out there explain what happens behind the scenes when doing a TCP Read/Write with LabVIEW? Let's assume "Immediate" or "Standard" mode is used and no LabVIEW buffer is set up for the data. What actually triggers the data to travel over the network: the Read or the Write? Obviously there is some fixed-size buffer allocated somewhere at a low level; is it used by the Read side, the Write side, or both? What is the default size of the TCP buffer? 4 KB? When that buffer fills, I'm assuming that's when the "Write" timeout comes into play?
So in summary, I'm looking for a basic "tutorial" about how TCP data transfer is handled below the LabVIEW layer.
05-28-2009 10:49 PM
Just pointing you to some links (these are really good):
http://www.freeprogrammingresources.com/tcp.html
http://www.redbooks.ibm.com/redbooks/pdfs/gg243376.pdf
05-29-2009 12:09 PM
On Windows, LabVIEW uses the Winsock 2 API (WS2_32.dll):
http://msdn.microsoft.com/en-us/library/ms740673(VS.85).aspx
It would seem that on Windows XP the default size of the TCP buffers is 17,520 bytes.
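To make the Write/Read mechanics concrete, here is a minimal C sketch against that same Winsock 2 API. This is not LabVIEW's actual implementation, just an illustration of the kind of WS2_32 calls a TCP Write/Read boils down to; the host, port, and message are made up and error handling is stripped.

/* Illustrative Winsock 2 client: what "Write" and "Read" do at the socket level. */
#include <winsock2.h>
#include <ws2tcpip.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(6340);                 /* example port, made up   */
    addr.sin_addr.s_addr = inet_addr("127.0.0.1"); /* example host, made up   */
    connect(s, (struct sockaddr *)&addr, sizeof(addr));

    /* "TCP Write": send() only copies the bytes into the OS send buffer and
     * returns; the TCP stack transmits them on its own schedule, regardless
     * of whether the far end has called Read yet. If the send buffer is full
     * (e.g. the receiver's window has closed), send() blocks -- which is when
     * LabVIEW's Write timeout would come into play. */
    const char msg[] = "hello";
    send(s, msg, sizeof(msg) - 1, 0);

    /* "TCP Read": recv() just copies whatever has already arrived into the OS
     * receive buffer; it blocks only if that buffer is empty. Reading does not
     * trigger the transmission -- the Write side already did. */
    char buf[512];
    int  n = recv(s, buf, sizeof(buf), 0);
    (void)n;

    closesocket(s);
    WSACleanup();
    return 0;
}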
By default, LabVIEW leaves the send and receive buffer sizes alone (the OS defaults). If you want to fine-tune them, you can use the ini tokens SocketSendBufferSize and SocketRecvBufferSize (a sketch of what that looks like follows the disclaimer below). The values of these tokens are applied to all sockets that LabVIEW uses.
<Disclaimer>
The OS defaults are almost always the best choice. It is really easy to destroy LabVIEW's network performance by fiddling with these values.
</Disclaimer>
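If you do decide to experiment with those tokens, here is a rough sketch of what the entries would look like in labview.ini. The 65536-byte values are purely illustrative, not a recommendation:

[LabVIEW]
SocketSendBufferSize=65536
SocketRecvBufferSize=65536

Under the hood these presumably just end up as setsockopt() calls with SO_SNDBUF / SO_RCVBUF on each socket LabVIEW opens, which is why the same caution about overriding the OS defaults applies.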