04-05-2012 05:04 PM
I'm pretty sure this is entirely up to the operating system. I can't find anything on MSDN about a socket option that would control whether to receive packets with invalid checksums. Some searching on the web turned up one blog post (http://blog.southparksystems.com/2008/01/udp-checksums-in-embedded-linux-and.html) saying that Windows XP drops UDP packets with empty checksums, but I can't find any other source confirming or denying that in a quick search.
By the way, about the routing... it's not really polling until it finds a destination; Windows maintains a routing table, using information about the network adapters, netmask values, and gateways, that tells it where to send a packet based on its destination address. I don't know much about the details of routing, but to get started, take a look at the "route" command at a Windows command-line.
04-05-2012 05:24 PM
Good link. Not good for us if it's true. I'll see if I can find a way to test this.
Funny, because everything I found claimed that it was acceptable to do that.
Thanks
04-09-2012 08:24 AM
OK, solved. It turned out to be a problem on our unit's side (the device under test), but I'll share the experience in case someone finds it useful.
First off: Windows WILL receive UDP packets whose checksum is 0x0000 (checksum disabled).
The real problem was that the packet length in the IPv4 header was wrong. Funnily enough, Wireshark does not flag this, and Windows silently drops the packet. If the UDP length field is wrong, Wireshark will flag it.
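To make the distinction concrete, here is a minimal Python sketch (not the poster's FPGA code, and the addresses are made up) of how the IPv4 Total Length field and header checksum are supposed to be computed. Total Length covers the IP header plus everything after it, which is the calculation the device in this thread got wrong:

```python
import struct

def ipv4_header(payload_len, src, dst, proto=17, ttl=64, ident=0):
    """Build a 20-byte IPv4 header with a correct Total Length field.

    Total Length counts the IP header AND everything after it (here a
    UDP header + payload), in bytes.
    """
    ihl = 5                          # header length in 32-bit words (no options)
    total_length = ihl * 4 + payload_len
    header = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | ihl,              # version=4, IHL=5
        0,                           # DSCP/ECN
        total_length,
        ident, 0,                    # identification, flags/fragment offset
        ttl, proto,                  # proto=17 is UDP
        0,                           # checksum placeholder
        bytes(map(int, src.split("."))),
        bytes(map(int, dst.split("."))),
    )
    # Standard IP header checksum: one's-complement sum of 16-bit words.
    s = sum(struct.unpack("!10H", header))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    checksum = ~s & 0xFFFF
    return header[:10] + struct.pack("!H", checksum) + header[12:]

# A UDP datagram with a 4-byte payload: 8-byte UDP header + 4 bytes.
hdr = ipv4_header(8 + 4, "192.168.1.10", "192.168.1.20")
print(len(hdr), struct.unpack("!H", hdr[2:4])[0])   # 20 32
```

If a stack computes Total Length over, say, only the payload, the value in bytes 2-3 no longer matches what actually arrived on the wire, and Windows discards the datagram without any error visible to the application.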
For people delving this deep into UDP packets, I used this freeware packet builder.
Thanks for the help. Kudos.
04-25-2012 08:07 PM
How did you change the packet length in the IPv4 header? Could you please shed some light on this?
Thanks
Austin
04-26-2012 02:43 AM - edited 04-26-2012 02:47 AM
I don't know a lot about Wireshark, but I assume it's hooking into the network stack at a much lower level and reading raw packets before the operating system has a chance to do anything with them. Your LabVIEW and Ruby programs are both relying on the operating system to pass packets to them, so if the operating system sees the packets as invalid and drops them, you'll never get them regardless of programming language.
Actually, Wireshark uses the PCAP driver, which hooks into the network driver directly, before the OS network stack sees any data. So Wireshark will report any data that comes through the network card driver (except some local traffic on some Windows versions, as that is routed within the network stack directly without going through an actual software loopback device).
Some LabVIEW-specific questions: if the UDP checksum (not the IPv4 or Ethernet checksum) is invalid, will LabVIEW still report the packet on a read?
And if the checksum is 0x0000, will LabVIEW follow the UDP protocol and ignore the checksum?
LabVIEW does absolutely no interpretation of its own here. It simply calls the Winsock API and passes whatever data it gets (or doesn't get) to your program.
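A hedged stdlib sketch of what a UDP Read ultimately boils down to (this is plain Python, not LabVIEW internals, and the loopback test traffic is invented for illustration): a `recvfrom()` call on a datagram socket. The OS hands over only the datagrams it chose to deliver; anything the stack dropped never reaches this call, which is why no programming language on top of Winsock can see the rejected packets.

```python
import socket

# Receiver: bind to any free loopback port, with a read timeout that
# plays the role of the LabVIEW UDP Read timeout input.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(2.0)

# Sender: a second socket firing one datagram at the receiver.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", rx.getsockname())

# The application only ever sees what the OS delivered.
data, addr = rx.recvfrom(65535)
print(data)        # b'hello'
tx.close()
rx.close()
```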
04-26-2012 02:49 AM - edited 04-26-2012 02:51 AM
@AustinCann wrote:
How did you change the packet length of IPV4 header? Could you please shed some light onto this
Thanks
Austin
It looks like they have their own device to test, which had a firmware bug that created invalid IP frames. So their (in-house) programmer probably fixed the device's firmware. If you are asking how to fix the IP frame from within LabVIEW: you don't. LabVIEW does not give you access to the IP, UDP, or TCP headers at all.
04-26-2012 07:02 AM
@rolfk wrote:
@AustinCann wrote:
How did you change the packet length of IPV4 header? Could you please shed some light onto this
Thanks
Austin
It looks like they have their own device they need to test, which had a firmware bug and created invalid IP frames. So their (inhouse) programmer probably fixed the firmware of the device. If you are asking about how to fix the IP frame from within LabVIEW: you don't. LabVIEW does not give you access to either the IP, UDP or TCP frames at all.
Correct. We had an FPGA building and sending the packets; it used the wrong algorithm for calculating the IPv4 length.
The only thing I found for sending a raw packet was the packet builder I linked above, which lets you tinker with nearly anything. Everything else sends through the Windows network stack, which builds the headers automatically.
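For reference, the zero-checksum case the thread settled is easy to express in stdlib Python (a sketch with made-up port numbers, not the packet-builder tool): RFC 768 allows an all-zero UDP checksum over IPv4, meaning "no checksum computed", and the UDP Length field must count the 8-byte header plus the payload.

```python
import struct

def udp_datagram(src_port, dst_port, payload):
    """UDP header with the checksum field set to 0x0000 ("no checksum").

    UDP Length covers the 8-byte header plus the payload -- a separate
    field from the IPv4 Total Length that caused the original problem.
    """
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, 0) + payload

dgram = udp_datagram(5000, 6000, b"ping")
print(len(dgram))   # 12
```

Note that actually putting hand-built headers like this on the wire still needs a raw-socket or pcap-based tool; modern Windows restricts what Winsock raw sockets may send (no TCP, no spoofed UDP source addresses), which is part of why standalone packet-builder utilities exist.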
So really, this thread is just a warning for people using embedded Ethernet with an unproven library: a wrong IPv4 length can lead to hard-to-find trouble. I'm sure many other things can cause Windows to silently drop a packet too.
10-18-2013 11:17 AM
I seem to be experiencing one of the "other things that can cause Windows to silently drop the packet".
I'm using UDP to send commands and responses between two devices. Sometimes (~1 out of 100) the UDP Read on the Windows device running LabVIEW 2012 times out. Wireshark on the same Windows machine sees the packet the other device transmitted. I've examined the Wireshark output and I don't see any difference between a "received" packet and a "dropped" packet that could explain the problem.
Any thoughts?
10-18-2013 01:46 PM
Windows gives each UDP socket a receive buffer with a default size.
If packets arrive at a high enough rate and you don't service the socket fast enough, incoming datagrams are silently discarded once that buffer fills.
You can configure Windows to use a larger default size for all UDP sockets, or you can use this technique I used long ago to increase the buffer size of a specific LabVIEW UDP reference.
http://forums.ni.com/t5/LabVIEW/Change-UDP-Socket-ReceiveBufferSize-under-Windows/td-p/483098
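At the socket level, the knob the linked thread adjusts corresponds to the `SO_RCVBUF` option. A minimal Python sketch (the 1 MiB figure is an arbitrary example, and the OS may round or cap the value you request):

```python
import socket

# Create a UDP socket and enlarge its receive buffer. If the reader
# falls behind, a bigger buffer absorbs longer bursts before datagrams
# start being discarded.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # request 1 MiB
actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(actual)   # what the OS actually granted (Linux, e.g., doubles it)
sock.close()
```

Reading back the value with `getsockopt` is worth doing, since the OS is free to grant something different from what you asked for.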