11-06-2014 11:51 AM
I have an application where I will receive many "packets" in order to produce a large "frame". Each packet is on the order of a few hundred to a few thousand samples (depending on the sample rate).
If I make fetches on the order of my anticipated packet size, then the processing on my part between fetches will likely be a bit easier (less data to keep around in array variables, and fewer reads/writes to them). The downside is that I'll have to make *many* fetches because the fetch size is small.
Or I could do the opposite: fetch a huge chunk less often, but deal with bigger data arrays and more code to "sort through" all the potential packets in each fetch (roughly the pattern sketched below).
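For concreteness, here's a rough Python sketch of that second pattern; `fetch(n)` and `process_packet()` are hypothetical stand-ins for the driver's Fetch call and my per-packet processing, and the sizes are made up:

```python
import numpy as np

PACKET_LEN = 1000          # assumed samples per application "packet"
PACKETS_PER_FETCH = 50     # packets pulled per driver call

def fetch(n):
    """Stand-in for the driver's Fetch call; returns n complex samples."""
    return np.zeros(n, dtype=np.complex64)  # placeholder data

def process_packet(pkt):
    """Stand-in for the application-specific per-packet work."""
    pass

samples = fetch(PACKET_LEN * PACKETS_PER_FETCH)   # one large fetch...
for i in range(PACKETS_PER_FETCH):                # ...sliced into packets
    process_packet(samples[i * PACKET_LEN:(i + 1) * PACKET_LEN])
```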
To this end...it would help to know if there's any significant overhead involved with LabVIEW going out to memory to grab a chunk of data. If the act of simply calling Fetch.vi, regardless of the fetch size, takes some known amount of time that might be longer than the processing of that data, then I probably want to opt for larger, less frequent fetches.
I don't know if stuff like this is benchmarked...or maybe memory calls like this are trivially fast and I have nothing to worry about? In either case, I have no clue! I'm imagining something like the rough cost model below.
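(To illustrate the question, a back-of-envelope model in Python; the per-call overhead `t_call` and per-sample cost `t_sample` are invented numbers that would have to be measured on a real system:)

```python
def total_time(n_samples, fetch_size, t_call=50e-6, t_sample=5e-9):
    """Total time = (number of Fetch calls) * fixed per-call overhead
    + (total samples) * per-sample cost. All costs are assumed."""
    n_calls = n_samples / fetch_size
    return n_calls * t_call + n_samples * t_sample

# 10 M samples in 1,000-sample vs. 100,000-sample fetches:
print(total_time(10e6, 1e3))   # many small fetches: overhead adds up
print(total_time(10e6, 1e5))   # few large fetches: overhead negligible
```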
Thoughts?
---
Brandon
11-07-2014 05:00 PM
Hey Brandon,
I would take a look at this article to see if it points you in the right direction.
http://zone.ni.com/reference/en-XX/help/373380D-01/usrphelp/data_streaming_performance_tips/
11-12-2014 01:19 PM
Thanks Rob. I was looking more at benchmarking software calls to memory, rather than the transfer of data between the USRP and host...but this is useful data to have.
One question...could you explain the difference between an MTU and a UDP payload size? What are the units of an MTU? Also, it says, "...the larger the number of samples per Write or Fetch, the greater throughput you will be able to achieve." Could you provide any insight as to why this might be true?
---
Brandon
11-21-2014 09:52 AM
The MTU is the Maximum Transmission Unit, measured in bytes. In the document referenced above, the UDP payload size is the size in bytes of the data payload (the MTU minus some header bytes). So that's the maximum amount of your data that you can send in each transmission. Now, that UDP payload also contains some additional header information, so the maximum number of IQ samples that you can send in each packet is somewhat less than that.
The point of that entire discussion is that, at the very highest data rates, it is most efficient to send data in multiples of some number so that the data you are sending fully fills the data payload of each packet. There is some fixed overhead to every packet, so you get the highest throughput if you are filling every packet to capacity.
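As an illustration only (the real header sizes depend on the transport and driver, so these numbers are assumptions), the arithmetic looks something like this:

```python
# Assumed values: 1500-byte Ethernet MTU, 20-byte IP header,
# 8-byte UDP header, a 16-byte data header inside the UDP payload,
# and 4 bytes per complex int16 (I/Q) sample.
MTU = 1500
IP_HEADER = 20
UDP_HEADER = 8
DATA_HEADER = 16
BYTES_PER_SAMPLE = 4       # 16-bit I + 16-bit Q

udp_payload = MTU - IP_HEADER - UDP_HEADER            # 1472 bytes
samples_per_packet = (udp_payload - DATA_HEADER) // BYTES_PER_SAMPLE
print(samples_per_packet)  # 364 -> write/fetch in multiples of this
```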
There is also some fixed overhead in the driver for every call to Fetch or Write. We try to keep it small, but it is enough to affect the streaming throughput at the highest data rates. So certain rates may only be achievable if you keep the fixed overhead small relative to the data transmission time.
In short, this means it is more efficient to send a 10,000-sample buffer in one Write call than to send 100 samples each in 100 Write calls.
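A rough way to see the per-call cost yourself (Python sketch; `write()` is a hypothetical placeholder for the driver's Write call, so this measures only call overhead, not real transfers):

```python
import time
import numpy as np

def write(buf):
    """Placeholder for the driver's Write call."""
    pass

big = np.zeros(10_000, dtype=np.complex64)
small = np.zeros(100, dtype=np.complex64)

t0 = time.perf_counter()
write(big)                      # fixed overhead paid once
t_one = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(100):            # fixed overhead paid 100 times
    write(small)
t_many = time.perf_counter() - t0

print(f"1 x 10000 samples: {t_one:.6f} s")
print(f"100 x 100 samples: {t_many:.6f} s")
```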
11-24-2014 06:04 AM
Thanks Paul-
If I could peel back the curtain a bit more...
It's my understanding that once you initiate an acquisition, samples start streaming back to the host regardless of how and when you decide to pull those samples out of PC memory to do something with them. If that's the case...why would all this data packet preparation depend on what I decide to do at the PC (especially if there's some unknown amount of time between fetches where I calculate some stuff)? Why not always make the data transfer back to the host as efficient as possible (full data payloads), regardless of how many samples you fetch out of memory each time through the loop? (I suppose I'm thinking about continuous acquisition here, but I can't really think of a compelling reason not to do this with a finite acquisition either.)
---
BC