USRP Software Radio


What are the maximum sustainable streaming rates to USRP RIO?

Hi,

I am using a USRP-2942R.  Starting from one of the host-based example VIs, we created a VI that records 2 or 4 channels simultaneously to a data file (we use two such devices for the 4-channel capture).  We would like to do this continuously for as long as possible.  Recording to our solid-state drive, we can achieve about 20 MS/s per channel when capturing two channels, or 10 MS/s per channel when capturing four channels, but we cannot go much above these rates.  Even if we set up a RAM drive for the file, we still have issues above 20 MS/s for two channels, so it does not seem that the file writing is the bottleneck.

 

I am wondering if there is any example code for doing this exact function.  Perhaps we have done something wrong in modifying the example we had.  Do you know what sample rates we should expect to achieve when recording to a RAM file?  By comparison, if I run these same USRPs on this same PC using the Ettus UHD software under Linux (Ubuntu 14.04), I can achieve 100 MS/s per channel on two channels when recording to a RAM file.
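For reference, the Linux-side capture is essentially the standard UHD receive loop writing to a file on a RAM disk.  A simplified single-channel sketch (untested as written; the device arguments, rate, and output path are just placeholders, and error handling is trimmed):

#include <uhd/usrp/multi_usrp.hpp>
#include <complex>
#include <cstdio>
#include <vector>

int main()
{
    // Placeholder device args; adjust for your own USRP.
    auto usrp = uhd::usrp::multi_usrp::make("type=x300");
    usrp->set_rx_rate(100e6);                              // 100 MS/s

    uhd::stream_args_t args("sc16", "sc16");               // CPU / wire formats
    uhd::rx_streamer::sptr rx = usrp->get_rx_stream(args);

    uhd::stream_cmd_t cmd(uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS);
    cmd.stream_now = true;
    rx->issue_stream_cmd(cmd);

    std::vector<std::complex<short>> buf(rx->get_max_num_samps());
    std::FILE* f = std::fopen("/mnt/ramdisk/capture.dat", "wb");  // RAM file

    uhd::rx_metadata_t md;
    while (true) {
        const size_t n = rx->recv(&buf.front(), buf.size(), md, 1.0);
        if (md.error_code == uhd::rx_metadata_t::ERROR_CODE_OVERFLOW)
            std::fputs("O", stderr);   // driver discarded samples
        std::fwrite(buf.data(), sizeof(buf[0]), n, f);
    }
}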

 

Thanks.

 

Rob Kossler

Message 1 of 18

Hi Rob,

 

Are you programming your application in FPGA or in the LabVIEW USRP API?

 

Could you please describe in detail your physical setup for both the Ubuntu and Windows cases?

 

How are you measuring your rate?

 

Regards,
Message 2 of 18

I am using the host-based USRP RIO API.  I am not modifying the FPGA.

 

The physical setup for LabVIEW/Windows is just a PC and the USRP connected by the PCIe x4 kit (with the NI 8371 PCIe card and x4 cable).  The physical setup for UHD/Ubuntu is the same PC and the USRP connected by 10 Gb Ethernet, using an Intel X520-DA2 NIC and an SFP+ direct-attach cable.

 

I evaluate any given sample rate by counting the number of samples I receive and write, then multiplying by the sampling interval; this gives the full capture duration.  If that duration is significantly less than the elapsed wall-clock time (as measured by the PC clock), I know I am missing samples.  For example, if I collect for 15 seconds by the PC clock but my LabVIEW VI reports only 12 seconds of data, I know there is a problem.  Admittedly this is a crude method, but it was the easiest thing to check right away.  Another crude option is to capture a known signal and check the recorded file for continuity throughout.  I would prefer a proper method for detecting dropped samples, but I don't really know the right way to do it.  Nevertheless, my current method does reveal coarse failures like the example above.
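The best idea I have for a proper method, at least on the UHD side where I know the API, is to track the hardware timestamps: predict each packet's timestamp from the accumulated sample count and flag any jump as a gap.  A sketch (untested as written; 'rate' must match the configured RX rate), though I don't know the LabVIEW equivalent:

#include <uhd/usrp/multi_usrp.hpp>
#include <complex>
#include <cstdio>
#include <vector>

// Receive continuously and report any timestamp discontinuity as a
// dropped-sample gap.
void detect_gaps(uhd::rx_streamer::sptr rx, double rate)
{
    std::vector<std::complex<short>> buf(rx->get_max_num_samps());
    uhd::rx_metadata_t md;
    double expected = -1.0;   // predicted timestamp of the next packet (s)

    while (true) {
        const size_t n = rx->recv(&buf.front(), buf.size(), md, 1.0);
        if (!md.has_time_spec)
            continue;
        const double t = md.time_spec.get_real_secs();
        if (expected >= 0.0 && t > expected + 0.5 / rate)
            std::fprintf(stderr, "gap: ~%.0f samples missing\n",
                         (t - expected) * rate);
        expected = t + static_cast<double>(n) / rate;
    }
}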

 

Is there any example program for writing received samples to file as quickly as possible?

 

Rob

Message 3 of 18

Hi Rob,

 

Open LabVIEW and select "Create Project" from the splash screen.  In the resulting window, select "NI-USRP" in the list on the left; this brings up the "Simple NI-USRP Streaming" sample project.  Open this sample project and click "Finish" on the next screen to complete the setup.  Then, from the Project Explorer window, open "Rx Streaming (Host).vi".  Set your device per its listing in MAX, set the sample rate to 120 MS/s, make sure the "Finite Acquisition" button is not selected, and run the program.  This example handles errors automatically, so there is no need to add your own checks for overflow, dropped data, or rate; the example will throw errors for these conditions on its own.  It doesn't stream to file, but it will at least establish the streaming rate of the device in the absence of file writing.  Once you have confirmed that you can achieve the expected rates, you can adapt the example to log to file, knowing that any problems that arise are attributable to your changes and not to a fundamental problem with the program or hardware.

Regards,
Message 4 of 18

Thanks for the reply.

 

We did as you suggested and were able to run 2 channels at 120 MS/s.  You mentioned that the program will "automatically throw errors" for things like overflow or dropped data, but I am not seeing any such errors.

 

But how can I be sure that the program will do this?  I attempted to force an error by inserting a 500 ms wait on the Fetch Rx Data VI.  This slowed the display update rate accordingly, but I did not get any error message.  I was expecting the wait to make the fetch VI run slowly enough that we would get dropped data; since no error appeared, I'm guessing this is not a valid way to cause one.

 

If I were working with the Ettus UHD library, I could easily cause such an error by injecting a wait inside the loop that continuously calls recv(), as sketched below.  That was my thinking when I inserted the wait in the Fetch VI.  Is there some way I can purposely cause dropped data so that I can see what error message I will get in this situation?
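For concreteness, this is the sort of loop I mean on the UHD side (a sketch; the 500 ms sleep is deliberately far too slow for any realistic stream rate):

#include <uhd/usrp/multi_usrp.hpp>
#include <chrono>
#include <complex>
#include <cstdio>
#include <thread>
#include <vector>

// Deliberately starve the receive loop so the driver's buffers fill up
// and UHD reports an overrun for the samples it had to discard.
void force_overrun(uhd::rx_streamer::sptr rx)
{
    std::vector<std::complex<short>> buf(rx->get_max_num_samps());
    uhd::rx_metadata_t md;

    while (true) {
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
        rx->recv(&buf.front(), buf.size(), md, 1.0);
        if (md.error_code == uhd::rx_metadata_t::ERROR_CODE_OVERFLOW)
            std::fputs("overrun reported by driver\n", stderr);
    }
}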

 

Thank you.

Rob

Message 5 of 18

The error handling in the example you are using is quite robust, and it is fairly difficult to inject false errors.  It can be done, but it has to be done on the FPGA side and requires recompiling, which takes a long time.  Everything is open, though, so you can go into the FPGA code, find the error handling, and spoof it.

Regards,
Message 6 of 18

I think that there is some misunderstanding.  I am very new to LabVIEW so part of the misunderstanding may be related to that.

 

If I were working with this same hardware but using Ettus UHD/Linux, I know that the FPGA streams data continuously (via 10 Gb Ethernet) to the host.  The UHD driver accepts the Ethernet data and expects the host application to retrieve it very quickly.  If the host application does not retrieve the data quickly enough, it is discarded by the UHD driver and "Overrun" error messages are displayed.  It is very easy for me to purposely slow down the host application that is retrieving the data in order to force an Overrun condition and see the error message.

 

Now, working with LabVIEW/Windows, I want to continuously stream the received data to a data file without dropping samples.  Your recommendation was to modify the "Simple NI-USRP Streaming" example VI to send the data to a file instead of to the GUI, as it does now.  I could do this, but how would I know if I dropped any data?  The current VI is certainly not displaying ALL of the data on the GUI when running 2 channels at 120 MS/s; at the GUI level it is dropping data, yet there are no error messages.  If I just modify it to send the data to a file, I expect there will be dropped data with no error messages, same as with the GUI.

 

It seems that there must be some way on the HOST side to purposely inject a wait (or other bug) so that I can see what happens when all of the data is not being consumed.  Any thoughts?

Rob

Message 7 of 18

Hi Rob,

 

Go into the host code.  Make sure the button labeled "Finite Acquisition/Continuous Acquisition" reads "Continuous Acquisition"; this button shows the mode you are currently using, not the one you are selecting, which is a bit confusing.  Sorry for missing that.  Then set your rate to 120 MS/s.  With continuous acquisition you should drop samples, and when you hit stop it should report that you did, and how many.

Regards,
Message 8 of 18

AC,

Sorry for the delay.  Our equipment was unavailable for a couple of days.

 

We followed your suggestion and were indeed able to generate an error message by hitting Stop.  But this will not help us, because that error occurs at any sample rate.  This is why I had hoped to insert a delay to cause the error: the delay would simulate disk writing or some other slow PC operation.  My understanding was that if I slowed down the PC, it wouldn't be able to stream as fast, so I would expect overflows or other errors at fast sample rates.  I'm starting to realize that this is not the way the program operates, but I'm no closer to understanding how it does operate.

 

Let me try another approach to explain what it is that I don't understand.  We are using the example VI "Rx Streaming (Host)", whose main loop calls "Fetch Rx Data (1D CDB WDT)" and "Check Stream Status" on each iteration.  My initial expectation was that "Fetch Rx Data" would need to be called very rapidly in order to fetch ALL of the streaming data and avoid overflows.  This does not appear to be the case: I can call this VI as slowly as I like and no error messages are produced.  This is my primary misunderstanding.  Where does all of the streaming data that does NOT get fetched go?  How can I reconfigure "Rx Streaming (Host)" so that an error is produced if "Fetch Rx Data" does not fetch ALL of the streaming data?

 

Recall that my end goal is simply to redirect ALL of the streaming data to a file.  If I just take this example program and redirect the output of the "Fetch Rx Data" VI to a file, I will get no errors, but I will also have large gaps in my recorded data at fast sample rates.  And I have no good way to detect whether there are gaps, or how large they are (see my original post for the method I currently use to detect gaps; it is not good).

 

Let me know if you have any comments / suggestions.

Rob

Message 9 of 18

AC,

Now I understand the Overflow warning better, so you can ignore my previous post.  There, I was under the assumption that the very act of hitting the Stop button was "causing" the warning to occur.  Now I realize that hitting Stop just stops the program, and any warnings that occurred at any point along the way are then displayed.  I hadn't appreciated the distinction between warnings and errors.  By the way, is there a way to have such a warning notify you immediately, rather than waiting until the program is stopped?

 

With this new understanding, I modified the example program to remove the output of the "Fetch" and remove the GUI displays, so that the while loop runs as fast as possible.  This works without generating any warnings at low sample rates.  But even in this "ideal" situation of discarding all data, I cannot reliably run at 20 MS/s (2 channels) without seeing an Overflow warning now and then.  Using this same exact PC and USRP under Linux/UHD, I can run at 100 MS/s (2 channels) with no warnings (via the 10 Gb Ethernet interface).  Is this expected behavior?  Is the Linux/UHD performance really that much superior to the Windows/LabVIEW performance?

 

Rob

Message 10 of 18