Multifunction DAQ


Underflow and buffer sizes

I have an application that does closed-loop control with a PCI-6251: it generates an analog output at 100 kHz and reads 8 analog input channels back, also at 100 kHz each.
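In DAQmx C terms the setup is roughly the following (a sketch only; "Dev1" and the channel names are placeholders for my actual wiring, and error checking is omitted for brevity):

#include <NIDAQmx.h>

/* Sketch only: "Dev1" and the channel ranges are placeholders, and the
   real program checks every DAQmx return code. */
static TaskHandle aoTask = 0, aiTask = 0;

int configure_tasks(void)
{
    /* Hardware-timed, continuous analog output at 100 kS/s */
    DAQmxCreateTask("ao", &aoTask);
    DAQmxCreateAOVoltageChan(aoTask, "Dev1/ao0", "", -10.0, 10.0,
                             DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(aoTask, "", 100000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 2000);

    /* 8 analog input channels, each sampled at 100 kS/s */
    DAQmxCreateTask("ai", &aiTask);
    DAQmxCreateAIVoltageChan(aiTask, "Dev1/ai0:7", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(aiTask, "", 100000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 2000);
    return 0;
}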

 

It works, but I have tried to push the envelope and use buffers as small as possible in order to get the quickest response time on control corrections.  I know the rule of thumb is that the buffer size should be 10% of the sample rate, but 100 ms is really a little slow for what I want to do.  By trial and error on my PC (a Dell Core 2 Quad that's a couple of years old now), I found I can use 20 ms (2000-sample) buffers quite reliably.  I can hammer on the PC while my program is running and never get the dreaded underflow, code 200015.  (I cannot afford a single underflow on the output and have to abort if one occurs; repeating the last output buffer is no good.)
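On the output side, that means regeneration is disallowed and the host buffer is kept at 2000 samples.  A rough sketch of what that looks like with the DAQmx ANSI C API (compute_next_block() is just a stand-in for my control-law update, not a DAQmx call):

#include <NIDAQmx.h>
#include <string.h>

#define BUF_SAMPS 2000   /* 20 ms of data at 100 kS/s */

/* Placeholder for the actual control-law update. */
static void compute_next_block(float64 *block, int n)
{
    memset(block, 0, n * sizeof(float64));
}

int run_output(TaskHandle aoTask)
{
    float64 block[BUF_SAMPS];
    int32   written = 0;
    char    errBuf[2048] = "";

    /* Never repeat old samples if the host falls behind; a late write
       must surface as an error so the program can abort. */
    DAQmxSetWriteRegenMode(aoTask, DAQmx_Val_DoNotAllowRegen);

    /* Explicit 2000-sample host-side output buffer. */
    DAQmxCfgOutputBuffer(aoTask, BUF_SAMPS);

    /* Prime the buffer, start, then keep refilling one block at a time. */
    compute_next_block(block, BUF_SAMPS);
    DAQmxWriteAnalogF64(aoTask, BUF_SAMPS, 0, 10.0,
                        DAQmx_Val_GroupByChannel, block, &written, NULL);
    DAQmxStartTask(aoTask);

    for (;;) {
        compute_next_block(block, BUF_SAMPS);
        int32 err = DAQmxWriteAnalogF64(aoTask, BUF_SAMPS, 0, 10.0,
                                        DAQmx_Val_GroupByChannel,
                                        block, &written, NULL);
        if (err < 0) {   /* underflow: abort rather than regenerate */
            DAQmxGetExtendedErrorInfo(errBuf, sizeof(errBuf));
            return err;
        }
    }
}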

 

Unfortunately, when I run my program on certain PCs, including PCs with far better specifications than mine, they get DAQ underflow errors with some regularity.  Same configuration as mine, no indexing or virus scans running, and so on.  On one particular model, upgrading the video driver made a big improvement but did not eliminate the underflows.

 

I can get rid of the errors by increasing my buffer size; I understand that.  But what I want to know is why this works on some PCs and not others.  Why is the "10% rule" there in the first place?  100 ms is an eternity on today's computers; I should not have to wait that long to service the DAQ card.  As a test, I ran my program at a 10 kHz sample rate, and even though that is 1/10th the data, it has more trouble with 20 ms buffers than it does at 100 kHz.  That is counter-intuitive.
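Just to make the numbers concrete, the buffer arithmetic I am working from (in the same C style as above):

#include <stdio.h>

/* Buffer latency (ms) = buffer samples / sample rate * 1000.
   "10% rule" buffer = sample rate / 10, i.e. 100 ms of data. */
int main(void)
{
    double rates[] = { 100000.0, 10000.0 };
    for (int i = 0; i < 2; ++i) {
        double tenPctRule = rates[i] / 10.0;   /* 100 ms of data */
        double twentyMs   = rates[i] * 0.020;  /* 20 ms of data  */
        printf("%6.0f S/s: 10%% rule = %5.0f samples, 20 ms = %4.0f samples\n",
               rates[i], tenPctRule, twentyMs);
    }
    return 0;
}
/* 100 kS/s: 10000 vs 2000 samples; 10 kS/s: 1000 vs 200 samples.
   The wall-clock slack is the same 20 ms either way. */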

 

Is there anything on a PC I can tweak or watch out for that would enable me to run with smaller buffers?  What is it about my PC that makes it work so well?  (It's also reliable on a current-model Dell i7, which is nice.)

 

Thank you for any insight you may be able to provide.

 

Message 1 of 3

Hi JSA,

 

I don't have a definitive explanation for you, but I can give you my best guess.  Older Intel platforms, like the Core 2 based ones, had a PCI bus that came directly off the chipset.  This gives pretty low latency, which is obviously important for a control application.  Newer platforms no longer have a PCI bus directly on the chipset; instead, they only have a PCIe bus, and a PCIe-to-PCI bridge chip provides the legacy PCI slots.  This additional layer of indirection can add latency overhead.  In addition, we have seen cases in which the PCIe-to-PCI bridge chips seem to exhibit some bug-like behaviors that could also slow them down.

 

Do you happen to have the specs of one of the machines on which the 6251 doesn't keep up?  I *might* be able to do a deeper dive and get you some more definitive information.

 

-Jeff

 

Message 2 of 3

Thanks for the response, Jeff.  That gives me something to look out for, and I wonder if it is indeed the differentiating factor; it would make at least some sense.  I don't have the models of the newer PCs that fail readily available, but I can find them and do some digging.

 

Thanks!

John

 

Message 3 of 3