Multifunction DAQ


DMA problems with PCI 6115 Analog Input

Hi Dennis-

 

I'm glad to hear you're seeing better results.  I don't know specifically what type of spectral analysis you're looking for, but you can check the Signal Processing>>Spectral Analysis palette in LabVIEW to see what ships with your version.

Tom W
National Instruments
Message 11 of 15

Continuing the discussion.  Since the PCI bus has a theoretical maximum bandwidth of 133 MB/s and a 10 MHz sample rate comes nowhere near that, I am still having problems getting the data rate above 2.8 megasamples/second, and it still appears to be DMA related.  (As an aside, DMA channel 1 locks the Mac Pro workstation up.)
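
As a rough sanity check (assuming a single channel of 16-bit samples, which is an assumption on my part): 10 MS/s x 2 bytes/sample = 20 MB/s, well under the 133 MB/s ceiling of a 32-bit/33 MHz PCI slot, so the bus itself should not be the limit at these rates.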

 

I am trying to control the input buffer size, since I spent the extra $1k to get the 64-megasample onboard memory.  The maximum AI input buffer size is 32 megasamples, but I have no idea what the automatic buffer is doing, so I want to control it myself to investigate the problem.  However, when I use the DAQmx Base Configure Input Buffer VI, I get a "Value passed to the task/control is of the wrong type" error.  I am using a fixed-point, hexadecimal number as the buffer size, well below the maximum buffer size.  The task is an input to the DAQmx Base AI Voltage VI.  Where should the Configure Input Buffer VI be placed?

 

 

Message 12 of 15

Hi Dennis-

 

You should place the DAQmx Base Configure Input Buffer VI after the DAQmx Base Timing VI.  The Timing VI chooses a default buffer size for the acquisition, which you then override with the Configure Input Buffer VI.
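
For illustration, here is a minimal text-based sketch of that ordering using the NI-DAQmx Base C API rather than the LabVIEW VIs.  The function names assume the Base C API mirrors the full DAQmx names with a "DAQmxBase" prefix (check the NIDAQmxBase.h that ships with your install), and the device name, range, rate, and buffer size are placeholders, not values from this thread:

/* Sketch of the call order described above: timing first, then the
 * input-buffer override, then start.  "Dev1/ai0", the range, the rate,
 * and the 8 MS buffer are example values only. */
#include <stdio.h>
#include "NIDAQmxBase.h"

int main(void)
{
    TaskHandle task = 0;
    int32      err  = 0;

    err = DAQmxBaseCreateTask("ai", &task);
    if (!err) err = DAQmxBaseCreateAIVoltageChan(task, "Dev1/ai0", "",
                        DAQmx_Val_Cfg_Default, -5.0, 5.0, DAQmx_Val_Volts, NULL);

    /* 1) Timing (equivalent of the DAQmx Base Timing VI): sets the rate
     *    and picks a default buffer size. */
    if (!err) err = DAQmxBaseCfgSampClkTiming(task, "OnboardClock", 10000000.0,
                        DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000000);

    /* 2) Configure Input Buffer: override the default chosen above
     *    (8 MS per channel here, an arbitrary example). */
    if (!err) err = DAQmxBaseCfgInputBuffer(task, 8 * 1024 * 1024);

    /* 3) Start (and read) only after the buffer is in place. */
    if (!err) err = DAQmxBaseStartTask(task);

    if (err) {
        char msg[2048];
        DAQmxBaseGetExtendedErrorInfo(msg, sizeof msg);
        fprintf(stderr, "DAQmx Base error %d: %s\n", (int)err, msg);
    }
    if (task) {
        DAQmxBaseStopTask(task);
        DAQmxBaseClearTask(task);
    }
    return err ? 1 : 0;
}

The same order applies to the VIs: Create Task/Channel, Timing, Configure Input Buffer, then Start and Read.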

Tom W
National Instruments
Message 13 of 15

Tom

 

Unfortunately, changing the buffer size did not help.  At least the DMA is no longer misbehaving, but now there is a timeout error.  Any ideas?

 

I have changed the timeout constant as well.

 

Message 14 of 15

Update

 

I can set the buffer size arbitrarily now.  I can actually get the device to run faster, but only if I set the buffer size to zero, which bogs down one of the CPUs because the DMA is no longer doing the work.

 

It will run faster than 2.8 megasamples/second, but only if I set the buffer to zero.  To me this indicates a buffer scheduling problem in how the system schedules device-in versus DMA-out operations.  I keep changing the number of samples read, and I used the simpler Read (Raw 1D I16) VI in my while loop rather than the double-precision read in the example.
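
For reference, a rough C sketch of the kind of chunked raw-I16 read loop described above, again using assumed DAQmxBase-prefixed calls; the chunk size, the 10-second timeout, and the single-channel buffer sizing are placeholders, not values from this thread:

/* Read raw 16-bit samples in fixed-size chunks inside a loop, so a
 * DMA-filled host buffer (sized via Configure Input Buffer) absorbs
 * bursts instead of a zero-size buffer forcing the CPU to keep up.
 * Assumes the task is already configured and started, one channel. */
#include <stdio.h>
#include <stdlib.h>
#include "NIDAQmxBase.h"

#define CHUNK 1000000  /* samples per channel per read -- tune this */

int read_loop(TaskHandle task, int iterations)
{
    int16 *data = malloc(sizeof(int16) * CHUNK);
    int32  err = 0, got = 0;

    if (!data)
        return -1;
    for (int i = 0; i < iterations && !err; i++) {
        /* 10 s timeout: generous enough for a full chunk at 10 MS/s.
         * A timeout here means the chunk/buffer/rate combination cannot
         * be serviced, not that the timeout constant is too short. */
        err = DAQmxBaseReadBinaryI16(task, CHUNK, 10.0,
                                     DAQmx_Val_GroupByChannel,
                                     data, CHUNK, &got, NULL);
        if (!err)
            printf("read %d samples\n", (int)got);
    }
    free(data);
    return (int)err;
}

Reading a bounded chunk per iteration lets the sized host buffer do the buffering between DMA transfers and the while loop, which is the point of overriding the default buffer in the first place.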

 

Any ideas?

Message 15 of 15