Multifunction DAQ


USB-6211 overwriting samples from previous acquisition?

I'm working with a USB-6211 on a simple program that outputs IQ data on two AO channels and reads it back on two AI channels. I've found that each acquired waveform appears to be affected by the previous acquisition: one or more samples from the end of the previous acquisition overwrite (or prepend? I can't tell) the first one or two samples of the current acquisition. The number of samples affected seems to vary between the channels, and once or twice I saw it change unexpectedly when I switched PCs, though I couldn't reproduce that change.

 

I tried to work around this issue by padding the waveform front and back with a number of samples before generating it, then trimming those extra samples off after acquiring it. I decided to add samples with value '1' to the front of the waveform and samples with value '0' to the end. The code does what it should, but since the number of samples overwritten varies, I can't just set a fixed number of samples to pad/trim and trust it to work every time. I find that sometimes it ends up shaving off the first real sample of the waveform in addition to the padding samples, so I lose some data from the signal. (For example, if I set it to add 2 samples to the front and 2 to the back, the acquisition may only show up with 3 padding samples in front of the signal instead of 4. If I trim 4 samples anyway, it'll shave off one sample from the real signal.)

 

There's another issue: the ch0 waveform has an extra sample compared to the ch1 waveform. Every sample in the ch0 waveform lags the corresponding sample in the ch1 waveform by one sample clock period. I don't know why this is, though it was easy enough to work around by trimming the extra sample off the waveform. Still, it seems like a bug in the driver...

 

I extracted the DAQ code from my program and put it into the attached VI that exhibits the behavior. I'm developing with LV 2009 SP1 and DAQmx 9.2.2. Before I'm asked to upgrade: I can't, because that could affect executables in the critical path of the project. Here's a screenshot of the behavior, using a pair of triangle waveforms with a slight phase offset in place of an I-Q pattern. (The triangles show the behavior a little better.) If you run this with your own hardware, connect ao0:1 to ai1:2 with a couple of bare wires.

 

ao_ai_synched 2.PNG

 

Can someone at NI explain the behavior I'm seeing? Is this a bug, or have I configured the driver wrong? I put some extra comments in the example VI to explain some of the code sections.

Message 1 of 12

Can anyone please provide some insight into why my device is doing this?

Message 2 of 12

Hi Keith,

 

The outputs take some time to reach the desired voltage.  This is best represented by the "Slew Rate" specification, which gives the maximum slope of the transition during a voltage update.  For the 6211, this spec is 5 V/us.  Note that it might take longer to reach the final voltage to the desired accuracy, since the slope of the output levels off as the output approaches the target value (overdamped), or the output overshoots the target and rings (underdamped).  This is captured by the "settling time" spec: the 6211 takes up to 32 us to settle to within 1 LSB of accuracy (for a full-scale step).
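
To put rough numbers on that: a full-scale step on the ±10 V output range is 20 V, so even at the maximum 5 V/us slew rate the transition alone takes at least 4 us, and settling to within 1 LSB can take up to the full 32 us.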

 

The delay between your AO update and your first AI sample is going to be on the order of 100 ns.  So, the output has barely had any time to adjust from its previous value, and you would expect to read something close to whatever was on the channel previously.

 

Remembering that the 6211 is multiplexed, the 2nd channel will be sampled some time after the first (t_ch1 = t_ch0 + 1/AIConv.Rate).  With your parameters (10 kHz sample clock on 2 channels), the driver should use the default convert period of 14 us on the 6211.  By that time, the output will be much closer to its final value.
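
If you want to verify what the driver actually picked, you can read the convert rate back from the task.  Here is a minimal sketch using the DAQmx C API (the AIConv property nodes in LabVIEW map to the same attributes; "task" is assumed to be an already-configured AI task):

    #include <stdio.h>
    #include <NIDAQmx.h>

    /* Print the convert clock timing DAQmx chose for an existing AI task. */
    void printConvertTiming(TaskHandle task)
    {
        float64 convRate = 0.0, maxRate = 0.0;

        DAQmxGetAIConvRate(task, &convRate);    /* actual convert clock rate (Hz) */
        DAQmxGetAIConvMaxRate(task, &maxRate);  /* device maximum for this task   */

        printf("convert period: %.2f us (period at max rate: %.2f us)\n",
               1e6 / convRate, 1e6 / maxRate);
    }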

 

 

So, you have one channel sampled very soon after the AO update, and another channel sampled quite a bit (two orders of magnitude) later.  What you really want to do is allow as much time as possible for the AO to settle before making your analog input measurements.  However, both channels still have to be sampled before the following update.  You'll probably find the following properties helpful:

 

    2011-07-28_125008.png

 

The inverse of AIConv.Rate is the amount of time between the sampling of your two AI channels.  Be careful about making the rate too high, as the multiplexer on the analog input also needs enough time to settle (how much depends on your source impedance).

 

The DelayFromSampClk.Delay is the amount of time between the sample clock and the first AI channel's conversion.  This should be high enough to allow the output to reach its value before the first sample is taken.  However, if it is too high, you won't be able to fit both channels in before the subsequent analog output update.
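
For reference, those property nodes correspond to DAQmx attributes you can also set programmatically.  A minimal sketch in the C API, with placeholder values for a 2-channel task at 10 kS/s ("task" is again an already-configured AI task):

    #include <NIDAQmx.h>

    /* Run the convert clock fast and delay the first conversion, giving
       the AO as long as possible to settle before ch0 is sampled. */
    void configureConvertTiming(TaskHandle task)
    {
        DAQmxSetAIConvRate(task, 250000.0);            /* AIConv.Rate */
        DAQmxSetDelayFromSampClkDelayUnits(task, DAQmx_Val_Seconds);
        DAQmxSetDelayFromSampClkDelay(task, 80.0e-6);  /* DelayFromSampClk.Delay */
    }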

 

 

With some combination of the above parameters, you should hopefully allow enough time for settling as well as fit all of your samples in between AO updates.

 

 

Best Regards,

John Passiak
Message 3 of 12

Hi John -

 

Thanks for the reply and all the useful information. I've been playing around with these parameters for a little while and have some more questions.

 

I'm actually running this device at all possible rates, up to the full 250 kS/s aggregate. I was able to play with the DelayFromSampClk.Delay parameter and tweak it up to 80 ticks when running at 125 kS/s on two channels (full rate for 2-ch AI). At ~100 ticks it started failing with ADC conversion errors. The sampling behavior improved a little, with the first "raw" sample value being near 1.0, though not all the way there. At 10 kS/s with the delay value = 500 ticks, the first sample was a rock-solid 1.0. I think I can build a little linear scale that sets the value appropriately when a sample rate is chosen; see the sketch below.
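
Something like this is what I have in mind for the linear scale, interpolating between the two delay settings that worked. This is just a sketch of my own helper; the endpoints are my measurements, not specs, and I'm assuming the delay stays in units of ticks. (If the delay timebase is 20 MHz, 80 ticks would be 4 us and 500 ticks 25 us.)

    /* Pick a DelayFromSampClk.Delay tick count by linear interpolation
       between two empirically working points: 500 ticks at 10 kS/s and
       80 ticks at 125 kS/s (per-channel rates). */
    double delayTicksForRate(double sampleRate)
    {
        const double r0 = 10e3,  t0 = 500.0;   /* slow-rate point */
        const double r1 = 125e3, t1 = 80.0;    /* full-rate point */

        double ticks = t0 + (sampleRate - r0) * (t1 - t0) / (r1 - r0);
        return (ticks < 2.0) ? 2.0 : ticks;    /* keep a small positive floor */
    }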

 

The spec says that the 1 LSB settling time of the input is 7 us. But the sample clock can take one channel up to 250 kS/s, which is a 4 us period. Does that mean running this device at full rate will only give +/- 6 LSB accuracy on the input? Is it necessary to run at 142.8 kS/s or slower to get +/- 1 LSB accuracy? (On that note, I still wasn't sampling the right value running at 125 kS/s, so it seems my performance may be even worse than this?)

 

Since the measurement accuracy is given in terms of LSB, will reducing the input range from (-10,10) V to (-1,1) V improve the settling time by a factor of 10? I tried this and didn't see any noticeable improvement.

 

When I tried to set the AIConv.Rate parameter, even to a conservative value of 1/(14 us) ≈ 71.4 kHz, the task failed on me every time with error -200081. If I just let the driver automatically select the AIConv.Rate value, it picked 253165 Hz, with AIConv.TimebaseDiv = 79. The weird thing is that AIConv.MaxRate = 250000, so this should be an illegal value. How does it get away with that, when I can't even set the rate to 71.4 kHz?

 

Again, thanks for your help with this.

Message 4 of 12

Hi Keith,

 

I'll answer each part of your question separately:

 

The spec says that the 1 LSB settling time of the input is 7 us. But the sample clock can take one channel up to 250 kS/s, which is a 4 us period. Does that mean running this device at full rate will only give +/- 6 LSB accuracy on the input? Is it necessary to run at 142.8 kS/s or slower to get +/- 1 LSB accuracy? (On that note, I still wasn't sampling the right value running at 125 kS/s, so it seems my performance may be even worse than this?)

 

Since the measurement accuracy is given in terms of LSB, will reducing the input range from (-10,10) V to (-1,1) V improve the settling time by a factor of 10? I tried this and didn't see any noticeable improvement.

 

The behavior you are seeing is the result of the analog output slew rate and settling time.  That is, you measure the AO before it has time to adjust to its final value. It is not an error of the AI measurement, but rather the actual voltage that is on the line at the time the measurement is made.

 

If you're curious about the AI settling time spec, it typically becomes a factor when measuring vastly different voltages on adjacent channels.  If you were to read from ai0:1 with 10 V on ai0 and -10 V on ai1, you would see up to 6 LSB of error due to AI settling (assuming a low source impedance) if sampling at the max rate.  Notice, though, that the spec is also given in terms of "ppm of step" (90 ppm of step @ 4 us).  The 6 LSB figure assumes a full-scale transition between adjacent AI channels; 90 ppm of a full-scale step corresponds to about 6 LSBs on a 16-bit device.
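
Working the numbers: a full-scale step on the ±10 V range is 20 V, and 90 ppm of 20 V is 1.8 mV.  One 16-bit LSB is 20 V / 65536 ≈ 0.3 mV, so 1.8 mV comes out to about 6 LSBs.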

 

 

 

When I tried to set the AIConv.Rate parameter, even to a conservative value of 1/(14 us) ≈ 71.4 kHz, the task failed on me every time with error -200081. If I just let the driver automatically select the AIConv.Rate value, it picked 253165 Hz, with AIConv.TimebaseDiv = 79. The weird thing is that AIConv.MaxRate = 250000, so this should be an illegal value. How does it get away with that, when I can't even set the rate to 71.4 kHz?

 

You are getting error -200081 because the convert rate is too low to fit your samples within one period of the sample clock.  The below image shows the relationship between the AI Sample Clock (top) and the AI Convert Clock (bottom):

 

        2011-08-01_122414.png

 

To avoid error -200081, you need to make sure that all of your channels are converted within one period of the analog input sample clock.  At the maximum aggregate rate, you really can't add much of an initial delay.  The driver is choosing a convert rate higher than the specified maximum in order to fit the samples within one period of your sample clock given the added initial delay.  There is no software limit on the AI convert rate you can set, but exceeding the maximum violates the settling time spec, and the hardware will return an error once the rate gets too high.
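
As a rule of thumb, the configuration roughly has to satisfy: DelayFromSampClk.Delay + (number of channels - 1)/AIConv.Rate + 100 ns <= 1/(sample rate).  At your full 2-channel rate of 125 kS/s per channel, the entire sample period is only 8 us, so there is almost no slack left to spend on delay.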

 

On your current hardware, the way to ensure enough settling time is to limit the update rate to something less than the maximum, so you can add an initial delay and still sample both channels in time.  However, a simultaneously sampled device would probably be the ideal solution, since you would not have to worry about allowing time to sample each AI channel sequentially.

 

 

Best Regards,

John Passiak
Message 5 of 12

Hi John - 

 

The behavior you are seeing is the result of the analog output slew rate and settling time.  That is, you measure the AO before it has time to adjust to its final value.

 

That doesn't line up with what I'm seeing in the attached VI. (It's the same as the previous one, but updated with a subVI that chooses the smallest possible conversion period for the sample period given. I also disabled the padding behavior by setting "# padding samples" to 0.) When I run this VI at ~60 kHz or faster, I still find that ch0 lags ch1 on every sample, even though the waveform starts at 0.17 V and ends at 0.13 V. This isn't a 5 V swing from the last sample to the first, so I really doubt the AO slew rate is a factor here, unless it's performing wildly out of spec. The difference between the two channels is also only 0.1 V at every sample point, so the AI settling time shouldn't be a factor according to what you said. Can you think of any other explanation for the behavior I'm seeing?

 

screenshot.png

 

The below image shows the relationship between the AI Sample Clock (top) and the AI Convert Clock (bottom)...

 

In your diagram, the convert clock is a short pulse (very low duty cycle). It looks like two full periods of the convert clock are not completed within the period of the sample clock. Can I safely gather from this diagram that the only requirement on the convert clock rate is that two rising edges must appear within the period of the sample clock? Or do two full periods of the convert clock have to appear? (If the former, I can modify the attached subVI to be a little more aggressive with setting the convert clock rate, buying yet more time for the sample clock delay.)

 

On your current hardware, the solution to ensure enough settling time is to limit the rate that you update to something less than the maximum so you can add an initial delay to the samples and still sample both channels in time.

 

I'm doing that in the attached code, but at full rate, there is no extra time within the sample window to be able to delay the convert clock from the sample clock. If I set any value larger than 100 ns, I get an error. How can I run this device at full rate in a loopback?

Message 6 of 12

Here's the code I mentioned in the previous post.

Message 7 of 12

Hi Keith,

 

Looking at your screenshot, you still have a 100 ns delay from your sample clock to the first sample.  This is in fact the minimum delay.  So, 100 ns after the AO begins its update, you take the sample.  This is not enough time for the generated voltage to reach its value.  The slew rate is specified at 5 V/us, but you can't extrapolate this spec to smaller transitions (i.e. the voltage will not change 500 mV in 100 ns).  Slew rate, by definition, is the maximum slope of the output during a full-scale transition.  For smaller changes, the slope of the transition will likely never reach the specified slew rate.
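
To put a number on it: even at the full 5 V/us slew rate, 100 ns allows at most 0.5 V of travel, and a small step never approaches that maximum slope, so the output will have moved far less than that by the time the first conversion happens.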

 

To summarize:

 

channel 0 currently has 100 ns between the start of the update and the measurement.  This is not enough time to settle.


channel 1 currently has 5100 ns between the start of the update and the measurement.  This is significantly more time allowed to settle.

 

 

My drawing might not be the most clear.  Ideally, you want the maximum amount of delay between the sample clock and the first sample.  However, you also have to fit both channels' conversions in before the next sample clock edge occurs.  The convert clock does indeed have a very small duty cycle.  It doesn't need to complete a full period before the sample clock edge, but you do need to allow an extra 100 ns between the last convert clock edge and the next sample clock edge.
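
As a rough sketch of that bookkeeping in the C API (the same attributes the LabVIEW property nodes set; "task" is an already-configured AI task, and the 100 ns guard band is the figure I described above, not a published spec):

    #include <NIDAQmx.h>

    /* Push the first conversion as late as the timing budget allows:
       delay + span of the remaining conversions + 100 ns guard must
       fit within one sample clock period. */
    int32 configureMaxSettlingDelay(TaskHandle task, double sampleRate,
                                    int numChannels, double convRate)
    {
        double samplePeriod = 1.0 / sampleRate;
        double convertSpan  = (numChannels - 1) / convRate;
        double delay        = samplePeriod - convertSpan - 100e-9;

        if (delay < 100e-9)
            delay = 100e-9;                 /* hardware minimum delay */

        DAQmxSetAIConvRate(task, convRate);
        DAQmxSetDelayFromSampClkDelayUnits(task, DAQmx_Val_Seconds);
        return DAQmxSetDelayFromSampClkDelay(task, delay);
    }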

 

Try my attached VI in place of the one you were using to set the delay and convert clock and let me know what it looks like on your end.

 

 

Best Regards,

 

John Passiak
Message 8 of 12

Can you please save your VI as LV2009 for me?

Message 9 of 12

Hi Keith,

 

Here it is in 2009.  Sorry to have overlooked that you were using LV 2009 as mentioned in your first post.

 

 

Best Regards,

John Passiak
Message 10 of 12