LabVIEW reading incorrect values from DAQ system

I have a National Instruments PCI-6071E card being used to measure the output of a pressure sensor.  The output should be around 4 mV when no pressure is applied.  MAX reads this value correctly, as does DasyLab.  However, when I use the Analog Input VIs to read the sensor in LabVIEW, I'm seeing an output of 6-7 mV.  LabVIEW is set up to simply read the channels that were configured in MAX, so I don't understand why there would be any discrepancy.  I checked and rechecked to make sure there was no offset being applied.
 
Has anyone seen a problem like this?
Message 1 of 9

Hi Marc,

Which VI are you using? Are you using any of the example VIs shipped with LabVIEW?

And what high/low limit, range, and gain settings are you using for that channel?

Regards

Dev

Message 2 of 9
I'm using the Traditional NI-DAQ VIs.  It's just a simple AI Config -> AI Start -> AI Read.  As for the channel settings, I set them up in MAX as differential voltage channels from -10 V to +10 V and haven't changed anything about them.  I've changed the input limits going into AI Config in my VI, but that doesn't seem to make a difference.  But like I said, MAX and DasyLab both read the right values; it's only LabVIEW that gives me a slightly higher voltage.
Message 3 of 9
I did some experimenting with the scan rate and found that this is what's giving me the wrong readings.  This program is being used for shock tube testing, which requires very high time resolution (on the order of microseconds), so we're running at 300,000 Hz with 4 channels, and that's where I see the higher voltage.  However, if I lower the scan rate to about 1,000 Hz, it reads correctly.  I don't know if there's some parasitic capacitance on the channels that can't discharge fast enough at high scan rates, but the scan rate is the variable that seems to change the output.  By the time I get up to about 50,000 Hz, the voltage has increased by about 1 mV.  Is there any way to limit this effect, other than just offsetting the output?
Message 4 of 9
If you are switching channels, check the settling time specification of your DAQ board. Most multichannel DAQ boards have one A/D converter and a multiplexer which selects one of the inputs to convert at any given time. The capacitance at the output node of the multiplexer and the settling time of any amplifier following it (to provide gain) result in slower settling for switched systems than for those using only one channel.

Another issue can be unterminated inputs on unused channels. Any inputs which are not used should be connected to a voltage within the common mode range of the device, usually ground.
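
As a rough, hypothetical illustration of the settling budget described above (a Python sketch, assuming the converter simply steps through the channels round-robin at the scan rate times the number of channels; the rates are the ones mentioned in this thread):

    # Approximate time available for the input circuitry to settle on each
    # channel, assuming the convert clock runs at scan_rate * channel_count.
    CHANNELS = 4
    for scan_rate_hz in (1_000, 50_000, 300_000):
        aggregate_rate_hz = scan_rate_hz * CHANNELS   # conversions per second
        settle_time_us = 1e6 / aggregate_rate_hz      # microseconds per conversion
        print(f"{scan_rate_hz:>7} Hz per channel -> {settle_time_us:7.2f} us to settle")
    # ~250 us per conversion at 1 kHz, but under 1 us at 300 kHz, so a large
    # step between adjacent channels has far less time to die out.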

Lynn
Message 5 of 9

Hi Marc,

Is MAX sampling at 300 kHz (each channel) when you're watching the signal there?

I've observed this "parasitic capacitance" at the A/D converter, which (like Lynn said) is switched between analog inputs on E-Series devices.  It's easy to stumble across the demonstration: set up a multi-channel continuous acquisition (say 1 kHz) and leave one channel unconnected; the value measured on the "open" channel tracks the signal on the channel sampled previously.  The hard part is convincing your supervisor that this "coupling" is OK. ;)

I'd have suggested adding an "empty" channel tied to ground immediately prior to the signal being discussed, but it sounds like you wouldn't be able to sample more channels fast enough.  I just copied the following out of the E-Series Help text, in case it may be of use:

"When the multiplexer switches from channel 0 to channel 1, the input to the PGIA switches from 4 V to 1 mV. The approximately 4 V step from 4 V to 1 mV is 4,000% of the new full-scale range. For a 12-bit device to settle within 0.012% (120 ppm or 1/2 LSB) of the 100 mV full-scale range on channel 1, the input circuitry must settle to within 0.0003% (3 ppm or 1/80 LSB) of the 4 V step. Some devices can take as long as 100 µs for the circuitry to settle this much."

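For reference, the percentages in that passage can be reproduced with a few lines of arithmetic (a Python sketch; the 4 V step, 100 mV range, and 12-bit resolution are taken straight from the quote):

    # Reproducing the numbers from the E-Series Help passage quoted above.
    step_v = 4.0             # step seen by the PGIA when the mux switches channels
    full_scale_v = 0.100     # full-scale range selected on the new channel
    bits = 12
    step_pct_of_range = step_v / full_scale_v * 100    # 4000 % of the new range
    half_lsb_v = full_scale_v / 2 ** bits / 2          # ~12.2 uV, about 0.012 % of the range
    settling_ppm_of_step = half_lsb_v / step_v * 1e6   # ~3 ppm (0.0003 %) of the 4 V step
    print(step_pct_of_range, half_lsb_v * 1e6, settling_ppm_of_step)
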
Cheers!


When they give imbeciles handicap-parking, I won't have so far to walk!
Message 6 of 9

OK, so here's my next question.  We're looking at getting an S-Series card for this test so that we can get a higher sampling rate on each channel.  Since the channels are independent and sample simultaneously, the capacitance on the channels shouldn't be a problem, right?  I'm sure there is still some crosstalk, but I'm hoping this would solve the capacitance problem and also give us a higher sampling rate.

Thanks for the help.

Message 7 of 9
If you are using differential mode on the 6071, you should be using bias resistors from the analog inputs to ground; refer to the manual for details.
~~~~~~~~~~~~~~~~~~~~~~~~~~
"It’s the questions that drive us.”
~~~~~~~~~~~~~~~~~~~~~~~~~~
Message 8 of 9
Hello Marc A,

From what you're describing, it sounds like you are experiencing what we call "ghosting" or "crosstalk": rapidly multiplexing a wide range of analog input signals from multiple channels through a single ADC (analog-to-digital converter) does not give the amplifier on the card enough time to settle to the exact value, which produces bad readings.  The reason you are seeing different data in MAX and in LabVIEW may come down to which DAQ driver is used in each environment (NI-DAQmx or Traditional NI-DAQ).  One of the improvements built into the NI-DAQmx driver is that when you specify a sampling rate for a multichannel analog input task, the driver automatically spaces out the samples to give the amplifiers on the card the maximum time within that sampling rate to settle to the correct value.  The Traditional NI-DAQ driver, by contrast, takes the samples from all the channels as fast as possible, which leaves less settling time in a multichannel acquisition.  So the first thing I would recommend is rewriting your LabVIEW program with the NI-DAQmx VIs to take advantage of this improvement.
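
For anyone who prefers a text API to the graphical VIs, here is a minimal sketch of that kind of multichannel DAQmx task using the nidaqmx Python package; the device name "Dev1", channel list, rate, and sample count are placeholder assumptions, and exact parameter/enum names can vary between driver versions:

    # Minimal sketch: finite, multichannel, differential voltage acquisition
    # with the nidaqmx Python API (placeholder device, channels, and rates).
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, TerminalConfiguration

    RATE_HZ = 300_000             # per-channel scan rate discussed in this thread
    SAMPLES_PER_CHANNEL = 30_000  # 0.1 s of data per channel (arbitrary)

    with nidaqmx.Task() as task:
        # Four differential channels at +/-10 V, matching the MAX configuration.
        task.ai_channels.add_ai_voltage_chan(
            "Dev1/ai0:3",
            terminal_config=TerminalConfiguration.DIFF,
            min_val=-10.0,
            max_val=10.0,
        )
        # DAQmx spaces out the conversions within each scan, giving the
        # amplifier more settling time per channel (as described above).
        task.timing.cfg_samp_clk_timing(
            rate=RATE_HZ,
            sample_mode=AcquisitionType.FINITE,
            samps_per_chan=SAMPLES_PER_CHANNEL,
        )
        data = task.read(number_of_samples_per_channel=SAMPLES_PER_CHANNEL)
        # "data" is a list of four lists (one per channel), in volts.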

As for your question about S-Series cards, simultaneous sampling means each channel on these cards has its own dedicated ADC and amplifier.  Because samples are not multiplexed through a single amplifier, you should see far less ghosting in your signals.  Here is a link to an Application Note that discusses amplifier settling time in more detail:

Is Your Data Inaccurate Because of Instrumentation Amplifier Settling Time?

I hope this helps!

Regards,
Travis G.
Applications Engineering
National Instruments
www.ni.com/support
 
Message 9 of 9