01-12-2009 01:04 PM - edited 01-12-2009 01:08 PM
How do you have the channels wired? My guess is this has something to do with terminal configuration. With the default configuration, the first 8 channels (ai0 .. ai7) are differential, while the next 8 (ai8 .. ai15) are RSE. In differential mode the 6220 measures the voltage difference between ai0 and ai8, ai1 and ai9, ai2 and ai10, ..., ai7 and ai15. If you have connected signals to ai8 through ai15, then you likely want RSE (referenced single-ended) or NRSE (non-referenced single-ended).
RSE mode measures the difference between the channel (ai0, for example) and AIGND; AIGND is connected to chassis ground (i.e. the ground of your computer). NRSE mode measures the difference between the channel (ai0, for example) and AISENSE; AISENSE is not referenced to chassis ground.
If you have wired AOGND from the 6703 to AISENSE, choose NRSE; if you wired AOGND to AIGND, choose RSE; and if you wired AOGND to ai8 through ai15, then choose the differential terminal configuration.
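For example, selecting the terminal configuration explicitly when you create the channel looks like this (a minimal sketch; "Dev1" and the channel range are placeholders for your own device and wiring):

/* Minimal sketch: create an AI task with an explicit terminal
   configuration instead of DAQmx_Val_Cfg_Default. "Dev1" and the
   channel range are placeholders for your own device and wiring. */
#include <NIDAQmx.h>

int32 createNrseTask(TaskHandle *task)
{
    int32 err = DAQmxCreateTask("", task);
    if (err < 0) return err;

    /* Use DAQmx_Val_RSE, DAQmx_Val_NRSE, or DAQmx_Val_Diff here,
       matching how the signals are physically wired. */
    return DAQmxCreateAIVoltageChan(*task, "Dev1/ai0:7", "",
                                    DAQmx_Val_NRSE,
                                    -10.0, 10.0, DAQmx_Val_Volts, NULL);
}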
Regards,
Neil S.
01-23-2009 10:08 AM
Hello, I have another problem with the DAQmx functions.
I wrote a C program on Linux to test the read and write functions. I just want to know how to use these functions in my code. My AI and AO channels are configured for -10 V to +10 V. So what value should I write to get 8 V on the output (for example)?
Let me explain what I do (please see my C program attached to this post).
My four outputs are connected to my four inputs, so if I write a value on an output, I should be able to read the same value on the corresponding input.
To test that, I tried 4 functions: 2 in float and 2 in binary.
The Read/WriteAnalogF64 functions are OK: I can read what I write.
But it's more difficult in binary:
I think DAQmxWriteBinaryI16 is wrong: I can't read back what I write.
I tested writing in float and reading in binary:
When I write 10 V in float, I read 32768 in binary, and -10 V ==> -32768, so there is no problem here.
But if I write in binary and read in float, the results are wrong:
write -32768 ==> 0.0023 V
-16000 ==> 5 V
-100 ==> 10 V
100 ==> -10 V
16000 ==> -5 V
32000 ==> -0.0023 V
There is an offset between the read and write scales.
So I don't understand how it works.
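To make the setup concrete, here is a stripped-down sketch of the float loopback test I describe above (this is not my attached program; "Dev1" and the channels are just examples, and I use the scalar read/write variants for brevity):

/* Stripped-down sketch of the single-sample float loopback test:
   write a voltage on AO, read it back on the AI it is wired to.
   "Dev1" is an example device name. */
#include <NIDAQmx.h>
#include <stdio.h>

int main(void)
{
    TaskHandle ao = 0, ai = 0;
    float64 wrote = 8.0, read = 0.0;

    DAQmxCreateTask("", &ao);
    DAQmxCreateAOVoltageChan(ao, "Dev1/ao0", "", -10.0, 10.0,
                             DAQmx_Val_Volts, NULL);
    DAQmxCreateTask("", &ai);
    DAQmxCreateAIVoltageChan(ai, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);

    DAQmxWriteAnalogScalarF64(ao, 1, 10.0, wrote, NULL);  /* autostart */
    DAQmxReadAnalogScalarF64(ai, 10.0, &read, NULL);
    printf("wrote %.3f V, read %.3f V\n", wrote, read);

    DAQmxClearTask(ao);
    DAQmxClearTask(ai);
    return 0;
}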
Thanks for your help.
Best regards
01-23-2009 02:55 PM
NI-DAQmx performs AO and AI calibration in software. 10 V does not necessarily equal the highest binary value, since offset and gain errors in the hardware are corrected for in software. Typically I recommend sticking with the floating-point write unless you have a good reason not to. For more information on scaling you can take a look at a few different resources. I provided an example of doing binary reads and scaling in C on another forum post:
http://forums.ni.com/ni/attachments/ni/250/39680/1/main.cpp
The easiest way to understand how DAQmx converts a voltage into binary is to query the device scaling coefficients. These are available with the following function:
Get/Set/Reset AO_DevScalingCoeff
int32 __CFUNC DAQmxGetAODevScalingCoeff(TaskHandle taskHandle, const char channel[], float64 *data, uInt32 arraySizeInSamples);
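For example, here is a minimal sketch of querying the coefficients and computing which raw code corresponds to 8 V (this assumes the usual linear volts-to-code equation for AO; "Dev1/ao0" is a placeholder for your channel):

/* Sketch: query the AO device scaling coefficients and compute the
   raw DAC code NI-DAQmx would write for a given voltage. Assumes the
   common linear case (code = c0 + c1 * volts); "Dev1/ao0" is a
   placeholder for your channel. */
#include <NIDAQmx.h>
#include <stdio.h>

int main(void)
{
    TaskHandle task = 0;
    float64 c[2] = {0.0, 0.0};

    DAQmxCreateTask("", &task);
    DAQmxCreateAOVoltageChan(task, "Dev1/ao0", "", -10.0, 10.0,
                             DAQmx_Val_Volts, NULL);
    DAQmxGetAODevScalingCoeff(task, "Dev1/ao0", c, 2);

    float64 volts = 8.0;
    float64 code  = c[0] + c[1] * volts;   /* volts -> native code */
    printf("%.1f V maps to raw code %.0f\n", volts, code);

    DAQmxClearTask(task);
    return 0;
}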
Regards,
Neil S.
06-20-2018 06:17 PM
OK, it looks like your gcc version string has been French-localized and thus doesn't match the string stored in the kernel.
*ROFL* Parsing human-readable output w/o setting the proper LC_* env vars ... a classic beginner mistake.
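For what it's worth, the fix is just to force the C locale before spawning any tool whose output you parse, e.g. (a sketch in C; a build script would do the equivalent in shell with LC_ALL=C):

/* Sketch: force the C locale so tool output is untranslated and
   stable before parsing it. The shell equivalent is
   LC_ALL=C gcc --version. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    setenv("LC_ALL", "C", 1);            /* overrides LANG/LC_MESSAGES */
    FILE *p = popen("gcc --version", "r");
    if (!p)
        return 1;

    char line[256];
    if (fgets(line, sizeof line, p))
        printf("first line: %s", line);
    pclose(p);
    return 0;
}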