06-23-2010 03:41 PM
Hi everyone,
I am looking at the code for analog input and output in the example file. Why is the signal going to the Analog Output first divided by 10 and then multiplied by 32768, while the signal coming from the Analog Input is first divided by 32768 and then multiplied by 10? Is there a specific reason for this? Is it caused by the internal structure of the 7831R that I am using?
06-24-2010 02:18 PM
shuishen1983,
That is the scaling for the AI/AO on an R-Series FPGA card. With those FPGA cards you deal with the AI/AO at the hardware level, so you are reading/writing the actual ADC values. What I mean is that your AI has 16 bits of resolution spread across a +/- 10V range. To get a voltage reading in your program, you have to convert the raw ADC readings into meaningful values.
2 ^ 16 = 65536 ( 16 bits of resolution = 65536 values; those 65536 values span -10V to +10V )
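As a quick sanity check, here is that relationship as a small C sketch (the names are mine for illustration, not anything from NI's API); one count on that 16-bit converter works out to roughly 305 microvolts:

    #include <stdio.h>

    int main(void)
    {
        const double range_v = 20.0;    /* full span: -10 V to +10 V */
        const double counts  = 65536.0; /* 2^16 ADC codes */

        /* Prints ~0.000305, i.e. one count is roughly 305 microvolts */
        printf("volts per count: %.9f\n", range_v / counts);
        return 0;
    }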
Input Example:
The input voltage to the FPGA card is 5V, which will show up as 49152 on the ADC. To get that to a meaningful value, follow the math below.
49152 (Raw ADC Reading) - 32768 (Account for negative numbers) = 16384 (I believe the subtraction of 32768 is done in your FPGA code to account for negative numbers; that code isn't attached, so I'm not sure)
16384 (Scaled ADC Value) / 32768 (Counts per 10V) * 10 = 5V
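If it helps, here is that input math as a small C sketch. It assumes the raw value really is offset binary (unsigned, with 32768 meaning 0V), which is my guess about your FPGA code, not something I can confirm; the function name is hypothetical:

    #include <stdint.h>

    /* Convert an offset-binary 16-bit ADC code to volts (+/- 10 V range). */
    static double ai_raw_to_volts(uint16_t raw)
    {
        int32_t counts = (int32_t)raw - 32768;  /* account for negative numbers */
        return (double)counts / 32768.0 * 10.0; /* counts -> volts */
    }

    /* ai_raw_to_volts(49152) returns 5.0, matching the example above. */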
Output Example:
You want to output -5V
-5 / 10 * 32768 = -16384
-16384 + 32768 = 16384 (Again, I believe this negative conversion of adding 32768 is done in your FPGA code)
16384 / 65536 = 0.25
0.25 * ( 10 - (-10) ) - 10 = -5 V (The range is -10V to +10V, a full span of 20V, so we scale by 20 and then shift down by 10V to land back in the +/- 10V range)
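Same idea going the other way, again a sketch under the same offset-binary assumption (hypothetical helper, not NI's actual API, and with no clamping at full scale; this is just the arithmetic):

    #include <stdint.h>

    /* Convert a desired output voltage (+/- 10 V) to an offset-binary DAC code. */
    static uint16_t ao_volts_to_raw(double volts)
    {
        int32_t counts = (int32_t)(volts / 10.0 * 32768.0); /* -5 V -> -16384 */
        return (uint16_t)(counts + 32768);                  /* shift: -16384 -> 16384 */
    }

    /* ao_volts_to_raw(-5.0) returns 16384; reading that code back with the
       input math above, (16384 / 65536) * 20 - 10, gives -5 V again. */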
Let me know if this helps or not!
06-24-2010 02:22 PM
Thanks, Ben. That really helps.