03-31-2016 05:09 PM
Hi,
I have run into an issue involving the getting and setting of the sample clock rate. My workflow is as follows:
1) Create a task
2) Create an AI Voltage channel
3) Set the rate to 10000 using DAQmxCfgSampClkTiming
4) Get the rate using DAQmxGetSampClkRate
5) Print the rate with the following command; the value that comes back is slightly different (10000.00000885847800000000000000000000):
printf("Rate = %.32f\n",rate);
I have attached sample C code that demonstrates this.
Is this a bug? If it is, can you provide a CAR number so that we can track it?
Thanks,
Varun Hariharan
MathWorks
04-01-2016 10:12 AM
What device are you using? Devices have a certain amount of timing error, and your code is printing enough digits of precision to expose that error in the clock. If you run the code again, does that number change?
04-01-2016 12:18 PM
https://en.wikipedia.org/wiki/Floating_point
You might want to spend a bit of time reading this. You're displaying the number in floating-point representation. If we compute 0.1 + 0.2 in floating point, we don't get exactly 0.3, simply because of the way floating point is built. This is why we don't compare two floating-point values to see if they're equal. You've introduced quantization error purely by using floating point. Because of this, those trailing digits tell us little, if anything, about the rate the device is actually sampling at.
Is this a CAR? Of course not. It's a fundamental misunderstanding of the datatypes you're using.