01-09-2017 08:09 AM
Hello all,
One of my experiments has part of its inputs running on an NI 9215. I use the latest NI-DAQmx in my CVI code. The people running the acquisitions have noticed that each channel has its own offset, typically 0.08 V, 0.11 V, 0.08 V and 0.13 V for the 4 channels, which seems like a lot. As a test, I created and ran a new task from the NI MAX interface, and the offsets there are much lower, approximately 200 µV, -200 µV, 400 µV and -100 µV.
First question: why are the offsets different between my CVI program and the test task? In my code I use DAQmxCreateAIVoltageChan(Task, AIchan, ChanName, DAQmx_Val_Cfg_Default, -10, 10, DAQmx_Val_Volts, NULL) and little else to declare the tasks.
Now, I understand the need for regular calibration, and I saw the calibration method in the task in NI MAX, but I see no way to later use the calibration saved there in my program (is it possible?). All the NI-DAQmx calibration functions (DAQmxSetChanAttribute...) seem to replicate the whole manual calibration process, but in C. What's the point? It would make the software (and its use, since a trusted voltage source would have to be connected while the program runs...) a lot more complicated.
Third question: why is the only terminal configuration available Differential? Why can't I use single-ended or pseudodifferential?
Final question: while looking at calibration options, I found the function NISysCfgSelfCalibrateHardware(), but it belongs to nisyscfg.h. What is the link between NI-DAQmx and nisyscfg? Can I use it, or is it something different?
Thanks.
01-10-2017 05:12 PM
Hello Gdargaud,
To sum up your post, I'm going to focus on:
1. offset differences
2. calibration in MAX compared to C
3. differential terminals
4. NI-DAQmx vs nisyscfg
1:
Would you mind posting screenshots of the code as well as the results showing this behavior?
Additionally, does this offset difference between the code and MAX also appear in the Test Panels, or only in the task?
2:
The MAX calibration does bundle in a lot of functionality, but it goes through the same underlying process; it simply wraps everything up together with the task.
We don't have a way to automate that MAX functionality in C (well, I suppose you could create a program and call it, but that ends up being the same thing). The reason it's all done in C is to keep everything in a single language; the other option would be switching from MAX to C just for calibration.
Additionally, C by its very nature requires a lot more explicit calls.
3:
This module only allows 4 differential measurements; other models offer other terminal configurations.
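If it helps, you can also make the terminal configuration explicit instead of relying on DAQmx_Val_Cfg_Default. A minimal sketch (the physical channel "cDAQ1Mod1/ai0" is a placeholder):

#include <NIDAQmx.h>

/* Sketch: request differential terminals explicitly. On the NI 9215
   the default resolves to differential anyway, since that is the only
   configuration the module supports. */
static int32 CreateDiffChannel(TaskHandle task)
{
    return DAQmxCreateAIVoltageChan(task, "cDAQ1Mod1/ai0", "",
                                    DAQmx_Val_Diff, -10.0, 10.0,
                                    DAQmx_Val_Volts, NULL);
}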
4.
nisyscfg handles system configuration, while NI-DAQmx is the driver API. You use nisyscfg to get information about how your devices are set up, while NI-DAQmx actually communicates with the device and sends it messages.
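To make the split concrete, here is the kind of device-level query DAQmx itself offers (a sketch with error checking omitted; "cDAQ1Mod1" is a placeholder). Anything beyond this, such as enumerating and configuring the whole system the way MAX does, is nisyscfg territory:

#include <NIDAQmx.h>
#include <stdio.h>

/* Sketch: ask the DAQmx driver what it knows about one of its devices. */
static void PrintDeviceInfo(void)
{
    char   product[256] = "";
    uInt32 serial = 0;

    DAQmxGetDevProductType("cDAQ1Mod1", product, sizeof(product));
    DAQmxGetDevSerialNum("cDAQ1Mod1", &serial);
    printf("Module: %s, serial 0x%X\n", product, (unsigned)serial);
}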
01-11-2017 04:24 AM
Thanks for your reply,
The hardware is 1000 km from me, and I'm trying to see if I can trigger the same issue on simulated hardware, but I doubt it'll work...
1 - There's a lot of code. As a summary, I do:
DAQmxGetSystemInfoAttribute, DAQmxReserveNetworkDevice, DAQmxResetDevice, DAQmxSelfTestDevice, then DAQmxCreateTask, DAQmxCreateAIVoltageChan for each channel, DAQmxCfgSampClkTiming, DAQmxGetSampClkRate, DAQmxRegisterDoneEvent.
I start the task with: DAQmxSetReadAutoStart, DAQmxTaskControl and DAQmxStartTask.
And finally I read the input with DAQmxReadBinaryI16 and then DAQmxStopTask. Then I repeat the last two series of steps. (A condensed sketch is at the end of this post.)
2 - OK, I understand. Is there a CVI example that implements an AI calibration? Maybe I can use it as a source and add a rarely-used calibration sequence. The calibration can be reused across runs, right? It is not attached to a specific task (which would mean it's gone when you restart the program)? Right?
3 - OK.
4 - So NiSysCfg is more limited? A subset of NI-DAQmx? I saw the example SelfCalibrateAllDevices.prj, but even if I comment out the line NISysCfgSetFilterProperty(filter, NISysCfgFilterPropertyIsSimulated, NISysCfgBoolFalse), it still doesn't work on my simulated hardware (it says "Not Supported"). Would it work on my real hardware, or am I just wasting my time?
What is actually the difference between self-calibration and real calibration? I understand that the latter is done with a stabilized voltage source: you set it to (for instance) -10 V, 0 V and 10 V, and each time you tell the calibration routine what the input is, and I guess it then does some linearization and correction. But what is self-calibration?
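For reference, here is item 1 in condensed form, the sketch mentioned above (heavily simplified, error checking stripped, and the channel string "cDAQ1Mod1/ai0:3" is just a placeholder for my real channels):

#include <NIDAQmx.h>

#define NSAMP 1000

/* Condensed sketch of the acquisition sequence (simplified, placeholder names). */
static int AcquireOnce(void)
{
    TaskHandle task = 0;
    int16      raw[4 * NSAMP];
    int32      read = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "cDAQ1Mod1/ai0:3", "",
                             DAQmx_Val_Cfg_Default, -10.0, 10.0,
                             DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, NSAMP);
    DAQmxStartTask(task);
    /* DAQmxReadBinaryI16 returns raw, unscaled ADC codes; converting them
       to volts is up to the caller (e.g. with the coefficients from
       DAQmxGetAIDevScalingCoeff), unlike DAQmxReadAnalogF64 which
       returns calibrated volts. */
    DAQmxReadBinaryI16(task, NSAMP, 10.0, DAQmx_Val_GroupByChannel,
                       raw, 4 * NSAMP, &read, NULL);
    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return 0;
}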
Thank you.
01-12-2017 02:24 PM
Gdargaud,
1. I can definitely understand the distance issue. I'd also highly encourage creating a service request, because this is strange behavior that should be investigated further and possibly replicated so it can be documented. Also, you are correct: it will not work on simulated hardware.
Looking at all your calls, I don't see anything out of place. This will definitely be useful if you do make a service request, as it will allow the issue to be replicated.
2. To my knowledge, there are no C examples for calibration. That would be something you would have to develop based on the existing procedure.
3. N/A
4. The nisyscfg calls in the project will not work on simulated hardware because there isn't any meaningful configuration set up. When real hardware is present, nisyscfg will find the system configuration and do it for you.
Here is a good resource on the differences between Self and External Calibration.
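For devices that do support it, self-calibration is also exposed directly in the DAQmx C API, so you would not need nisyscfg for that part. A minimal sketch (the device name is whatever MAX shows, e.g. "cDAQ1Mod1"):

#include <NIDAQmx.h>
#include <stdio.h>

/* Sketch: run DAQmx self-calibration on a single device and report errors. */
static int32 SelfCalibrate(const char *device)
{
    int32 err = DAQmxSelfCal(device);
    if (DAQmxFailed(err)) {
        char msg[2048] = "";
        DAQmxGetExtendedErrorInfo(msg, sizeof(msg));
        fprintf(stderr, "Self-calibration of %s failed: %s\n", device, msg);
    }
    return err;
}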
To get more active engagement from the community, it is good to have one item per post that's related to the title.
Hope that helps.
01-13-2017 03:29 AM
Thanks again for your reply,
1 - So I did not forget some command to apply some kind of calibration. Hmmm, I was hoping that was it.
4 - Thanks for the article, it's exactly what I wanted to read. But we tried the self-calibration (from the example SelfCalibrateAllDevices.prj) on our real hardware (cDAQ 9215) and it says 'Not supported'. Yet the datasheet states that self-calibration should be supported on this device.
It also says that for normal calibration the parameters are stored in EEPROM, so I should be able to run a calibration procedure in an NI MAX task (for instance) and then use it in my program, right? Yet I don't see any function to retrieve the calibration parameters.
01-16-2017 02:54 PM
Hello gdargaud,
Just to clarify, is your goal simply to calibrate your device, or to create an application that can calibrate the device from your program? If your goal is just to calibrate the device, then this can be done in NI MAX. When doing calibration procedures for the NI 9215, it can also be helpful to reference this calibration link.
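Also, if you want to check programmatically whether a given module supports self-calibration at all (which would explain the 'Not supported' message you saw), the DAQmx C API has a query for it. A sketch ("cDAQ1Mod1" is a placeholder):

#include <NIDAQmx.h>
#include <stdio.h>

/* Sketch: query whether a device supports DAQmx self-calibration. */
static void CheckSelfCalSupport(const char *device)
{
    bool32 supported = 0;
    if (DAQmxGetSelfCalSupported(device, &supported) >= 0)
        printf("%s self-calibration supported: %s\n",
               device, supported ? "yes" : "no");
}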
I hope this helps!
01-17-2017 02:40 AM
Hello,
I would like to (A) understand why there is a discrepancy in the offset measured by CVI+NI-DAQmx vs NI MAX, and (B) bring the offset close to zero, which probably means calibrating the card every once in a while.
I don't need perfect calibration and (ideally) I'd like to avoid adding yet another complex wart to my program to perform the calibration. If an external process can do it, so much the better.
01-18-2017 11:23 AM
Gdargaud,
"I don't need perfect calibration and (ideally) I'd like to avoid adding yet another complex wart to my program to perform the calibration. If an external process can do it, so much the better."
From my understanding:
I think both are reasonable requests. Since there aren't any calibration examples in CVI, it's hard to pinpoint why the error is occurring. I would recommend trying it in LabVIEW if possible, so you at least have another data point. Additionally, if you have the hardware, you can try with another device. Our main goal is to see whether this issue is replicable in either different software or different hardware.
If you don't have hardware available, I'd recommend making a Service Request. That way, someone can try your code on our hardware, note any discrepancies that exist, and file a bug report if necessary.