12-04-2024 03:32 AM
#include <stdio.h>
#include <stdint.h>
#include <NIDAQmx.h>

#define DAQmxErrChk(functionCall) if( DAQmxFailed(error=(functionCall)) ) goto Error; else

int main(void)
{
    int32       error=0;
    TaskHandle  taskHandle=0;
    uInt32      data=1;            /* DAQmxWriteDigitalU32 expects uInt32 */
    char        errBuff[2048]={'\0'};
    int32       written;

    /*********************************************/
    // DAQmx Configure Code
    /*********************************************/
    DAQmxErrChk (DAQmxCreateTask("",&taskHandle));
    DAQmxErrChk (DAQmxCreateDOChan(taskHandle,"Dev1/port0/line24","",DAQmx_Val_ChanForAllLines));

    /*********************************************/
    // DAQmx Start Code
    /*********************************************/
    DAQmxErrChk (DAQmxStartTask(taskHandle));

    while(1) {
        /*********************************************/
        // DAQmx Write Code
        /*********************************************/
        DAQmxErrChk (DAQmxWriteDigitalU32(taskHandle,1,1,10,DAQmx_Val_GroupByChannel,&data,&written,NULL));
        data=~data;                /* toggle the line on every software write */
    }

Error:
    if( DAQmxFailed(error) )
        DAQmxGetExtendedErrorInfo(errBuff,2048);
    if( taskHandle!=0 ) {
        /*********************************************/
        // DAQmx Stop Code
        /*********************************************/
        DAQmxStopTask(taskHandle);
        DAQmxClearTask(taskHandle);
    }
    if( DAQmxFailed(error) )
        printf("DAQmx Error: %s\n",errBuff);
    printf("End of program, press Enter key to quit\n");
    getchar();
    return 0;
}
I'm running this code on Linux, Ubuntu 22.04 LTS.
When I toggle the data without any delay and plot the digital waveform on an oscilloscope, I find that the pulse width is around 3 microseconds, whereas it should be 0.1 microseconds according to the board spec of 10 MHz.
Please help me with this as soon as possible.
Regards
Uzumaki
12-04-2024 10:17 AM
You're using software timing to generate the pulse, so your pulse width will depend on code execution speed (including the many layers down below the top-level driver API).
You would need to set up a hardware-timed DO task which the 6259 doesn't support directly. It *does* support it indirectly using a method known as "correlated DIO" whereby the DO task must borrow a sample clock from some external source. I often used a counter pulse train task to generate such a clock for a correlated DIO task when I was using M-series 62xx devices.
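To make the "correlated DIO" idea concrete, here is a hedged, untested sketch in DAQmx C of the approach Kevin describes: a counter task generates the sample clock, and a buffered DO task is clocked from the counter's internal output. The device name "Dev1", the counter "ctr0", and the terminal name "/Dev1/Ctr0InternalOutput" are assumptions (check yours in NI MAX and the device routing table), and error handling via the DAQmxErrChk pattern above is omitted for brevity:

```c
/* Sketch (untested, hardware-dependent): correlated DIO on an M-series
 * device such as the 6259. A counter produces a 10 MHz pulse train that
 * the DO task borrows as its sample clock, so a two-sample high/low
 * pattern regenerates into a steady 0.1 us high / 0.1 us low waveform. */
#include <NIDAQmx.h>

int setup_correlated_do(void)
{
    TaskHandle clkTask = 0, doTask = 0;
    uInt32     pattern[2] = { 0xFFFFFFFF, 0x00000000 };  /* high, low */
    int32      written;

    /* Counter task: continuous 10 MHz pulse train, used only as a clock. */
    DAQmxCreateTask("", &clkTask);
    DAQmxCreateCOPulseChanFreq(clkTask, "Dev1/ctr0", "",
                               DAQmx_Val_Hz, DAQmx_Val_Low, 0.0,
                               10000000.0, 0.5);
    DAQmxCfgImplicitTiming(clkTask, DAQmx_Val_ContSamps, 1000);

    /* DO task: sample clock borrowed from the counter's internal output. */
    DAQmxCreateTask("", &doTask);
    DAQmxCreateDOChan(doTask, "Dev1/port0", "", DAQmx_Val_ChanForAllLines);
    DAQmxCfgSampClkTiming(doTask, "/Dev1/Ctr0InternalOutput", 10000000.0,
                          DAQmx_Val_Rising, DAQmx_Val_ContSamps, 2);

    /* Load the pattern, start the DO task first, then start the clock. */
    DAQmxWriteDigitalU32(doTask, 2, 0, 10.0, DAQmx_Val_GroupByChannel,
                         pattern, &written, NULL);
    DAQmxStartTask(doTask);
    DAQmxStartTask(clkTask);
    return 0;
}
```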
-Kevin P
12-05-2024 05:33 AM
Thanks for the reply, Kevin_Price.
"You're using software timing to generate the pulse, so your pulse width will depend on code execution speed (including the many layers down below the top-level driver API)."
Can you explain in more detail how I am using software timing? It seems practically impossible to generate a proper 3 microsecond square wave from the digital output port otherwise. Moreover, I'm not using any delay function, timer, or usleep to generate this 3 microseconds; it happens on its own.
12-05-2024 09:33 AM
Software timing means that you're making a software function call to the driver API to set the state high and then making another software function call to the driver API to set the state low. And so on. Each of those function calls takes time to execute -- there are many layers of sub-functions to work through below the top-level API function you call directly.
You need hardware-based timing, which involves "borrowing" a sample clock and writing a buffer of DO values to generate before starting the task.
If you merely want a constant 5 MHz pulse train, you can do that much more easily with a counter output task. That's one of the things they're *designed* for.
-Kevin P
12-06-2024 11:33 PM
Thanks for the reply, Kevin_Price.
"Software timing means that you're making a software function call to the driver API to set the state high and then making another software function call to the driver API to set the state low. And so on. Each of those function calls takes time to execute -- there are many layers of sub-functions to work through below the top-level API function you call directly."
Is there any way to find out how the driver API (i.e., the software function call) works internally? Typically, in a C program, if I write a 1 us delay toggling data from 0 to 1 and 1 to 0, and view the output on an oscilloscope, I find that the square pulse width varies and is not constant; but when I increase the delay between toggles to, say, 1 millisecond, I get a perfect square wave.
My question is what timer they have implemented such that it gives a perfect square wave in the microseconds range.
12-07-2024 09:31 AM
Is there any way to find out how the driver API (i.e., the software function call) works internally?
See C:\Users\Public\Documents\National Instruments\NI-DAQ\Documentation\NI-DAQmx C Reference Help
Typically, in a C program, if I write a 1 us delay toggling data from 0 to 1 and 1 to 0, and view the output on an oscilloscope, I find that the square pulse width varies and is not constant; but when I increase the delay between toggles to, say, 1 millisecond, I get a perfect square wave.
It depends on which API you use, but it is usually only accurate to 1 millisecond on Windows.
You should always use hardware timing for accuracy better than a millisecond. You can refer to Synchronize Correlated Digital Output with Analog Input Using NI-DAQmx and combine these examples:
C:\Users\Public\Documents\National Instruments\NI-DAQ\Examples\DAQmx ANSI C\Digital\Generate Values\Cont Write Dig Port-Ext Clk
C:\Users\Public\Documents\National Instruments\NI-DAQ\Examples\DAQmx ANSI C\Analog In\Measure Voltage\Cont Acq-Int Clk
12-08-2024 05:15 PM
@UzumakiNaruto wrote:
My question is what timer they have implemented such that it gives a perfect square wave in the microseconds range.
Who is "they"? It sounded like you were describing your own code.
When we talk about "hardware timing", we're talking about using the DAQ device's circuitry, including a built-in 80 MHz timebase clock, to regulate sample timing. Extremely precise because there's no OS or software involved to cause timing variations.
-Kevin P
12-13-2024 05:14 AM
Thanks for the reply, ZYOng.
"It depends on which API you use, but it is usually only accurate to 1 millisecond on Windows."
I'm using Linux, Ubuntu 22.04 LTS. Setting hardware timing aside, is there any software timing that gives microsecond delays?
12-13-2024 05:24 AM
Thanks for the reply, Kevin_Price.
Sorry for the delayed reply.
"Who is "they"? It sounded like you were describing your own code." Here I'm talking about the DAQmx C API, not my own code -- the example program for digital output.
"When we talk about "hardware timing", we're talking about using the DAQ device's circuitry, including a built-in 80 MHz timebase clock, to regulate sample timing. Extremely precise because there's no OS or software involved to cause timing variations."
In the above statement, are you saying that the digital output port example uses the built-in clock? If you look at the code in the very first post, I'm not using the built-in clock. So I wanted to understand how I am getting precisely 3 us high and low using the DAQmx C API, while when I implement a simple toggling function below about 50 us I find that the square wave is not constant; it varies when seen on an oscilloscope.
I just want to know whether it is using the OS or the built-in clock. If it is using the OS, how is it getting this 3 us so accurately? And since the board is 10 MHz, typically I should get 0.1 us data toggling.
12-13-2024 07:01 AM - edited 12-13-2024 07:02 AM
I only program LabVIEW & pretty much only under Windows, so I'm not personally familiar with any C examples or what can be expected from software timing under Linux. My general impression is that most any general-purpose multi-tasking OS is going to be prone to the same kind of software-timing limitations, but the specific "severity" may differ from one OS to the next.
I'm also confused about what you're saying & asking in your latest message. Let me try commenting in place as I quote it:
"When we talk about "hardware timing", we're talking about using the DAQ device's circuitry, including a built-in 80 MHz timebase clock, to regulate sample timing. Extremely precise because there's no OS or software involved to cause timing variations."
In the above statement, are you saying that the digital output port example uses the built-in clock?
I was only commenting on my (and ZYOng's) prior messages where both of us referred to the *need* for hardware timing to accomplish reliable microsecond-level precision. I wasn't trying to comment on any specific example code. And I *can't* comment on any C examples because I don't have any at hand.
If you look at the code in the very first post, I'm not using the built-in clock. So I wanted to understand how I am getting precisely 3 us high and low using the DAQmx C API, while when I implement a simple toggling function below about 50 us I find that the square wave is not constant; it varies when seen on an oscilloscope.
Again, sorry, I haven't seen the C example so I can't answer. I strongly *suspect* that the one which produces precise 3 us high and low times is using hardware timing, but I can only speculate.
I just want to know whether it is using the OS or the built-in clock. If it is using the OS, how is it getting this 3 us so accurately? And since the board is 10 MHz, typically I should get 0.1 us data toggling.
Again, the 10 MHz figure for the device is a spec that *only* applies to hardware-timed generation. NI never gives "specs" on software timing because it's system-dependent, not entirely predictable, and largely outside their control.
If you need precise timing down at the microsec or sub-microsec level, I'm sure you'll eventually reach the same conclusion that you *need* to configure the task for hardware timing. As I mentioned earlier in the thread, your somewhat older device only supports "correlated DO" hardware timing, i.e., you need to supply a clock from somewhere else for the DO task to "share".
But another thing I mentioned earlier is that you *might* be able to produce your precision-timed pulsing with a very simple counter task. If it's just going to toggle high and low at a constant rate, that's exactly what a counter can do *very* easily. If so, look for examples of counter output tasks.
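Since a constant-rate toggle is exactly what a counter output does, here is a hedged, untested sketch of that simpler approach in DAQmx C. The device name "Dev1" and counter "ctr0" are assumptions, the pulse appears on the counter's default output pin, and error handling is omitted for brevity:

```c
/* Sketch (untested, hardware-dependent): a bare counter-output task
 * generating a continuous 5 MHz, 50% duty pulse train. Timing is done
 * entirely by the device's counter hardware -- no software pacing. */
#include <stdio.h>
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateCOPulseChanFreq(task, "Dev1/ctr0", "",
                               DAQmx_Val_Hz,    /* frequency units  */
                               DAQmx_Val_Low,   /* idle state       */
                               0.0,             /* initial delay    */
                               5000000.0,       /* 5 MHz            */
                               0.5);            /* 50% duty cycle   */
    DAQmxCfgImplicitTiming(task, DAQmx_Val_ContSamps, 1000);
    DAQmxStartTask(task);

    printf("Generating pulse train, press Enter to stop\n");
    getchar();

    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return 0;
}
```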
-Kevin P