01-06-2012 05:44 AM
hi,
i am developing a c++ application that uses an analog voltage input; i develop against a simulated usb-6210 device.
on a slow computer i get a 200279 buffer error, for which i have already found a lot of useful information. i am trying to understand the behaviour of the buffer by comparing a slow and a fast machine. please see the following image:
i query the number of available samples in the buffer every second with DAQmxGetReadAvailSampPerChan. the exact same version of my app runs on both computers.
my questions are:
a) how come the buffer behaves this way in the upper image? a 200279 never occurs on the fast computer.
b) in the lower diagram you can see the available samples building up, although they return to zero every couple of seconds. why is that, and can i somehow avoid it?
thanx!
Solved! Go to Solution.
01-06-2012 08:00 AM
mael15,
Can you give details on how you're reading the data, and what you're doing with the data once acquired? I assume you're running a continuous acquisition, and that you're reading/processing data in a loop. In this scenario, your USB-6210 will stream data to a buffer allocated by DAQmx at a fairly constant rate. The 'read' loop in your program will remove data from this buffer. From the look of your graph, it appears that as time goes on your read loop slows down to the point where it cannot pull data from the buffer as fast as the device is streaming data into the buffer.

Why this happens is hard to say without seeing the code you're running. Have you benchmarked the 'read' loop? Is there a particular operation that begins to take longer as you acquire more data (memory allocation / file access)? If you let the 'fast' machine run longer, does it eventually exhibit the same behavior?
Dan
01-06-2012 10:50 AM
hi dan,
thanx for your answer. yes, i use continuous data acquisition and then i draw the data on the screen. i have solved the crash on the slower machine by implementing a ring buffer. i used to have a buffer that was constantly growing, which made additional memory allocation necessary. the slower machine also has less memory than the fast one, so i guess it slowed down as less and less memory was freely available.
i suspect the same thing would have happened on the fast computer, but it might have taken hours (4 times more memory), so i do not know for sure.
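A ring buffer like the one described might look roughly like this (a minimal sketch with illustrative names, not the original poster's actual code): a fixed-capacity store where the newest sample overwrites the oldest once full, so memory use stays constant no matter how long the acquisition runs.

```cpp
#include <cstddef>
#include <vector>

// Fixed-capacity circular buffer for acquired samples. Unlike an
// ever-growing std::vector, it never reallocates after construction.
class RingBuffer {
public:
    explicit RingBuffer(std::size_t capacity)
        : data_(capacity), head_(0), size_(0) {}

    // Append one sample, overwriting the oldest when full.
    void push(double sample) {
        data_[head_] = sample;
        head_ = (head_ + 1) % data_.size();
        if (size_ < data_.size()) ++size_;
    }

    // Copy the stored samples out in chronological order
    // (e.g. for drawing them on screen).
    std::vector<double> snapshot() const {
        std::vector<double> out;
        out.reserve(size_);
        std::size_t start = (head_ + data_.size() - size_) % data_.size();
        for (std::size_t i = 0; i < size_; ++i)
            out.push_back(data_[(start + i) % data_.size()]);
        return out;
    }

    std::size_t size() const { return size_; }

private:
    std::vector<double> data_;
    std::size_t head_;  // index where the next sample is written
    std::size_t size_;  // number of valid samples (<= capacity)
};
```

With capacity 4, pushing the samples 0 through 9 leaves exactly the four newest values, 6, 7, 8, 9, in the buffer; the allocation happens once, up front, so the slow machine's free memory never shrinks over time.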
the spikes in the upper graph every 20 seconds are still a mystery but not that important.
greets,
mael15