01-18-2007 09:27 AM
Neil,
Thanks for the reply. I will file a change suggestion, as you suggested.
It appears that I also have some confusion as to how the read function operates. Regarding my application, I should point out that I am doing a reference-triggered AI acquisition, so I only read samples _after_ the acquisition is complete and wouldn't expect the sampling rate to matter.

Say, for example, that I am acquiring 1 M samples and set the buffer size to 1000. After the acquisition completes (I get a done event, and all 1 M samples are in the FIFO), I invoke the read function once to read _all_ of the samples, and everything seems to work most of the time; occasionally I get the error. If I set the buffer size to 100, I almost always get an error.

Keep in mind that there are actually three pieces of memory involved in this process: the device FIFO (1 MS, in this example), the NIDAQmx buffer (1000 or 100 S), and my own piece of memory for storing the samples (1 MS). The read function (DAQmxReadBinaryI16) stores the samples in my buffer. However, the process by which it does this is not clear to me. Does it go through the small NIDAQmx buffer? Does it read directly from the FIFO (I kind of doubt that it does)? Would it help if I broke up the read into several pieces?
Thanks.
Jason
09-03-2008 08:29 AM - edited 09-03-2008 08:33 AM
reddog wrote:
drotarj,
As you've probably seen in another thread, you can work around the problem where the done event doesn't fire when acquiring more than 1 million samples by explicitly setting the buffer size equal to the acquisition size.
Regarding the reference trigger problem, you are correct in your assumption. For devices with large onboard memory (like the 6132 and 6133), where a reference-triggered acquisition fits entirely within the onboard memory, the driver will acquire data until the acquisition completes and then begin transferring data to host memory as soon as the acquisition is done. By default, the driver caps the buffer size at 1 million samples in this situation in an attempt to keep memory usage at a reasonable level. This means the driver will prefetch up to 1 million samples per device once the acquisition completes.

Unfortunately, the 6602 doesn't have a lot of onboard memory; its FIFO depth is effectively 2 or 3 samples. This, along with the PCI bus contention caused by the 613x devices, is likely the cause of the errors you are seeing with your counter tasks. Aside from placing your 6602 on a separate PCI bus segment (which I'm guessing isn't available on your current system architecture), the only solution I can think of is to manually configure the buffer size of the reference-triggered tasks to something much smaller (say, 1,000 samples or so). This will minimize the amount of data sent across the PCI bus by your AI tasks until you make subsequent read calls on them. The tradeoff is that your read calls will likely take a little longer and require more CPU utilization, since you're streaming data in much smaller chunks.

If your acquisition size doesn't fit entirely within the onboard memory, this solution won't work, since the driver will revert to streaming all of the data back to the host while the acquisition is in progress. Hopefully this information helps.
reddog,
You mentioned PCI bus contention caused by 613x devices. Is this documented somewhere? I have two 6132s on order and am modifying my program to use them (it previously used 6120s). Am I going to run into problems? This is a high-speed acquisition program (>20 MB/s throughput). I don't have much time to get the system running once I get the cards, so I'd like to know if I'm in for some 'weird behavior'.
I'm hoping you're just referring to the high bus bandwidth required by high-speed devices, and not something abnormal.