Multifunction DAQ


Unable to acquire more than 1 MS/channel with PXI-6132

When I try to acquire analog input samples with a PXI-6132, it only works if the number of samples (per channel) is set to less than or equal to 1 M. When I go above 1 MS/ch, the AI task never generates a done event. However, if I set the data transfer mechanism to programmed IO (instead of DMA, which is what it defaults to) everything works, but the data transfer is quite slow. Is there a limitation on the number of AI samples that can be acquired using DMA?
 
Also, I noticed that, when I do a _pretriggered_ analog input acquisition (again with the PXI-6132), the attainable bandwidth of my buffered counter operations (PXI-6602, using DMA, and running concurrently with the AI operation) decreases considerably. The PXI-6132 has a 16 M FIFO and therefore shouldn't have to use any bus bandwidth (since I am normally acquiring less than 1 M samples). I believe that the NIDAQmx driver is reading the FIFO into memory as soon as the AI operation is done, instead of waiting for a read function to be called. Note that everything works fine if I increase the number of AI samples so that the AI operation completes _after_ the counter operation has completed. I don't think the problem is my application, because I have designed it so that the program waits for all tasks to complete before attempting to read any buffers. I tried setting the AI data transfer mechanism to programmed IO, but this makes it so that the AI operation doesn't work at all.
 
 
system summary:
 
PXI system with 1 PXI-6602, 2 PXI-6132 (16 M FIFO, 4 channels), 2 PXI-6133 (16 M FIFO, 4 channels)
 
OS: Windows XP
 
compiler: mingw C++ compiler
 
NIDAQ driver: NIDAQmx
Message 1 of 11
drotarj,

As you've probably seen in another thread, you can work around the problem where the done event doesn't fire when acquiring more than 1 million samples by explicitly setting the buffer size equal to the acquisition size.

In regards to the reference trigger problem, you are correct in your assumption.  For devices with large onboard memory (like the 6132 and 6133), where the reference triggered acquisition fits entirely within the onboard memory, the driver will acquire data until the acquisition completes and then begin transferring data to host memory as soon as the acquisition is done.  By default, the driver caps the buffer size at 1 million samples in this situation in an attempt to keep memory usage at a reasonable level.  This means the driver will prefetch up to 1 million samples per device once the acquisition completes.  Unfortunately, the 6602 doesn't have a lot of onboard memory, and its FIFO depth is effectively 2 or 3 samples.  This, along with the PCI bus contention caused by the 613x devices, is likely the cause of the errors you are seeing with your counter tasks.

Aside from placing your 6602 on a separate PCI bus segment (which I'm guessing isn't available on your current system architecture), the only solution I can think of is to manually configure the buffer size of the reference triggered tasks to something much smaller (say 1,000 samples or so).  This will minimize the amount of data sent across the PCI bus by your AI tasks until you make subsequent read calls on your AI tasks.  The tradeoff is that your read calls will likely take a little longer and require more CPU utilization, since you're streaming data in much smaller chunks.  If your acquisition size doesn't fit entirely within the onboard memory, this solution won't work, since the driver will revert to streaming all of the data back to the host while the acquisition is in progress.  Hopefully this information helps.
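The buffer-size workaround might look something like the following sketch in the DAQmx C API (the channel string, rate, and sample count are placeholders, not values from the original poster's program):

```c
#include <NIDAQmx.h>

#define NUM_SAMPS_PER_CHAN 2000000  /* > 1 MS/ch, the problematic case */

int32 configure_ai(TaskHandle *task)
{
    int32 err;
    if ((err = DAQmxCreateTask("", task))) return err;
    if ((err = DAQmxCreateAIVoltageChan(*task, "PXI1Slot2/ai0:3", "",
                   DAQmx_Val_Cfg_Default, -10.0, 10.0,
                   DAQmx_Val_Volts, NULL))) return err;
    if ((err = DAQmxCfgSampClkTiming(*task, "", 2500000.0,
                   DAQmx_Val_Rising, DAQmx_Val_FiniteSamps,
                   NUM_SAMPS_PER_CHAN))) return err;
    /* The workaround: explicitly size the host buffer to match the
       acquisition, instead of letting the driver cap it at 1 MS. */
    return DAQmxCfgInputBuffer(*task, NUM_SAMPS_PER_CHAN);
}
```

With the buffer sized explicitly, the driver no longer applies its 1 million sample cap and the done event should fire normally.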
Message 2 of 11
drotarj,

I noticed you posted to this thread:

http://forums.ni.com/ni/board/message?board.id=250&message.id=27244

I'm posting here so that others who see this thread will be able to locate the conversation in the other thread.  The engineers in R&D seemed to agree that this 1 MS limitation is not intentional.  It was reported to R&D (# 459EOPVF) for further investigation.  A possible workaround was suggested by DAQAroundTheClk in the other thread: manually set the size of the input buffer to the number of samples in your finite task.

I also wanted to know if you have been able to resolve the issues that you discussed earlier in this thread.  The transfer across the PCI bus is not controlled by the DAQmx Read function, but is done independently of the read command, as you have noticed.  This is because, by default, the data is transferred across the PCI bus using DMA (Direct Memory Access): the 6132 writes the data into the computer's memory independent of the CPU or other processes, which ensures that the processor and other resources are available for other work.  Unfortunately, this seems to be causing issues for your application because of the very small FIFO on the 6602.  At what rate is your counter task buffering?  Are you using multiple buffered counter operations, or only a single one?  Also, could you give me a better understanding of your overall application?  I can imagine some possible workarounds, but their feasibility depends on what you are trying to accomplish overall.

I look forward to hearing from you.

Regards,

Neil S.
Applications Engineer
National Instruments

Message 3 of 11
Neil,
 
Thank you for your reply.
 
My 1 MS/channel limitation issue has indeed been resolved. As suggested in another thread, I just manually set the buffer size to the number of samples I am acquiring. Thanks to everyone who replied on that issue.
 
Regarding my application, I am creating a data acquisition program that supports triggered analog input and counter input. The number of counters and the number of analog inputs used will vary, so I would like to find a workaround that doesn't depend too much on such details. That said, I will most likely use three buffered event-counting operations on one PXI-6602, plus a large number of analog inputs (I have 2 6132s and 2 6133s). Only some of the AI cards (probably 1 or 2) will use pretriggers, though. Per reddog's suggestion, I tried reducing the buffer size. Reducing the buffer size to 100 (it appears that you can't go below 66) allowed me to get the maximum rate from my counters and didn't slow down data retrieval all that much. However, when I acquire a large number of AI samples (and sometimes when acquiring only a small number), I get error -50410 ("There was no more space in the buffer when data was written. The oldest unread data in the buffer was lost as a result"). This error prevents me from getting all of the data. Also, I tried setting the transfer mechanism to programmed IO, and found that I could only read a number of samples equal to the number of posttrigger samples. Furthermore, the samples I did get were "old", corresponding, I suspect, to the first filling of the circular buffer.
 
After hearing the discussion on this topic, and after trying some suggestions, it is evident that I have some confusion as to the meaning of the term "buffer". I would have thought that, for my AI task, the buffer would just be a block of memory in the 6132 FIFO, which would have to be at least as big as the number of samples I am acquiring. However, it now seems that the buffer is a block of system RAM to which the FIFO data gets transferred. On the other hand, if I don't use a reference trigger, the situation seems to be different; the buffer does indeed seem to be a block of FIFO memory (although it is hard to tell one way or the other). Perhaps someone could clarify this for me. If possible, I would suggest that NI eventually add a value of DAQmx_Val_OnRequest to the data transfer request condition property, to allow the data transfers to take place only when requested.
 
Jason
 
 
Message 4 of 11
Jason,

Typically, when we refer to a buffer in our API, we are referring to the piece of memory in the computer's RAM that is allocated to store data from the data acquisition task.  While both the onboard FIFO and the computer's RAM could be considered buffers, setting the buffer size in DAQmx changes the amount of memory used in RAM and does not refer to the onboard memory FIFO.

The error -50410 indicates that your computer RAM buffer is overflowing.  This can be prevented by reducing the sampling rate (which does not seem appropriate in your application), increasing the buffer size (which seems not to be possible because of the previous issue with the counters), or reading the data from the buffer in bigger chunks and more often.  This last approach may be the most practical, but it will depend on the architecture of your application.  If you can reduce the execution time of the loop that reads from the AI buffer, the DAQmx Read commands can more effectively empty the buffer to create room for new samples.  Try to remove all processing from the loop that performs the DAQmx Read.
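A drain loop of that sort might look like the following sketch (the chunk size, the assumption of 4 channels, and the interleaved `DAQmx_Val_GroupByScanNumber` ordering are all placeholders; interleaved ordering simply lets each read append directly after the previous one):

```c
#include <NIDAQmx.h>
#include <stddef.h>

#define CHUNK   10000   /* samples per channel per read (assumed) */
#define NCHANS  4       /* assumed channel count                  */

/* Read the whole finite acquisition in CHUNK-sized pieces, doing
   no processing inside the loop so the host buffer keeps draining.
   'total' is the task's samples-per-channel count. */
int32 drain_ai(TaskHandle task, int16 *dest, uInt32 total)
{
    uInt32 got = 0;
    while (got < total) {
        int32 want = (total - got < CHUNK) ? (int32)(total - got) : CHUNK;
        int32 readPerChan = 0;
        int32 err = DAQmxReadBinaryI16(task, want, 10.0,
                        DAQmx_Val_GroupByScanNumber,
                        dest + (size_t)got * NCHANS,
                        (uInt32)want * NCHANS, &readPerChan, NULL);
        if (err) return err;          /* -50410 would surface here */
        got += (uInt32)readPerChan;
    }
    return 0;
}
```

All post-processing of `dest` would happen only after this loop returns.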

Another possible approach would be to try the various transfer request conditions to see if you can increase the size of the RAM buffer without affecting the performance of the counters.  Specifically, I recommend setting the transfer request condition to Onboard Memory More than Half Full or Onboard Memory Not Empty.
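In the C API, that combination might be sketched as follows (the channel string and buffer size are placeholders; `DAQmxSetAIDataXferReqCond` is the property accessor for the AI transfer request condition):

```c
#include <NIDAQmx.h>

/* Sketch: relax the transfer request condition so a larger host
   buffer can be tried without hurting the counters as much. */
int32 tune_transfers(TaskHandle task)
{
    /* Transfer only when the device FIFO is more than half full,
       so bus transactions are fewer and larger... */
    int32 err = DAQmxSetAIDataXferReqCond(task, "PXI1Slot2/ai0:3",
                    DAQmx_Val_OnBrdMemMoreThanHalfFull);
    if (err) return err;
    /* ...or DAQmx_Val_OnBrdMemNotEmpty to transfer as soon as any
       data is available.  Then try growing the host buffer again. */
    return DAQmxCfgInputBuffer(task, 1000000);
}
```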

Obviously, the ideal solution would be to have control over exactly when data is transferred across the PCI bus.  Unfortunately, this functionality does not currently exist.  I recommend filing a product suggestion that such functionality be added to a future version of DAQmx.  You can file a product suggestion at:

http://digital.ni.com/applications/psc.nsf/default?openform


Each product suggestion is reviewed by a member of the Research and Development staff.

If none of the workarounds solve the problem let me know and I can see if there may be some other way we can approach this problem.

Regards,

Neil S.
Applications Engineer
National Instruments
Message 5 of 11

Neil,

Thanks for the reply. I will file a change suggestion, as you suggested.
It appears that I also have some confusion as to how the read function operates. Regarding my application, I should point out that I am doing a reference triggered AI acquisition. Thus, I only read samples _after_ the acquisition is complete, so I wouldn't expect the sampling rate to matter. Say, for example, that I am acquiring 1 M samples. I can set the buffer size to 1000. After the acquisition completes (I get a done event, and all 1 M samples are in the FIFO), I invoke the read function once to read _all_ of the samples and everything seems to work (most of the time; occasionally I get the error). If I set the buffer size to 100, I almost always get an error. Keep in mind that there are actually three pieces of memory involved in this process: the device FIFO (1 MS, in this example), the NIDAQmx buffer (1000 or 100 S), and my own piece of memory for storing the samples (1 MS). The read function (DAQmxReadBinaryI16) stores the samples in my buffer. However, the process by which it does this is not clear to me. Does it go through the small NIDAQmx buffer? Does it read directly from the FIFO (I kind of doubt that it does)? Would it help if I broke up the read into several pieces?
Thanks.

Jason

Message 6 of 11
Jason,

Your approach of reducing the buffer size manually has some potential.  However, for this approach to work, you will need to change when you read samples out of DAQmx's memory buffer into your application.  By default, DAQmx Read starts at the first pre-trigger sample in the buffer.  Since the position of the first pre-trigger sample cannot be determined until the trigger has occurred, the read does not return until then.  This can easily cause buffer overflows.

Let's take an example.  Say we set the number of pre-trigger samples to 1000 and the number of overall samples to 2000.  Furthermore, say we manually set the DAQmx buffer size to 500 samples.  DAQmx will continue to transfer samples to the buffer in 500-sample chunks until the acquisition is complete.  Since you do not read from the buffer until all samples have been acquired, the buffer overflows.  The read returns 2000 samples, but if you look closely, the 2000 samples are actually the same 500 samples (the last set moved into the buffer) repeated 4 times.

To overcome this behavior, you can do more of the memory management yourself by reading out of the buffer before the trigger even occurs.  Since the card is continuously acquiring before the trigger occurs, these samples can be read into your program as they arrive.  This allows the driver to keep moving data through the buffer, and you will not need access to previous data because you will have already read it out.  It does mean you will potentially receive many more pre-trigger samples than you specify, but you can handle this in your application by managing your own circular buffer.  This lets you transfer the data in smaller chunks, allowing the buffered counter operations access to the PCI bus.
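The application-side circular buffer mentioned above might look like this minimal C sketch (the type and sizes are placeholders; real code would size the ring to the requested pre-trigger sample count times the channel count):

```c
#include <stdint.h>
#include <stddef.h>

/* A ring that keeps only the most recent 'capacity' samples while
   the device free-runs before the trigger arrives. */
typedef struct {
    int16_t *data;
    size_t   capacity;   /* samples the ring can hold    */
    size_t   head;       /* next write position          */
    size_t   count;      /* valid samples (<= capacity)  */
} Ring;

/* Append n samples, silently discarding the oldest on overflow. */
void ring_push(Ring *r, const int16_t *src, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        r->data[r->head] = src[i];
        r->head = (r->head + 1) % r->capacity;
    }
    r->count = (r->count + n > r->capacity) ? r->capacity : r->count + n;
}

/* Copy the ring's contents, oldest first, into dst; returns count. */
size_t ring_snapshot(const Ring *r, int16_t *dst)
{
    size_t start = (r->head + r->capacity - r->count) % r->capacity;
    for (size_t i = 0; i < r->count; ++i)
        dst[i] = r->data[(start + i) % r->capacity];
    return r->count;
}
```

Each pre-trigger read from DAQmx would be pushed into the ring; once the trigger fires, a snapshot yields the most recent pre-trigger samples in order.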

Attached is a simple example that uses this approach.  A few things to note:

1.  The size of the buffer must be small enough to give your buffered counter operations sufficient access to the PCI bus
2.  Since you are reading samples continuously into the DAQmx buffer, you will need a buffer large enough to avoid buffer overflow errors.
3.  You must read from the buffer in a loop until the trigger has occurred.

I think if you are able to balance the different parameters correctly you can get this to resolve the issues with your application.  Let me know if you have questions about taking this approach.

Regards,

Neil S.
Applications Engineer
National Instruments

Message 7 of 11
Jason,

I just looked back at your posts and realized that you are using C++, not LabVIEW.  Sorry.  The same approach, however, would still work.  I would recommend taking a look at the following example:

http://sine.ni.com/devzone/cda/epd/p/id/908


Specifically, take a look at 200307.zip.  This example has both a start and a reference trigger but could easily be modified to remove the start trigger.  The second modification you will need to make is to add the DAQmxCfgInputBuffer call to manually size the DAQmx input buffer.
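The two modifications together might be sketched as follows (the trigger source and counts are placeholders, not values from the linked example):

```c
#include <NIDAQmx.h>

int32 configure_ref_trig(TaskHandle task)
{
    /* Reference trigger only (no start trigger): the task starts
       free-running and stops the configured number of post-trigger
       samples after the trigger edge arrives. */
    int32 err = DAQmxCfgDigEdgeRefTrig(task, "/PXI1Slot2/PFI0",
                                       DAQmx_Val_Rising,
                                       1000 /* pre-trigger samples */);
    if (err) return err;
    /* Manually size the DAQmx host buffer (small, per the earlier
       discussion, to limit PCI traffic while the counters run). */
    return DAQmxCfgInputBuffer(task, 1000);
}
```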

Let me know if you have questions about modifying the example.

Regards,

Neil S.
Applications Engineer
National Instruments

Message 8 of 11
Jason,

Let me try to clarify some things in regards to the buffering and reference triggers.  First, the typical method for transferring data in the driver is to allocate a buffer in host memory and then transfer data between the host buffer and the onboard memory of the device.  When you read data through the C API, data is read from this buffer and copied into the array passed as a parameter to the read call.  When using a reference trigger, the typical method used for most devices is to allocate a buffer that is exactly pre-trigger + post-trigger samples big and to set the overwrite mode to allow overwriting of unread samples.  Data is then continuously written to the buffer in a circular fashion until the trigger occurs and the acquisition stops.  At this point, calling read through the API will begin returning data at the start of the pre-trigger data.  Using this method, data is continuously transferred across the bus your device is sitting on until the acquisition completes.

For devices that support the When Acquisition Complete data transfer request condition, the model changes slightly.  First, for this mode to work, the total number of pre-trigger + post-trigger samples must fit entirely within the onboard memory.  Second, instead of continuously transferring data across the bus until the acquisition completes, data is written in a circular fashion to the onboard memory until the acquisition completes.  At this point, data is automatically transferred to the host buffer by the driver.  Subsequent read calls will then still copy data from this buffer to the array passed into the read call.  Finally, the buffer settings used by the driver in this case are also a little different: allowing overwriting of unread samples in the buffer is no longer permitted, and the driver will cap the buffer size at 1 million samples unless the buffer size is explicitly set by the user.
This mode of data transfer is designed to minimize bus utilization and memory usage when acquiring numerous channels across a number of devices at very high rates, and it is the default chosen by the driver when using a reference trigger on devices that support it.  Since the 6132 and 6133 have a large amount of onboard memory, they support this mode of data transfer by default.  You can revert to the original method described above by changing the data transfer request condition to Onboard Memory Not Empty.  The other data transfer request conditions are generally there for interrupt data transfer and don't apply to DMA for many devices, since they would interfere with DMA bursting and more efficient usage of the bus.

With all of that said, I believe you are running into a couple of bugs with the driver.  The -50410 error appears to be the result of a race condition that can occur if the DMA process needs to stop and restart frequently while waiting for the user to read data from the host buffer.  Since you've intentionally set the buffer size small to reduce bus utilization while the counters are running, this is likely what you're hitting, since the DMA process is probably having to stop and restart numerous times in order to prevent overwriting of unread data.  A possible workaround (though admittedly an undesirable one) is to turn off multiprocessor/multithreading support in the BIOS and see if it eliminates the error.  I've also been able to reproduce the issue you reported using the Programmed I/O data transfer mechanism.  I'm not sure this was ever designed to work with a reference trigger, but it makes sense for your use case in this situation.  I'll have to look into this further to see if it's really a bug or a missed error condition that the driver didn't report.  Outside of this, I don't know if I have any more suggestions for you to try in order to get both the counters and AI channels running at the same time at the rates you desire.  I'll post back later if I think of anything else or find any other workarounds.
Message 9 of 11

reddog,

 

You mentioned PCI bus contention caused by 613x devices.  Is this documented somewhere? I have two 6132s on order and am modifying my program to use them (previously used 6120s).  Am I going to run into problems?  This is a high-speed acquisition program (>20MB/s throughput).  I don't have much time to get the system running once I get the cards, so I'd like to know if I'm in for some 'weird behavior'.

 

I'm hoping you're just referring to the high bus bandwidth required by high-speed devices, and not something abnormal. 

Message Edited by wired on 09-03-2008 08:33 AM
Message 10 of 11