Multifunction DAQ


Buffer size limitations in finite acquisition - best way to do very long acquisitions

Solved!

Hello,

 

While checking how finite acquisition scales to very long acquisitions, I got an error message that DAQmx_Buf_Input_BufSize was too big and should be < 0xFFFFFFFF or 0x1FFFFFFF.

I am using cfg_samp_clk_timing(..., samps_per_chan) (i.e. samp_quant_samp_per_chan) to configure the duration of the test. However, I did not expect it to also allocate a buffer of the same size. This clearly limits the final acquisition duration based on RAM rather than storage size/speed (granted, this mostly bites at high sampling rates, and long acquisitions would probably run at lower rates). Is my understanding correct?
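
Here is a stripped-down sketch of what I am doing (nidaqmx Python; the device name "Dev1", the rate, and the duration are placeholders):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 100_000                # Hz -- placeholder
DURATION_S = 3600             # 1 hour -- placeholder
SAMPS = RATE * DURATION_S     # total samples per channel

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    # In FINITE mode, samps_per_chan sets both the acquisition length
    # AND the size of the host buffer DAQmx allocates -- large values
    # can hit the 0x1FFFFFFF / 0xFFFFFFFF limit from the error above.
    task.timing.cfg_samp_clk_timing(
        RATE,
        sample_mode=AcquisitionType.FINITE,
        samps_per_chan=SAMPS,
    )
    data = task.read(
        number_of_samples_per_channel=SAMPS,
        timeout=DURATION_S + 10,   # default 10 s timeout is far too short here
    )
```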

 

I do not need very high "acquisition time" accuracy. If I want to scale in time, should I then use continuous mode only and stop based on a timer, or on a callback counting the acquired samples? In that case, according to the doc, samp_quant_samp_per_chan is used to determine the buffer size, so it can be set to something much smaller (of course, saving to storage must be fast enough that this buffer can be emptied in time).

But then, in the finite acquisition case, one could also say that this parameter determines the buffer size, so I am not sure what the doc meant.

 

Message 1 of 5
Solution
Accepted by topic author ft_06

For finite acquisition, the total # samples you plan to acquire will *also* set the buffer size.  DAQmx's assumptions for finite acquisition are to fill every slot in the buffer exactly once and never wrap around.  Finite acquisitions most typically read all samples at once after acquisition is completed, so there needs to be a buffer slot for every one of them.

 

So yes, if you don't need extreme precision in your total # samples or acquisition time, you can configure instead for continuous acquisition.  In this mode, DAQmx will (if necessary) make a decent guess at a buffer size and keep wrapping around to refill it.  You as the app programmer must "stay ahead" of this process by reading samples out of the buffer as the acquisition continues, opening up buffer slots for new samples to fill.  And so on.
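
In nidaqmx Python terms, a minimal sketch of that pattern might look like this (device name, rate, and the stop-on-count logic are illustrative; the every-N-samples event is just one common way to drive the reads):

```python
import threading
import nidaqmx
from nidaqmx.constants import AcquisitionType, READ_ALL_AVAILABLE

RATE = 100_000              # Hz -- illustrative
READ_CHUNK = RATE // 10     # fire the callback every ~0.1 s worth of samples
TARGET = 100 * RATE         # stop after roughly 100 s (approximate on purpose)

task = nidaqmx.Task()
task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
# In CONTINUOUS mode, samps_per_chan is only a buffer-size hint,
# not an acquisition length; here it asks for a ~10 s buffer.
task.timing.cfg_samp_clk_timing(
    RATE, sample_mode=AcquisitionType.CONTINUOUS, samps_per_chan=10 * RATE
)

total = 0
done = threading.Event()

def on_samples(task_handle, event_type, num_samples, callback_data):
    global total
    # Drain everything accumulated so far to stay ahead of the writer.
    data = task.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)
    total += len(data)        # single channel: len() == samples read
    if total >= TARGET:
        done.set()            # let the main thread stop the task
    return 0

task.register_every_n_samples_acquired_into_buffer_event(READ_CHUNK, on_samples)
task.start()
done.wait()
task.stop()
task.close()
```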

 

With many devices, there's a "sneaky" way to get both a huge and precise # of total samples by declaring & servicing a continuous acquisition while generating a finite sample clock with a counter.  See this recent thread where I described it in more detail.   (Note: though the linked thread is about generating output, most of the same ideas carry over to capturing input.)
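
A rough sketch of the idea in nidaqmx Python (illustrative only, not the exact code from the linked thread; counter, terminal, and device names will vary with your hardware):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 100_000
TOTAL = 500_000_000   # huge but exact total sample count -- illustrative

# Counter task: a FINITE pulse train that will serve as the AI sample clock.
# The big count lives in the counter hardware, not in a host buffer.
clk = nidaqmx.Task()
clk.co_channels.add_co_pulse_chan_freq("Dev1/ctr0", freq=RATE)
clk.timing.cfg_implicit_timing(
    sample_mode=AcquisitionType.FINITE, samps_per_chan=TOTAL
)

# AI task: configured CONTINUOUS (so only a modest buffer is allocated),
# but clocked from the counter, so sampling stops after exactly TOTAL edges.
ai = nidaqmx.Task()
ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
ai.timing.cfg_samp_clk_timing(
    RATE,
    source="/Dev1/Ctr0InternalOutput",
    sample_mode=AcquisitionType.CONTINUOUS,
    samps_per_chan=10 * RATE,   # buffer hint only
)

ai.start()   # start the AI task first so it doesn't miss clock edges
clk.start()
# ...then service `ai` with regular reads until TOTAL samples are in hand.
```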

 

 

-Kevin P

Message 2 of 5

Hello,

 

I will mark your post as the solution. Luckily, I don't need high accuracy, and stopping on a timer has been tested as sufficient.

 

Do you have more insight into the real-time constraints of continuous mode? I tried an acquisition at a 100 kHz sampling rate and found it prone to failing under any perturbation (if I prevent perturbations, it can run for tens of minutes).

For example, it was failing after 5 or 6 s, even when I increased samp_quant_samp_per_chan to 8 s worth of samples, then 16 s. Thinking it through, I can identify these real-time constraints:

- the 2K-sample onboard buffer, which has to be dumped into the buffer in RAM

- the 8 or 16 s buffer of samples (per channel) in RAM, which is filled from the device's onboard buffer and emptied within my callbacks

 

My callbacks amount to roughly 4 to 5% CPU load (and again: I set 8 or 16 s of buffer and it fails after 5 to 6 s), so I have the feeling that it is the emptying of the onboard buffer that fails. What can we check for that?

I will also try not reading the data in my callback, to see whether that fills up the RAM buffer.
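
One diagnostic I plan to try, assuming the nidaqmx in_stream properties behave as documented: log the host-buffer backlog from inside the callback. If avail_samp_per_chan climbs toward the buffer size, my reads are falling behind; if the error fires while the backlog is still small, the device-to-host transfer is the more likely bottleneck.

```python
# Sketch: inside the every-N-samples callback, with `task` configured as above.
backlog = task.in_stream.avail_samp_per_chan   # samples waiting in the RAM buffer
buf_size = task.in_stream.input_buf_size       # host buffer size, per channel
print(f"backlog: {backlog} / {buf_size} samples per channel")
```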

Message 3 of 5

A longstanding general rule of thumb is that most continuous acquisitions behave well when you continually request ~0.1 sec worth of samples per read.  That'll fire your callbacks for reading 10 times a second.

 

With app memory being so plentiful these days, I habitually make my buffers at least 5-10 seconds worth of samples even though 1 second would probably be plenty.  The key is to make the buffer considerably bigger than the # samples you read and to make sure that each read is pretty much retrieving all the accumulated samples.
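
At the 100 kHz rate discussed above, that translates to something like this sketch (reusing the task and on_samples names from my earlier sketch; the numbers are illustrative):

```python
# Rule of thumb applied at 100 kHz: ~10 s buffer, ~0.1 s reads that
# drain everything available each time the callback fires.
RATE = 100_000
task.timing.cfg_samp_clk_timing(
    RATE, sample_mode=AcquisitionType.CONTINUOUS, samps_per_chan=10 * RATE
)
task.register_every_n_samples_acquired_into_buffer_event(RATE // 10, on_samples)
# ...and in on_samples:
#     task.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)
```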

 

 

-Kevin P

Message 4 of 5

Thanks.

I am used to these tradeoffs on embedded platforms (I worked on 2.75G modems, audio playback, ...), yet I still find the NI-DAQmx case quite empirical, given these observations:

- I can set a huge buffer size, e.g. 1000x instead of 4x or 8x the read size; it still fails after 5 s at 100 kHz, and the buffer is nowhere near full

 

- if my callbacks are triggered 2x faster, things improve a lot, as does processing less in the callback (even though processing was already only about 4% CPU)

 

- the API does not seem to care whether I call task.read() or not. Simply running the callback and returning from it appears to advance the read/write pointers and thus free space for the next samples.

 

That is why I am a bit confused about which buffer gets copied into which buffer, and which buffer the read actually comes from.

In the end, your rule of thumb does not work for me (even though it is exactly the typical rule of thumb for this kind of use case) if I stick to a callback every 1 s.

My assumption is that the copy from the onboard buffer to the RAM buffer is hidden, and that only the buffer size and callback rate should be linked. So I should never fail at 5 s when my buffer holds something like 1000 s 😉

 

But I will tune my callback to a higher rate, since it improves the situation, even if I do not understand the rationale.

 

 

 

Message 5 of 5