
Producer Consumer Pauses at high rates

Solved!

Code attached.

 

I'm measuring data from a cDAQ-9188 using 5x NI-9222 BNC modules. Each 9222 module has 4 channels and can acquire at up to 500 kS/s/channel. The system is connected via Gigabit Ethernet.

 

At this time I'm attempting to have the system acquire at 500 kS/s on all 20 channels with at least some pre-trigger (1/2 second by default; note on this later). The system works fine at 250 kS/s and can monitor for long periods of time... minutes.

 

Attempting to run the VI at speeds in excess of 300 kS/s occasionally results in a stutter and then a pause. Basically, the graph starts to show data and then freezes. An error is not always thrown, and the run button is still down.

 

On the rare occasions where it does continue running, it quickly throws a "Not Enough Memory To Complete Operation" error. Speeds higher than 300 kS/s throw that error more reliably.

 

Adding the DAQmx Configure Buffer seemed to help with a previous "buffer" error I was getting.
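For context, here is roughly what my configuration does, sketched in text with the Python nidaqmx API since I can't paste a block diagram here (channel names and buffer numbers are placeholders, not my exact values):

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    RATE = 500_000        # 500 kS/s per channel
    # 5x NI-9222 modules, 4 channels each = 20 channels (placeholder names)
    CHANNELS = ",".join(f"cDAQ1Mod{m}/ai0:3" for m in range(1, 6))

    task = nidaqmx.Task()
    task.ai_channels.add_ai_voltage_chan(CHANNELS)
    task.timing.cfg_samp_clk_timing(RATE,
                                    sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=RATE)        # sizing hint only
    # Equivalent of DAQmx Configure Input Buffer: enlarge the host-side buffer
    # so Ethernet/consumer hiccups don't overflow it immediately.
    task.in_stream.input_buf_size = 4 * RATE                    # ~4 s per channel
    task.start()
    data = task.read(number_of_samples_per_channel=50_000)      # one read per producer iteration
    task.stop()
    task.close()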

 

When I use fewer channels, say 16 instead of the full 20, I can run at much higher speeds before it stutters/pauses or throws the memory error.

 

When I create a virtual cDAQ-9188 with 5x 9222 modules, it can throw the same errors, but it performs better: I can run more channels at higher speeds before I hit the same problems.

 

Is this my LabVIEW code or is it some other problem? PC hardware? Do I just have too many modules?

 

Also, is the pre-trigger what I think it is? If I have a 1/2-second pre-trigger, does that mean I get 1/2 second of data buffered for recording before the trigger, or is it ignoring the first 1/2 second of data and THEN triggering? It may be the latter, which is not what I need, but that's a problem for later.
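For reference, the pre-trigger I'm configuring is the DAQmx reference trigger's pre-trigger samples. In Python nidaqmx terms it would look something like this (the trigger source and level are placeholders, since I haven't actually wired a trigger yet):

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, Slope

    RATE = 500_000
    PRETRIGGER = RATE // 2              # 1/2 s of pre-trigger samples per channel

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0:3")    # placeholder channels
        # Reference triggers apply to finite records; 2 s total here.
        task.timing.cfg_samp_clk_timing(RATE,
                                        sample_mode=AcquisitionType.FINITE,
                                        samps_per_chan=2 * RATE)
        # pretrigger_samples = samples per channel retained from BEFORE the trigger point.
        task.triggers.reference_trigger.cfg_anlg_edge_ref_trig(
            "cDAQ1Mod1/ai0",            # placeholder trigger source
            pretrigger_samples=PRETRIGGER,
            trigger_slope=Slope.RISING,
            trigger_level=1.0)
        data = task.read(number_of_samples_per_channel=2 * RATE, timeout=60.0)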

 

Thanks!

 

 

Message 1 of 22

@skinnert, your issue may just be the display (updating the graph with all this data) and not the actual acquisition.

Can you try just writing to a TDMS file in the consumer loop and see if you hit the same limits? The speed and channel count you mention are not uncommon.

If writing to TDMS gets you over the performance issue, you can set up a separate loop that reads from the TDMS file and processes/displays at whatever rate it can keep up with.
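Rough sketch of that pattern in Python, with a queue and threads standing in for the LabVIEW loops (file path and decimation factor are just examples):

    import queue, threading
    import numpy as np

    data_q = queue.Queue()      # producer (DAQ read) -> logging loop
    ui_q = queue.Queue()        # logging loop -> display loop (lossy on purpose)

    def logging_loop():
        # Consumer: stream every chunk straight to disk; no graph updates here.
        with open("acq.bin", "ab") as f:          # a TDMS Write would go here instead
            while True:
                chunk = data_q.get()
                if chunk is None:                 # shutdown sentinel
                    break
                np.asarray(chunk).tofile(f)
                if ui_q.empty():                  # display only gets what it can keep up with
                    ui_q.put(np.asarray(chunk)[:, ::100])   # decimated copy for the graph

    threading.Thread(target=logging_loop, daemon=True).start()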

Message 2 of 22

This was a good suggestion but it didn't seem to help.

 

I *think* I'm getting a little more data before it pauses and stops acquiring. 

 

At low speeds (250 kS/s) the file size grows continuously. At higher speeds the file reaches ~100 MB (with some variation) and then stops growing. I can't tell anything from the LabVIEW front panel; if I weren't actively watching the file size, I would assume it was still running fine and recording.

 

 

 

 

Message 3 of 22

Please don't open a reference to the TDMS file in the consumer loop every time you need to write. Just use the reference you get when you create the TDMS file outside the loop and reuse it for logging.
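In other words, something like this, shown with a plain file in Python just to illustrate the pattern (the same idea applies to TDMS Open / TDMS Write / TDMS Close):

    import numpy as np

    chunks = (np.zeros((20, 1000)) for _ in range(3))   # stand-in for data from the queue

    # Open the file reference ONCE, outside the loop (TDMS Open)...
    with open("acq.bin", "ab") as log:                  # placeholder path
        for chunk in chunks:                            # consumer loop
            chunk.tofile(log)                           # ...reuse the same reference for every write (TDMS Write)
    # ...and close it once, after the loop (TDMS Close).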

 

BTW, you have specified the pre-trigger sample size, but what is the trigger source in your setup?

Message 4 of 22

Can you please check how many samples you are getting on each read?

Message 5 of 22
Solution
Accepted by topic author skinnert

Try the following:

 

  1. Get rid of the producer/consumer architecture.
  2. Use the built-in TDMS logging features (DAQmx Configure Logging).
  3. Read an even multiple of the disk sector size worth of samples on each read. (This is key.)
  4. Set the buffer to ~8 s of data.
  5. If the display is too slow, you can get rid of it and switch to Log Only mode.

See the attached VI and snippet; a rough text sketch of the same configuration follows below. You can try to implement the pre-trigger; I did not do that.

 

[snippet image: snip.png]
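The attached VI is the real reference, but in rough Python nidaqmx terms the configuration amounts to this (channel names, file path, sector size, and bytes-per-sample are assumptions):

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, LoggingMode

    RATE = 500_000
    CHANNELS = ",".join(f"cDAQ1Mod{m}/ai0:3" for m in range(1, 6))   # 20 channels
    SECTOR_BYTES = 4096          # assumed disk sector size
    BYTES_PER_SAMPLE = 4         # assumed on-disk sample size; adjust for your format

    # Samples per channel per read: an even multiple of the disk sector size. (This is key.)
    samps_per_read = 48 * (SECTOR_BYTES // BYTES_PER_SAMPLE)         # 49,152 samples, ~0.1 s

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan(CHANNELS)
        task.timing.cfg_samp_clk_timing(RATE,
                                        sample_mode=AcquisitionType.CONTINUOUS,
                                        samps_per_chan=RATE)
        # Built-in TDMS logging: DAQmx streams to disk itself, so no producer/consumer.
        # Use LoggingMode.LOG instead of LOG_AND_READ to drop the display entirely.
        task.in_stream.configure_logging("acq.tdms", LoggingMode.LOG_AND_READ)
        task.in_stream.input_buf_size = 8 * RATE                     # ~8 s of buffer per channel
        task.start()
        for _ in range(100):                                         # display loop
            data = task.read(number_of_samples_per_channel=samps_per_read)
            # ...update the graph here (or skip it entirely in Log Only mode)...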

Message 6 of 22

These are all great suggestions. I'm at home now; I'll try a bunch of these things when I get back to work.

 

Thanks!

Message 7 of 22

@mcduff

 

This code works up to 250 kS/s, but then it pauses again. I think it's close.

 

I'm particularly interested in the section you added under "Make Even Multiples of Disk Sector Size" and all the "DAQmx Configure Logging (TDMS)" stuff. Am I right that this is determining the amount of disk space available and using that, along with the sample clock rate, to set the buffer size?
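If I'm reading it right, the arithmetic works out something like this (the sector size and bytes-per-sample here are my assumptions, not values queried from the drive):

    SECTOR_BYTES = 4096        # assumed disk sector size
    BYTES_PER_SAMPLE = 4       # assumed on-disk size of one sample
    RATE = 500_000             # sample clock rate per channel

    samples_per_sector = SECTOR_BYTES // BYTES_PER_SAMPLE             # 1024
    # Round ~0.1 s worth of samples down to an even multiple of the sector size:
    samps_per_read = (RATE // 10 // samples_per_sector) * samples_per_sector
    print(samps_per_read)      # 49152 samples per channel per read = 48 sectors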

 

I'm not familiar with the event register stuff either. What does it do? 

 

Thanks for looking at this!

Message 8 of 22

@Ravi_Beniwal

 


@Ravi_Beniwal wrote:

Please don't open a reference to the TDMS file in the consumer loop every time you need to write. Just use the reference you get when you create the TDMS file outside the loop and reuse it for logging.

 

BTW, you have specified the pre-trigger sample size, but what is the trigger source in your setup?


Yes, I took the TDMS file reference out of the consumer loop. Without the graph, watching the file size grow was how I was verifying that it was actually writing to disk. I've removed it now.

 

You know, I didn't set up a trigger source... I should probably do that.

Message 9 of 22

Interesting update...

 

I found a 9189 and replaced the 9188 I was using.

 

It mostly works now! I can take about 30 seconds of data with 20 channels at 500 kS/s. It then crashes and throws an insufficient memory error.

 

If I use fewer channels, it works longer. Maybe forever?

 

I looked at the differences between the 9188 and 9189 chassis and I can't tell why one would work and the other wouldn't.

 

https://www.ni.com/docs/en-US/bundle/cdaq-9188-specs/page/specs.html

https://www.ni.com/docs/en-US/bundle/cdaq-9189-specs/page/specs.html

 

I'm still interested in making this code work consistently so I wouldn't say my issue is closed.

 

Thanks again!

 

Message 10 of 22