
Producer Consumer Pauses at high rates

Solved!

@skinnert wrote:

@mcduff

 

I'm particularly interested in the section you added under "Make Even Multiples of Disk Sector Size" and all the "DAQmx Configure Logging (TDMS)" stuff.


The DAQmx API has built-in logging features that save data very efficiently; there is no way a producer-consumer loop can beat it. Unless you need a special file format, the built-in logging is always the best way to go, and it's simpler to program.

 

File writing can be a bottleneck. Writing blocks that are multiples of the disk sector size lets each write land on a sector boundary, with no offsets, which increases efficiency. Not sure why NI recommends an even multiple.
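LabVIEW is graphical, so here is a plain-Python sketch of the sizing idea: round a requested read size up so the bytes written are a whole number of disk sectors. The sector size and sample width are illustrative; check your own disk and data type.

```python
# Round a DAQ read size up to a whole number of disk sectors, so each
# file write lands on a sector boundary (no partial-sector writes).
SECTOR_BYTES = 4096        # assumed sector size; check your disk
BYTES_PER_SAMPLE = 8       # double-precision samples

def sector_aligned_samples(requested_samples):
    requested_bytes = requested_samples * BYTES_PER_SAMPLE
    sectors = -(-requested_bytes // SECTOR_BYTES)    # ceiling division
    return sectors * SECTOR_BYTES // BYTES_PER_SAMPLE

# With 8-byte samples and 4096-byte sectors, one sector holds 512 samples,
# so asking for 513 samples rounds up to 1024 (two full sectors).
read_size = sector_aligned_samples(513)
```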

 


@skinnert wrote:

@mcduff

 

I'm not familiar with the event register stuff either. What does it do? 


The event registration fires an event when the requested amount of data is available. Using an event structure instead of polling leaves more options open for the future, e.g., changing the task, starting a new file, etc.
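A toy Python model of the difference: instead of busy-polling for samples, the consumer blocks until data is delivered, waking once per full block, much like DAQmx's "Every N Samples Acquired" event. The queue, block size, and counts are all illustrative, not from the thread.

```python
# Toy model of event-driven reads: a queue stands in for the DAQ buffer,
# and a blocking get() replaces a polling loop that spins on "data ready?".
import queue
import threading

buf = queue.Queue()
N = 100  # deliver a block once N samples are available

def producer():
    for i in range(500):
        buf.put(i)          # the "acquisition" filling the buffer

def consumer(results):
    block = []
    while len(results) < 5:          # expect 500 / 100 = 5 blocks
        block.append(buf.get())      # blocks until data arrives; no busy polling
        if len(block) == N:
            results.append(len(block))
            block = []

results = []
t = threading.Thread(target=producer)
t.start()
consumer(results)
t.join()
# each "event" delivered one full block of N samples
```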

 

Try this: disconnect the wire from the read to the plot and see how you do. The data won't be displayed; see if that helps. I don't have LabVIEW here; I need to look at the properties for a networked cDAQ to see if there is anything. Will check when at the office.

Message 11 of 22

Can you post a screenshot of the error?

Message 12 of 22

skinnert_0-1683127549375.png

 

This is the error I'm getting now.

Message 13 of 22

Which code gives that error? That is a general out-of-memory error, not a DAQmx error; you have buffer copies somewhere.

Message 14 of 22
Solution
Accepted by topic author skinnert

Looking at your screenshot, it appears you are trying to display 5 s of data.

 

20 channels at 500 kSa/s for 5 s, displaying doubles, is 80 MB!

 

A plot makes two buffer copies. In addition, you are trying to display 2,500,000 points on a few hundred pixels; that's not going to work. If you want to display the data, do it in a separate loop and decimate it first.
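One common way to decimate for display is min/max decimation, which keeps the signal envelope while cutting the point count to roughly two points per screen pixel. This Python sketch is illustrative; the function name and sizes are made up for the example.

```python
# Min/max decimation: shrink millions of samples to ~2 points per
# screen pixel, so a graph shows the envelope without plotting everything.
def decimate_min_max(samples, pixels):
    step = len(samples) // pixels
    out = []
    for p in range(pixels):
        chunk = samples[p * step:(p + 1) * step]
        out.append(min(chunk))   # lower envelope for this pixel column
        out.append(max(chunk))   # upper envelope for this pixel column
    return out

wave = [i % 1000 for i in range(2_500_000)]   # stand-in for one channel's data
display = decimate_min_max(wave, 500)         # 2,500,000 points -> 1,000
```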

 

Look at this https://forums.ni.com/t5/LabVIEW/Re-Rube-Goldberg-Code/m-p/3771315/highlight/true#M1062724

Message 15 of 22

Ok, this is really good. I'm going to work with this a bit more to see if I can get some results out.

 


@mcduff wrote:

Try this: Disconnect the wire from the read to plot and see how you do. The data won't be displayed. See if that helps.


Disconnecting the wire from the plot helped. I was able to record all 20 channels at 500K. 

 

That seemed weird, so I hooked the graph back up and it started working. I strongly suspect I screwed something up the first time. Let's go ahead and blame an insufficient amount of coffee. 

 

I need to check this against the 9188 chassis as well.

 

I have to admit, I'm not even sure how this code is recording data. I've only ever used the standard TDMS structure.

 

It works really well though.

Message 16 of 22
Solution
Accepted by topic author skinnert

Some other things to try:

 

  1. Look at this article and see if you can change the settings for your network card.
  2. If you want to display a longer time trace, you will need to display the data in another loop and decimate it for display. I suggest reading the data in raw format, decimating the raw data, then scaling it.
  3. You can configure the built-in logging to start new files, etc., if desired. You can also add a case in the event structure where you pause saving, etc.

If you have an error, please post an image of it.

 

The differences between the original code and my example:

  1. Use built-in logging <- KEY
  2. Set the number of points to an even multiple of the disk sector size <- KEY
  3. Make sure the buffer is big enough; I like 4-10 seconds of buffer
  4. Don't display too many points <- KEY
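A back-of-envelope sizing sketch for that checklist, in Python, using this thread's numbers (20 channels at 500 kSa/s, 8-byte doubles); the variable names are illustrative.

```python
# Sizing math for the checklist above: sustained data rate the disk must
# absorb, and a buffer holding several seconds of acquisition per channel.
RATE = 500_000          # samples/s per channel
CHANNELS = 20
BYTES_PER_SAMPLE = 8    # doubles

data_rate = RATE * CHANNELS * BYTES_PER_SAMPLE   # bytes/s to stream to disk
buffer_low = RATE * 4                            # 4 s of buffer, per channel
buffer_high = RATE * 10                          # 10 s of buffer, per channel
```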
Message 17 of 22
Solution
Accepted by topic author skinnert

It sounds like you've got it mostly figured out; I just wanted to echo the whole "keep track of buffers" thing. Buffer copies will make your 80 MB of data explode into a LOT more than that.

 

I had a DAQmx project a while back that (IIRC) sampled data over two cards at a total of ~2 MS/sec. At 8 bytes per sample (they were doubles) that was 16 MB per second, so on the order of what you're dealing with.

 

My application was a monitoring one and didn't need to keep track of ALL of the data. It just needed to know when data changed REALLY accurately across a couple dozen channels.

 

I would take a block of data and do some processing, including some curve fits and averaging, then log a few data points. Something like a max, min, and average (I can't recall).
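A minimal Python version of that block-summary approach: reduce each acquired block to a few statistics instead of logging every sample. The function name and values are illustrative, not from the original project.

```python
# Reduce each acquired block to a few summary points (max, min, average)
# rather than keeping every sample, so memory use stays flat over weeks.
def summarize_block(block):
    return {
        "max": max(block),
        "min": min(block),
        "avg": sum(block) / len(block),
    }

stats = summarize_block([1.0, 4.0, 2.0, 5.0])
# stats == {'max': 5.0, 'min': 1.0, 'avg': 3.0}
```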

 

I made sure I wasn't trying to display ALL of my data at once, and this code was able to run indefinitely without causing memory issues. I think the longest experiment ran for a few weeks, maybe a month, of continual data monitoring at very high rates. So yes, you can get super high throughput with DAQmx, but once you start trying to display a boatload of data all at once (and update plots super frequently) you can run out of memory REALLY quickly.

Message 18 of 22

Late reply, but better late than never!

 

I can only mark one thing as a solution, so I did that. The mcduff solution contributed most to my success. I like the DAQmx logging quite a bit. I haven't yet figured out how to enable/disable it (I don't want it recording all the time), or how to add additional header information to the resulting TDMS files... I've got some research to do, but I'll get there.

 

Ultimately, a number of you nailed the real problem: the display. I was displaying WAAAY too many points. Once I decimated the data down to a manageable amount, it ran just fine.


Thanks for the help!

Message 19 of 22

@skinnert wrote:

I can only mark 1 thing as a solution so I did that.


Actually you can mark more than one reply as a solution.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 20 of 22