Am I using the in place structure properly to push/pop data?

Hey fancy folk,

 

TL;DR: Toward the bottom of the post is an image of me using the In Place Element structure.  I am trying to push/pop the data from a new array into the old array.  I don't think I am doing this properly and was hoping someone could offer insight into what I should be doing.

 

My Problem:

I am going to have a lot of data stored in a circular buffer (15 XY signals, each with 10k points; 10k for the time array plus 10k for the Y array is 20k per signal, and 20k * 15 signals = 300k data points), and I want updating my circular buffer to be memory and CPU efficient.

 

My old method:

My previous method has been to concatenate the new array onto the old array, then delete the part of the old array I didn't need.  To be explicit, say the old array is length 10, the new array is length 5, and my desired size is 12.  I would build the array (concatenate the 5 new elements onto the old 10), split the newly built 15-element array at index 3 and keep the remaining 12 elements, then build a cluster from the 12-element time and data arrays.
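
As a rough sketch of that method in Python (illustrative names, not from the actual VI): building the concatenated array and then trimming it copies every element, once per update.

# "Concatenate then trim" sketch: old len 10, new len 5, keep newest 12
def append_and_trim(old, new, desired=12):
    built = old + new          # Build Array: copies both inputs
    return built[-desired:]    # split/delete: keep only the newest `desired`

old = list(range(10))
new = list(range(100, 105))
print(append_and_trim(old, new))   # 12 newest elements, as in the example above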

 

This has always seemed horribly inefficient, but hey, it worked, and I don't see a built-in push/pop.  With more data being updated, I didn't want to bog down my code, which led me to the In Place Element structure.

 

New method:

I am using the In Place Element structure with the "In Place In/Out Element" border node to grab the array, split it at the desired position, then rebuild the array and store it back in the original memory location.  I am not sure this is any more efficient, since I am still using the Split 1D Array and Build Array functions just as I would in my original method.  It feels like I'm still doing my old method, but with more steps. 
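
Here is a rough Python/numpy analogue of what I'm trying to get the IPE to do (sketch only; the names are made up, and LabVIEW's IPE works on the wire, not on ndarrays):

import numpy as np

buf = np.zeros(12)                 # preallocated, never resized

def push_in_place(buf, new):
    n = len(new)
    # Shift the retained samples toward the front, then overwrite the
    # tail. The .copy() keeps the overlapping assignment safe, but note
    # that every retained element still gets touched -- which is why
    # this doesn't feel any cheaper than split/build.
    buf[:len(buf) - n] = buf[n:].copy()
    buf[-n:] = new

push_in_place(buf, np.arange(5.0))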

 

Quick note: the False case just concatenates the new array onto the old array.  I know I am still being inefficient since I am not initializing my "out XY" array to my desired size, but one step at a time :D.

 

[Image: Matt_AM_0-1723497408262.png]

 

Additional Question: Should I wrap the In Place Element structure in another In Place Element structure to bundle/unbundle the XY signal?

 

Thanks,

Matt

Attention new LV users: NI has transitioned from being able to purchase LV outright to a subscription-based model. Just a warning, because LV now has a yearly subscription associated with it.
Message 1 of 15

Since you are splitting and concatenating, you will have memory operations, which could slow you down. (Note that 300k doubles is only 2.4 MB of memory, not a lot in a modern system. I have a system that produces 1.6 GB of data per second.)

 

Other possible options:

  1. Make a fixed-length queue. Use a lossy insert to replace elements when the circular buffer is full. Since elements are placed in order, there is no need to rotate the array. This can be a problem when reading the queue, as an element is removed when read; I'm not sure about your application. (See the sketch after this list.)
  2. You could make an array of DVRs. Replace/Create the DVR at the index where you want to insert. Read the DVR for data display/analysis/etc. Destroy the DVR to recover memory.
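
A rough Python sketch of option 1 (a collections.deque with maxlen acts as a fixed-size lossy queue; this is an analogue, not how LabVIEW implements queues):

from collections import deque

buf = deque(maxlen=10)        # fixed length: a full deque drops its oldest

for sample in range(25):
    buf.append(sample)        # lossy insert once the deque is full

print(list(buf))              # the 10 newest samples, oldest first
# Caveat vs. option 1: iterating a deque does not consume elements,
# whereas a LabVIEW Dequeue Element removes what it reads.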

Looking at your picture, there may be better ways to do everything in place where possible. However, sometimes the in-place functions also make memory copies, which may be the case in your application. This can occur if the compiler thinks the array length can change. Since you didn't attach any code, no one here can experiment.

Message 2 of 15

Yes, the IPE you have there very likely does nothing. This is often true of IPEs generally, because the compiler can be smart enough to figure out how to do things in place even if you didn't explicitly ask it to.

 

One thing that could help is, as you mentioned, using an IPE for the cluster. If you're optimizing this kind of thing, turn on the buffer allocation display. The goal is to reduce black dots; you can get rid of two by operating on the cluster in place.

[Image: avogadro5_0-1723502706753.png]

 

Message 3 of 15

Stay with the Ring Buffers.  As you "remove" (read?) elements from the "oldest" location in the Ring Buffer, move the pointer to open up "space" between the last (circular) location written to and the new "beginning of data" pointer you just moved.  Maintain a count of how many elements are in the circular buffer (analogous to "Queue is full") -- indeed, a circular buffer is one way to implement a Queue.

 

With a pointer to "start of data" and one to "end of data", you don't need to move the data, you just move the pointers.
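
A minimal sketch of the pointer idea in Python (hypothetical class; in LabVIEW this would be a fixed array plus two indices held in shift registers):

class RingBuffer:
    def __init__(self, capacity):
        self.data = [0.0] * capacity   # fixed allocation, never resized
        self.head = 0                  # index of the oldest element
        self.count = 0                 # number of valid elements stored

    def push(self, x):
        tail = (self.head + self.count) % len(self.data)
        self.data[tail] = x
        if self.count < len(self.data):
            self.count += 1
        else:                          # buffer full: overwrite the oldest
            self.head = (self.head + 1) % len(self.data)

    def pop(self):
        if self.count == 0:
            raise IndexError("buffer empty")
        x = self.data[self.head]
        self.head = (self.head + 1) % len(self.data)   # just move the pointer
        self.count -= 1
        return x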

 

Bob Schor 

Message 4 of 15

Depending on how much data you're talking about, I'd go with a fixed/limited queue, or simply use a fixed array with Rotate 1D Array and add at the start (or end).
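
As a numpy sketch of the fixed-array variant (np.roll returns a rotated copy rather than rotating in place):

import numpy as np

buf = np.zeros(10)                   # fixed-size array
new = np.array([1.0, 2.0, 3.0])

buf = np.roll(buf, -len(new))        # rotate the oldest samples to the end...
buf[-len(new):] = new                # ...and overwrite them with the new data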

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified LabVIEW Developer
Message 5 of 15

Fancy Folk,

 

I ended up going with a queue of max size and lossy enqueue.  See image below.  Let me know if there are ways to better optimize it.  

 

[Image: Matt_AM_0-1723561431824.png]

 

The only thing I wasn't 100% sure on is whether I am using the lossy enqueue properly.  I don't think there is a way to "mass add" data to the queue without adding it element by element via the For Loop. 
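
To show what I mean in a Python sketch (a deque analogue of my proof of concept; LabVIEW's Lossy Enqueue Element takes one element per call, hence my For Loop):

from collections import deque

buf = deque(maxlen=10)                         # max queue size 10
for write in range(20):                        # 20 writes...
    chunk = [write * 3 + i for i in range(3)]  # ...of 3 elements each
    buf.extend(chunk)                          # a deque can "mass add" lossily

print(list(buf))                               # the 10 newest samples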

 

I wrote a quick proof-of-concept VI: 5 signals, 20 writes each, with each write 3 elements long and a max queue size of 10.  The circular buffer code works as I expected.

 

Thank you all for your suggestions and clarification, I really appreciate it!

Matt

Message 6 of 15

You can download a Circular Buffer implementation from VIPM.

Malleable Buffer Toolkit for LabVIEW - Download - VIPM by JKI

 

There are a lot of questions about your code:

  1. Is the data contiguous? If so, why do you need an array of timestamps?
  2. Do you need to store data one element at a time? Why not store arrays of data?
  3. What is the purpose of your timestamp/data cluster at the end?
  4. Is the number of points acquired each time variable?
Message 7 of 15

1) Is the data contiguous? If so, why do you need an array of timestamps?

 

My data is continuous.  I am including a timestamp array because that's how I've done it in the past when getting multiple data rates to show up on the same XY graph.  All my data for this test should be sampled at the same rate, but I'm doing this in case I have multiple data rates going to the same XY graph, for code reuse purposes.

 

I've had issues with waveform charts/graphs "resetting" due to being updated with data from different sampling rates.  I've found XY graphs to be more work to set up, but consistent with how I expect them to react.  

 

 

 

2) Do you need to store data one element at a time? Why not store arrays of data?

  

I'd prefer to be able to store new data as an array, but I am not sure how that would work under the lossy queue logic I am using.  I would assume that storing it as an array would be a lot more efficient than doing it one element at a time.

 

My logic for doing one element at a time is: if I made the queue element an array rather than an individual value, then I'd need to dequeue the array, modify it, then re-enqueue it.  I am assuming this means LV would have to allocate memory for the new array while also deallocating the old one, which brings me back to my original method.
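
For comparison, here's the chunked alternative as a Python sketch (hypothetical: each queue element is one read's array, and the flatten happens only when the display needs it):

from collections import deque
import numpy as np

buf = deque(maxlen=4)                      # maxlen counts chunks, not samples

for read in range(10):
    buf.append(np.full(3, float(read)))    # one 3-sample array per "read"

display = np.concatenate(list(buf))        # flatten only at display time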

 

 

 

3) What is the purpose of your timestamp/data cluster at the end?

 

Those clusters are going to an XY graph on the FP.  They are going to be split up amongst the different stations and combined with other data at different sampling rates.

 

 

 

4) Is the number of points acquired each time variable?

 

Theoretically they should be the same (continuous timing for the DAQmx Read), but I don't want to have to account for the number of samples read, for code reuse purposes.  My usual approach is to have a wait in the loop, then read all available samples from DAQmx and send them to my display (consumer) loop.  I could instead do a "read num samples" based on the clock I'm using if I wanted to keep things consistently updated every X ms.  E.g., if my clock is 1 kHz and I want a 10 ms update rate, set the number of samples to read to 10 with a timeout of 0.015 s (this accounts for a bit of wiggle room but won't hang).
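
That "read num samples" arithmetic, written out (a Python sketch of the numbers above):

sample_rate_hz = 1000            # 1 kHz sample clock
update_period_s = 0.010          # desired 10 ms update rate

samples_per_read = int(sample_rate_hz * update_period_s)   # = 10
timeout_s = 0.015                # wiggle room so the read won't hang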

 

However, I like my method because, if I do any order analysis (gating off of an encoder), as long as I stay under the max samples/sec of the card, I can update my code at a somewhat consistent rate.  I 100% understand that a "wait 10 ms" will only be roughly 10 ms, but at the end of the day, I'm just using that wait to give the rest of my code time to do its thing. 

 

Also, digressing to code hangs: with the "wait 10 ms and read all samples" approach, if no samples are actually read, then my code sends an empty array, and nothing, from my perspective, seems to be affected by the empty array of data.

 

Matt

Message 8 of 15

@Matt_AM wrote:

Fancy Folk,

 

I ended up going with a queue of max size and lossy enqueue.  See image below.  Let me know if there are ways to better optimize it.  

 

[Image: Matt_AM_0-1723561431824.png]

 

The only thing I wasn't 100% sure on is whether I am using the lossy enqueue properly.  I don't think there is a way to "mass add" data to the queue without adding it element by element via the For Loop. 

 

I wrote a quick proof-of-concept VI: 5 signals, 20 writes each, with each write 3 elements long and a max queue size of 10.  The circular buffer code works as I expected.

 

Thank you all for your suggestions and clarification, I really appreciate it!

Matt


This queue probably hurts more than it helps. The reason people use queues for this kind of thing is to avoid transient data copies as arrays are resized, but by flushing the queue every time, you're forcing a full copy anyway. The queue stores the data in memory in something like a linked list: in order to create the array version, it has to traverse the linked list and copy every element to get the contiguous array representation of the same data.
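
To see the cost in a Python analogue (a deque standing in for the queue; not LabVIEW's actual internals):

from collections import deque
import timeit

buf = deque(range(10_000), maxlen=10_000)

t_append = timeit.timeit(lambda: buf.append(0), number=1000)
t_flush  = timeit.timeit(lambda: list(buf), number=1000)   # full copy per call
print(f"append: {t_append:.4f}s   flush-to-array: {t_flush:.4f}s")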

Message 9 of 15

@avogadro5 wrote:

 

... but by flushing the queue every time you're forcing a full copy anyway.

 

Just wondering, how am I flushing the queue? Is it when I am querying all elements in the queue?  This is where my knowledge of LV is lacking: the "nuts and bolts" side of things.

 

If I am doing this wrong, what is the correct way to use the queue for what I want, or is it just that I am not using an efficient method?  Would using a shift register with an IPE to update the XY bundle be the correct methodology?  I can add a feedback node to my queued circular buffer, or I can add a shift register to the outer loop and pump the "old XY" into the circular buffer.  I'd most likely go with a feedback node, since that way the queued circular buffer stays self-contained, and all it needs to run is the new data and the number of points.

 

Digressing to the current queued circular buffer I am using: I thought queues were basically a list of pointers and data, where each entry points to where the next set of pointers/data is.  By creating 1000 elements and doing a lossy enqueue, I am assuming the queue only has to update the initial "start here" pointer (element 0) and the "old stop here" pointer (more details below). So when lossy enqueuing, I wouldn't need to create an array, remove data from the array, then send that array to my XY bundle; I'd just need to update pointers and query the queue for all elements.

 

To be verbose about the "old stop here" pointer: say a queue is size 100, with the "start here" pointer at element 0 and the "end here" pointer at element 99.  If I lossy enqueue 3 elements, the first 3 elements are dropped, so the "start here" pointer now references what used to be element 3 (the new logical element 0), and the 3 new elements are written into the slots that were just freed, moving the "end here" pointer forward by 3 (wrapping around the buffer).  Hopefully this makes sense; I tried to give a clear example of what I meant by the "old stop here" pointer being updated once a lossy enqueue happens.
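
Or, as index arithmetic (a Python sketch of my size-100 example): a lossy enqueue of k elements into a full buffer only has to advance the head pointer.

capacity, head, count = 100, 0, 100     # full buffer, oldest at index 0
k = 3                                   # lossy enqueue 3 new elements

head = (head + k) % capacity            # oldest data is now 3 slots later
# count stays 100; the 3 new samples overwrite the slots just vacated
print(head)                             # 3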

 

Thanks,

Matt

Message 10 of 15