01-10-2018 09:23 AM
@Blokk wrote:
Thanks all for the useful hints! I am playing with my code and trying to enhance it 🙂
By the way, I did a stupid thing in that decimation VI (see my first post, last snippet): I calculated the decimated array from the Time Stamps array N times inside the FOR loop! Stupid me! 🙂 It is enough to calculate it once, outside the FOR loop, since I use the same time stamp array for all XY curves 🙂
Makes me wonder if LV decided that was "loop invariant code" and optimized it out of the loop.
Ben
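The hoisting Blokk describes can be sketched in Python terms — `decimate`, `build_xy_curves`, and the data shapes here are hypothetical stand-ins for the LabVIEW subVI, not its actual implementation:

```python
def decimate(samples, factor):
    # Simple every-Nth decimation; stands in for the real subVI's logic.
    return samples[::factor]

def build_xy_curves(timestamps, channels, factor):
    # The decimated time axis depends only on the timestamps, so it is
    # loop-invariant: compute it once, outside the per-channel loop,
    # instead of recomputing it N times inside the loop.
    t_dec = decimate(timestamps, factor)
    return [(t_dec, decimate(ch, factor)) for ch in channels]
```

Whether LabVIEW's compiler would have hoisted the subVI call on its own is hard to know from outside; moving it out of the loop by hand makes the intent explicit either way.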
01-10-2018 09:35 AM
Good point!
01-12-2018 08:49 AM - edited 01-12-2018 08:55 AM
I did not have enough time to do more detailed tests, but the first results show that I get about 2X slower speed with the Data Value Reference (DVR) ring buffer approach, compared to my naive "build array / get array subset" approach.
I attach two of my modified subVIs where I implemented the DVR Ring buffer toolkit, if someone wants to have a look 🙂
(also the vip file)
edit: I did not benchmark it properly; the above-mentioned 2X means that my Graph loop cannot keep the 1 sec iteration time where I read out the required channels to display (the loop iteration takes 2 sec with the DVR approach). So this 2X is more like a ~10X factor... I will test more later, maybe next week...
edit2: sorry if some typedef ctrls are missing, but they are not relevant in these examples...
01-12-2018 09:03 AM
DVRs do not mean "faster" in all cases. They do allow for strict control of in-placeness, and that is where performance improvements may be observed. Based on what I have observed, if you construct your non-DVR code such that it is allowed to operate in place, then DVRs are more of a hassle than a benefit.
Granted, DVRs do provide a mechanism to share a buffer between threads, so if you need that, then the DVR is useful. An example that comes to mind is sharing a common instance of a class between multiple instances of other classes, as in a Singleton.
Ben
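The DVR-as-Singleton idea has a rough analogue in any language: one shared buffer instance, mutated in place under a lock rather than copied per accessor. A minimal Python sketch of that pattern (the class and its members are invented for illustration, not a LabVIEW API):

```python
import threading

class SharedBuffer:
    """One shared, lock-guarded buffer: every constructor call returns
    the same instance, and writes mutate it in place (no copies)."""
    _instance = None
    _instance_lock = threading.Lock()

    def __new__(cls, size=8):
        with cls._instance_lock:
            if cls._instance is None:
                inst = super().__new__(cls)
                inst.data = [0.0] * size      # allocated exactly once
                inst.lock = threading.Lock()  # guards in-place writes
                cls._instance = inst
        return cls._instance

    def write(self, index, value):
        with self.lock:  # serialize access, like a DVR's in-place element
            self.data[index] = value
```

Any thread constructing a `SharedBuffer` gets the same object, which is the same guarantee a DVR passed between loops gives you: one buffer, strictly serialized access.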
01-12-2018 10:39 AM
When dealing with large data sets (you mentioned 172800 as the buffer size) you shouldn't use "Build Array". As Ben mentioned, LabVIEW requires the array memory to be contiguous. Should Windows allocate the new element in a non-contiguous manner, you'll get an "out of memory" error (not sure if this was resolved recently, but I've encountered it with LabVIEW 2012 on Win XP).
Pre-allocate the full buffer size and use a counter (simple shift register) within the AE to keep track of the target index for inserting new elements. Use In-Place structure with index/replace array element to act on the array in memory without duplication.
Should you reach the full buffer size, stop increasing the counter and use a combination of "Rotate 1D Array" and the in-place structure. Shift the oldest datapoint to the end of the array and replace it. *Base this decision on the counter value instead of "Array Size".*
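The preallocate-and-replace scheme above can be sketched in Python (the class name and members are illustrative only; note that in LabVIEW the In-Place Element structure avoids the copy that Python's list slicing makes during the rotate step):

```python
class PreallocatedBuffer:
    """Preallocate the full buffer, track a fill counter (the 'shift
    register'), replace elements in place; once full, rotate so the
    oldest sample lands at the end, then overwrite it."""

    def __init__(self, size):
        self.buf = [0.0] * size  # full-size allocation up front
        self.count = 0           # fill counter, not "Array Size"

    def push(self, value):
        if self.count < len(self.buf):
            self.buf[self.count] = value  # in-place replace at the counter
            self.count += 1
        else:
            # "Rotate 1D Array" by one: oldest element moves to the end
            # (in Python this slicing copies; LabVIEW can do it in place)
            self.buf = self.buf[1:] + self.buf[:1]
            self.buf[-1] = value          # ...then overwrite the oldest
```

The branch decision uses `count`, not the array length, exactly as advised: the array is always full-sized, so "Array Size" can never tell you how much of it holds real data.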
01-12-2018 11:05 AM
@proland1121 wrote:
When dealing with large data sets (you mentioned 172800 as the buffer size) you shouldn't use "Build Array". As Ben mentioned, LabVIEW requires the array memory to be contiguous. Should Windows allocate the new element in a non-contiguous manner, you'll get an "out of memory" error (not sure if this was resolved recently, but I've encountered it with LabVIEW 2012 on Win XP).
Pre-allocate the full buffer size and use a counter (simple shift register) within the AE to keep track of the target index for inserting new elements. Use In-Place structure with index/replace array element to act on the array in memory without duplication.
Should you reach the full buffer size, stop increasing the counter and use a combination of "Rotate 1D Array" and the in-place structure. Shift the oldest datapoint to the end of the array and replace it. *Base this decision on the counter value instead of "Array Size".*
Yes, this was advised in an earlier post. However, I do not see any out-of-memory error (I monitor RAM usage, and it is not growing after the 2D array reaches its limit; I limit the array size). Since the original code works just fine (running for months now without a memory leak or any other problem) and is fast enough for this app, I will use it for now. The main motivation of this post was to learn more efficient methods, and I got lots of good advice, thanks!
01-12-2018 11:11 AM
@Blokk wrote:
Yes, this was advised in an earlier post. However, I do not see any out-of-memory error (I monitor RAM usage, and it is not growing after the 2D array reaches its limit; I limit the array size). Since the original code works just fine (running for months now without a memory leak or any other problem) and is fast enough for this app, I will use it for now. The main motivation of this post was to learn more efficient methods, and I got lots of good advice, thanks!
Sorry for the confusion. I didn't mean to hint at a memory leak issue. There is no error code for non-contiguous memory; the error displayed by LabVIEW will say "out of memory". We had a student set up a 24-hour monitoring program for weather conditions at our solar array that used the same approach as yours (build array and delete a subset of the array every hour). It would run anywhere from 3 days to 2 months before we'd get an "out of memory" error, despite Windows Task Manager showing plenty of free memory.
01-12-2018 11:16 AM
Interesting! But was this really due to the build array approach? I cannot believe that simply building an array and keeping its size limited could result in a crash/error. Could someone else comment on this? Can this really happen?
01-12-2018 11:36 AM - edited 01-12-2018 11:42 AM
from 2009... http://digital.ni.com/public.nsf/allkb/F3FB3A32B51BF33E8625763100704136
Additionally, the Build Array function carries out a memory copy operation, which can slow you down.
Excerpt from: http://zone.ni.com/reference/en-XX/help/371361H-01/lvconcepts/vi_memory_usage/
"If the size of an output is different from the size of an input, the output does not reuse the input data buffer. This is the case for functions such as Build Array, Concatenate Strings, and Array Subset, which increase or decrease the size of an array or string. When working with arrays and strings, avoid constantly using these functions, because your program uses more data memory and executes more slowly because it is constantly copying data"
Basically, each Build Array operation first copies the data from one space in memory to another. This is the step vulnerable to the "out of memory" error. I may be wrong about it needing to be contiguous in regards to this error.
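The difference between the two styles discussed in this thread can be mimicked in Python — these functions are illustrative stand-ins, where explicit list concatenation mirrors Build Array's fresh-allocation-plus-copy behavior and the second version mirrors a preallocated, replace-in-place buffer:

```python
def build_array_style(samples, limit):
    """Mirrors Build Array + Array Subset: each iteration allocates a
    brand-new array and copies all old data into it."""
    buf = []
    for x in samples:
        buf = buf + [x]      # fresh allocation + full copy, like Build Array
        if len(buf) > limit:
            buf = buf[1:]    # Array Subset: another allocation + copy
    return buf

def preallocated_style(samples, limit):
    """Same final result, but the buffer is allocated once and only
    single elements are replaced in place."""
    buf = [0] * limit
    n = 0
    for x in samples:
        buf[n % limit] = x   # in-place replace, no reallocation
        n += 1
    if n < limit:
        return buf[:n]
    k = n % limit            # unwrap into chronological order on read
    return buf[k:] + buf[:k]
```

Both return the same window of the most recent `limit` samples; the first one, though, performs two allocate-and-copy passes per sample once the buffer is full, which is the pattern the NI help excerpt above warns about.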
01-12-2018 11:38 AM
@Blokk wrote:
Interesting! But was this really due to the build array approach? I cannot believe that simply building an array, and keeping its size limited could result in a crash/error. Someone else could comment on this? Can this really happen?
"The devil is the details".
Pre-allocating the max array size and managing the pointers for "replace Array Subset" should work fine. Now if the "reshuffle" of the array was not handled elegantly there may have been an extra buffer required so as I eluded to... it depends on how it was done.
Now, back when I was writing round-robin buffers, I was not reshuffling the array stored in the AE. I kept track of where the new data was being inserted, and ONLY when the data was requested did I append the more recent data to the older data as it was returned to the calling VI. With that approach the data never moved in the buffer itself. It required tracking the insertion point for the new data and the location of the oldest data, AND I had to "wrap" the data whenever an update reached the end of the buffer without enough room for all of it, so the remaining update data had to be inserted starting at offset "0" again.
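That scheme — data never moves, only the pointers do, and chronological order is reassembled at read time — can be sketched like this in Python (class and member names are invented for illustration, not Ben's actual VI):

```python
class RoundRobinBuffer:
    """Track only the write offset; inserts wrap past the end of the
    buffer back to offset 0, and the chronological view is assembled
    only when the caller asks for it."""

    def __init__(self, size):
        self.buf = [0.0] * size
        self.write = 0    # next insertion index
        self.filled = 0   # how many valid samples we hold

    def insert(self, block):
        n = len(self.buf)
        for x in block:
            self.buf[self.write] = x
            self.write = (self.write + 1) % n  # wrap to offset 0 at the end
        self.filled = min(self.filled + len(block), n)

    def read(self):
        # Once full, the oldest sample sits right at the write pointer,
        # so older data is appended before the more recent data here.
        if self.filled < len(self.buf):
            return self.buf[:self.filled]
        return self.buf[self.write:] + self.buf[:self.write]
```

Only `read` pays the cost of stitching the two halves together; between reads, every insert is a pure in-place element replacement.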
But as I wrote above...
Polymorphic queues have worked so well that I have not had to write a round-robin buffer since about LV 7.0.
Ben