06-02-2022 04:54 AM
Hello,
I am trying to keep ONLY the numbers between specific boundaries from an array. In this example the 1D array is 300 elements long, but in reality it is going to be around 100k elements. I am doing something wrong and can't seem to figure it out. I would appreciate any kind of help; I am attaching my VI and the initial pic for easy understanding.
Cheers,
Diural
Solved! Go to Solution.
06-02-2022 05:46 AM
Put probes in these places, turn on "highlight execution" and give the loop a few spins. I have a feeling that the probes will not contain the data that you believe they should.
It seems like you are trying to build a filter. I would recommend using a conditional indexing tunnel:
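Since LabVIEW is graphical, here is only a rough text-language analogue of what the conditional indexing tunnel does (the names are illustrative, not taken from the attached VI): each loop iteration produces a value and a boolean, and only the values with a true boolean end up in the output array.

```python
def keep_in_range(data, lower, upper):
    """Return only the values x with lower <= x <= upper.

    This mimics a for loop with a conditional indexing tunnel:
    the condition decides per element whether it is appended.
    """
    return [x for x in data if lower <= x <= upper]

print(keep_in_range([5, 150, 250, 999], 100, 300))  # [150, 250]
```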
06-02-2022 05:50 AM - edited 06-02-2022 06:00 AM
After the first true the shift register holds what value?
one possible solution?
and I'm sure altenbach will come up with a much smarter solution 😉 😄
EDIT: If you right-click on the In Range and Coerce.vi, you can choose whether the boundaries are included or not ... it seems your code wanted to exclude them ...
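For readers following along in a text language, the include/exclude boundary options behave roughly like this sketch (the function name and flags are illustrative, mirroring the right-click options, not an actual API):

```python
def in_range(x, lower, upper, include_lower=True, include_upper=True):
    # Mirrors the right-click options on In Range and Coerce:
    # each boundary can be included or excluded independently.
    lo_ok = (x >= lower) if include_lower else (x > lower)
    hi_ok = (x <= upper) if include_upper else (x < upper)
    return lo_ok and hi_ok

in_range(100, 100, 300)                       # True  (lower boundary included)
in_range(100, 100, 300, include_lower=False)  # False (lower boundary excluded)
```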
06-02-2022 06:43 AM
@Henrik_Volkers wrote:
After the first true the shift register holds what value?
one possible solution?
and I'm sure altenbach will come up with a much smarter solution 😉 😄
EDIT: If you right-click on the In Range and Coerce.vi, you can choose whether the boundaries are included or not ... it seems your code wanted to exclude them ...
Altenbach probably won't chime in since no complex numbers would be involved.
UNLESS: he benchmarks migrating the In Range and Coerce in front of the for loop and / or enables loop parallelism to gain an extra warp speed factor.
06-02-2022 08:30 AM
Thank you, LLindenbauer and Henrik, for the insight. I got it working thanks to you guys.
Cheers,
Diural
06-02-2022 10:32 AM - edited 06-02-2022 10:38 AM
@JÞB wrote:
Altenbach probably won't chime in since no complex numbers would be involved.
I often do range checks based on the positions of two cursors, so the upper and lower limits can change places depending on use (I call them "limit A" and "limit B" instead of upper/lower limit). This makes the code more universal. Of course, sometimes you do want an empty array as the result if upper < lower, so that handling needs to be decided and documented.
I am not sure if the range check can take advantage of SSE instructions, but if it does, taking it out of the loop might be a performance advantage (at the cost of having to allocate a boolean array, so things need to be tested in detail 🙂 )
I will chime in explaining some of the glaring mistakes and over-complications in the original VI. It is good to learn about proper use of shift registers and autoindexing, even if a particular problem does not even need them.
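As a rough sketch of the two ideas above (order-independent limits, and building the boolean array in one vectorized pass in front of the loop), here is an illustrative NumPy analogue; the names are assumptions, not LabVIEW code:

```python
import numpy as np

def keep_in_range(data, limit_a, limit_b):
    # The limits may arrive in either order (e.g. from two cursors),
    # so sort them into lo/hi first.
    lo, hi = (limit_a, limit_b) if limit_a <= limit_b else (limit_b, limit_a)
    data = np.asarray(data)
    # One vectorized pass builds the boolean mask (analogous to moving
    # In Range and Coerce in front of the loop); indexing with the mask
    # then gathers the surviving elements.
    mask = (data >= lo) & (data <= hi)
    return data[mask]

keep_in_range([5, 150, 250, 999], 300, 100)  # array([150, 250])
```

Note that with `limit_a == limit_b` the result is at most the exact matches, and with swapped limits the result is the same as with ordered ones, which is the "more universal" behavior described above.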
06-02-2022 11:27 AM
@JÞB wrote:
@Henrik_Volkers wrote:
After the first true the shift register holds what value?
one possible solution?
and I'm sure altenbach will come up with a much smarter solution 😉 😄
EDIT: If you right-click on the In Range and Coerce.vi, you can choose whether the boundaries are included or not ... it seems your code wanted to exclude them ...
Altenbach probably won't chime in since no complex numbers would be involved.
UNLESS: he benchmarks migrating the In Range and Coerce in front of the for loop and / or enables loop parallelism to gain an extra warp speed factor.
AND it fits on a postage stamp.
06-02-2022 12:30 PM
@billko wrote:
@JÞB wrote:
@Henrik_Volkers wrote:
After the first true the shift register holds what value?
one possible solution?
and I'm sure altenbach will come up with a much smarter solution 😉 😄
EDIT: If you right-click on the In Range and Coerce.vi, you can choose whether the boundaries are included or not ... it seems your code wanted to exclude them ...
Altenbach probably won't chime in since no complex numbers would be involved.
UNLESS: he benchmarks migrating the In Range and Coerce in front of the for loop and / or enables loop parallelism to gain an extra warp speed factor.
AND it fits on a postage stamp.
And a benchmark is attached, along with the test VIs back-saved to 2019.
Results as seen on my Windows 10 laptop running an Intel Core i3 in the original 2021 CE (now you understand why I prefer to post by phone).
I imagine that a super-optimized approach could be tailored if there are "things" known about the data set that cannot be known in a general solution. For normal day-to-day use, that amount of optimization effort would be unlikely to be beneficial.
06-02-2022 01:29 PM - edited 06-02-2022 01:58 PM
@JÞB wrote:
Enabling and disabling loop parallelism does not have an observable effect with the conditional tunnel forcing the reordering of the array.
Both codes speed up significantly when I disable parallelization on my PC.
Your "inside" solution truly falls apart if the ranges include all data (lower=-inf (or -20k), upper=inf (or 20k)), where it goes up to a few seconds (!!!!) on my PC. (This might be a compiler optimization problem!?)
OTOH, if we do a classic explicit solution (below), it is several orders of magnitude faster (~60ms) in that same case (i.e. all data retained).
Even for other limits (e.g. 100-200), this is faster (100 ms vs 150 ms). Same if the range is zero and the output empty (limit1 = limit2) (90 ms vs 140 ms).
I am sure things can be further optimized.
Tested in LabVIEW 2020SP1. YMMV of course!
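The "classic explicit" pattern being described (pre-allocate a worst-case output buffer, fill it in place, trim once at the end) can be sketched in NumPy-flavored Python; this is an analogue of the LabVIEW diagram, not the actual benchmark VI, and the names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(-20000, 20000, 100_000)
lo, hi = 100.0, 200.0

# Pre-allocate the worst-case output (same size as the input) once.
out = np.empty_like(data)
n = 0
for x in data:
    if lo <= x <= hi:
        out[n] = x   # fill in place: no reallocation as the result grows
        n += 1
result = out[:n]     # trim to the actual count at the very end
```

Because the buffer never grows, the worst case (all data retained) costs no more per element than the sparse case, which matches the timing behavior reported above.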
06-02-2022 02:43 PM
@altenbach wrote:
@JÞB wrote:
Enabling and disabling loop parallelism does not have an observable effect with the conditional tunnel forcing the reordering of the array.
Both codes speed up significantly when I disable parallelization on my PC.
That does not surprise me although I wouldn't dare guess if it is a better optimization in 2021 or if your machine simply outperforms this old clunker.
Your "inside" solution truly falls apart if the ranges include all data (lower=-inf (or -20k), upper=inf (or 20k)), where it goes up to a few seconds (!!!!) on my PC. (This might be a compiler optimization problem!?)
A known issue with large arrays and conditional tunnels, which require reallocation of the output buffer as the array grows. The explicit solution provided in your reply would operate in place.
OTOH, if we do a classic explicit solution (below), it is several orders of magnitude faster (~60ms) in that same case (i.e. all data retained).
Even for other limits (e.g. 100-200), this is faster (100 ms vs 150 ms). Same if the range is zero and the output empty (limit1 = limit2) (90 ms vs 140 ms).
I am sure things can be further optimized.
Again, if anything is known about the data .... But your finding here suggests an alternate approach to the conditional tunnel implementation as currently released. It certainly would be possible to pre-allocate an output array of size N and operate in place on it, with a hidden "FBN initialized to size N / Case and trim" behind the conditional tunnel, at the cost of some memory increase in sparse conditions. The current implementation is "Case / Build Array with some advanced reallocation chunking", as I recall from conversations elsewhere after the original release wasn't performing super well.
Tested in LabVIEW 2020SP1. YMMV of course!
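The two buffer strategies discussed in this thread (grow-as-you-go versus pre-allocate-and-trim) can be compared with a small, illustrative timing sketch; the functions are assumptions modeling the two implementations, and absolute numbers will of course vary by machine:

```python
import timeit
import numpy as np

rng = np.random.default_rng(1)
data = list(rng.uniform(-20000, 20000, 100_000))
lo, hi = -20000.0, 20000.0   # worst case for growing: everything is retained

def grow():
    # Analogue of the conditional tunnel's grow-as-you-go buffer.
    out = []
    for x in data:
        if lo <= x <= hi:
            out.append(x)
    return out

def prealloc():
    # Analogue of "FBN initialized to size N / fill in place / trim".
    out = [0.0] * len(data)
    n = 0
    for x in data:
        if lo <= x <= hi:
            out[n] = x
            n += 1
    del out[n:]
    return out

print("grow:    ", timeit.timeit(grow, number=3))
print("prealloc:", timeit.timeit(prealloc, number=3))
```

Both produce identical results; only the allocation pattern differs, which is exactly the variable being benchmarked in the attached VIs.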