08-14-2012 08:21 PM
Not that I need it, but I could not find any help on the new LV 2012 Conditional Indexing feature. It appears in the contextual menu, but the contextual help does not provide any hint of what this is, and it's nowhere to be found in the Help:
08-14-2012 10:08 PM
08-15-2012 12:47 AM - last edited on 01-10-2025 01:36 PM by Content Cleaner
You can find the help section here, although it's pretty slim - https://www.ni.com/docs/en-US/bundle/labview/page/conditionally-writing-values-to-loop-output-tunnel...
I can say that the performance was tested during the beta and that the performance is the same as using Build Array in a case structure. I'm assuming NI will change this to have better performance, but I don't know when.
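For readers who think better in text code, here is a rough Python analogue of what "Build Array inside a case structure" amounts to, and of what the conditional tunnel expresses. This is only an illustrative sketch with made-up names (the keep predicate, the function name), not how LabVIEW implements the tunnel internally:

```python
# Illustrative Python analogue only -- not LabVIEW code and not NI's
# actual implementation. `keep` is a made-up predicate that plays the
# role of the boolean wired to the conditional terminal.

def build_array_in_case_structure(values, keep):
    """Old pattern: a case structure around a Build Array node.

    The output grows one element at a time, so it may be reallocated
    and copied as it grows.
    """
    out = []
    for v in values:
        if keep(v):          # the "case structure"
            out.append(v)    # the "Build Array" node
    return out

# Example: keep only the even values.
evens = build_array_in_case_structure(range(10), lambda v: v % 2 == 0)
print(evens)  # [0, 2, 4, 6, 8]
```

The LabVIEW 2012 conditional tunnel expresses the same idea directly: wire the value and a boolean into the tunnel, and only iterations where the boolean is True contribute an element to the output array.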
08-15-2012 07:49 AM - last edited on 01-10-2025 01:36 PM by Content Cleaner
@tst wrote:
You can find the help section here, although it's pretty slim - https://www.ni.com/docs/en-US/bundle/labview/page/conditionally-writing-values-to-loop-output-tunnel...
I can say that the performance was tested during the beta and that the performance is the same as using Build Array in a case structure. I'm assuming NI will change this to have better performance, but I don't know when.
It looks like they may have already. I am not getting poor performance when benchmarking in the 2012 release right now.
08-15-2012 07:58 AM
I'll have to do some digging to make sure, but I believe the for loops were fine performance-wise because LabVIEW can pre-allocate the maximum size. The while loops behaved the same as the Build Array (or was it even worse?). Like I said, I'll have to do some digging in the beta forum.
08-15-2012 08:00 AM - edited 08-15-2012 08:01 AM
I just wrote my own benchmarking code and it seemed to work fine with the for loops. I would expect the while loops to have to use whatever algorithm Build Array uses when it grows, since the maximum size of the array isn't known beforehand, so that makes sense. What I can't figure out is why my preallocate / Replace Array Subset / trim-off-the-"fat" method is still slower than the conditional autoindexing. What could they be doing differently in the background that would enhance this performance?
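As a point of reference, here is a sketch of the manual "preallocate, Replace Array Subset, trim off the fat" pattern described above, written as a Python/NumPy analogue. The function name and the keep predicate are invented for illustration; this is not the poster's VI:

```python
# Illustrative analogue of the manual pattern, not the actual VI.
import numpy as np

def preallocate_replace_trim(values, keep):
    """Pre-allocate to the worst-case size, overwrite elements in place
    (the equivalent of Replace Array Subset), then trim the unused tail
    (the "fat") once the loop finishes.
    """
    values = np.asarray(values)
    out = np.empty_like(values)      # pre-allocate the maximum possible size
    count = 0
    for v in values:
        if keep(v):
            out[count] = v           # "Replace Array Subset"
            count += 1
    return out[:count].copy()        # trim off the "fat"

print(preallocate_replace_trim(range(10), lambda v: v % 2 == 0))  # [0 2 4 6 8]
```

For a for loop the iteration count is known up front, which is why this kind of worst-case pre-allocation is possible at all; a while loop has no such bound, so growth has to be handled incrementally, much like Build Array.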
08-15-2012 08:18 AM
@for(imstuck) wrote:
I just wrote my own benchmarking code and it seemed to work fine with the for loops. I would expect the while loops to have to use whatever algorithm Build Array uses when it grows, since the maximum size of the array isn't known beforehand, so that makes sense. What I can't figure out is why my preallocate / Replace Array Subset / trim-off-the-"fat" method is still slower than the conditional autoindexing. What could they be doing differently in the background that would enhance this performance?
Care to share your benchmarking code? From what I was digging up, the conditional terminal should behave EXACTLY like the Build Array method.
For anybody that cares, there is a CAR 342504 to document the performance "issues" we found.
08-15-2012 09:17 AM
Let me know if I made a mistake. 7 a.m. is pretty early for me!
08-15-2012 12:12 PM - edited 08-15-2012 12:16 PM
Sorry it took so long to get back. I had to download LV2012. I think I still need to download the patch, and also set up all of my settings and add-ons. VIPM will be busy for the rest of the day.
Anyway, your benchmark is flawed. Add outputs to the arrays and you will get radically different results. I think some of your loops didn't actually do the work 10000 times, since with no output wired LabVIEW can optimize the unused code away. And I have no clue what you were trying to show with the second benchmark.
I didn't edit the labels yet, so for those who haven't looked at the VI:
"x/y" is the time for pre-allocating and keeping only what is needed.
"x/y 2" is the same, except with another data type.
"x/y 3" uses Build Array.
"x/y 4" uses the new conditional indexing feature.
Each one is looped 10000 times and the average is given (in ms).
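For anyone who wants to reproduce the general shape of the test in text code, here is a minimal, self-contained Python sketch of the harness structure described above (each variant run 10000 times, average reported in ms). The workload and names are placeholders, not the attached VI, and the comment about consuming the result reflects the LabVIEW point made above: wire the output array to an indicator so the work can't be optimized away.

```python
# Minimal benchmark-harness sketch; placeholder workload, not the attached VI.
import time

def conditional_build(values, keep):
    """Stand-in workload: conditionally build an output list."""
    return [v for v in values if keep(v)]

def average_ms(func, *args, runs=10000):
    """Run func(*args) `runs` times and return the average time in ms."""
    total = 0.0
    sink = None
    for _ in range(runs):
        start = time.perf_counter()
        sink = func(*args)           # keep the result...
        total += time.perf_counter() - start
    assert sink is not None          # ...and consume it; in LabVIEW the
                                     # equivalent is wiring the output
                                     # array to an indicator so the work
                                     # isn't removed as dead code
    return (total / runs) * 1000.0

if __name__ == "__main__":
    data = list(range(1000))
    print("average ms:", average_ms(conditional_build, data, lambda v: v % 2 == 0))
```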
08-15-2012 12:21 PM - edited 08-15-2012 12:23 PM
Doh, I just assumed you didn't need the output, but that makes sense: if LabVIEW sees a result is unused, it may optimize it away. Thanks for looking over that. The second benchmark was just showing in-placeness for that scenario when I sent it out to my coworkers, in case some of them were unfamiliar with it. It had no real value for this benchmarking.