case structure parameter efficiency

Darren wrote:

"I'll see if I can get someone to enlighten us."

Oh yes PLEASE!

Since it was Greg's suggestion to add it to the VI Analyzer, this would be a nice opportunity for him to make another guest appearance.

Besides, we are still trying to trick him into becoming the first Blue Enthusiast. 😉

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 11 of 27
(17,774 Views)
The real reason this test was added wasn't for the case shown in Ben's VIs. To really show why, make the following changes to the VIs.

Make the big array flow through the VI, as an in and an out. Then make it go through a shift register. Now, whether the data is being read or not, the terminals outside the case run pretty consistently. If the timings were expanded, they should be similar numbers. And the conditional read doesn't fare so well: it goes from slower to amazingly slower. The changes and timings are in the JPEGs.

The real reason is inplaceness. However you choose to look at inplaceness, via dots or whatever, you will see inplaceness differences when a conditional terminal sometimes accesses its caller's data and sometimes doesn't. This usually means that the subVI can't operate in place with its caller, and a data copy is made each time the subVI is called. If you play around with this some more, conditional outputs on VIs are also problematic, and making both conditional isn't a good thing either.
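LabVIEW's buffer-reuse decisions happen in the compiler, but the effect can be sketched in a rough Python analogy (the function names here are invented for illustration): a callee that only *sometimes* touches its input forces the caller to make a defensive copy on *every* call, because the caller can't predict which calls will read the data.

```python
import copy

def conditional_reader(data, do_read):
    # Terminal "inside the case": the input is only sometimes accessed.
    if do_read:
        return sum(data)
    return 0

def unconditional_reader(data):
    # Terminal "on the top-level diagram": the input is always read
    # up front, so the caller's buffer is never at risk afterwards.
    return sum(data)

big = list(range(1000))

# The caller of conditional_reader must protect its buffer on every
# call, even on the calls where do_read is False and the copy is wasted:
safe = conditional_reader(copy.copy(big), do_read=False)

# The unconditional reader can share the caller's buffer directly:
total = unconditional_reader(big)
```

The copy in the first call buys nothing when `do_read` is `False`, yet the caller has to pay for it anyway, which is the per-call overhead Greg describes.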

So, back to the original diagram: in terms of data copies, the diagram is calling a VI N times and passing in the same constant data each time. The caller has to protect the data on its wire from the subVI. At first glance, the subVI doesn't touch the data. But what if you have a breakpoint set on the subVI? At the breakpoint, the panel is open, and you type in new data. Oops. The caller is supposed to protect the data, but if the terminal is conditionally read, it can't predict that you won't do what I just described. On the other hand, if the terminal is on the top level, the breakpoint on this VI will always happen after the data from the terminals is read. That means the subVI can truly be in place only if its terminal is owned by the top diagram and not placed inside a loop, sequence, or case diagram.

So, this test was added to highlight subVIs that don't follow this convention. For UI VIs, it probably doesn't matter, but for worker subVIs being called with largish datasets, it isn't a good thing.

Clear as mud?
Greg McKaskle
Message 12 of 27
(17,763 Views)
I know I attached JPEGs. The forum monster seems to have eaten them, or it doesn't like Macs.

To summarize the results, with data going through the subVIs and circulating in shift registers: when no data is read, the out subVI tends to be a bit faster, but similar in speed. When the data is read, the out subVI is a couple of thousand times faster, YMMV.
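The magnitude of that gap is easy to reproduce outside LabVIEW. A minimal Python sketch (the names are invented for illustration) times a call that can reuse the caller's buffer against one that forces a fresh copy of a large array on every call:

```python
import time

N = 1_000_000
CALLS = 50
big = list(range(N))

def touch_inplace(arr):
    # Reuses the caller's buffer: O(1) work per call.
    arr[0] += 1
    return arr

def touch_with_copy(arr):
    # Defensive copy forced on every call: O(N) work per call.
    arr = list(arr)
    arr[0] += 1
    return arr

t0 = time.perf_counter()
for _ in range(CALLS):
    touch_inplace(big)
t_inplace = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(CALLS):
    touch_with_copy(big)
t_copy = time.perf_counter() - t0
```

On a typical machine the copying version is orders of magnitude slower; the exact ratio depends on the machine and data size, which is presumably why the numbers come with a "YMMV".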

Greg McKaskle
Message 13 of 27
(17,743 Views)
Excellent question and thread.  Thanks to everyone.

Matt
Message 14 of 27
(17,741 Views)

Thanks for the visit, Greg!

As usual, I will have to read your post a couple more times to get it all.

While developing the benchmark VIs, I originally was passing the array back out CONDITIONALLY, and that really confused my numbers.

While playing with it I came to realize that I was confusing my measurements of the data passed in with what was being returned.

What I saw there (try it yourself if you are interested) was if the output terminal was in the case that was not executing, the sub-VI ran even slower!

I had also read about a sub-VI being able to re-use the caller's buffers under special circumstances, but I always had trouble predicting when it would or would not re-use the buffer.

NOW I have a clue and can start looking closer.

Something tells me the "form" of my sub-VIs may never be quite the same again. 🙂

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 15 of 27
(17,736 Views)

Wow, I had not expected this much interest in what I considered to be an arcane point.

Thanks to all for the interest and the info.

I have been an active member of the microchip forum (my embedded background), but I think I need to start spending more time here.

 

I am still curious as to how reliable single timing measurements are when doing benchmarking since Windows can "disappear" for undocumented amounts of time.

 

Message 16 of 27
(17,647 Views)


@greg McKaskle2 wrote:
... But what if you have a breakpoint set on the subVI? At the breakpoint, the panel is open, and you type in new data. Oops. The caller is supposed to protect the data, but if the terminal is conditionally read, it can't predict that you won't do what I just described. On the other hand, if the terminal is on the top level, the breakpoint on this VI will always happen after the data from the terminals is read. That means the subVI can truly be in place only if its terminal is owned by the top diagram and not placed inside a loop, sequence, or case diagram.

...
Clear as mud?

Maybe I am understanding this wrong, but it seems that if I disable debugging, or even set the subVIs to subroutine priority, the "protection" is no longer really needed. Still, the penalty seems to stay about the same under these conditions.
Message 17 of 27
(17,634 Views)

I am still curious as to how reliable single timing measurements are when doing benchmarking since Windows can "disappear" for undocumented amounts of time.



The typical limit is the timer resolution rather than thread swapping. If you use a large enough data set, or use a loop to run through the algorithm enough times to take tens to hundreds of milliseconds, then you can get a pretty good measure of the operation time. True, the OS will schedule stuff differently each time through, and LV itself is a threaded scheduling system. But the overhead of these scheduling operations is in the range of ten microseconds. So, if you run the test a handful of times and see reasonable clustering of the data, then you can have some faith in the numbers.

On rare occasions, it is good to kill other apps, pull the network cable or flush the disk cache -- depending on what you are measuring.

The best way to determine if it works for you is to make the measurement and build confidence in the data. The biggest things to watch for are making sure you are measuring what you think you are, that you don't have parallel operations going on, and that you sequence your timing with the operation.
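That advice translates directly outside LabVIEW too. A short Python sketch (the helper name is invented): loop the operation enough times that each sample dwarfs the timer resolution and scheduling jitter, repeat the whole measurement several times, and then check that the samples cluster.

```python
import time

def benchmark(fn, *args, repeats=5, inner=200):
    """Return per-call times from several repeated measurements."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        for _ in range(inner):
            fn(*args)
        # Average over the inner loop so each sample spans many calls,
        # swamping timer resolution and ~10 us scheduling overhead.
        samples.append((time.perf_counter() - t0) / inner)
    return samples

data = list(range(10_000))
samples = benchmark(sorted, data)

# "Reasonable clustering": spread of the samples relative to the best run.
spread = (max(samples) - min(samples)) / min(samples)
```

If `spread` is small across runs, the numbers are probably trustworthy; a wild outlier usually means the OS or another process got in the way.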

Greg McKaskle
Message 18 of 27
(17,629 Views)
Maybe I am understanding this wrong, but it seems that if I disable debugging, or even set the subVIs to subroutine priority, the "protection" is no longer really needed. Still, the penalty seems to stay about the same under these conditions.





There are some efficiencies LV will take advantage of if you make a subVI into a subroutine, but it won't magically do all of them. In future releases, more and more of these will likely get implemented.

Greg McKaskle
Message 19 of 27
(17,629 Views)

if you make a subVI into a subroutine
Greg McKaskle

Okay, my LabVIEW ignorance is going to show again, but I would rather look stupid than be stupid...

 

I thought a SubVI was a subroutine (based on conventional programming).

What is the difference?

Thanks.


 

Message 20 of 27
(17,605 Views)