Eleven Ways to Update an Indicator from within a subVI. Their Relative Performances and Quite Far-Reaching Consequences...

Yes, I know, you wanted to do this some day. So I did it for you. Just run (and then stop) the Main VI from the attached set (Saved in LabVIEW 2016 32-bit). I suspect (and hope :P) the numbers will be quite a surprise and even a shock, especially for the fans of one particular method and some very aggressively promoted frameworks which use that method.

Message 1 of 53

Running them all in parallel produces some "greedy loops" which slow all the rest of them down. If I add a 1 ms wait to every single While loop (except the two with event structures in them), they're all in perfect sync, with the exception of the "Value Property of Indicator Reference", which falls behind.

 

To do an actually fair test, run them sequentially, not in parallel.
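To make the "greedy loop" point concrete outside of LabVIEW, here is a rough Python sketch (purely illustrative, not the attached VIs, and Python's GIL makes the parallelism only approximate): two free-running loops fight for CPU time, while the same loops with a 1 ms wait tick along side by side.

# Rough Python analogy of "greedy" vs. cooperative parallel loops.
# Not the attached VIs -- the pattern is the point: a loop with no wait hogs
# the CPU, a loop with a 1 ms wait plays nicely with its neighbours.
import threading
import time

def spin(counter, stop, wait_s):
    # Increment counter[0] until 'stop' is set, optionally sleeping each pass.
    while not stop.is_set():
        counter[0] += 1
        if wait_s:
            time.sleep(wait_s)

def run_pair(wait_s, duration_s=1.0):
    counters = [[0], [0]]
    stop = threading.Event()
    threads = [threading.Thread(target=spin, args=(c, stop, wait_s)) for c in counters]
    for t in threads:
        t.start()
    time.sleep(duration_s)
    stop.set()
    for t in threads:
        t.join()
    return [c[0] for c in counters]

print("no wait   :", run_pair(0))      # huge counts, the loops fight for CPU time
print("1 ms wait :", run_pair(0.001))  # modest, nearly identical counts -- in step

The same idea applies on the block diagram: a 1 ms Wait (ms) in each sender loop keeps any one of them from monopolizing a core.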

Message 2 of 53

Sure, if we add to the loops something that takes up a larger fraction of each iteration, then the performance difference will LOOK smaller. I had an empty watchdog queue at some point, so the Preview Queue Element node took much longer (relatively, of course) to execute even with a zero timeout than it does when there is indeed an element in that queue. But the thing is that if I didn't have that Preview Queue Element in the sending loops at all (let alone your "huge" artificial delay, which contributes the bulk, almost all, of the total time each iteration takes), the performance difference would be even larger. So, try putting a Diagram Disable structure around those Preview Queue Element nodes and you will see it for yourself. "Greedy loops", you are saying? Some are greedier than others for some reason? OK, I will make a sequential version and we will see.
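To put a number on why the difference only LOOKS smaller, here is a back-of-the-envelope sketch (Python; the per-iteration costs are made up for illustration, not measured from the VIs):

# Made-up per-iteration costs (microseconds), just to show the dilution effect.
fast_send = 1.0      # hypothetical cheap update method
slow_send = 10.0     # hypothetical expensive update method
overhead  = 1000.0   # fixed extra work per iteration, e.g. a 1 ms wait

print(slow_send / fast_send)                            # 10.0   -> 10x apart on their own
print((slow_send + overhead) / (fast_send + overhead))  # ~1.009 -> looks like a wash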

Message 3 of 53

OK. Here it is with a "sequential" version included. The differences are even more obvious now. Note that for the non-lossy regular queue and the event structure queue, I now take into account only the iterations which have been dequeued in the receiver ("fully processed") by the time the sender finished sending.

Message 4 of 53

As I already commented on LAVA: turn off debugging first, otherwise your measurements are useless.

Message 5 of 53

There's also a twelfth method: Set Control Value by Index. It's actually quite fast.

 

And your Sequential code has a race condition where the loops are sometimes stopped before they have processed all items. The counter indices for all consumers should be equal, but when I run the code they are not, which skews the results massively.
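For reference, here is one way that race can be removed in a text-language sketch (Python rather than the attached LabVIEW code, all names are mine): the producer enqueues a sentinel and the consumer drains the queue until it sees it, so every consumer finishes with the same count.

# Sketch of a race-free shutdown: the consumer drains the queue until it sees
# a sentinel, so its final count always equals the number of items sent.
import queue
import threading

N_ITEMS = 100_000
SENTINEL = None

def producer(q):
    for i in range(N_ITEMS):
        q.put(i)
    q.put(SENTINEL)              # "nothing more is coming"

def consumer(q, result):
    count = 0
    while True:
        item = q.get()
        if item is SENTINEL:
            break
        count += 1
    result.append(count)

q = queue.Queue()
result = []
threads = [threading.Thread(target=producer, args=(q,)),
           threading.Thread(target=consumer, args=(q, result))]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert result[0] == N_ITEMS      # every consumer ends with the same, full count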

Message 6 of 53

Looking at the code, a few other things occur to me. Comparing lossless and lossy modes of communication directly is not a 1:1 comparison. A lossy queue will obviously run faster than a lossless queue, since it does less work: values simply get lost.

 

So it's important to note which methods are lossy and which are lossless. On my machine, User Events actually come out looking pretty good: I see nearly 1.5k iterations processed per ms, and the best Channel (High Speed Streaming) comes in at half that. But your calculation of "Iterations per ms" is wrong. Instead of taking the last element received via the protocol being tested, you need to take the number of times the consumer loop has iterated. Taking the last element hugely favours lossy methods because you're using a value of, say, 90k as the last value when perhaps only 30k iterations have actually taken place.
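Here is a rough Python analogy (not the attached VIs; the single-slot mailbox merely stands in for a lossy channel) of why the two ways of counting give such different numbers.

# Why "last value received" overstates lossy throughput: the consumer may have
# observed only a fraction of the values. Illustrative only.
import threading
import time

N = 200_000
latest = [None]            # single-slot mailbox: new values overwrite old ones
observed = [0]             # values the consumer actually saw ("fully processed")
done = threading.Event()

def producer():
    for i in range(1, N + 1):
        latest[0] = i      # lossy write: anything not read in time is gone
    done.set()

def consumer():
    last_seen = None
    while not done.is_set():
        v = latest[0]
        if v is not None and v != last_seen:
            last_seen = v
            observed[0] += 1

t0 = time.perf_counter()
threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
ms = (time.perf_counter() - t0) * 1000

print("last value in the mailbox    :", latest[0])      # always N
print("values the consumer observed :", observed[0])    # typically far fewer
print("iterations/ms from last value:", round(latest[0] / ms))
print("iterations/ms from consumer  :", round(observed[0] / ms))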

 

The problem with benchmarking Notifiers or SEQs (single-element queues) this way is that the receiver gets almost no data, so the majority of the "communication" never actually takes place. On my test VM, the iteration counter for the consumer never indicates having received anything before the notifier or SEQ is actually destroyed, which defeats the purpose of the test.

Message 7 of 53

@styrum wrote:

Yes, I know, you wanted to do this some day. So I did it for you. Just run (and then stop) the Main VI from the attached set (Saved in LabVIEW 2016 32-bit). I suspect (and hope :P) the numbers will be quite a surprise and even a shock, especially for the fans of one particular method and some very aggressively promoted frameworks which use that method.


Turns out your statement falls flat because your testing is flawed. I don't mean this as an attack, quite the contrary. It's absolutely imperative that people "peek behind the curtain" in order to verify other people's claims, but at the same time, humility should lead you to question results which "shock" experienced users.

 

So no, the numbers are not a shock, because they change hugely if you start testing in a more robust way. You later claim adding a wait artificially changes the results, yet you have debugging enabled everywhere. There's a post somewhere about how to benchmark code (Here's a good one); it's a really hard thing to do because it necessitates taking care of a LOT of otherwise extraneous factors in order to really produce numbers with meaning.
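In text-language terms, the usual hygiene looks something like this generic micro-benchmark skeleton (Python, purely illustrative; warm-up, repetition and a median stand in for "turn off debugging and control the extraneous factors"):

# Generic micro-benchmark skeleton: warm up first, repeat many times,
# report the median rather than trusting a single run. Illustrative only.
import statistics
import time

def benchmark(func, warmup=10, runs=100):
    for _ in range(warmup):                  # let caches and allocators settle
        func()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        func()
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

# e.g. time a stand-in workload; substitute whatever you actually want to measure
print(benchmark(lambda: sum(range(10_000))))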

 

A nice phrase in German which doesn't translate as elegantly is "Wer misst, misst Mist". It means: whoever measures, measures crap.

 

So I applaud the idea of testing the various modes of communication, but the implementation can be improved. And none of this post (not a single word of it) is meant in any kind of condescending way. I've been in your position. It's the start of wonderful things.

Message 8 of 53

@Intaris wrote:

There's also a twelfth method: Set Control Value by Index. It's actually quite fast.

 

Good catch! Thanks.

 


@Intaris wrote:

And your Sequential code has a race condition where the loops are sometimes stopped before they have processed all items. The counter indices for all consumers should be equal, but when I run the code they are not, which skews the results massively.


No, I do that deliberately. I want to count only fully completed "transactions" when evaluating "performance". So, whatever number of items the receiver manages to process by the time the sender is done is what constitutes the number of (fully) processed iterations.
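A rough Python sketch of that counting rule (illustrative only, not the attached VIs; all names are mine): the score is whatever the receiver has dequeued at the instant the sender finishes, not the total number sent.

# The "score" is the receiver's count at the instant the sender finishes;
# items still sitting in the queue are not counted.
import queue
import threading
import time

N = 100_000
q = queue.Queue()
processed = [0]
stop = threading.Event()

def sender():
    for i in range(N):
        q.put(i)

def receiver():
    while not stop.is_set():
        try:
            q.get(timeout=0.01)
            processed[0] += 1
        except queue.Empty:
            pass

rx = threading.Thread(target=receiver)
tx = threading.Thread(target=sender)
rx.start()
t0 = time.perf_counter()
tx.start()
tx.join()                                     # the sender is done...
elapsed_ms = (time.perf_counter() - t0) * 1000
fully_processed = processed[0]                # ...snapshot the receiver's count now
stop.set()
rx.join()

print("sent:", N, " fully processed by then:", fully_processed)
print("iterations/ms (fully processed only):", round(fully_processed / elapsed_ms))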

Message 9 of 53

@Intaris wrote:

So I applaud the idea of testing the various modes of communication, but the implementation can be improved.


Of course it can! Please do! The whole thing was meant only as a provocation, a seed for a meaningful discussion.

Message 10 of 53