03-15-2007 08:15 AM
Yes, tst is correct.
I am working on an "Action Engine" Nugget. When I am close, I will schedule it in the "Keeping the Nuggets cooking" thread found here:
http://forums.ni.com/ni/board/message?board.id=BreakPoint&message.id=3194&jump=true
The term "Action Engine" comes from the book "LabVIEW Applications Development: A Course on Advanced LabVIEW Programming Techniques" produced by Viewpoint Systems, Version 2.2, August 2000, page 67. This book was the precursor to the NI "Advanced Applications Development" course.
Ben
06-28-2012 08:50 AM
@Ben wrote:
Thank you Tomi!
I will have to experiment with this approach to get my head around what you have done. I have read that the variant has been put "on steroids" since I last looked and the operations are supposed to be much faster.
Has anyone ever compared the new variant performance and how it matches up with non-variant approaches?
Ben
I know this is an old thread, but I also need to wrap my head around this and understand the mechanism behind the power of using variants.
This might provide the means of writing beautiful code.
06-29-2012 12:45 PM
Hi Ray.R,
I found this article related to Variant Data in LabVIEW.
I hope this helps!
Regards,
06-29-2012 01:16 PM
@Ray.R wrote:
@Ben wrote:
Thank you Tomi!
I will have to experiment with this approach to get my head around what you have done. I have read that the variant has been put "on steroids" since I last looked and the operations are supposed to be much faster.
Has anyone ever compared the new variant performance and how it matches up with non-variant approaches?
Ben
I know this is an old thread, but I also need to wrap my head around this and understand the mechanism behind the power of using variants.
This might provide the means of writing beautiful code.
The power of the method lies not in the variants (nice article in the previous post, nonetheless), but rather in the implementation of the associative array, also known as the variant attributes. They are implemented using a special type of binary tree known as a red-black tree.
Binary trees in general are very powerful data structures which allow you to "rearrange" your data much like you would arrange your desk. Things you use very often are in the top drawer on your right or left side (depending on your handedness), and the junk drawer is far away. It takes some extra time to arrange everything this way, but once you have, it is very convenient. Arranging your data in a binary tree allows you to find an item, on average, more quickly than using a linear search. The optimal tree for a given data set might have a few very common items at the top of the tree, and a few items way down at the end of a long chain. The red-black tree enforces balance so that there is more predictable behavior, i.e. less separation between the best and worst cases. That is usually a good trade-off, especially when applied to generic data sets.
The power in this case is that the insertion operation into a binary tree automatically adds new values or overwrites existing ones to ensure there are no duplicates. This is more efficient in many cases than performing a search and a conditional append, and doubly so when the values are strings (adding conversions to/from strings is a penalty that sometimes cannot be overcome).
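Since block diagrams cannot be pasted here, a rough C++ sketch using std::map (which most standard libraries implement as a red-black tree) shows the same insert-or-overwrite and O(log n) lookup behavior that Set/Get Variant Attribute give you. The attribute names and values below are only placeholders, and the analogy is approximate:

// Rough C++ analogue of LabVIEW variant attributes: an ordered associative
// array backed by a red-black tree (std::map in most standard libraries).
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, double> attrs;  // key = attribute name, value = data

    // "Set Variant Attribute" analogue: insertion adds a new key or
    // overwrites an existing one, so duplicates are impossible by construction.
    attrs["Temperature"] = 23.5;
    attrs["Pressure"]    = 101.3;
    attrs["Temperature"] = 24.1;   // overwrites, does not duplicate

    // "Get Variant Attribute" analogue: O(log n) lookup instead of a
    // linear search through an array of name/value pairs.
    auto it = attrs.find("Pressure");
    if (it != attrs.end())
        std::cout << it->first << " = " << it->second << '\n';

    std::cout << "count: " << attrs.size() << '\n';  // prints 2, not 3
    return 0;
}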
Many of us have asked, pleaded, begged, or otherwise tried to get NI to add a native tree API so we do not have to abuse variants, but that seems a bit far out. The glaring omission in the attribute implementation, IMO, is that you cannot pull the top off the tree directly (which should be O(1) plus a small penalty to rebalance); instead you have to get all N of the names just to find the first one so you know what to pull off.
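For comparison, here is a rough sketch of the operation being asked for, again in C++ with std::map standing in purely as an illustration of the attribute tree: pulling the smallest name off the tree directly, without first fetching all N names:

// Sketch of the "pull the top off the tree" operation discussed above.
// begin() returns the smallest key in constant time; erase(begin())
// removes it with only the usual rebalancing cost.
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, int> attrs{{"Alpha", 1}, {"Beta", 2}, {"Gamma", 3}};

    while (!attrs.empty()) {
        auto first = attrs.begin();            // smallest name, O(1)
        std::cout << "popped " << first->first << '\n';
        attrs.erase(first);                    // amortized constant + rebalance
    }
    return 0;
}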
06-30-2012 09:17 AM
Thanks guys,
I tried to stay away from variants in order not to fall into the abusive category...
I now think it is high time I explore this. A little "controlled" abuse may not be that bad.
I will certainly experiment...
Thanks!