
Efficient sizeof function

Hi all,

 

I would like to know if there is a way to get the size of a fixed-size data type more efficiently than simply using Type Cast or Flatten to String primitives.

By "fixed-size data type" I mean any type that has a size fully determined at compilation (numerics, booleans, timestamps, fixed-size arrays of these types, clusters of these types…).

By "size" I mean the number of bytes it has in its internal representation, that is the size of the string produced by the Flatten to String primitive.

 

I have already managed to create a more efficient (100x to 150x) function using malleable VIs and some tricks with auto-preallocation of arrays, but I am still limited to integers and booleans, and to single-level clusters of up to 8 elements of these types. I tried to support a second level of sub-clusters, but the growing number of nested VIMs quickly makes the IDE very sluggish…

 

Here is an instance of my VIM "sizeof" with a cluster of 4 scalars as the input Data Type.
For each element, it allocates a boolean array whose length is the size (in bits) of that element's internal representation. Build Array then concatenates them to get the total size in bits, and Decimate Array divides that by 8 to get the size in bytes. All of this code is disabled, since all we need is the type-propagated fixed size of the array, so it costs nothing at execution time.

[Image: raphschru_0-1678894221937.png]
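In text form, the trick amounts to something like the sketch below. Python is used purely as pseudocode, and the bit-width table is an assumption standing in for the per-element boolean arrays the VIM allocates:

```python
# Each supported element type maps to a boolean array whose length is its
# size in bits; concatenating them (Build Array) and dividing the total
# length by 8 (Decimate Array) yields the byte count. In the real VIM this
# code never runs; only the type-propagated array length matters.
BITS = {"U8": 8, "U16": 16, "U32": 32, "I64": 64, "SGL": 32, "DBL": 64, "BOOL": 8}

def sizeof(element_types):
    bits = []
    for t in element_types:
        bits += [False] * BITS[t]   # fixed-size boolean array per element
    return len(bits) // 8           # decimate by 8 -> size in bytes

print(sizeof(["U32", "DBL", "U16", "BOOL"]))   # 15
```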

 

Here is the malleable SubVI "Data Type to Fixed-Size Boolean Array".
If the given type is supported, it outputs a U8, otherwise it outputs an I8.

[Image: raphschru_1-1678894552914.png]

 

Any ideas on how to support more types (fixed-size arrays, sub-clusters, …) while keeping the IDE response time reasonable?

Attached is my example project.

Message 1 of 4

If you truly need this to be blazing fast, my first thought is just a look-up table. You could iterate through all of the standard datatypes (using scripting if you want) to get all of your "base" datatypes' sizes just once to build your table. Then you could use some recursion and/or info from the Type Information palette for arrays, clusters, etc. I believe this would also work with non-fixed-size datatypes (at runtime), and you wouldn't have to reallocate storage for anything.
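A hedged sketch of that idea, using Python as pseudocode: base scalar sizes live in a table built once, and aggregates are sized recursively. The tuple-based type descriptors are an assumption for illustration, not LabVIEW's Type Information VIs:

```python
# Base scalar sizes tabulated once (could be generated by scripting).
BASE_SIZE = {"U8": 1, "U16": 2, "U32": 4, "I64": 8, "SGL": 4, "DBL": 8,
             "BOOL": 1, "TIMESTAMP": 16}

def flat_size(td):
    """Recursively size a hypothetical type descriptor."""
    kind = td[0]
    if kind == "scalar":          # ("scalar", "DBL")
        return BASE_SIZE[td[1]]
    if kind == "cluster":         # ("cluster", [td, td, ...])
        return sum(flat_size(e) for e in td[1])
    if kind == "array":           # ("array", element_td, fixed_length)
        return td[2] * flat_size(td[1])
    raise TypeError(f"unsupported type descriptor: {td!r}")

header = ("cluster", [("scalar", "U32"), ("scalar", "U16"),
                      ("array", ("scalar", "U8"), 6)])
print(flat_size(header))   # 12
```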

 

If you are worried about maintaining the look-up table, you could use uninitialized shift registers to build it the first time the function is called. That would incur significant overhead on the first call, but you could do that as part of your initialization routines.
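In text form, the "build it on the first call" pattern is roughly this minimal sketch (the table contents are placeholders):

```python
_size_table = None   # plays the role of the uninitialized shift register

def size_table():
    global _size_table
    if _size_table is None:                 # first call pays the build cost
        _size_table = {"U8": 1, "U16": 2, "U32": 4, "DBL": 8, "BOOL": 1}
    return _size_table                      # later calls reuse the cache
```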

 

 

Note that there ARE some issues with the above approach, especially concerning padding. See this old thread for more information: https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Sizeof-Operator/idi-p/1042361
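The distinction that thread is about can be shown with a quick analogy: a packed (flattened-style) size and an in-memory size with alignment padding generally differ. This is a C-style illustration done in Python via ctypes; the exact numbers depend on platform alignment rules:

```python
import ctypes
import struct

class Mixed(ctypes.Structure):        # analogue of a "mixed" cluster: U8 + DBL
    _fields_ = [("a", ctypes.c_uint8), ("b", ctypes.c_double)]

print(struct.calcsize(">B d"))   # 9  -> packed / flattened-style size
print(ctypes.sizeof(Mixed))      # 16 on most platforms -> includes padding
```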

 

You might be able to get around this by simply calling Flatten to String on "complex" datatypes (like mixed clusters) and by using the look-up table for "simple" datatypes. It depends on what you're actually doing with the information.

Message 2 of 4

I would need this function to easily parse received message frames on an RT target. That way I could plug in any data type (for example a cluster representing the header of my frame) and efficiently know how many bytes I need to extract from the received data to rebuild my header.

Here is how I would use it:

 

[Image: raphschru_0-1678897739672.png]
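Translated into text-based pseudocode, the intended use looks something like this sketch (the header layout and names are made up for illustration, not the actual frame format):

```python
import struct

HEADER_FMT = ">H H I"                        # e.g. msg ID, payload length, counter
HEADER_SIZE = struct.calcsize(HEADER_FMT)    # the "sizeof" result: 8 bytes

def parse_frame(buffer: bytes):
    """Take exactly HEADER_SIZE bytes, rebuild the header, return the rest."""
    msg_id, payload_len, counter = struct.unpack(HEADER_FMT, buffer[:HEADER_SIZE])
    payload = buffer[HEADER_SIZE:HEADER_SIZE + payload_len]
    return (msg_id, counter), payload

frame = struct.pack(HEADER_FMT, 7, 3, 42) + b"abc"
print(parse_frame(frame))   # ((7, 42), b'abc')
```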

 

Message 3 of 4

Well, after some benchmarking, it seems this simple method is just as fast, so I guess I will use it for my application 🙄:

 

[Image: raphschru_1-1678898604136.png]

 

As for the 100x to 150x gain I thought I had at the beginning, I think I was being fooled by compiler optimizations in my benchmarks (such as loop-invariant code motion)...
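For what it's worth, a common way to keep a compiler from hoisting the measured work out of a benchmark loop is to vary the input on every iteration and consume the output. Here is a language-agnostic sketch of that structure (Python only as illustration; Python itself does not perform this kind of hoisting):

```python
import time

def benchmark(fn, inputs):
    """Time fn over varying inputs and fold the results into a checksum,
    so the work cannot be treated as loop-invariant or dead code."""
    acc = 0
    start = time.perf_counter()
    for x in inputs:
        acc ^= fn(x)                 # result is consumed, input varies
    elapsed = time.perf_counter() - start
    return elapsed, acc              # returning acc keeps it "live"

elapsed, _ = benchmark(lambda x: x * x, range(100_000))
print(f"{elapsed:.4f} s")
```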

Message 4 of 4