
SGL to I32 Typecast

I have a value on my RT target which is passed as a SGL (unavoidable unfortunately at the moment).

 

I need to extract a 32-bit value from this (all bits, not just the lowest 24 bits).  Using "Type Cast" works but it's horrendously slow.

 

How on earth can I cast a SGL to the equivalent I32 with an identical bit pattern without the major overhead of flattening and unflattening inherent in the Type Cast function?  This is essentially a NOP, so I can't believe there isn't a relatively simple way to do this.
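 

In C terms, the no-op I'm after would be something like this (just a sketch to show what I mean; the function name is made up):

#include <stdint.h>
#include <string.h>

/* Reinterpret the bit pattern of a 32-bit float as a signed 32-bit integer.
   A same-size memcpy is the well-defined way to do this in C, and an
   optimising compiler reduces it to a single register move. */
static inline int32_t sgl_bits_to_i32(float x)
{
    int32_t out;
    memcpy(&out, &x, sizeof out);
    return out;
}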

 

Can anyone help?

Message 1 of 26

I don't have an answer, but I am shocked that the Type Cast is slow.  How are you measuring it, and what kind of numbers are you seeing?  It seems quite strange; I too would expect very little overhead.

Message 2 of 26

How slow is "horrendously slow"?

 

A quick benchmark I did gave me 614 milliseconds to do 1 million conversions for a random number.  Or about .6 milliseconds per conversion.  It didn't seem horrendous.  What kind of speed are you looking for?

 

What is interesting is that the typecast of a string control to a U32 only took .09 milliseconds per conversion, so about 5-7 times faster.

 

Going from a number to a string and back to a number took about .6 milliseconds again.  So you're right that something does seem to be slower with a typecast to a number.  I'm just not sure what speed you'd like to have.

Message 3 of 26

Type Cast normalizes endianness so it is going to go in and start byte swapping.  That is why it is so "slow".  
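 

Roughly speaking, on a little-endian (x86) host the same-size case ends up doing something like the following C model (my guess at the mechanism, not NI's actual code):

#include <stdint.h>
#include <string.h>

/* Swap the byte order of a 32-bit value. */
static uint32_t swap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

/* Model of a same-size Type Cast on a little-endian host: flatten to
   big-endian bytes, then unflatten back to native order.  The two swaps
   cancel out for equal-size types, but they are not free. */
int32_t typecast_sgl_to_i32_model(float x)
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);   /* raw bit pattern               */
    uint32_t flat = swap32(bits);     /* "flatten" to big endian       */
    return (int32_t)swap32(flat);     /* "unflatten" on the same host  */
}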

 

When you do not have to worry about things that NI decrees you shouldn't have to worry about, things are great.

 

When you have to worry about things that NI decrees you shouldn't have to worry about, things get very tough.

 

I have yet to find a way that consistently gives more than marginal improvements.  NI would need to provide a new version of Type Cast which leaves the data alone.  (I'd like it, but NI already considers Type Cast advanced and dangerous; this would probably make their heads explode.)

 

 

Message 4 of 26

@RavensFan wrote:

How slow is "horrendously slow"?



My benchmarking is taking place in the microsecond range.

 

I have a chunk of code which is responsible for managing registrations to RT variables and packing them to be sent back to the host with "Value Change"-like functionality.

 

The code being fed by this value requires approximately 40 ns (timing gets a bit hairy down here).  The type cast alone (note I'm talking about an RT system here) takes 400 ns.  The type cast takes 10 times longer than the code I spent valuable time optimising...

 

Our entire loop rate is 50 us; allowing the majority of that for FPGA IO, we have essentially 20 us available for our functionality.  I don't like giving up 2% of my overall RT budget for a typecast...  That's "horrendously slow" for me since, in real terms, doing a typecast from one 32-bit number to another should be free.  Or at least we should have a method to make it free.  In C it's free.

Message 5 of 26

@Darin.K wrote:

Type Cast normalizes endianness so it is going to go in and start byte swapping.  That is why it is so "slow".  

 


Can you explain what you mean regarding Type Cast and endianness?

 

 

I know that Type Cast places smaller types in the most significant position of a larger data type and only takes the most significant position of a larger data type when converting to a smaller type; it goes in most-significant-to-least-significant order and then pads/drops the rest.

 

I could see potential for this to cause some disruptions when casting between data types of different sizes, but for a data type represented with an equal size, there should be no manipulation going on at all.
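 

For example, a C-style model of that rule (just for illustration, not NI code) would be:

#include <stdint.h>

/* Smaller to larger: the value lands in the most significant position and
   the rest is padded, e.g. U8 0x12 -> U32 0x12000000. */
static uint32_t typecast_u8_to_u32_model(uint8_t b)
{
    return (uint32_t)b << 24;
}

/* Larger to smaller: only the most significant byte is kept. */
static uint8_t typecast_u32_to_u8_model(uint32_t v)
{
    return (uint8_t)(v >> 24);
}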

 

 


@RavensFan wrote:

... 614 milliseconds to do 1 million conversions for a random number.  Or about .6 milliseconds per conversion ...

 


Looks like a typographical error - should be 0.6 microseconds, or 600 nanoseconds, per conversion (614 ms / 1,000,000 conversions), which is in line with Intaris's reported benchmark.

 



Message 6 of 26

@VItan wrote:
Looks like a typographical error - should be 0.6 microseconds, or 600 nanoseconds, per conversion, which is in line with Intaris's reported benchmark.

You're right.  I must've miscalculated by a factor of 1000 somewhere.

 


Message 7 of 26

@VItan wrote:
I could see potential for this to cause some disruptions when casting between data types of different sizes, but for a data type represented with an equal size, there should be no manipulation going on at all. 

There is a fine distinction you are missing here.  Type Cast takes a lot of input types and converts them to a lot of output types.  A very common input/output type is string.  If there were no concern for endianness, there would be trouble if you type cast to a string on one platform and unflattened it on another one with different endianness.  For this reason TC will convert everything to big endian.

Now, some individual cases, like the one being discussed here, could go through untouched, but that requires someone from NI to go in and create special cases in the code.  (Anomalies in timing are probably due to a few of these special cases already existing.)  That is why I suggested a new node: instead of a case-by-case, wait-for-the-next-release cycle, we could decide for ourselves whether to be safe or unsafe.  It is like the Always Copy node: rarely needed, but amazingly helpful when necessary.

Message 8 of 26

Can we do this with a DLL or is the DLL call overhead already slower?

Message 9 of 26

Are you operating on scalars or arrays?  I usually do not recoup the DLL overhead unless I am operating on an array.
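 

For example, something along these lines could be built into a DLL and wired to a Call Library Function Node (names are made up, and whether it actually beats the native Type Cast on RT would need benchmarking):

#include <stdint.h>
#include <string.h>

#ifdef _WIN32
#define DLLEXPORT __declspec(dllexport)
#else
#define DLLEXPORT
#endif

/* Copy the raw bit patterns of n SGL values into an I32 array allocated by
   the caller (LabVIEW).  One call per array amortises the CLFN overhead. */
DLLEXPORT void SglBitsToI32Array(const float *in, int32_t *out, int32_t n)
{
    if (in && out && n > 0)
        memcpy(out, in, (size_t)n * sizeof *out);
}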

Message 10 of 26