LabVIEW


floating point and typecast bug

I've written a small VI to show students how big a floating-point number can be, and also how big the difference is between the biggest representable number and the next one below it. These differences are huge, but I did not succeed for the extended type.

Attached is a VI that calculates the biggest number for single and double; this should also work for extended, but it does not.

I could be wrong, but a string should be typecast in the same way for all three datatypes, differing only in the exact size.

So the first question is: am I making a mistake?

Second, if I'm not, is this a bug?

 

PS: the VI is saved in LabVIEW 2011 SP1 and uses a simple subVI, also attached.

greetings from the Netherlands
Message 1 of 8 (2,691 Views)

I haven't looked at your VIs yet, but are you aware that LabVIEW always flattens extended-precision values to 16 bytes? This has historical reasons: the extended-precision type is actually floating-point-unit dependent. x86 uses 10 bytes for it, 68k used 12 bytes (although 16 of those bits were not actually used), and Sun Solaris used a software library for extended support that worked with 16-byte entities. To keep flattened LabVIEW data interchangeable between the different LabVIEW platforms, the Flatten and Unflatten functions always convert the platform-dependent extended number to and from a 16-byte number in the byte stream. So the binary value you see in the byte stream is what the Sun Solaris extended floating-point value would produce, and it definitely does not use all of the bits.
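For illustration, the same platform dependence is visible from numpy, whose longdouble maps to whatever extended type the local C compiler provides (a Python sketch; the printed values depend on your machine):

```python
import numpy as np

# Storage size and precision of the platform's extended type:
# e.g. 16 bytes but only a 64-bit significand on x86-64 Linux
# (the 80-bit x87 format padded out), or 8 bytes on platforms
# where long double is just double.
print(np.dtype(np.longdouble).itemsize)
print(np.finfo(np.longdouble).nmant)  # fraction bits, e.g. 63 on x86
```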

Rolf Kalbermatter
My Blog
Message 2 of 8 (2,688 Views)

Thanks Rolf. That could be an explanation, as I read that typecasting converts by flattening and unflattening.

However, typecasting is not the same as flatten/unflatten, as can be seen when typecasting a string to single and double.

The string length information is included when Flatten is used, but this length info is not used when typecasting.
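In byte-stream terms the difference looks like this (a Python sketch; the byte values are just an example):

```python
import struct

raw = bytes([0x40, 0x49, 0x0F, 0xDB])

# Type Cast: the bytes are reinterpreted directly as a big-endian
# single-precision float, with no header of any kind
value = struct.unpack('>f', raw)[0]            # ~3.1415927

# Flatten To String on a string value: a 4-byte big-endian length
# header is prepended, which Type Cast neither adds nor expects
flattened = struct.pack('>I', len(raw)) + raw  # 00 00 00 04 40 49 0F DB
```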

 

So although it may be related to the old Solaris format, I still consider it a bug until a real explanation comes.

greetings from the Netherlands
Message 3 of 8 (2,672 Views)

Albert.Geven wrote:

So although it may be related to the old Solaris format, I still consider it a bug until a real explanation comes.


It is documented here:

 

"The flattened form for extended-precision numbers is the 128-bit extended-precision, floating-point format. When you save extended-precision numbers to disk, LabVIEW stores them in this format."

 

That page does not say anything about fixed-point datatypes (FXP), which is another can of worms. 😉

Message 4 of 8 (2,668 Views)

Don't ask me why, but you learn something new every day.

Can anyone explain this?

 

aa.png

 

Reversing the string and using Unflatten yields the expected result.

Analize.png

This shows Flatten and Unflatten operating in different byte orders.

 

I'm assuming there really are 128 bits allocated even if only 80 are used: either the first 80 or the last 80 of a 128-bit buffer.
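If that guess is right, mapping a normal x86 80-bit value onto the 128-bit pattern would look roughly like this (a speculative Python sketch: both formats happen to use a 16383 exponent bias, and the x87 explicit integer bit has no counterpart in binary128):

```python
def x87_fields_to_binary128(sign, biased_exp, significand64):
    # Drop the x87 explicit integer bit, then pad the remaining 63
    # fraction bits with 49 zero low-order bits to reach 112 bits
    fraction = (significand64 & ((1 << 63) - 1)) << 49
    bits = (sign << 127) | (biased_exp << 112) | fraction
    return bits.to_bytes(16, 'big')   # big-endian, like flattened data
```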

 

 

Edit: {Guess I guessed right}

It is documented here:

 

"The flattened form for extended-precision numbers is the 128-bit extended-precision, floating-point format. When you save extended-precision numbers to disk, LabVIEW stores them in this format."


"Should be" isn't "Is" -Jay
Message 5 of 8 (2,665 Views)

Albert,

 

On the Mac the mantissa is 112 bits, which suggests that all 128 bits are used on this platform. The value generated by your VI is 1.18973E+4932.
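That value checks out against the format: the largest finite binary128 number is (2 - 2^-112) * 2^16383, which exact integer arithmetic confirms (a Python sketch):

```python
# Largest finite binary128 value, (2 - 2**-112) * 2**16383, computed
# exactly as the integer (2**113 - 1) * 2**(16383 - 112)
digits = str((2**113 - 1) * 2**(16383 - 112))
print(f"{digits[0]}.{digits[1:6]}e+{len(digits) - 1}")  # 1.18973e+4932
```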

 

I added a Flatten To String inside the case structure, connected to each of the numeric indicators. The flattened strings are the same as your strings, except for extended, where the last 6 bytes are 00 and one other bit switches from 1 to 0. Prepend size? does not seem to change the strings.

 

No explanation, just an additional observation.

 

Lynn

Message 6 of 8 (2,654 Views)

Thanks all.

Another subtlety that at least needs documentation.

 

We learn something new each day this way.

greetings from the Netherlands
Message 7 of 8 (2,638 Views)

@Albert.Geven wrote:

Thanks Rolf. That could be an explanation, as I read that typecasting converts by flattening and unflattening.

However, typecasting is not the same as flatten/unflatten, as can be seen when typecasting a string to single and double.

The string length information is included when Flatten is used, but this length info is not used when typecasting.

So although it may be related to the old Solaris format, I still consider it a bug until a real explanation comes.


Actually, Type Cast is a subset of Flatten: Flatten can operate on unflat data 🙂 while Type Cast can't. Therefore Type Cast only works on scalars, clusters of scalars, strings, and arrays of scalars as the top-level type; anything else is unflat and will cause a broken wire when connected to Type Cast. Since Type Cast only operates on flat data, it does not have to prepend as much information to the data stream as Flatten does.
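The practical difference shows up with arrays, where Flatten prepends the dimension size and Type Cast emits only the raw elements (a Python sketch of the two byte streams):

```python
import struct

data = [1, 2, 3]  # stand-in for an I32 array

# Type Cast: nothing but the big-endian elements themselves
typecast = b''.join(struct.pack('>i', x) for x in data)

# Flatten To String: a 4-byte big-endian element count comes first
flattened = struct.pack('>I', len(data)) + typecast
```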

 

And I'm not sure if the PPC changed anything (it seems some of them support 16-byte floating point, but none the 10-byte format), but the 68k architecture, while technically occupying 12 bytes for extended, still only used 10 of them (like the x86 architecture), adding 16 unused bits between the mantissa and the exponent.

Rolf Kalbermatter
My Blog
Message 8 of 8 (2,633 Views)