03-07-2013 02:58 AM
Hi,
When using the conversion "bullets" to convert SGL, DBL and EXT to integers, there are some values which convert incorrectly. One example is the integer 9223370937343148030, which can be represented exactly as a SGL (and thus exactly as a DBL and EXT as well). If you convert this to I64 you get 9223370937343148032 instead, even though the correct integer is within the range of an I64. There are many similar cases, all (that I've noticed) at the large end of the ranges.
This has nothing to do with which integers can or cannot be represented exactly as floating-point values. This is a genuine conversion bug, mind you.
Cheers,
Steen
03-07-2013 03:21 AM - edited 03-07-2013 03:30 AM
Hi Steen,
no, you can't represent 9223370937343148030 as SGL... SGL only has a 23-bit mantissa, or roughly 7 decimal digits!
Your SGL consists of the bytes 5EFFFFFE and is converted to I64 (7FFFFF0000000000), which is the correct conversion: there are no more bits in the SGL.
Edit: Try to input 9223370937343148032 into your SGL control: it will not be accepted...
03-07-2013 07:45 AM - edited 03-07-2013 07:48 AM
Hi Gerd,
9223370937343148032 won't be accepted when entered in a SGL in LabVIEW, but that wasn't the number I wrote. 9223370937343148030 will be accepted though (note the "0" at the end vs. the "2"), which was the SGL I wrote. The former number is the number that SGL is converted into.
But, your reply prompted me to take a look at the binary representation of this SGL in LabVIEW, and when I enter 9223370937343148030 into a SGL constant its binary representation is as follows:
Sign: 0b
Exponent: 10111101b (189 decimal)
Implicit integer bit: 1b
Fraction: 11111111111111111111110b
The IEEE 754 formula for converting a single-precision binary floating-point number to a real number is (-1)^sign * (1 + SUM(i=1..23, b_(23-i)*2^-i)) * 2^(exponent-127), where b_k is fraction bit k (bit 22 being the most significant). With that I get 1*(1+4194303/4194304)*2^62, which is the integer 9223370937343148032. And that is exactly the number returned when you convert to an I64 in LabVIEW.
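For anyone who wants to check this outside LabVIEW, here is the same decode done in Python with exact integer arithmetic, plus a round-trip through a single-precision pack. This is my own illustration, not LabVIEW code:

```python
import struct

# Decode the SGL bit pattern 0x5EFFFFFE by hand: sign 0, biased exponent
# 10111101b = 189, fraction 11111111111111111111110b
bits = 0x5EFFFFFE
sign = bits >> 31
exponent = (bits >> 23) & 0xFF
fraction = bits & 0x7FFFFF

# (-1)^sign * (1 + fraction/2^23) * 2^(exponent-127), kept exact by scaling
# the 24-bit significand (implicit 1 plus fraction) by 2^(exponent-127-23)
value = (-1) ** sign * ((1 << 23) + fraction) * 2 ** (exponent - 127 - 23)
print(value)  # 9223370937343148032

# Rounding 9223370937343148030 to single precision yields the same pattern
packed = struct.pack('>f', float(9223370937343148030))
print(packed.hex())                         # 5efffffe
print(int(struct.unpack('>f', packed)[0]))  # 9223370937343148032
```

So the bit pattern Steen quotes really does denote 9223370937343148032, and the nearest SGL to 9223370937343148030 is that same pattern.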
So now the bug is identified as being in the SGL display code (in constants as well as in controls/indicators) then? Documentation:
The number in the SGL control on the left in the above picture shouldn't be possible, as the decimal value of the SGL's binary representation ends in the digit 2, which makes the number internally identical to that in the I64 indicator on the right. Therefore you're right that the conversion is correct, but the numeric display of some floats in LabVIEW is buggy then.
Cheers,
Steen
03-07-2013 08:15 AM - edited 03-07-2013 08:18 AM
Ok, here's something similar: http://forums.ni.com/t5/LabVIEW/do-numerical-indicators-display-extended-precision-floats/td-p/72040...
Seems to be a confirmed bug in the LabVIEW display code for floats, which has probably been around forever and was officially acknowledged about 5 years ago, with no fix in sight. Sigh...
At least my GPower Overflow toolset works correctly then; I feared it didn't when I ran into this weirdness.
/Steen
03-07-2013 01:54 PM
Hi Steen,
That does seem like another insufficient-precision problem, but keep in mind that even SGL-precision numbers can have an extremely large number of significant digits, and I'm guessing the indicators are not designed to display the full number of significant digits for the entire SGL range. If that is the case, I agree that the property page shouldn't allow you to select an undisplayable number of significant digits.
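To put a number on that claim, here is a quick Python check (my own illustration) of how many significant decimal digits the exact value of a SGL can need. The smallest subnormal single already needs over a hundred:

```python
import struct
from decimal import Decimal

# The smallest subnormal SGL is 2^-149 = 5^149 * 10^-149; its exact
# decimal expansion therefore has the digits of 5^149, which is far more
# than any indicator is going to display.
tiny = struct.unpack('>f', bytes.fromhex('00000001'))[0]  # 2^-149
digits = Decimal(tiny).as_tuple().digits
print(len(digits))  # 105
```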
Here's my experiment, for reference. Typing the number 9223370937343148030 into an arbitrary-precision calculator, I get this in binary:
111111111111111111111101111111111111111111111111111111111111110_2
Taking the top 24 bits for a SGL gives this:
111111111111111111111101
which after rounding up gives us our SGL representation
111111111111111111111110 * 2^39 = 9223370937343148032
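The same rounding step can be checked in exact integer arithmetic. This Python sketch (my illustration, not LabVIEW's actual rounding code) keeps a 24-bit significand and rounds to nearest on the discarded bits:

```python
# Keep the top 24 bits of the integer and round to nearest on the rest
x = 9223370937343148030
shift = x.bit_length() - 24            # 39 discarded bits
significand, rest = divmod(x, 1 << shift)
if 2 * rest >= 1 << shift:             # discarded part is at least half an ulp
    significand += 1                   # round up (no exact tie here, so no
                                       # ties-to-even handling is needed)
print(bin(significand))                # 0b111111111111111111111110
print(significand * (1 << shift))      # 9223370937343148032
```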
Based on the fact that the digit in question is displaying a 0, I'm assuming this isn't an actual conversion problem, but simply not displaying enough decimal significant digits. For example, if you change the display to SI notation, you see even fewer digits and lose the exponent value (!):
9.2233709373431480000000000E
I'll file a separate corrective action request on this if I can't find an existing one.
Jim
03-08-2013 03:27 AM
Yes, I understand the implications involved, and there is definitely a limit to how many significant digits can be displayed in numeric controls and constants today. I think either that limit should be lifted, or a cap should be put on the settings available in the display format configuration page.
I ran into this problem as I'm developing a new toolset that lets you convert all the numeric formats into any other numeric format, just like the current "conversion bullets". My conversion bullets have outputs for overflow and exact conversion as well, since I need that functionality myself for a Math toolset (GPMath) I'm also developing. Eventually I'll maybe include underflow as well, but for now just those two outputs are available. Example:
I do of course pay close attention to the binary representation of the numbers to calculate the Exact conversion? output correctly for each conversion variation (there are hundreds of VIs in polymorphic wrappers), but in some cases I relied on the numeric indicator's ability to show the true number when configured appropriately - that is how I discovered this bug, which I at first mistook for a conversion error in LabVIEW.
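As a rough model of what such a bullet computes, here is a Python sketch of a float-to-I64 conversion with Overflow? and Exact conversion? outputs. The names and the round-to-nearest/saturate behavior are my assumptions for illustration, not the GPower implementation:

```python
def to_i64(x: float):
    """Model of a float-to-I64 'bullet' with Overflow? and Exact conversion?
    outputs: round to nearest (ties to even), saturate on overflow."""
    I64_MIN, I64_MAX = -(1 << 63), (1 << 63) - 1
    rounded = round(x)                        # exact: Python ints are unbounded
    result = min(max(rounded, I64_MIN), I64_MAX)
    overflow = rounded != result
    exact = not overflow and result == x      # int/float comparison is exact
    return result, overflow, exact

print(to_i64(1024.0))          # (1024, False, True)
print(to_i64(2.5))             # (2, False, False) - rounded, not exact
print(to_i64(float(2**63)))    # (9223372036854775807, True, False)
```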
Is there a compliance issue with EXT?
While doing this work I've discovered that the EXT format is somewhat misleadingly labelled "80-bit IEEE compliant" (it says so here), a statement that should be read with some suspicion IMO. The LabVIEW EXT is not simply IEEE 754-1985 compliant anyway, as that would imply the x87 80-bit extended format. An x87 IEEE 754 extended-precision float has only a 63-bit fraction and a 1-bit integer part. That integer part is implicit in single- and double-precision IEEE 754 numbers, but it is explicit in x87 extended-precision numbers. LabVIEW EXT seems to have an implicit integer part and a 64-bit fraction, and is thus not straight IEEE 754 compliant. Instead I'd say the LabVIEW EXT is an IEEE 754r extended format, but still a proprietary one that deserves a bit more detail in the available documentation. Since the LabVIEW documentation mentions in several places that EXT is platform independent, your suspicion should already be raised, though. It didn't take me many minutes to verify the apparent format of the EXT in any case, so no real problem here.
Is there a genuine conversion error from EXT to U64?
The integer 18446744073709549568 can be represented exactly as EXT using this binary representation (mind you, the numeric indicators won't display the value correctly, but instead show 18446744073709549600):
EXT exponent: 100000000111110b
EXT fraction: 1111111111111111111111111111111111111111111111111111000000000000b
--> Decimal: 18446744073709549568
The above EXT value converts exactly to U64 using the To Unsigned Quad Integer "bullet". But then let's try to flip the highest of the trailing zero bits in the fraction (the bit shown in blue in the original post) from 0 to 1, making this value:
EXT exponent: 100000000111110b
EXT fraction: 1111111111111111111111111111111111111111111111111111100000000000b
--> Decimal: 18446744073709550592
The above EXT value is still within U64 range, but the To Unsigned Quad Integer "bullet" converts it to U64_max, which is 18446744073709551615. Unless I've missed something, this must be a genuine conversion error from EXT to U64?
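The two EXT values above can be checked with plain integer arithmetic. This Python sketch (my own model, not LabVIEW code) computes what a correct EXT-to-U64 conversion should return under the implicit-integer layout described earlier, and confirms both values fit in a U64:

```python
BIAS = 16383  # exponent bias of the 80-bit format

def ext_to_int(exp_field, frac_field):
    # value = (1 + frac/2^64) * 2^(exp - BIAS), kept in exact integer math
    num = (1 << 64) + frac_field            # 65-bit significand (implicit 1)
    shift = (exp_field - BIAS) - 64
    return num << shift if shift >= 0 else num >> -shift  # exact: num is even here

exp = 0x403E                                # 100000000111110b, unbiased exponent 63
for frac in (0xFFFFFFFFFFFFF000, 0xFFFFFFFFFFFFF800):
    v = ext_to_int(exp, frac)
    print(v, "fits in U64:", v <= 2**64 - 1)
```

Both values print as within U64 range, so saturating the second one to U64_max is indeed wrong.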
/Steen
03-08-2013 03:44 AM - edited 03-08-2013 03:48 AM
For some reason it also seems a LabVIEW EXT may not have all 1s in the fraction part? Trying to create such an EXT by building the proper string and typecasting that to an EXT makes the resulting EXT come back with the least significant bit of the fraction field cleared to zero. Thus 18446744073709551614 can be represented as EXT (internally; it is not displayed properly), but not 18446744073709551615 (which is U64_max). Is this intentional, or is it a problem with the Typecast primitive?
If you want to try this yourself, here is the string (as hex): 403E FFFF FFFF FFFF FFFF 0000 0000 0000 (typecast this to EXT, which should give you the number 18446744073709551615). The resulting EXT can then be typecast back into a string, giving you this (again as hex): 403E FFFF FFFF FFFF FFFE 0000 0000 0000. Interesting...
/Steen
03-08-2013 08:41 AM
Ok, definitely a confirmed bug when converting from EXT to U64:
I've attached the VI as well (LV2011 SP1).
@JLewis: Will you report this through the proper channels?
Cheers,
Steen
03-08-2013 11:57 AM
Hi Steen,
Thanks for identifying this. I have confirmed the EXT to U64 conversion problem by simply attempting to round-trip a U64 value xFFFFFFFFFFFFF9B0 to EXT and back, getting xFFFFFFFFFFFFFFFF as a result (and the intermediate EXT value is correct).
The problem seems to be restricted to integer values with all of the top 53 bits set and one or more of the lowest 11 bits set.
I have reported this (#396305), along with the limited display precision issues (#396337).
Jim
03-08-2013 02:24 PM
Thanks Jim!
/Steen