Flatten To JSON Function changes the DBL digits length incorrectly [Bug?]

I found that the Flatten To JSON function changes the number of digits of a DBL incorrectly. Is this a bug? I have already tested it on several LabVIEW versions.

 

LV2013-32bit.png

Tested with LabVIEW 2013 32-bit

Actually, the input has only one decimal digit, but the output has 14. Even after modifying the Display Format and changing the default digit precision, the output stays the same.

 LV2013-32bit-2.png

Tested with LabVIEW 2013 32-bit

 


Tested with LabVIEW 2015 32-bit

 


Tested with LabVIEW 2020 32-bit

 


Tested with LabVIEW 2020 64-bit

 

So we can see that LabVIEW 2020 and LabVIEW 2013/2015 produce different digit precision in the output.

 

Colin Zhang
------------------
LV7.1/8.2/8.2.1/8.5/8.6/9.0/2010/2011/2013/2015/2016/2020; test system development; FPGA; PCB layout; circuit design...
Please mark the solution as accepted if your problem is solved and donate kudos


Home--colinzhang.net: My Blog

iTestGroup: One-step test solution provider!
ONTAP.LTD : PCBA test solution provider!
Message 1 of 14
(1,543 Views)

@colinzhang wrote:

Actually, the input has only one decimal digit, but the output has 14.


No, your input does not have just "one decimal digit" if you care to display all digits of the diagram constant! DBL precision values are quantized to a mantissa with a finite number of bits, and many simple decimal fractions (e.g. 0.1, 0.2) don't have an exact binary representation.

 

 

altenbach_0-1686410452031.png
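For readers who want to reproduce this outside LabVIEW, here is a minimal Python sketch of the same IEEE 754 behaviour (an illustration, not the LabVIEW implementation):

```python
# Simple decimal fractions are not exactly representable as IEEE 754 doubles,
# so the stored value has many more digits than the one typed into the constant.
from decimal import Decimal

print(0.1 + 0.2 == 0.3)   # False: neither 0.1 nor 0.2 is stored exactly
print(Decimal(0.1))       # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(222.2))     # 222.19999999999998863131622783839702606201171875 (exact stored value)
print('%.17g' % 222.2)    # 222.19999999999999 -- what a full-precision formatter emits
```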

 

Trick question: What is the value of the Output indicator after the VI stops?

 

altenbach_0-1686411276933.png

 

 

@colinzhang wrote:

So we can see that LabVIEW 2020 and LabVIEW 2013/2015 produce different digit precision in the output.


I guess the underlying code has changed slightly, but the difference is way beyond the guaranteed resolution, so it is probably irrelevant. 😄 

 

 

If you want an exact decimal representation, store it as a string...

0 Kudos
Message 2 of 14
(1,512 Views)

@altenbach wrote:

If you want an exact decimal representation, store it as a string...


Unless it is "for display purposes only" I skip this step.  The moment you need it to be a real number, you end up with the same problem - and potentially less precision than when the number went in.  I guess for an exact representation you would store as bytes.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
0 Kudos
Message 3 of 14
(1,498 Views)

Well, the more interesting question would be whether the fuzzy least-significant decimal digits even matter. If you read the JSON string back into the original datatype and do an equal comparison with the original value, does it match?
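The same round-trip test, sketched in Python instead of LabVIEW, assuming a bit-level comparison is what we are after:

```python
# Flatten a DBL to a JSON string, read it back, and compare at the bit level.
import json
import struct

def dbl_bits(x):
    """Return the 64-bit pattern of a double for an exact comparison."""
    return struct.unpack('<Q', struct.pack('<d', x))[0]

original = 222.2
flattened = json.dumps(original)      # Python emits the shortest form, '222.2'
restored = json.loads(flattened)

print(flattened)
print(dbl_bits(original) == dbl_bits(restored))   # True: the round trip is lossless
```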

0 Kudos
Message 4 of 14
(1,489 Views)

@altenbach wrote:

Well, the more interesting question would be whether the fuzzy least-significant decimal digits even matter. If you read the JSON string back into the original datatype and do an equal comparison with the original value, does it match?


Probably, but an equal comparison on floating point numbers without some tolerance range is ALWAYS wrong, as you have said so many times in these forums, Christian. 😀


A double has about 16 digits of significant data, which is not just incidentally the default width that the JSON conversion uses. That you showed only one decimal digit of the number in the front panel control has absolutely no meaning for the actual binary value on the wire; it only specifies how to display that number in the front panel control. The JSON function does its own formatting, and without an explicit width specification it simply does the safest thing: format as many digits into the string as the number can possibly carry as significant data.
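To make that distinction concrete, here is a small Python analogy; the format codes are illustrative, not what LabVIEW uses internally:

```python
# The display format only changes how the value is shown; the value on the wire
# is unchanged, and a formatter that wants to be safe prints ~17 significant digits.
x = 222.2

print('%.1f' % x)    # 222.2               -- "display format": one digit of precision
print('%.17g' % x)   # 222.19999999999999  -- full significant width, like Flatten To JSON
```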

Rolf Kalbermatter
My Blog
0 Kudos
Message 5 of 14
(1,481 Views)

@rolfk wrote:

@altenbach wrote:

Well, the more interesting question would be whether the fuzzy least-significant decimal digits even matter. If you read the JSON string back into the original datatype and do an equal comparison with the original value, does it match?


Probably, but an equal comparison on floating point numbers without some tolerance range is ALWAYS wrong, as you have said so many times in these forums, Christian. 😀


Of course! I was just thinking of the possibility that the change in the number of digits in the JSON string (as reported between LV versions, not verified independently!) does not actually change anything at the bit level when comparing the original value with the one processed through JSON flatten/unflatten. If the decimal string contains a sufficiently high number of decimal digits, it should convert back to a binary-identical value in most cases, so what is sufficient?
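One way to answer "what is sufficient?" empirically, sketched in Python (the sample range and count are arbitrary choices):

```python
# Brute-force check: 17 significant decimal digits always round-trip a double,
# while 15 digits occasionally land on a neighbouring value.
import random
import struct

def dbl_bits(x):
    return struct.unpack('<Q', struct.pack('<d', x))[0]

def round_trips(x, digits):
    return dbl_bits(float('%.*g' % (digits, x))) == dbl_bits(x)

random.seed(1)
samples = [random.uniform(0.0, 1000.0) for _ in range(100000)]
print(sum(not round_trips(x, 15) for x in samples))   # > 0: some values change at 15 digits
print(sum(not round_trips(x, 17) for x in samples))   # 0: 17 digits is always enough
```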

 

0 Kudos
Message 6 of 14
(1,466 Views)

First of all: Kudos for doing the tests and showing your work!

 


@colinzhang wrote:

So we can see that LabVIEW 2020 and LabVIEW 2013/2015 produce different digit precision in the output.

 


I guess they are perfectly allowed to do so. This made me look up the JSON standard document, which should be RFC 8259. The section on numbers suggests that you use IEEE 754 double precision and give enough digits to let the receiving party know that you did. 17 digits (which both of your tests produce, if you include the integer part) seems perfectly fine.

 

Also note: floating point numbers map the infinite real number space to a finite set of values with a binary representation. This mapping is standardized in IEEE 754. So the number 222.2 maps exactly to the number 222.19999999999998863131622783839702606201171875 in double precision. Since the next smaller representable number is 222.199999999999960... (the relative distance between two adjacent double precision floating point numbers is roughly a factor of 10⁻¹⁶), it is enough to give 17 digits to uniquely distinguish these two.
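The same neighbouring-values argument, checked with a short Python snippet (math.nextafter and math.ulp need Python 3.9 or later):

```python
# Exact decimal value of the double nearest to 222.2, its next smaller neighbour,
# and the gap between them -- 17 significant digits are enough to tell them apart.
import math
from decimal import Decimal

x = 222.2
print(Decimal(x))                      # 222.19999999999998863131622783839702606201171875
print(Decimal(math.nextafter(x, 0)))   # 222.1999999999999602...
print(math.ulp(x))                     # ~2.8e-14 absolute gap, ~1.3e-16 relative
print('%.17g' % x)                     # 222.19999999999999 -- uniquely identifies x
```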

0 Kudos
Message 7 of 14
(1,458 Views)

@altenbach wrote:
Of course! I was just thinking of the possibility that the change in the number of digits in the JSON string (as reported between LV versions, not verified independently!) does not actually change anything at the bit level when comparing the original value with the one processed through JSON flatten/unflatten.

For this particular input (222.2), three different decimal string representations return exactly the same DBL (for other values, YMMV, of course! 😄)

 

Since Format Into String gives yet another decimal representation, I suspect that the JSON tools use some canned foreign library that LabVIEW has no direct control over. That code base must have changed.

 

altenbach_0-1686432046085.png
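The equivalent check in Python, with the three strings taken from the values discussed above:

```python
# Three decimal spellings of different length all parse to the identical 64-bit DBL.
import struct

def dbl_bits(x):
    return struct.unpack('<Q', struct.pack('<d', x))[0]

variants = [
    '222.2',                                             # shortest form
    '222.19999999999999',                                # 17 significant digits
    '222.19999999999998863131622783839702606201171875',  # full exact expansion
]
print(len({dbl_bits(float(s)) for s in variants}))       # 1: all three are the same DBL
```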

 

(Curiously, if we change the input to EXT, the JSON string is exactly "222.2000". 😄 )

 

 

0 Kudos
Message 8 of 14
(1,436 Views)

Thanks for your analysis and discussion. 

 

My first impression was also that this is a DBL accuracy issue, but after trying different versions I was even more confused by the differences.

 

Yeah, I also know how to deal with it (pre-process the DBL to a fixed number of digits, or convert it to an integer or string); I just find it interesting.

 

While it is safer to keep all the precision/digits, it seems more practical to offer an option to set the digit width, similar to the Number To Fractional String function.

 

Interestingly, I compared the Flatten To JSON function with the Number To Fractional String function, and the results still differ. Of course, they are very close.

 

Compare with format string.png

Colin Zhang
------------------
LV7.1/8.2/8.2.1/8.5/8.6/9.0/2010/2011/2013/2015/2016/2020; test system development; FPGA; PCB layout; circuit design...
Please mark the solution as accepted if your problem is solved and donate kudos


Home--colinzhang.net: My Blog

iTestGroup: One-step test solution provider!
ONTAP.LTD : PCBA test solution provider!
0 Kudos
Message 9 of 14
(1,421 Views)

OP, you might want to look into JSONtext.  JSONtext tries to preserve decimal numbers rather than floating point representations, and so will output 222.2.

 

The inbuilt JSON primitives make the alternative choice of trying to exactly preserve floating point values (though there is a known bug where EXT is treated as if it were SGL).
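For comparison, the two design choices look roughly like this in Python; repr() happens to implement the "shortest string that still round-trips" idea, while %.17g is the "preserve the float explicitly" approach. This is just an analogy, not how JSONtext is implemented.

```python
# Two ways to serialize the same DBL: shortest round-trip string vs. a fixed
# 17-significant-digit dump. Both restore the identical value.
x = 222.2

shortest = repr(x)        # '222.2'
explicit = '%.17g' % x    # '222.19999999999999'

print(shortest, explicit)
print(float(shortest) == float(explicit) == x)   # True: both preserve the value exactly
```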

Message 10 of 14
(1,404 Views)