LabVIEW


json with type double number

Solved!

How do I avoid the value conversion that occurs when flattening a double-precision (DBL) number to JSON? A value of 9999.12 converts to 9999.12000000008 in the JSON string.

Message 1 of 17

Hi lhandph,

 

what's the problem?

The conversion looks OK to me… 😄

 

(Create a numeric control, give it the value "9999.12", and change the display settings to show 20 significant digits: it will show "9999.1200…0790"!)
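The same check written out in Python, for anyone following along without LabVIEW in front of them (decimal.Decimal exposes the exact binary64 value behind a float literal):

from decimal import Decimal

# The literal 9999.12 has no exact binary representation; Decimal(x)
# prints the double that actually gets stored.
print(Decimal(9999.12))   # 9999.1200000000008... (the exact stored value)
print(Decimal(0.1))       # 0.1000000000000000055511151231257827...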

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 17

LabVIEW will maintain the full precision of the DBL when converting to a JSON string. To get around this I usually do the conversion to a string myself at the required level of precision (e.g. using %.2f).
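In text form the idea looks like this, as a Python sketch (in LabVIEW this would be something like Format Into String building the JSON text):

value = 9999.12

# Format the number ourselves at the precision we actually want, and
# splice the ready-made token into the JSON text so no serializer ever
# re-expands the double.
body = '{"value": %.2f}' % value   # '{"value": 9999.12}'

(Python's own json module already emits the shortest round-trip form, so the problem doesn't reproduce there; the sketch just shows the manual-formatting pattern.)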


LabVIEW Champion, CLA, CLED, CTD
(blog)
Message 3 of 17

Thanks Sam. I thought of that too, but my understanding is that the web service I am sending this to requires type double. Can I send all data as string type in the JSON POST, or does this depend on the web service?

Message 4 of 17
Solution
Accepted by topic author lhandph

My understanding is that JSON is pretty loosely typed: I don't think most JSON interpreters will care about the difference between MyDBL = 121.000000001 and MyDBL = "121.00" (unless the consumer explicitly checks the type; I've replaced a DBL with a String in some of my WebSockets work and didn't need to change any of the JavaScript). You'd need to test it, though.
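A quick Python sketch of that looseness (the MyDBL name is just taken from the example above): a generic parser accepts either form, and only a consumer that checks types notices the difference.

import json

as_number = json.loads('{"MyDBL": 121.000000001}')["MyDBL"]  # float 121.000000001
as_string = json.loads('{"MyDBL": "121.00"}')["MyDBL"]       # str '121.00'

# A consumer that insists on a number has to cast the string itself:
print(float(as_string))   # 121.0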

 

Edit: Actually, having just tried it with a simple VI, LabVIEW's Unflatten From JSON throws an error:

[Image: JSONTypes.png — snippet showing Unflatten From JSON returning an error when a string value is supplied for a DBL field]

 

If you're communicating with an actual web service that's expecting a DBL, then what's the problem with sending 12.0000999999?


LabVIEW Champion, CLA, CLED, CTD
(blog)
Message 5 of 17
Solution
Accepted by topic author lhandph

The conversion is unavoidable if you are starting with a double value. Remember that the 9999.12 you see in the editor is also a decimal conversion of the actual stored 64-bit value: what you typed was converted to the closest (inexact) DBL value, then re-generated and coerced/formatted as necessary for display.

 

Both 9999.12 and 9999.1200000000008 convert to the exact same double value, and there are in fact infinitely many valid decimal strings that map to that value. For display purposes the convention is to choose the shortest decimal string that guarantees an exact round trip; that way, when you type .1 you see 0.1 in the display and not 0.100000000000000006. Although the longer string is slightly closer to the stored value, both round to the same DBL value and are therefore exactly equivalent.
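Python's repr happens to implement exactly this shortest-round-trip convention, which makes the equivalence easy to demonstrate (a Python sketch, not LabVIEW behaviour):

# Many decimal strings map to the same stored double...
assert 9999.12 == float("9999.1200000000008")

# ...and repr picks the shortest one that round-trips exactly.
print(repr(0.1))   # '0.1', not '0.1000000000000000055...'
assert float(repr(9999.12)) == 9999.12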

 

For the JSON strings I assume performance is more of a concern than readability, so it looks like we just use 17 significant digits, the minimum number guaranteed to round-trip correctly in all cases. That avoids the extra logic required to find the shortest possible decimal representation of each value.
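That 17-digit rule is easy to check with a %.17g-style format (again a Python sketch of the behaviour described above):

x = 9999.12
s = format(x, ".17g")   # '9999.1200000000008' (17 significant digits)
assert float(s) == x    # 17 digits always survive the round trip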

Message 6 of 17

Our only concern was maintaining two decimal places of accuracy rather than sending the long, inexact approximation. We were worried the web service might round or truncate that long value instead of receiving and using exactly what we sent from our application program.

Thanks for the help. I will discuss with the web service developers whether we can send a string representation of the value and have them convert it to a number, or keep it as type double and have them round or truncate on their end.

Thanks again.

Message 7 of 17

That's where your misunderstanding is: a floating-point number isn't an exact representation of a decimal number; it's an approximation. If you display your DBL value in an indicator, LabVIEW rounds it to something sensible for display. If you turn up the precision to show enough digits, you'll see the 0.0000000000008 difference between the stored number and how it's rounded for display.

 

If I wanted an exact decimal representation of the number, I would probably send it as an integer scaled by 10 to the power of the number of decimal places (e.g. 19.12 × 100 = 1912) and divide it back down on the other end.
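A sketch of that scaled-integer approach in Python (the value_cents field name is made up for the example):

import json

value = 19.12

# Sender: scale by 10^decimals and transmit an exact integer.
payload = json.dumps({"value_cents": round(value * 100)})   # {"value_cents": 1912}

# Receiver: divide back down to recover the intended decimal.
print(json.loads(payload)["value_cents"] / 100)   # 19.12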


LabVIEW Champion, CLA, CLED, CTD
(blog)
Message 8 of 17

Why is that reply marked as a solution? It's not a solution, just a not-particularly-helpful comment about what's happening.

 

The problem with this is that when 12.1 is represented as the string "12.10000000001" and you have, say, an array of 1000 of these values to write to a file, the file suddenly becomes three times larger than it ought to be. And it's alarming to any human reader.

 

My problem is that, for example, 355.47 is represented as 355.468440690442. That's just one specific example. I understand the latter is "more accurate" with regard to what's actually stored in memory, but frankly I don't care: when the user looks at the file I wrote, instead of a nice list of reasonable numbers there's a giant block of digits. Human readability is severely damaged.

 

And if you're storing a lot of these files, the increased size can be a problem.
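A rough Python sketch of the size penalty, reusing the 355.47 example from above:

import json

values = [355.468440690442] * 1000   # what the serializer emits
as_is = json.dumps(values)           # roughly 18,000 characters
fixed = "[" + ",".join("%.2f" % v for v in values) + "]"   # roughly 7,000

print(len(as_is), len(fixed))        # the fixed-precision list is ~2.5x smaller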

Message 9 of 17

You could try using JSONtext instead of the built-in conversion functions, as it gives the more compact formatting you want.

Message 10 of 17