02-10-2015 04:56 AM
Hi All,
I am facing a weird issue when rounding off a double value. I thought that when the digit to be truncated is 5, LabVIEW always rounds to the nearest even digit (round half to even).
When I round off to two decimal places, I get the results below.
489.335 -> 489.33
489.355 -> 489.36
Can anyone tell me how the rounding off works in the above cases? Please find the sample VI (2010) attached, if you would like to play with it.
Thanks,
Karthik
02-10-2015 05:11 AM
My best explanation (which I could be totally wrong about!) is that this is just because a floating-point number is an approximation of the number you typed. When you view the number at full precision (as you have done), you can see why 489.335 rounds down to 489.33: the stored value, 489.33499999999998, is less than 489.335000000.
I also found this that helps explain rounding to integers in LabVIEW: http://digital.ni.com/public.nsf/allkb/7ED5A95B08D7DF37862565A800819D2D
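To see the same thing outside LabVIEW, here is a minimal C sketch (my assumption: a C99 compiler with IEEE 754 doubles, which is the same format LabVIEW uses) that reproduces the rounding you observed; the outputs noted in the comments are what I would expect, not something from your VI:

#include <stdio.h>

int main(void)
{
    /* Both literals end in 5, so naive round-half-up would
       give 489.34 and 489.36. */
    double a = 489.335;
    double b = 489.355;

    /* printf rounds the stored double, not the decimal text
       we typed, so the results match the LabVIEW display. */
    printf("%.2f\n", a);   /* 489.33 on my machine */
    printf("%.2f\n", b);   /* 489.36 on my machine */
    return 0;
}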
02-10-2015 08:37 PM - edited 02-10-2015 08:40 PM
Thanks, Sam, for your reply. I was more or less able to figure out the logic.
So it seems that, for rounding off to 2 digits after the decimal point:
1. If the actual stored value has only 3 decimal places, just take the first 2 decimal places and discard the 3rd (no rounding off):
488.315 - 488.31
488.375 - 488.37
2. If the actual stored value has many decimal places, take the first 3 decimal places of the actual value, and then round that off to 2 digits:
488.325 (488.32499999999999) - 488.32
488.345 (488.34500000000003) - 488.35
LabVIEW should handle these things better.
So, is it not possible to represent a number like 488.345 (488.34500000000003) exactly as 488.345?
02-10-2015 09:41 PM
Computers cannot represent all floating-point values exactly. It has nothing at all to do with LabVIEW.
02-10-2015 10:32 PM - edited 02-10-2015 10:51 PM
@Dennis_Knutson wrote:
Computers cannot represent all floating-point values exactly. It has nothing at all to do with LabVIEW.
Oh okay, I didn't know that. So, does the same issue exist in all programming languages (say, C)?
Even then, it would be better if LabVIEW either always rounded off or never rounded off when converting a double to a string (or, better yet, gave that as an option to the user).
Instead, it currently sometimes rounds off and sometimes doesn't, depending on how the value is actually stored. Even when it does round off, it doesn't use the actual value, only 1 extra digit. It also doesn't round according to the round-half-to-even rule that LabVIEW documents.
02-11-2015 12:17 AM
Yes, it is. Try showing all the digits of your double, say 20 digits, and you'll see the actual double value. The DBL indicator does not show the entire value by default, opting instead for a more readable display. The binary representation in the background follows the IEEE 754 convention, as in all modern programming languages.
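For example, a rough C sketch of that (again assuming IEEE 754 doubles):

#include <stdio.h>

int main(void)
{
    double x = 489.335;

    /* Two digits: looks like the decimal we typed, rounded. */
    printf("%.2f\n", x);

    /* Twenty digits: reveals the nearest representable double,
       which is slightly below 489.335 - hence the round down. */
    printf("%.20f\n", x);
    return 0;
}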
02-11-2015 02:47 AM - edited 02-11-2015 02:51 AM
@Karthik_Abiram wrote:
@Dennis_Knutson wrote:
Computers cannot represent all floating-point values exactly. It has nothing at all to do with LabVIEW.
So, does the same issue exist in all programming languages (say, C)?
Basically, yes, which makes sense once you understand how numbers are stored. In the common IEEE 754 standard, a double-precision floating-point value is always stored in 8 bytes. That means it can represent at most 2^64 different numbers, regardless of the actual format. That's a lot of numbers, but it's still a finite set, so there will be holes (and in practice it's actually fewer than 2^64). You can go to extended precision, but that only makes the set bigger; it doesn't change the basic behavior.
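As a rough illustration of those 8 bytes (the values here are mine, not from the original VI), you can view the exact bit pattern of a double directly, and the gap to the next representable value is exactly one bit:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    double x = 489.335;
    uint64_t bits;

    /* Copy the 8 bytes of the double into an integer to see
       the exact IEEE 754 bit pattern that is actually stored. */
    memcpy(&bits, &x, sizeof bits);
    printf("%.17g stored as 0x%016llx\n", x, (unsigned long long)bits);

    /* Incrementing the bit pattern gives the next representable
       double; every value between the two is a "hole". */
    bits += 1;
    memcpy(&x, &bits, sizeof x);
    printf("next double:    %.17g\n", x);
    return 0;
}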
Some languages have additional data types which represent numbers differently (known as decimal types), but for most applications this isn't needed, because floating-point numbers work quite well. I would suggest considering whether this actually matters in your application; in most apps the rounding of a value up or down doesn't really matter. The only place I've encountered where it does matter is when the value is not measured, but rather calculated or entered directly by the user at resolution X and then converted to a lower resolution. In that case, the user expects the value to be represented exactly, and if that's your case, you might want to create your own data type and VIs to handle it, where the number is represented as two integers (an integer part and a fractional part at resolution X), as sketched below.
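Here is a rough C sketch of that idea (the type and names are hypothetical, with X = 0.001 as the resolution, and only non-negative values handled for simplicity):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical decimal type: value = whole + thousandths/1000.
   At a fixed resolution of 0.001, user-entered values like
   488.345 are stored exactly, with no binary approximation. */
typedef struct {
    int64_t whole;        /* integer part             */
    int32_t thousandths;  /* fractional part, 0..999  */
} Decimal3;

static void Decimal3_Print(Decimal3 v)
{
    printf("%lld.%03d\n", (long long)v.whole, (int)v.thousandths);
}

int main(void)
{
    Decimal3 v = { 488, 345 };  /* exactly 488.345 */
    Decimal3_Print(v);
    return 0;
}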
02-11-2015 04:57 AM
The other place it can catch you out is if you test floating-point numbers for equality: 0 doesn't necessarily equal 0 (even though both values might be displayed as such to a reasonable number of decimal places), because the actual bit patterns of the numbers aren't the same.
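A quick C illustration of that trap (the tolerance value here is an arbitrary choice for the example):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Both sides display as 0.3, but the stored bit patterns
       differ, so exact equality fails. */
    double a = 0.1 + 0.2;
    double b = 0.3;

    printf("a == b: %s\n", (a == b) ? "true" : "false");          /* false */

    /* Safer: compare against a tolerance instead. */
    printf("close:  %s\n", (fabs(a - b) < 1e-9) ? "true" : "false"); /* true */
    return 0;
}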
02-11-2015 07:36 AM
Karthik_Abiram wrote:
Oh okay, I didn't know that. So, does the same issue exist in all programming languages (say, C)?
I first learned that painful lesson about floating-point numbers with GW-BASIC, and I have seen it in every single language I have programmed in since (including QBasic, VB, C/C++, and Pascal). It is just a limitation of storing numbers in a binary representation.
02-11-2015 08:50 AM
This is a perfect example of growing pains. Don't worry - everyone goes through these learning experiences. 🙂