01-30-2006 10:14 AM
LabVIEW, C'est LabVIEW
01-30-2006 10:43 AM
Sorry guys, I agree with Anthony that this is unacceptable. Since it is a problem for everyone, I can't see why everyone should take the time and effort to create their own library, or add extra code to counteract the inaccurate floating-point representations. That is a tremendous duplication of effort across the entire community. NI makes LabVIEW, so NI should come up with a library of floating-point functions that take this problem into consideration and give expected results. The makers of VB and C compilers, among others, should do the same. Anthony's attached Quotient/Remainder VI is an example of how NI's function should operate. If a known problem exists, why not have NI fix it?
It would be a huge improvement in LabVIEW, and a good marketing claim too, if LabVIEW had a set of floating-point math functions that gave expected results. Instead of accepting this well-known and well-understood problem, let's ask NI to improve LabVIEW so that we don't all have to spend time fixing the same problems over and over.
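The surprise tbob is describing is easy to reproduce in any language that uses IEEE binary floats. A quick illustration in Python (used here only because LabVIEW diagrams can't be pasted as text):

```python
# 0.2 has no exact binary representation; the stored value is slightly
# larger than 1/5. So "1.0 Q&R 0.2" floors 4.999... down to 4, with a
# remainder of almost 0.2 -- not the (5, 0) a calculator would give.
q, r = divmod(1.0, 0.2)
print(q, r)          # quotient is 4.0, not the expected 5.0

# Show what 0.2 actually holds:
print(f"{0.2:.20f}")  # slightly more than 0.2
```

The division primitive looks "correct" only because its result happens to round back to a value that displays as 5; the Q&R floor operation exposes the underlying representation error directly.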
01-30-2006 10:56 AM
tbob,
I agree with you. Yes, it's a known problem, but why continue to live with limitations that really don't make sense?
In this example, a routine could be implemented to determine the number of significant digits, then multiply by 10^(# of significant digits) before doing the Quotient & Remainder, thus "bypassing" the limitations of floats. It should be built in.
Nevertheless, as a practice, I always validate the functionality of VIs before they are released.. 😉
Ray
😄
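Ray's "scale by a power of ten, then do the Q&R" idea might be sketched like this in a text language (Python for illustration; the function name and default digit count are mine, not anything shipped with LabVIEW):

```python
def scaled_qr(x, y, digits=6):
    """Quotient & Remainder after scaling both operands by 10**digits
    and rounding to integers -- a sketch of the 'bypass' idea above."""
    scale = 10 ** digits
    xi = round(x * scale)        # e.g. 1.0 -> 1000000
    yi = round(y * scale)        # e.g. 0.2 -> 200000
    q, ri = divmod(xi, yi)       # exact integer arithmetic from here on
    return q, ri / scale         # remainder back in original units

print(scaled_qr(1.0, 0.2))   # (5, 0.0) -- the "expected" answer
```

The rounding step snaps each operand to the nearest multiple of 10^-digits before the operation, which is exactly the "determine the significant digits first" requirement Ray mentions.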
01-30-2006 11:15 AM
Exactly what I mean, JLV. We should have a library of floating-point math functions that do things like this. Even if the inputs have to be strings so that the number of significant digits could be determined. The functions should do the rounding to give accurate results.
As a practice, I too have to validate everything. It takes an extra effort to do this with floats, an effort that could be eliminated by proper behavior of the math functions. A simple calculator can give the correct answer to 1 divided by 0.2, and so can LabVIEW's divide function. So why can't LabVIEW's Q/R function do the same?
01-30-2006 12:00 PM
@tbob wrote:
So why can't Labview's Q/R function do the same?
---- keeps us employed ---- 😉
😄 😄 😄 😄 😄
01-30-2006 12:27 PM - edited 01-30-2006 12:27 PM
LabVIEW does give the correct answer to both 1/0.2 and 1.0 Q&R 0.2, with the full accuracy available on the inputs. It is not a bug.
tbob wrote: Exactly what I mean, JLV. We should have a library of floating-point math functions that do things like this. Even if the inputs have to be strings so that the number of significant digits could be determined. The functions should do the rounding to give accurate results.
As a practice, I too have to validate everything. It takes an extra effort to do this with floats, an effort that could be eliminated by proper behavior of the math functions. A simple calculator can give the correct answer to 1 divided by 0.2, and so can LabVIEW's divide function. So why can't LabVIEW's Q/R function do the same?
Message edited by Jean-Pierre Drolet on 01-30-2006 01:30 PM
LabVIEW, C'est LabVIEW
01-30-2006 12:51 PM
01-30-2006 02:20 PM
Some thoughts:
1. I keep reusing a simple little "approx equal" function I made years and years ago. It has two inputs for the values to compare, plus another input for specifying the precision of the comparison. If the values deviate by less than the precision, the "approx equal" output is TRUE. I made it polymorphic with both a DBL and an SGL version, each with a default precision appropriate for the number of bits of resolution that type can represent.
2. The primitives for Q&R, floating-point ==, etc. shouldn't be changed at this point in the game. They already follow the accepted IEEE standards. The tech community would never stand for LV suddenly producing different results.
3. A set of additional limited-precision functions might be useful though -- provided they aren't easily confused with the standard ones. At minimum, they should not start on the same palette as the IEEE-compliant functions. The icons would need to be distinct. Maybe they could require a precision-specifying input?
However, having not thought it through very far, the implementation may get tricky. Example: would you sometimes limit the precision of the inputs before applying the operator? Or do you only apply limits to the output? Do you always round? Or would you sometimes truncate?
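The kind of "approx equal" helper Kevin describes in point 1 might look like this as a text-language sketch (Python here; the function name and default tolerance are mine, standing in for the DBL flavor of his polymorphic VI):

```python
def approx_equal(a, b, precision=1e-12):
    """TRUE when the two values deviate by less than `precision` --
    a sketch of the two-values-plus-tolerance comparison described above."""
    return abs(a - b) < precision

# Exact == is fooled by accumulated representation error; the
# tolerance-based comparison is not:
print(0.1 + 0.2 == 0.3)              # False
print(approx_equal(0.1 + 0.2, 0.3))  # True
```

An SGL version would simply ship a looser default (single precision carries about 7 decimal digits versus roughly 16 for double), which is the "default appropriate for the bits of resolution" idea.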
-Kevin P.
01-30-2006 03:40 PM
My idea is to leave the current functions alone and create a new set of functions for floating-point operations, hence a floating-point math library. Since the problem we all see is a result of the inputs being represented inaccurately, the rounding should always take place on the inputs. If someone sets an input to 2.0, it should not be 1.999999.... In this special library, the digits of precision would have to be known, and could be obtained if the inputs were strings. If the user wanted 3 precision digits, he would type 2.000. If he wanted 6 precision digits he would type 2.000000. This would solve lots of problems and ensure correct output when using floating-point numbers. Even the equal function would output expected results.
Again, it's not the functions themselves; they work fine. It is the error between the intended inputs and the actual numerical representation of the inputs.
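The string-input idea has a close analogue in decimal arithmetic libraries, which is one way to sketch it (Python's `decimal` module shown for illustration; the wrapper name is mine):

```python
from decimal import Decimal

def string_qr(x_str, y_str):
    """Quotient & Remainder on string inputs: the text '0.2' is taken
    at face value as an exact decimal, so no binary rounding error
    ever enters the computation -- a sketch of the proposal above."""
    x, y = Decimal(x_str), Decimal(y_str)
    return divmod(x, y)

q, r = string_qr("1.0", "0.2")
print(q, r)   # 5 0.0 -- the answer the user typed in implies
```

Because the operands never pass through a binary float, the trailing zeros the user types ("2.000" versus "2.0") are preserved and carry exactly the precision information tbob wants the library to honor.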
01-30-2006 04:10 PM
The number of significant digits would have to be strictly defined...
I tried a quick VI to test out the proposed solution... it works great... with small numbers. When numbers go beyond 3 decimal places and/or have a large integer value, the accuracy is thrown off... and we're back to square one (gee... reminds me of the early computing days... 😉 ). Hmmm... this may not be as simple as it seems to accommodate all the strange floating-point numbers we could toss at it..
But a fun exercise nonetheless..
😄