quotient & remainder bug

Hey who was that? Just one star?
 
Either give me 5 or let it be 😠
Using LV8.0
--------------------------------------------------------------------
Don't be afraid to rate a good answer... 😉
--------------------------------------------------------------------
Message 41 of 95
Becktho
Now you have 6 stars...
Chilly Charly    (aka CC)
Message 42 of 95

Now eleven...

For nice guys (incl. frogs)  😉

-- plus an extra --

Message 43 of 95


@David Crawford wrote:
 
Also the answer the formula node returns for R=x-y*floor(x/y) isn't right.


David,
 
Your observation is correct.  Earlier in this thread, it was mentioned that 0.2 is not stored as exactly 0.2 but (quoting Gerd) as "0.2000000...009 (or something similar)".
 
Actually, using your vi, I get a value of y = 0.200000000000000011102230246252 instead of 0.2, which is the root of this whole thread 😄
 
(see post # 2)
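The effect isn't LabVIEW-specific; any IEEE 754 double behaves this way. A minimal sketch in Python (not from the original thread), using only the standard library:

```python
from decimal import Decimal

# The IEEE 754 double nearest to 0.2 is slightly larger than 0.2:
print(Decimal(0.2))
# 0.200000000000000011102230246251565404236316680908203125

# The plain division 1.0 / 0.2 happens to round to exactly 5.0 ...
print(1.0 / 0.2)         # 5.0

# ... but a floor/remainder computation sees the exact quotient as
# just under 5 and lands one step low, the same surprise as Q&R:
print(divmod(1.0, 0.2))  # (4.0, 0.19999999999999996)
```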
 
Nice example.  At least it illustrates that it is possible to implement workarounds.  The problem is the limited accuracy with extreme numbers (I gave an example earlier).  I hadn't considered the Ra-Rb approach; I will have to test that as well... you've got me curious now.  😄
 
But this exercise is also fun...  (for some.. 😉 )
 
😄
Message 44 of 95

I remember reading in one of the trade magazines a long time ago the following:

"Anything perceived by a user of a piece of software to be a bug, is a bug."

I have always tried to keep that in mind when programming or when responding to users of my software and I think it may apply to the quotient & remainder function as well. I dare say that most LabVIEW users, when presented with this specific behavior would wonder what was wrong. A minority of that population would quickly "get it" and go on to deal with it but the remaining majority would continue to be confounded. Does that mean that their opinion is worth less than those of us who understand what's happening? I think not.

Any piece of software that generates this much controversy cannot be considered to be really good.
It certainly doesn't help that the documentation for the function does not come with a warning about this behavior. 

I also find it interesting that the formula node produces the answer most of us would expect if we didn't think about it too much.
How is it that the LabVIEW developers didn't make it produce consistent results between the two methods?
I wonder what the LabVIEW developer who designed the Q&R function would think to himself about these results. Would it give him pause, even for just a moment?
Message 45 of 95
I haven't carefully thought through or experimented with the Q-R function in particular.  But as far as being able to program around the inaccuracies, wouldn't it be enough to simply compare the remainder to the divisor?  If it's "approximately equal" (based on methods more-or-less acknowledged necessary for floating point in this thread), subtract the divisor from the remainder and adjust the quotient.
 
Somewhat of a pain, granted, but roughly on par with the pain of handling floating-point equality, no?
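The adjustment described above can be sketched in Python (an illustration only, with an arbitrarily chosen tolerance; the thread itself is about LabVIEW diagrams):

```python
import math

def qr_adjusted(x, y, rel_tol=1e-9):
    """Q&R with the fix-up described above: if the raw remainder is
    'approximately equal' to the divisor, the implicit floor landed
    one step low, so bump the quotient and recompute the remainder."""
    q, r = divmod(x, y)
    if math.isclose(r, abs(y), rel_tol=rel_tol):
        q += 1
        # Recomputing (rather than subtracting y from r) avoids
        # leaving a one-ulp negative residue in the remainder.
        r = x - q * y
    return q, r

print(divmod(1.0, 0.2))       # (4.0, 0.19999999999999996)
print(qr_adjusted(1.0, 0.2))  # (5.0, 0.0)
```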
 
One other point that I'm not sure is being considered in this thread: there's a lot of discussion based on specific instances such as the floating-point approximation of a value like 0.2.  These kinds of specific instances are mainly relevant when feeding in a value that comes directly from a typist's fingers, either a programmer-typed block diagram constant or a user-typed front panel control.  If instead the value is the result of a series of calculations, there will be more unpredictability in the lowest order bit(s).
 
Maybe it seems pedantic to say that the result of Q-R(1, 0.2 + 1.0e-15) should be different from the result of Q-R(1, 0.2 - 1.0e-15), but the larger point, I think, is that the actual number of low-order garbage bits on the inputs cannot possibly be known by the Q-R function.  Only the application programmer can know what they've done to the numbers before feeding them to Q-R, and thereby have a good idea how many low-order bits should be rounded or ignored.
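That point is easy to demonstrate in Python, used here purely as an IEEE 754 calculator:

```python
# A perturbation of 1e-15 in the divisor, far below anything a user
# would type on purpose, flips the integer quotient:
print(divmod(1.0, 0.2 - 1e-15))  # quotient 5.0
print(divmod(1.0, 0.2 + 1e-15))  # quotient 4.0
```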
 
I'm intrigued by the scheme that would take in 4-byte SGL inputs, perform the Q-R function with 8-byte DBL's, then round back to a 4-byte SGL output result.  Or similar for DBL inputs, EXT calcs, DBL outputs.  But I'd also want to consider the point altenbach raised about how these "safeguards" would necessarily slow the Q-R function down.
 
I suppose that if NI were to change it, I'd hope for an implementation analogous to the DAQmx functions.  The programmer, at edit time, uses the little pull-down menu to select a specific instance of the function.  The default instance when you drag it from the palette could be the same as today's Q-R.  Any other special floating-point implementation(s) could be other instances.
 
-Kevin P.
 
Message 46 of 95

CC and JLV

Thanks for the stars -  glad that there still is justice around.... :):):)

Message 47 of 95

Kevin,

About your SGL-to-DBL scheme.  I had been thinking along the same lines too.   However, it doesn't always work.  The quantization error that's in the SGL (the last bit) also gets into the DBL, as if it were a significant digit.   Often it disappears in the divide, but not always.    Try the attached vi.   It works fine for y = 0.2 and x = 1, 1.2, 1.4, etc. 

However, it doesn't work for y = 0.1 and x = 1.3 

It also might be impossible for DBLs.   If I understand the documentation correctly, depending on your system, EXT doesn't always add extra digits of precision, but just allows larger exponents.  In that case, it wouldn't help in getting rid of quantization or rounding errors...

That's why I decided, for my own QR vi, to take DBL in, convert to SGL after the divide, and then have the output in DBL again.  That limits the significant digits on which the integer quotient is based rather drastically, but at least it will always give the expected result.  
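The scheme described above (divide in DBL, round the quotient to SGL) can be sketched in Python; the struct round-trip stands in for LabVIEW's coercion to SGL, and this is only an illustration, not the attached vi:

```python
import math
import struct

def to_sgl(x):
    # Round a double to the nearest IEEE single, standing in for
    # LabVIEW's coercion to SGL.
    return struct.unpack('f', struct.pack('f', x))[0]

def qr_dbl_sgl(x, y):
    # Divide in full DBL precision, round the quotient to SGL to wash
    # out the low-order garbage bits, then floor and recompute the
    # remainder in DBL.
    q = math.floor(to_sgl(x / y))
    return q, x - q * y

# 0.3 / 0.1 evaluates to 2.9999999999999996 in DBL, so a naive
# floor-based Q&R answers 2; the SGL rounding step repairs it:
print(divmod(0.3, 0.1))      # quotient 2.0, not the expected 3
print(qr_dbl_sgl(0.3, 0.1))  # quotient 3, remainder within one ulp of 0
```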

BTW, I don't think speed should ever take precedence over safeguards. The first requirement is that the function is predictable.  A function that can give either 1 or 2 as the answer due to simple rounding errors is useless, regardless of how fast it is. Only when choosing between different implementations of predictable routines would speed become an issue. 

Message 48 of 95

Once again: NI doesn't have to change Q&R.  NI should instead add a floating-point math library.  I believe there is such a library for C programmers.  Let's have two Q&R functions and two equality functions, etc., one set for integers and speed, one for floating point.
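For reference, C's standard math library does ship several remainder flavors (fmod, remainder, remquo), and Python's math module wraps the same libm routines; note how the IEEE 754 remainder avoids the 1 vs 0.2 trap by rounding the quotient to nearest instead of flooring:

```python
import math

# Truncated-division remainder (C fmod): the quotient of 1.0 / 0.2
# truncates to 4, so the remainder is nearly a full 0.2:
print(math.fmod(1.0, 0.2))       # 0.19999999999999996

# IEEE 754 remainder (C remainder): the quotient is rounded to the
# NEAREST integer, 5, so the remainder is within a few ulps of zero:
print(math.remainder(1.0, 0.2))  # about -5.6e-17
```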

BTW, what is NI's take on all of this?  They have been curiously silent this whole time.  Will someone from NI respond?

- tbob

Inventor of the WORM Global
Message 49 of 95

I agree with tbob and Anthony.  If the Q&R does not work for floats as would normally be expected due to rounding errors, then the Q&R function should be limited (by NI) to integer inputs, and a coercion dot should show up when you try to attach a float.  The fact that floats are accepted leads programmers to believe that NI has already taken steps to alleviate the rounding error problem.

I think that having a separate floating-point palette is very appropriate.  For those of you who are interested in speed, the existing Q&R function could be sped up because it would only accept integer inputs. 

Just because we understand a problem doesn't mean that the solution should be ignored.

Message 50 of 95