LabVIEW


quotient & remainder bug



@marc A wrote:
I've been following this thread, and I just came across something that I would like to add.  Why does Q&R default to a float when you right click an input and say create -> constant?  It seems to me that they expected this to be used with floats.  I agree with the position that this is an integer function and doesn't give an accurate "representation" of what's really going on when used with floats, but maybe the inputs should default to integers to show people that aren't familiar with the underlying base-2 representation of floats that causes all of these problems.

This is because the inputs of Q&R are polymorphic, accepting any numeric datatype, and for such inputs LabVIEW, as far as I know,
always defaults to double-precision floats. NI did not so much expect this to be used with floats as allow it to be used with floats. I'm not aware of any means for a node to tell LabVIEW what default datatype to create when the node is polymorphic.

Rolf Kalbermatter

Message Edited by rolfk on 02-02-2006 04:37 PM

Message 71 of 95
I figured as much.  It was just something that I saw that could only add to the confusion a first timer would have when using Q&R and getting unexpected results.
Message 72 of 95


Kevin Price wrote:

In the past where I've seen discussions about new feature requests or changed behavior requests, the NI folks tend to ask for some example "use cases."  So, what are some specific examples where Q-R behavior needs to be different?  What was the app that brought it to your attention in the first place?


Here's the "use case" that brought this up:

I'm using the float QR in a device driver for a pre-amplifier that I'm (re)writing. The gain of the pre-amp is set using a coarse gain, fine gain and vernier gain setting. However, on the front panel of the pre-amp, you only see the total gain. Total gain is simply coarse x fine (x vernier). The fine gain ranges from 0.2x to 3.0x in steps of 0.2x. It is set over RS232 by an integer, where -4 = 0.2x, 0 = 1.0x and 10 = 3.0x.

For the utils part of the device driver, I wrote vi's that convert fine, coarse & vernier gain into total gain, and total gain into its coarse, fine & vernier components.

To calculate the coarse, fine and vernier values, I first determine the coarse gain component. Then I only need to divide the remaining gain by 0.2 to get the fine gain component (and subtract 5...). That would thus be done with the QR routine. The remainder could then be used for calculating the vernier part.

Because it's a device driver, I try to keep the vi's as generally applicable as possible. But that's not the only thing.

In the application where this device will be used, the gain number going to the vi will come from two very different sources:
1.   It can be set by the user on the front panel. This is thus the exact number that we've been talking about.
2.   It can come from a measurement. I need the pre-amp to amplify a certain signal to a value of 1 volt. I've written an auto-gain, which basically measures this signal and inverts the result. That is then the required (extra) gain.

Of course, I could split off the vernier part and use an integer gain. But that would require more vi's in the device driver to set the gain (one for fine & coarse, and one for vernier), and more calls in the application when I would want to use the vernier. (Or I would have to make a polymorphic 'set gain' vi.) Using the QR with reals makes the code easier, both in the application and the driver.

Of course, this approach completely fails using LabVIEW's QR, so I used my own version, where I converted the DBL to SGL before the floor. That's more than enough accuracy for this situation.
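Anthony's DBL-to-SGL trick can be sketched like this (Python, hypothetical helper name; `struct` round-trips a double through single precision):

```python
import math
import struct

def to_sgl(x):
    # Coerce a double to the nearest single-precision value,
    # as Anthony's modified QR does before taking the floor
    return struct.unpack('f', struct.pack('f', x))[0]

x = 3.0 / 0.2                 # 14.999999999999998 in double precision
print(math.floor(to_sgl(x)))  # 15: single-precision rounding absorbs the error
```

The single-precision spacing near 15 is about 1e-6, far coarser than the 2e-15 quantization error, so the rounding snaps the quotient back to the intended integer.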

Message 73 of 95
Here is a Q&R vi that takes the float out of the equation.  Only drawback is that you have to supply an extra input, the number of digits of precision.  So far I have found it to work with all sorts of float inputs.  The only question that remains is what to expect for an output if y is larger than x.  But that would be absurd for inputs.  I'm putting this vi in my library since I don't mind having to wire in an extra input.  Don't know if I'll ever use it because I haven't yet had a use for Q&R with floats.  Of course if I am using strictly integers, I will call the standard Q&R.
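The VI itself isn't attached here, but a plausible sketch of the idea (hypothetical Python, not tbob's actual diagram) is to scale both inputs by 10^digits, round to integers, and do the division in exact integer math:

```python
def qr_with_precision(x, y, digits):
    # Scale to the requested precision, round to integers, divide exactly.
    # Python ints are arbitrary precision, so this version cannot overflow.
    scale = 10 ** digits
    xi = round(x * scale)
    yi = round(y * scale)
    q, ri = divmod(xi, yi)
    return q, ri / scale

print(qr_with_precision(3.0, 0.2, 6))  # (15, 0.0)
```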
- tbob

Inventor of the WORM Global
Message 74 of 95
quoting myself:
I don't know the field and am too lazy/pressed for time to search now.  But I would guess there are some math library implementations out there that have already done this sort of thing?  Maybe some sort of de facto standard?
Found just a little bit of time.  Appears the relevant standard may be IEEE 854-1987.  Saw some links to math library implementations but didn't attempt to assess them.  Probably couldn't even if I wanted to...
 
 
-Kevin P.
Message 75 of 95

@rolfk wrote:

LabVIEW does not do any rounding on the actual number at all, anywhere! It is only the display routine in a numeric control that by default has a precision of 6 digits (and actually does some smart rounding too).


Do you see the contradiction? :D

LabVIEW does the rounding in the presentation of the value (either on screen, or in a text file). And that's perfectly fine. There's no need to do it any earlier... except for the case we're talking about!


And if you like it or not, dealing with floating point numbers on a computer is by definition about caring about quantization errors.

Agreed.   So can you please tell me why you want to ignore quantization errors?

I'm not the one who doesn't want to deal with quantization errors! On the contrary! I'm fully aware that they exist, and I take special care that they don't screw up the results of calculations! It's the current implementation of the QR routine that doesn't take quantization errors into account!


Your proposed workarounds all have a number of implications. Basically, each and every math function that should do the quantization approximation for you would have to have an extra input where you specify to how many digits you want the results to be correct.


Whatever made you think that??? I would certainly not want rounding to occur in every math function! Doing that would only drastically increase the errors!

In almost all situations, rounding should be postponed to the last possible moment (exactly like LabVIEW is doing). There are only a handful of situations where quantization errors are a problem. Those are basically situations where you're comparing numbers. The comparison is hindered because the quantization, approximation and rounding errors introduce differences where there shouldn't be any.

To put it in more mathematical language: the problems arise when the operations are discontinuous. The smallest error around the discontinuity can have an enormous effect on the result. However, almost all other mathematical operations do not have that problem because they are continuous. Something like sin(x) doesn't change drastically under the influence of quantization errors:

sin(0) = 0 
sin(0.00000000000000001) = 0.00000000000000001       

After rounding at the end, both are 0.  No problem with rounding errors.
But things change when comparing numbers:

0 =? 0   gives true
0 =? 0.00000000000000001  gives false.  

Oops... Suddenly a tiny error has enormous effects on the end result: the difference between true and false. You can't get more significant than that! Should this effect take place?
Well... that depends on whether that last digit was significant or not. When that last digit is the result of a quantization, approximation or rounding error caused by the computer, then this is unwanted behavior.

But it's difficult to judge whether this is caused by the finite accuracy of the computer or not. All operations performed on a variable will increase the errors a tiny bit. Not only the last digit is influenced. Depending on the number and complexity of the mathematical operations, the number of affected digits can grow quite large. That's exactly why we have so many digits: we can allow a large number of those digits to become inaccurate before we influence the accuracy of our calculation.

Doing comparisons on real numbers should logically include an assumption about the accuracy that is still retained in those numbers. If it doesn't, then the result of the comparison will be influenced by quantization errors, and will thus be meaningless.
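That point is easy to demonstrate in Python: `math.isclose` is a comparison that includes exactly such an accuracy assumption.

```python
import math

a = 0.1 + 0.2   # 0.30000000000000004 after quantization
print(a == 0.3)                            # False: a last-bit error flips it
print(math.isclose(a, 0.3, rel_tol=1e-9))  # True: tolerance-based comparison
```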

Message Edited by Anthony de Vries on 02-02-2006 07:35 PM

Message 76 of 95
Sorry tbob, but if I run the VI with 10 digits of precision, 1 and 0.3333333333333, it results in IQ=1 and R=0.0961633962.


Message edited by Jean-Pierre Drolet on 02-02-2006 01:38 PM



LabVIEW, C'est LabVIEW

Message 77 of 95

Leave it to JPD to find a bug in a vi. :D

Seems like trying to convert 1E10 into a U32 causes an overflow; it surpasses the upper limit of a U32. The output of 1E10 converted to U32 comes out as 4294967295, which is the upper limit of a U32. Maybe you can use the LV8 U64 (does that exist?) if you want that much precision.

Anyway, I don't think I'll ever use that much precision, so my vi is still safe for me to use. I guess I should put an upper limit on the digits-of-precision control.
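Modeling LabVIEW's saturating float-to-U32 coercion in Python (a sketch with hypothetical names) reproduces Jean-Pierre's exact numbers:

```python
U32_MAX = 2**32 - 1  # 4294967295

def to_u32(x):
    # LabVIEW saturates out-of-range float-to-integer conversions
    return max(0, min(int(round(x)), U32_MAX))

scale = 10 ** 10                      # 10 digits of precision
xi = to_u32(1 * scale)                # saturates to 4294967295, not 10**10
yi = to_u32(0.3333333333333 * scale)  # 3333333333, fits in a U32
q, r = divmod(xi, yi)
print(q, r / scale)  # 1 0.0961633962 -- Jean-Pierre's reported result
```

With x clipped to 4294967295, the integer division gives quotient 1 and remainder 961633962, which scales back to 0.0961633962.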

- tbob

Message 78 of 95
This vi checks for a possible overflow when converting to U32 and raises an error if so.
- tbob

Message 79 of 95
Anthony de Vries wrote: So can you please tell me why you want to ignore quantization errors?

I'm not the one who doesn't want to deal with quantization errors! On the contrary! I'm fully aware that they exist, and I take special care that they don't screw up the results of calculations! It's the current implementation of the QR routine that doesn't take quantization errors into account!


Your proposed workarounds all have a number of implications. Basically, each and every math function that should do the quantization approximation for you would have to have an extra input where you specify to how many digits you want the results to be correct.


Whatever made you think that??? I would certainly not want rounding to occur in every math function! Doing that would only drastically increase the errors!

In almost all situations, rounding should be postponed to the last possible moment (exactly like LabVIEW is doing). There are only a handful of situations where quantization errors are a problem. Those are basically situations where you're comparing numbers. The comparison is hindered because the quantization, approximation and rounding errors introduce differences where there shouldn't be any.

What I was trying to say is that it is not the task of the generic LabVIEW functions to decide when it is appropriate to do rounding.

After all, the result of any LabVIEW function can be used as input to yet another LabVIEW function, and as you have correctly pointed out, rounding should only be done at the very end of any calculation, if at all. That is what LabVIEW does: it only rounds at the moment the data is presented.

LabVIEW, however, can't possibly know when a programmer wants this rounding done in the diagram, or to what precision, to avoid situations like the one that started this thread. So it is up to the programmer to be careful about the effects of quantization errors and to do the rounding when appropriate.

Changing the current math nodes is not an option anyhow, as that would break compatibility in more ways than you would want to think about. The only solution would be to create yet another version where you could somehow specify the actual precision wanted, but that would be a tedious project at best, with lots of discussion about how and when the rounding should be done for each of the different mathematical operators. As such, it is a project that would likely never satisfy a large number of users and would very likely be criticized just as much as, or more than, the current functions.
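In other words, the rounding stays in the programmer's hands. A minimal Python sketch (hypothetical name, precision chosen by the caller) of doing it explicitly just before the quantization-sensitive floor:

```python
import math

def qr_explicit_rounding(x, y, digits=9):
    # The caller decides the precision: round the quotient before flooring,
    # so 14.999999999999998 floors to 15 instead of 14.
    q = math.floor(round(x / y, digits))
    return q, x - q * y  # remainder may still carry a tiny residual error

q, r = qr_explicit_rounding(3.0, 0.2)
print(q)  # 15
```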

Rolf Kalbermatter

Message Edited by rolfk on 02-02-2006 08:55 PM


Message 80 of 95