02-02-2006 01:55 PM - edited 02-02-2006 01:55 PM
02-02-2006 01:59 PM
I just did this with Q&R. It took 5 minutes, plus 2 more after JP found a bug. I don't think it would be very difficult to create the same for other functions. In fact, I'm going to do one for equality.
@rolfk wrote:
So the only solution would be to create yet another version where you could somehow specify the actual precision wanted, but that would be a tedious project at least,
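(For anyone reading along who wants to see what actually goes wrong: the ratio of two reals can land just below an integer because of IEEE double representation, so a floor-based Q&R comes out one too low. Here's a rough Python sketch of the bug and a rounding-based fix. The function names and the 9-digit default are my own illustration, not the actual VI discussed here.)

```python
import math

def quotient_remainder(x, y):
    # Naive Q&R on reals: floor(x / y) and the leftover.
    q = math.floor(x / y)
    return q, x - q * y

def quotient_remainder_rounded(x, y, digits=9):
    # Round the ratio first, so a result like 2.9999999999999996
    # is treated as 3 before taking the floor.
    q = math.floor(round(x / y, digits))
    return q, x - q * y

# 0.3 / 0.1 is 2.9999999999999996 in IEEE double precision:
print(quotient_remainder(0.3, 0.1)[0])          # quotient 2 (surprising)
print(quotient_remainder_rounded(0.3, 0.1)[0])  # quotient 3 (expected)
```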
02-02-2006 02:39 PM - edited 02-02-2006 02:39 PM
If tst reads this thread, tst will have the Energizer Rabbit as an avatar!!
😮
02-02-2006 02:41 PM
There will be many more possible bugs lurking, and many more border cases where your solution will be just as non-intuitive in its results.
@tbob wrote:
I just did this with Q&R. It took 5 minutes, plus 2 more after JP found a bug. I don't think it would be very difficult to create the same for other functions. In fact, I'm going to do one for equality.
@rolfk wrote:
So the only solution would be to create yet another version where you could somehow specify the actual precision wanted, but that would be a tedious project at least,
02-03-2006 04:03 AM
There will be many more possible bugs lurking, and many more border cases where your solution will be just as non-intuitive in its results.
I think the only generally applicable approach is to simply round the numbers to a certain number of digits. That will also work with quantization errors that were introduced earlier, something that the more elaborate tricks with multiplication etc. can't handle. Such a rounding approach would be extremely easy to implement; it's only a few minutes' work.
I'm sure that NI can find a reasonable default number of digits for the rounding. I'm not a mathematician, but I suspect that the error propagation behaves somewhat asymptotically. If that's indeed the case, you could find a reasonable estimate for the number of digits to remove so that you're always safe. I wouldn't be surprised if removing, for example, the last 4 digits works sufficiently well for almost all cases.
You could also reason from the hardware point of view. Most DAQ cards are 'only' 16-bit. That means 65536 steps, or some 5 orders of magnitude. Only a handful of cards are 24-bit, thus about 16 million steps, some 7 orders of magnitude. So the accuracy of your input is limited to roughly 8 digits, and you might well argue that setting the default rounding at 9 digits is sufficient.
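(The bits-to-digits arithmetic above can be checked quickly: an N-bit converter resolves about N·log10(2) decimal digits. A small illustrative snippet:)

```python
import math

# Decimal digits resolvable by an N-bit converter: N * log10(2).
for bits in (16, 24):
    print(f"{bits}-bit: {2**bits} steps, ~{bits * math.log10(2):.1f} digits")
# 16-bit: 65536 steps, ~4.8 digits
# 24-bit: 16777216 steps, ~7.2 digits
```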
I could also very well imagine letting the user define a default accuracy for a complete VI.
This whole thing wouldn't really be that difficult for the user. Already, he or she has to choose between SGL, DBL and EXT; this accuracy choice is closely related to that.
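(To make the rounding proposal above concrete, here's a rough Python sketch of an equality check that rounds both operands to a number of *significant* digits, so it works for tiny and huge values alike. The names `round_sig` / `equal_rounded` and the 9-digit default are just my illustration of the idea, not anything shipped in LabVIEW.)

```python
import math

def round_sig(x, digits=9):
    # Round to `digits` significant digits (not decimal places),
    # so the scheme works across the whole range of a double.
    if x == 0.0:
        return 0.0
    return round(x, digits - 1 - int(math.floor(math.log10(abs(x)))))

def equal_rounded(a, b, digits=9):
    # Round both sides, then compare exactly.
    return round_sig(a, digits) == round_sig(b, digits)

x = 0.1 + 0.2                 # 0.30000000000000004 in IEEE double
print(x == 0.3)               # False
print(equal_rounded(x, 0.3))  # True
```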
And then try to think about doing this for division, multiplication, addition, etc., and also about supporting different datatypes such as singles, doubles and extendeds, with the possible complication that there might be differences between numerical coprocessor implementations of floating point (remember the Pentium bug?), and, even more complicated, completely different representations of the extended floating-point format on the different LabVIEW platforms.
Didn't you read my reply at all?? I thought I explained quite thoroughly....
The only vi's that need rounding are the ones comparing two real numbers.
The results of all other operations should absolutely not be rounded.
The number of vi's that would need a change, or an alternative, is thus very small.
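(The point being made here, that only the comparison itself changes while the operands stay untouched, can also be achieved with a relative tolerance instead of digit rounding. A hedged Python sketch using the standard library's `math.isclose`; the name `almost_equal` and the 1e-9 tolerance are my own choices for illustration:)

```python
import math

def almost_equal(a, b, rel_tol=1e-9):
    # Compare with a relative tolerance; the operands themselves
    # are never rounded, only the comparison is relaxed.
    return math.isclose(a, b, rel_tol=rel_tol)

total = sum([0.1] * 10)          # accumulates to 0.9999999999999999
print(total == 1.0)              # False
print(almost_equal(total, 1.0))  # True
```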
02-03-2006 04:30 AM
Changing the current math nodes is in any case not an option, as that would break compatibility in more ways than you would want to think about.
I don't think it would break any compatibility.
As it is now, 'Q&R' and 'Equal?' give meaningless results when used on reals. Because of that, nobody uses them (or should use them...), so there is no compatibility to break in those vi's.
The other comparison vi's are used often, but only in situations where the quantization errors don't influence the results in a problematic way, because the input/output is expected to be fairly inaccurate anyway. In those cases, removing the quantization errors, plus maybe some extra digits, won't influence compatibility either.
These are the only vi's that are affected. Please tell me where any compatibility issue would arise...
BTW, in LabVIEW 8 a whole lot of vi's have changed behaviour, causing possible compatibility problems. There's no reason why the real-number comparison vi's can't be changed as well.
02-03-2006 10:43 AM
LabVIEW, C'est LabVIEW
02-03-2006 12:11 PM
10-16-2013 12:10 PM
Hi Molly,
It's an old thread but I just ran into the same problem.
Jim
10-16-2013 12:46 PM
xband wrote: It's an old thread but I just ran into the same problem.
It would help if you could be more specific in the problem description (What you expect to happen, what actually happens).
Running your VI with the default control values under LabVIEW 2012 does not show anything unusual.
What do you see? (Sorry, I no longer have LabVIEW 2009)