05-21-2010 02:40 PM
If one concatenates a double-precision array and an extended-precision array with the "Build Array" VI and then saves the result using the "Write to Spreadsheet File" VI, any digits to the right of the decimal place are set to zero in the saved file. This happens regardless of the format-string input (e.g. %.10f) to the "Write to Spreadsheet File" VI.
I am on Vista Ultimate 32-bit and LabVIEW 9.0.
This is a possible bug that is easily circumvented by converting both arrays to one type before combining them and writing to a spreadsheet. Nonetheless, it is a bug and it cost me some time.
Solved! Go to Solution.
05-21-2010 02:47 PM
Not sure if you should call that a bug.
The coercion dots indicated you needed to address the issue.
Just my opinion.
🐵
05-21-2010 02:59 PM - edited 05-21-2010 03:01 PM
Hi JL,
no, it's not a bug - it's a feature
Well, if you look more closely you will recognize "Save to Spreadsheet" as a polymorphic VI. Since this polymorphic VI doesn't support EXT numbers internally (it only supports DBL, I64, and String), LabVIEW chooses the instance with the most precision: I64 (an I64 has 64 bits of precision, a DBL only 53...). So your options are:
- set the instance to use as DBL (by right-click and "Select type...")
- make a copy of this VI, save it with a different name and make it support EXT numbers (don't rework the polymorphic VI as you would break compatibility with other LV installations or future revisions)
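The coercion behavior described above can be sketched in plain Python (an illustrative stand-in, not LabVIEW code; Python has no 80-bit EXT float, so an ordinary double plays that role here). Coercing to an integer type keeps more bits of magnitude but drops everything after the decimal point, which is exactly what shows up in the saved file:

```python
# Hypothetical stand-in for one value of the EXT array (Python has no
# 80-bit extended float, so an ordinary double plays the role here).
x = 3.1415926535

# I64-style coercion: maximum integer precision, but the fractional
# part is truncated -- this is why the saved file shows zeros after
# the decimal place.
as_i64 = int(x)
print(as_i64)        # 3

# DBL-style coercion: only a 53-bit mantissa, but the fractional
# digits survive, which is what the spreadsheet file needs.
as_dbl = float(x)
print(as_dbl)        # 3.1415926535
```

Selecting the DBL instance of the polymorphic VI (right-click, "Select Type...") corresponds to the second path.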
05-21-2010 04:12 PM
Thanks for the feedback. I see now that the bug is with the "Save to Spreadsheet" vi in that it doesn't use EXT numbers internally (this should be clear in the context help). Instead it does coercion from EXT to DBL, which, for some reason, truncates at the decimal place. Does this happen universally? Naively I would call this coercion truncation a bug but I'm sure there is some deep reason for it that makes it absolutely necessary. Venture a guess? The "To double precision float" vi doesn't do this truncation. Whatever the answer, it is unexpected behaviour and should be documented better. In fact, I'm back to calling it a bug cause it's just that kinda day.
Cheers,
JL
05-21-2010 04:32 PM
@JL_Ottawa wrote:
Instead it does coercion from EXT to DBL, which, for some reason, truncates at the decimal place. Does this happen universally? Naively I would call this coercion truncation a bug but I'm sure there is some deep reason for
05-21-2010 04:42 PM
Oh right, I64, which makes even less sense as a coercion (although it explains the truncation at the decimal place). I used the "To DBL" solution to get around the problem. I am using subVI instrument drivers that output EXT, so that is what got me started on this mess.
We're tied at two-two for bug or not!
05-21-2010 04:54 PM - edited 05-21-2010 04:55 PM
Hmmm...
Myself, GerdW & altenbach - Not a bug
You - Is a bug
If that's two-to-two, I better understand the problem now.
Just teasing....
05-21-2010 04:56 PM
05-21-2010 05:57 PM
If you open the polymorphic VI, you can see that DBL instance is listed first.
A coercion should by default pick the first instance (DBL in this case). The fact that it does not is a bug in my opinion.
(I reported it in the bug thread).
05-22-2010 10:02 PM
altenbach wrote:If you open the polymorphic VI, you can see that DBL instance is listed first.
A coercion should by default pick the first instance (DBL in this case). The fact that it does not is a bug in my opinion.
I don't agree with this. In this particular case the selection of I64 or DBL is wrong either way, but that does not mean that the DBL instance should be chosen just because it's the first one. What if I64 were the first one? Or even a U8? The attempt to maintain as much precision as possible (i.e., the same number of bits in this case) is what the documentation indicates, and it's exactly what it's doing. In this particular case it's the programmer's responsibility to correct the code, not to have LabVIEW arbitrarily choose the first instance of a polymorphic VI.
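The "maintain as much precision as possible" argument can be checked in plain Python (again an illustrative sketch, not LabVIEW code): a DBL's mantissa holds only 53 bits, so an integer needing more bits is exact in I64 but gets rounded when coerced to DBL.

```python
# DBL (IEEE-754 double) carries a 53-bit mantissa; an I64 carries
# 63 bits of magnitude.  Above 2**53, I64 preserves values that
# DBL cannot represent exactly.
big = 2**60 + 1               # needs 61 bits

# I64-style storage keeps this value exactly.
print(big)                    # 1152921504606846977

# Coerced to DBL, the low-order +1 is rounded away.
print(float(big) == 2**60)    # True
```

This is why, by a bits-of-precision rule, I64 is the "widest" instance available, even though it is clearly the wrong choice for fractional data.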