04-17-2009 09:05 AM
Wow. You've managed to find even more weird behavior. In your third formula node, I would have thought that defining "int32 hxcnst;" and then "hxcnst=0xFFFFFFFF" would generate an error, since 0xFFFFFFFF is out of range for an int32. It's also curious that the value 0xFFFFFFFF gets treated differently when it's written as a decimal rather than as a hex constant.
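Just as a point of reference, here's how plain C (which, as I understand it, is roughly what the formula node syntax is modeled on) handles the same two assignments. Neither one is a hard error, though a compiler may warn; the hex and decimal constants just get different types before the assignment. This is ordinary C, not formula node code, so take it only as an analogy:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* In C, 0xFFFFFFFF is an *unsigned* 32-bit constant, while the decimal
       4294967295 gets promoted to a wider *signed* type (long/long long).
       Storing either into an int32_t is not an error; the value just wraps
       (implementation-defined, but -1 on any two's-complement machine).   */
    int32_t from_hex = 0xFFFFFFFF;
    int32_t from_dec = 4294967295;

    printf("from_hex = %d\n", from_hex);   /* typically prints -1 */
    printf("from_dec = %d\n", from_dec);   /* typically prints -1 */
    return 0;
}
```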
Normally, I'd expect such problems to be some oversight on my part. I already went through the business of signed vs. unsigned - if I hadn't, I would have expected some ridicule. That's why I was originally hesitant to post - this seems like a pretty glaring oversight on the part of LabVIEW. You know: "This can't be right. I must be doing something wrong." Is this thing on?
Well, thanks for the sanity check and workarounds. I appreciate you taking the time.
-Mark
04-17-2009 09:27 AM - edited 04-17-2009 09:30 AM
It seemed to me that since it was ignoring anything above 7FFF FFFF (i.e. dropping the highest bit), it was treating the constant internally as a signed integer rather than an unsigned one - as if it were coercing FFFF FFFF into an I32 datatype and leaving it as 7FFF FFFF in its internal representation. So I figured I would try explicitly forcing it that way and compare.
I think you can make a case that when you define a hex constant as FFFF FFFF, it really doesn't have any positive or negative connotation to it. It is just a bit pattern, and it is the datatype you assign that decides whether it means 2^32 - 1 (as a U32) or -1 (as a two's-complement I32). There is no such thing as a -7FFF FFFF in hex notation.
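To put the two candidate behaviors side by side, here's a little sketch in ordinary C (just illustrating the distinction, not a claim about what LabVIEW actually does internally):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t bits = 0xFFFFFFFFu;          /* the raw bit pattern            */

    uint32_t as_u32 = bits;               /* read as U32:  2^32 - 1         */
    int32_t  as_i32 = (int32_t)bits;      /* read as I32:  -1 (two's comp.) */

    /* What the formula node *appears* to be doing instead: clamping the
       out-of-range constant to the largest I32 value, i.e. 0x7FFFFFFF.    */
    int32_t clamped = (bits > (uint32_t)INT32_MAX) ? INT32_MAX : (int32_t)bits;

    printf("as U32 : %u\n", as_u32);                  /* 4294967295 */
    printf("as I32 : %d\n", as_i32);                  /* -1         */
    printf("clamped: 0x%08X (%d)\n", (uint32_t)clamped, clamped);
    return 0;
}
```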
I would lean towards this being a bug. At best, it is a failure in documentation of the formula node syntax as to how to define a hex constant and what it means.
Another thing that would be interesting, to see the extent of the bug, would be to try defining the constant as octal 37 777 777 777 or as binary (a series of 32 1's).
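For anyone who wants to try those, they're all the same 32-bit pattern; a quick check in plain C (again, just to confirm the constants, not formula node code):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long hex = 0xFFFFFFFFul;
    unsigned long oct = 037777777777ul;                       /* octal   */
    unsigned long bin = strtoul("11111111111111111111111111111111",
                                NULL, 2);                     /* 32 ones */

    printf("hex = %lX, oct = %lX, bin = %lX\n", hex, oct, bin);
    /* All three print FFFFFFFF. */
    return 0;
}
```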