
Bug: Swap Bytes primitive loses names of Typedef Enums in cluster

Solved!

Using the "Swap Bytes" operation on a cluster containing an enum which is a typedef loses the label of the Typedef so that it cannot be unbundled by name (See unbundle by name nodes in code).

 

In addition, the data types returned for enums within a cluster (typedef or not) are no longer enums but their underlying integer representation instead (see the Unbundle nodes).
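There's no text-language equivalent of G, but a rough Python sketch of the reported behaviour might look like this (the Mode enum and the cluster contents are invented for illustration, not taken from the actual VI):

import struct
from enum import IntEnum

class Mode(IntEnum):    # hypothetical stand-in for the typedef'd U16 enum
    IDLE = 0
    RUN = 1
    STOP = 2

def swap_u16(x):
    """Reverse the byte order of a 16-bit value, like Swap Bytes does."""
    return struct.unpack('<H', struct.pack('>H', x))[0]

cluster = {'mode': Mode.RUN, 'count': 0x1234}    # named cluster elements
swapped = {k: swap_u16(int(v)) for k, v in cluster.items()}

print(swapped['mode'], type(swapped['mode']))    # 256 <class 'int'>
# The swapped element is a plain integer: the enum type is gone. In LabVIEW
# the typedef's label is dropped as well, which is what defeats Unbundle by Name.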

 

[Attachment: Byteswap losing Enum Typedef Labels.PNG]

 

Shane. 

Message 1 of 5

Intaris wrote:

1) "... a typedef'd enum loses the label of the typedef, so the element can no longer be unbundled by name (see the Unbundle by Name nodes in the code)."

2) "... the data types returned for enums within a cluster (typedef or not) are no longer enums but their underlying integer representation instead (see the Unbundle nodes)."

1)  That's a bug.

 

2) Before reading your post I never stopped to think about using Swap Bytes on a mixed-data-type cluster, so I had no preconceived notion of what to expect or whether it was even supported. So I put together this very convoluted diagram to get an understanding of what Swap Bytes does on a mixed-data-type cluster.

 

[Attachment: Cluster_Qs.PNG]

 

A)

The element-by-element Swap Bytes construct in the circle does NOT show coercion dots going into the Equal? comparison node.

Testing with both U8 and U16 enums gave exactly the same results.

 

B)

I can't fully explain the logic, but maybe someone else has an idea.

A Swap Bytes operating on a single byte is a "NOOP" on every machine I know of since about 1980, so why is it necessary to lose the enum?

A Swap Bytes operating on a larger data type would be a bad thing if the enum drove a case structure with no default case, so maybe that is part of the reason we lose the enum.
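A quick plain-Python sketch of that point (not LabVIEW code, just the arithmetic):

def swap_bytes(value, width):
    """Reverse the byte order of an unsigned integer of `width` bytes."""
    return int.from_bytes(value.to_bytes(width, 'big'), 'little')

assert swap_bytes(0x5A, 1) == 0x5A              # one byte: a true no-op
assert swap_bytes(0x0001, 2) == 0x0100          # U16: 1 becomes 256
assert swap_bytes(0x00000001, 4) == 0x01000000  # U32: 1 becomes 16,777,216
# Anything wider than a byte can land outside the enum's defined range,
# which is exactly the danger for a case structure with no default case.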

 

C)

The element-wise Type Cast construct yields the same data type as the original cluster, as indicated by the lack of coercion dots on the compare.

 

D)

The indicator created from the output of the Swap Bytes illustrates the name-dropping bug.

 

E)

The two approaches yield the same results when the enum is a U8 but differ when it is a U16. This could be a subtle "gotcha" if you are going to use the enum value after the swap-bytes operation. Although the two results differ, if we assume Swap Bytes is spec'd to operate as per construct "A", then this is all correct.

 

So.....

 

I don't see a bug in how the enum data types are changed (yet).

 

Correct me if I'm wrong.

 

 

Ben

 

 

Message 2 of 5
Solution
Accepted by topic author Intaris

Ben wrote:

B)

I can't fully explain the logic, but maybe someone else has an idea.

A Swap Bytes operating on a single byte is a "NOOP" on every machine I know of since about 1980, so why is it necessary to lose the enum?

A Swap Bytes operating on a larger data type would be a bad thing if the enum drove a case structure with no default case, so maybe that is part of the reason we lose the enum.

With the exception of a one-byte enum, the case structure is irrelevant. Unless you had every possible value defined in the enum, you would create an invalid enum, as all of a sudden you could have values which are undefined. Even if you did, two enums would not be equivalent, since the values (by name) would be swapped. And since NI requires 0-indexed, sequential values for its enums, if you didn't have all possible values handled, a 0 would stay 0, a 1 would become 256 (0x0100 in a U16), etc. So you can see you can "break" the enum, which is why it needs to be dropped to the equivalent integer type. The result of anything but a 1-byte enum is not the same data type.
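A sketch of that argument, using a hypothetical three-value U16 enum (a Python stand-in, not LabVIEW):

from enum import IntEnum

class Color(IntEnum):   # hypothetical U16 enum with only three defined values
    RED = 0
    GREEN = 1
    BLUE = 2

def swap_u16(x):
    return int.from_bytes(x.to_bytes(2, 'big'), 'little')

raw = swap_u16(Color.GREEN)   # 1 -> 0x0100 == 256
try:
    Color(raw)                # 256 is not a defined member...
except ValueError as e:
    print(e)                  # ...so keeping the enum type would yield an invalid enum

This is why the only safe output type is the underlying integer.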

 

Now, NI maybe should have had a special case for U8 enums, since the value wouldn't change, but I'm sure what you are seeing is the result of the necessity to handle U16 and U32. (I never noticed they don't provide U64 enums - what if I want to type in that many values, NI?)

Message 3 of 5

Matthew Kelton wrote:
...

With the exception of a one-byte enum, the case structure is irrelevant. Unless you had every possible value defined in the enum, you would create an invalid enum, as all of a sudden you could have values which are undefined. Even if you did, two enums would not be equivalent, since the values (by name) would be swapped. And since NI requires 0-indexed, sequential values for its enums, if you didn't have all possible values handled, a 0 would stay 0, a 1 would become 256 (0x0100 in a U16), etc. So you can see you can "break" the enum, which is why it needs to be dropped to the equivalent integer type. The result of anything but a 1-byte enum is not the same data type.

Now, NI maybe should have had a special case for U8 enums, since the value wouldn't change, but I'm sure what you are seeing is the result of the necessity to handle U16 and U32. (I never noticed they don't provide U64 enums - what if I want to type in that many values, NI?)


"break" the enum is illustrated by the difference you get from the two approaches I showed above when the enum is U16 or better. If you have three values defined LV will coerce the non-defined to the range of the enum so as you increment the enum the enums will alternate between matching up and not. So this illustrates why the "type cast" construct is dangerous and should only be used with caution.

 

 

 

Imagine the disk farm required to fully define all of the values of a 64-bit enum: roughly 18 million terabits to store the integer values alone. Don't float over that baby with the help screen up and the wire tool selected unless it's winter and the heat is out.
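For what it's worth, the arithmetic roughly checks out at a single bit per value:

print(2 ** 64 / 1e12)   # ~1.8e7, i.e. about 18 million terabits

Storing the full 64-bit integer values would take 64 times that.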

 

Ben

Message 4 of 5

Ah, the invalid Enum is an interesting point.

 

My current case involves only U8 enums, so the byte swap should be a NOP, but I see the danger of what I could be doing elsewhere in my code:

 

U8 Array - Cast to Cluster including a U16 Enum -  Byte swap - :(.

 

Casting to a U16 enum without having all 65536 values defined can very easily lead to a COERCE being carried out, thus breaking the value BEFORE the required byte swap.

 

That's a point which certainly is bad data handling and not a bug.

 

So I would need to do the following:

U8 Array - Cast to cluster containing a U16 - Byte swap - Cast to Cluster with Enum - :D.
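In Python terms, the fixed pipeline might look like this (the Mode enum and the two-byte input are invented for illustration):

import struct
from enum import IntEnum

class Mode(IntEnum):    # hypothetical U16 typedef enum
    IDLE = 0
    RUN = 1
    STOP = 2

raw = bytes([0x01, 0x00])   # U8 array off the wire, little-endian payload

# Wrong order: casting straight to the enum reads 0x0100 == 256, which is out
# of range, so the coercion would break the value BEFORE the byte swap.

# Right order: cast to a plain U16 first, byte swap, THEN cast to the enum.
plain = struct.unpack('>H', raw)[0]                            # 0x0100
swapped = int.from_bytes(plain.to_bytes(2, 'big'), 'little')   # 0x0001
mode = Mode(swapped)
print(mode.name)    # RUN -- a valid enum value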

 

The intermediate cluster is a bit annoying (maybe automate it by taking the cluster with the enum, byte swapping it, and using the result for the first cast 🙂). I'm a lazy programmer.

 

But why do we lose the LABELS of typedef enums when swapping bytes?

 

I can appreciate why we lose the enums for any data type EXCEPT the U8. It just seems unnecessary for a U8.....

 

Shane.

Message 5 of 5