LabVIEW Idea Exchange

JackDunaway

Enum Datatypes

Status: New

Is there a compelling reason that enums cannot be a signed integer? 

 

[Image: EnumDatatypes.png]

 

 

This would provide the very handy benefit of assigning them an I32 "base" datatype, eliminating coercion dots on most LabVIEW primitives.

 

[Image: EnumCoercionDots.png]

On that note, how about supporting sparse enums (enums with non-consecutive value-string pairs)?

 

OK, I know rings support both features, but rings have the distinct disadvantage of only showing their numeric value in a case structure.
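For readers more at home in text-based languages, here is roughly what is being asked for, sketched in C++ purely as an analogy (LabVIEW enums are defined graphically, and the names and values below are invented for illustration): an enum with a signed "I32" base type and non-consecutive values.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical trigger-mode enum: a signed 32-bit ("I32") base type and
// sparse, non-consecutive values -- the two features this idea asks for.
enum TriggerMode : std::int32_t {
    Disabled    = -1,   // a negative value requires a signed base type
    RisingEdge  =  0,
    FallingEdge =  1,
    Window      = 10    // gap: values need not be consecutive
};

int main() {
    TriggerMode mode = Window;
    // The enum value converts straight to its I32 base type.
    std::printf("mode = %d\n", static_cast<int>(mode));
    return 0;
}
```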

17 Comments
tst
Knight of NI

"Is there a compelling reason...?" 

 

You would still get a coercion, since an I32 enum is not an I32 integer. You'd probably also get coercion from an I32 typedef.
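The distinction tst describes has a loose parallel in text languages. In the C++ sketch below (an analogy only, not LabVIEW behavior), even an enum whose underlying type is int32_t remains a distinct type from int32_t, so an explicit conversion (the textual cousin of a coercion dot) is still required when the enum is scoped.

```cpp
#include <cstdint>

// A scoped enum with an int32_t base is still not an int32_t.
enum class State : std::int32_t { Idle = 0, Busy = 1 };

std::int32_t to_i32(State s) {
    // return s;                          // error: no implicit conversion
    return static_cast<std::int32_t>(s);  // explicit step, akin to a coercion
}
```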

 

Now, the idea of sparse enums is something I like and have wanted for a long time, and it warrants its own post (which I have now created here).


___________________
Try to take over the world!
Intaris
Proven Zealot

Correct on the typedef, enum and I32 note.

 

Coercion is almost unavoidable in these situations, so the graphic shown above (getting rid of the coercion dots) would require more changes than just allowing I32 enums.

JackDunaway
Trusted Enthusiast
OK, perhaps I was combining three ideas into one:

1. Allow signed enums.
2. Allow sparse enums.
3. Use something like "constant folding", where a smart compiler recognizes an I32 enum and compiles it as simply an I32 constant numeric (see the sketch below). Note that it would not "smartly" fold an enum control into an index; you would still see the coercion there.

Comprende?
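For what item 3 means in text-language terms, here is a C++ analogy (not a description of how the LabVIEW compiler actually works, and the names are invented): an unscoped enum constant with an int base is folded into an ordinary integer constant at compile time, so using it never generates a run-time conversion.

```cpp
// Hypothetical named delays; the names are illustrative only.
enum DelayMs : int { ShortDelay = 50, LongDelay = 250 };

// ShortDelay is a compile-time integer constant, so it can appear anywhere
// an int constant can, such as an array bound or a case label.
int samples[ShortDelay];

int classify(int code) {
    switch (code) {
        case ShortDelay: return 1;
        case LongDelay:  return 2;
        default:         return 0;
    }
}
```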
JackDunaway
Trusted Enthusiast
I am very surprised that so few people besides me like the idea of having a signed enum. Can someone tell me what I'm missing, and why I don't need a signed enum?
RavensFan
Knight of NI

Probably because almost everyone who uses enums in their current implementation uses them only to label different items with descriptive words and to keep those items consistent when used as typedefs, where the underlying numeric value for each item doesn't matter at all. I would never use an enum where the value I am truly interested in is the underlying number; I would just use numeric constants in those cases.

 

Now with sparse enums, or even "dense" enums where you are interested in the particular numeric value behind the enum item, having negative integers would be meaningful.

JackDunaway
Trusted Enthusiast

Well, you're getting into the core reason I suggested the two ideas as a package: we want both sparse enums AND signed integer enum datatypes. Getting either one is a small step in the right direction, but getting both really amplifies the power of each.

 

"Probably because almost everyone who uses enums in their current implementation uses them only to label different items with descriptive words and to keep those items consistent when used as typedefs, where the underlying numeric value for each item doesn't matter at all. I would never use an enum where the value I am truly interested in is the underlying number; I would just use numeric constants in those cases."

 

OK, what you just said there is flat-out lame. And that is by no means a knock against you, Ravens. It's a knock against the constraints that have been placed on your development by being forced to use consecutive, unsigned, 0-indexed enums. You're right... people in their current implementation do not care about the numbers underneath... because they can't! (Unless the numbers you happen to be interested in are 0, 1, 2, 3...) But a very large percentage of the time, I would be interested in the number underneath! That is, if I had access to sparse or dense, signed or unsigned enums.

Intaris
Proven Zealot

I for one can think of one concrete example where sparse enums would have saved me a LOT of work.

 

Certain hardware can return values that represent pre-defined functions; these can be up to 16 bits wide but are not evenly spaced. Using enums in this kind of situation is currently a bit of a pain (ever created a U16 enum with 65536 entries?).

 

The most efficient way of dealing with these numbers in code is simply a Type Cast to an enum, instead of having some kind of translation function.

 

Sparse Enums would lessen the pain considerably.
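A rough C++ sketch of the pattern Intaris describes; the function names and numeric codes are invented for illustration. The raw 16-bit value is reinterpreted directly as a sparse enum (the textual equivalent of a LabVIEW Type Cast) instead of going through a translation table.

```cpp
#include <cstdint>

// Hypothetical, non-consecutive 16-bit function codes returned by a device.
enum HwFunction : std::uint16_t {
    SelfTest    = 0x0001,
    Calibrate   = 0x0040,
    StreamStart = 0x0A00,
    StreamStop  = 0x0A01,
    Shutdown    = 0xFFFE
};

HwFunction decode(std::uint16_t raw) {
    // No lookup table: the numeric value simply becomes the enum.
    // (Values outside the defined set pass through unchecked.)
    return static_cast<HwFunction>(raw);
}
```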

JackDunaway
Trusted Enthusiast
For us, one concrete example: it would save a lot of work to enum the fault codes we receive from a hardware vendor (here both signed and sparse are necessary). Another is enum'ing the fixed hardware addresses of different devices. There are plenty of places in software that could benefit as well: delay times, loop rates, indexing "named" elements in a configuration array where -1 would be an appropriate NULL...
Message Edited by JackDunaway (mechelecengr) on 09-03-2009 07:45 AM
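A minimal C++ sketch of the last use case Jack lists, with invented names: a signed, sparse enum whose -1 member serves as a "not present" index into a configuration array.

```cpp
#include <array>
#include <cstddef>

// Hypothetical named indices into a fixed configuration layout; -1 acts as a
// "no entry" sentinel, which needs a signed (and here sparse) enum.
enum ConfigIndex : int {
    NotConfigured = -1,
    SerialNumber  =  0,
    Gain          =  3,   // indices follow the hardware layout, not 0..n
    Offset        =  7
};

double lookup(const std::array<double, 16>& config, ConfigIndex idx) {
    if (idx == NotConfigured) {
        return 0.0;  // default for a missing entry
    }
    return config[static_cast<std::size_t>(idx)];
}
```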
RavensFan
Knight of NI
No offense taken. And I just realized I hadn't given your idea the kudos. It's true: the limitations of the current enums do prevent using them in other ways, and once you've learned to work within those limits, I think people have yet to realize there are other, better ways that would expand the uses for enums.
JackDunaway
Trusted Enthusiast

Thanks for the new support, Ravens and tst. I would still love to hear from someone who disagrees with this idea.

 

For the past three days, I have been seriously considering posting a new idea without the "coercion dot noise" that I introduced in this original idea, wondering if that is why so few people support it. Really, the true idea here is having both signed and unsigned enum datatypes, and if I had realized it would cause confusion, sparse enums and coercion dots would not have been introduced. I am really adamant about getting signed and unsigned enums available.