
LV Unicode on a Mac


I can get LabVIEW's (limited) Unicode support to work on a PC, but not on a Mac.

Has anyone succeeded in that?

Message 1 of 9
Solution
Accepted by paul_a_cardinale

It's likely not going to work the way you are used to on Windows, since Unicode on the Mac works very differently. In fact, the entire user interface works mostly with UTF-8, similar to modern Linux systems. Since LabVIEW in principle uses whatever encoding the underlying OS is using, it should actually work for large parts. Of course there are limitations, since LabVIEW isn't fully multi-byte-character (MBC) transparent everywhere, although it seems to support enough for the Asian versions of LabVIEW to display Asian text in the UI. But I have never used such an installation.

Rolf Kalbermatter
My Blog
Message 2 of 9

Thanks.

Just use UTF-16 LE on a PC, and UTF-8 on a Mac, and it works.
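For anyone finding this later: if you need the same text on both platforms you end up converting between the two encodings. A minimal C sketch of such a conversion (hand-rolled, BMP-only, surrogate pairs deliberately omitted, so treat it as an illustration rather than production code):

    /* Sketch: convert UTF-16LE bytes to UTF-8.
       Handles the Basic Multilingual Plane only; surrogate pairs
       (code points above U+FFFF) are not decoded here.
       The output buffer must hold at least 1.5x in_len bytes. */
    #include <stdint.h>
    #include <stddef.h>

    size_t utf16le_to_utf8(const uint8_t *in, size_t in_len, uint8_t *out)
    {
        size_t o = 0;
        for (size_t i = 0; i + 1 < in_len; i += 2) {
            uint16_t cp = (uint16_t)(in[i] | (in[i + 1] << 8)); /* LE */
            if (cp < 0x80) {                     /* 1-byte sequence */
                out[o++] = (uint8_t)cp;
            } else if (cp < 0x800) {             /* 2-byte sequence */
                out[o++] = (uint8_t)(0xC0 | (cp >> 6));
                out[o++] = (uint8_t)(0x80 | (cp & 0x3F));
            } else {                             /* 3-byte sequence */
                out[o++] = (uint8_t)(0xE0 | (cp >> 12));
                out[o++] = (uint8_t)(0x80 | ((cp >> 6) & 0x3F));
                out[o++] = (uint8_t)(0x80 | (cp & 0x3F));
            }
        }
        return o;  /* number of UTF-8 bytes written */
    }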

Message 3 of 9

Anybody know what Linux uses?

Message 4 of 9

@paul_a_cardinale wrote:

Anybody know what Linux uses?


Short answer is UTF-8.

 

Longer answer: as is typical for Linux, such things are highly configurable. 😁

But pretty much all Linux distributions switched to UTF-8 as the default character set in their locale more than 10 years ago. The libc call setlocale(LC_ALL, NULL) will return something like "en_US.UTF-8", with the part before the dot indicating the language and region, and the part after the dot indicating the character set in use.
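For reference, a minimal C program showing that query (this is essentially what a LabVIEW Call Library Function node would wrap):

    /* Sketch: query the process locale via libc.
       Note: until setlocale() has been called with an empty string,
       the C runtime stays in the default "C" locale, so we adopt the
       locale configured in the environment (LANG, LC_*) first. */
    #include <locale.h>
    #include <stdio.h>

    int main(void)
    {
        setlocale(LC_ALL, "");                  /* adopt environment */
        /* NULL means: query the current locale without changing it.
           On a typical Linux desktop this prints "en_US.UTF-8". */
        printf("%s\n", setlocale(LC_ALL, NULL));
        return 0;
    }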

 

But you can't simply assume that every Linux user is on UTF-8, especially not the more skilled ones. On embedded systems it is quite common to use one of the other region-specific locales, and sometimes even the C locale, which is basically just the 7-bit ASCII character set, chosen for performance and consistency reasons. With the C locale you know exactly what the various string functions in libc will do when formatting, parsing and sorting strings, and you get no surprises. With other locales there are many things that can, and often will, go wrong with a naive programming approach.

 

Unicode character handling is not trivial and costs extra performance. It also has the nasty (from a C programming perspective) property that bytes, code points and characters are all rather different things. 😁
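A tiny C illustration of that distinction, assuming UTF-8 text:

    /* Sketch: bytes, code points and user-perceived characters differ.
       "e" followed by U+0301 (combining acute accent) renders as one
       character but is 2 code points and 3 bytes in UTF-8. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *s = "e\xCC\x81";        /* "é" as e + combining accent */
        size_t bytes = strlen(s);            /* 3 */
        size_t codepoints = 0;
        for (const char *p = s; *p; p++)     /* count lead bytes only */
            if (((unsigned char)*p & 0xC0) != 0x80)
                codepoints++;
        printf("%zu bytes, %zu code points, 1 character\n",
               bytes, codepoints);           /* 3 bytes, 2 code points */
        return 0;
    }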

 

The difference between UTF-16LE on Windows and UTF-8 on the other platforms comes from the fact that Microsoft was an early adopter of Unicode. They implemented one of the first Unicode specifications even before it was officially released, at a time when the Unicode tables contained fewer than 65,536 code points, so 16 bits were assumed to be more than enough code space to represent every possible Unicode code point. The Unicode consortium subsequently extended the tables substantially, to around 150,000 code points, including many extinct scripts. There was even a proposal to add Klingon, but the consortium refused it, saying it wasn't in widespread use and that most people using the language actually preferred the Latin-based writing (yes, there seem to be people who actually speak Klingon 😁). With that, even UTF-16 had to adopt a variable-length code point encoding, so there is no longer a huge advantage to either UTF-16 or UTF-8. And while the libc wide-character implementation on non-Windows platforms does in fact use UTF-32, that is generally considered quite a wasteful way of storing text, as for a typical script it takes roughly three to four times as many bytes as UTF-8.
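To make the variable-length point concrete, here is a small sketch of how a code point above U+FFFF ends up as a surrogate pair in UTF-16 (standard Unicode arithmetic, shown purely for illustration):

    /* Sketch: code points above U+FFFF don't fit in one 16-bit unit,
       so UTF-16 splits them into a high/low surrogate pair. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t cp = 0x1F600;                 /* emoji, outside the BMP */
        uint32_t v  = cp - 0x10000;            /* 20 bits to distribute */
        uint16_t hi = (uint16_t)(0xD800 + (v >> 10));   /* high surrogate */
        uint16_t lo = (uint16_t)(0xDC00 + (v & 0x3FF)); /* low surrogate */
        /* Prints "U+1F600 -> D83D DE00"; UTF-8 also needs 4 bytes for
           this code point, so neither encoding has a size advantage. */
        printf("U+%04X -> %04X %04X\n", cp, hi, lo);
        return 0;
    }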

Rolf Kalbermatter
My Blog
Message 5 of 9

@rolfk wrote:

@paul_a_cardinale wrote:

Anybody know what Linux uses?


Short answer is UTF-8.

 

Longer answer: as is typical for Linux, such things are highly configurable. 😁

  ...


Thanks for the info.  Do you possibly have LabVIEW code in hand that reads that setting from Linux?

Message 6 of 9

@paul_a_cardinale wrote:


Thanks for the info.  Do you possibly have LabVIEW code in hand that reads that setting from Linux?


Try the attached VI. It is saved in LabVIEW 8.6 format, should load in newer versions, and should run in both 32-bit and 64-bit. I tested the Windows and Unix variants, which work whether or not the VI is used inside a project. I added a Mac implementation but cannot test it at this point; most likely the library name needs to be corrected for the Mac.
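For anyone rebuilding this by hand: the VI essentially wraps the libc setlocale call through a Call Library Function node. A sketch of the equivalent C follows (the per-platform library names are my assumption, not read from the attached VI):

    /* Sketch of what the Call Library Function node wraps.
       Library name per platform (assumptions):
         Windows: msvcrt.dll (or the versioned CRT LabVIEW links against)
         Linux:   libc.so.6
         macOS:   libSystem.dylib (libc is folded into libSystem there) */
    #include <locale.h>

    const char *query_locale(void)
    {
        return setlocale(LC_ALL, NULL);  /* NULL = query without changing */
    }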

 

There is also support for LabVIEW RT, but that will only work if the VI is used inside a project, which is a strict requirement for an RT application anyway. I can't vouch for the RT implementations: it should work for Pharlap and NI Linux RT, but for VxWorks I have no idea.

Rolf Kalbermatter
My Blog
Message 7 of 9

@rolfk wrote:

Try the attached VI. It is saved in LabVIEW 8.6 format, should load in newer versions, and should run in both 32-bit and 64-bit. I tested the Windows and Unix variants, which work whether or not the VI is used inside a project. I added a Mac implementation but cannot test it at this point; most likely the library name needs to be corrected for the Mac.

 


The Mac version is broken (I don't know how to fix it).

The Windows version just returns "C".

What does the Unix version return?

Message 8 of 9

@paul_a_cardinale wrote:

  ...

 


The Mac version is broken (I don't know how to fix it).

The Windows version just returns "C".

What does the Unix version return?


I don't know what the Mac version needs. It would need whatever shared library name implements the C runtime. But just as on Windows, this is not the actual implementation but a sort of wrapper around other Mac APIs, so it likely offers only a minimal C runtime rather than the full GNU implementation. Mac OS, being really BSD-derived, is not very GNU-minded anyway.

 

The Windows version is indeed just returning "C". That is how the Microsoft C runtime works: it does not implement full localization for the C runtime functions. The localization functionality on Windows lives in the Windows API, and the C runtime is just an emulation layer around it, without trying to be fully GNU libc compliant.
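If what you actually want on Windows is the user's configured locale rather than the C runtime's, the Windows API is the place to ask. A hedged sketch (this is not what the attached VI does):

    /* Sketch: query the user's locale through the Win32 API.
       GetUserDefaultLocaleName returns a BCP-47 tag such as "en-US"
       (available since Windows Vista). */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        wchar_t name[LOCALE_NAME_MAX_LENGTH];
        if (GetUserDefaultLocaleName(name, LOCALE_NAME_MAX_LENGTH))
            wprintf(L"%ls\n", name);
        return 0;
    }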

 

On my Ubuntu 16.04 it returns "en_US.UTF-8". If you have another region and language configured, it will return some other value, encoded as

 

language[_territory][.codeset][@modifier]

 

language is the language abbreviation and territory is the country abbreviation. If it doesn't return UTF-8 after the dot, your system is not configured to use UTF-8 as its codeset. It could be any of a half zillion things, such as CPXXXX with XXXX being a numeric value indicating the codepage, or many other things such as ASCII, ISO-8859-1 or one of the many other ISO code sets. Basically, there is no exhaustive list of what it could return. Also, don't count on uppercase: while it is customary to write the codeset identifier in uppercase, it can also be lowercase.
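A small C sketch of pulling the codeset out of such a locale string (illustration only; the input value is an example):

    /* Sketch: extract the codeset from a POSIX locale string of the
       form language[_territory][.codeset][@modifier]. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *loc = "en_US.UTF-8";            /* example input */
        const char *dot = strchr(loc, '.');
        if (dot) {
            size_t len = strcspn(dot + 1, "@");     /* stop at @modifier */
            printf("codeset: %.*s\n", (int)len, dot + 1);  /* UTF-8 */
        } else {
            printf("no codeset specified\n");
        }
        return 0;
    }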

Rolf Kalbermatter
My Blog
Message 9 of 9