01-17-2024 09:14 AM
Hi,
I'm trying to simplify the following example diagram and, _if possible_, get rid of this case structure.
Here's my situation:
I am getting telemetry (as a binary string) from a UUT. The way the telemetry is received is described in a .csv file that I load at SW startup and extract the main data from. Typically: Telemetry Name; Data Type; Conversion Factors (plus a lot of other stuff that's irrelevant here).
The thing is, I don't want to have to create a case structure and copy and arrange a case for every data type whenever I have to deal with it (in this example, with Unflatten From String).
So I'm trying to find a more programmatic way to extract this kind of information and automatically convert the data to the type it's supposed to be, without having to go through a case structure every time.
Is that even possible, or do I have to write specific code every time?
Here's the example.
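In rough text terms, what the diagram does is a per-type branch feeding an unflatten call. Here is a minimal Python sketch of that pattern, just to make the structure explicit; the type names and byte layouts are placeholders, not the actual CSV contents:

```python
# Rough text analogue of the LabVIEW case structure + Unflatten From String:
# one branch per data type declared in the CSV.
# Type names and byte layouts are illustrative placeholders only.
import struct

def unflatten(type_name: str, raw: bytes):
    if type_name == "U8":
        return struct.unpack(">B", raw[:1])[0]
    elif type_name == "U16":
        return struct.unpack(">H", raw[:2])[0]
    elif type_name == "I32":
        return struct.unpack(">i", raw[:4])[0]
    elif type_name == "DBL":
        return struct.unpack(">d", raw[:8])[0]
    else:
        raise ValueError(f"Unsupported type: {type_name}")
```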
I am looking for something that could look like this, but I don't know where to start:
I'm really not sure this is possible, since it's not possible to mix data types on nodes... But you never know.
Another possibility is to go berserk, create cases for every data type and every function that could remotely interest me, and save them to be reused later 🤔
Thanks for your help.
Vinny.
01-17-2024 10:06 AM
@VinnyAstro wrote:
I'm trying to simplify the following example diagram and, _if possible_, get rid of this case structure. [...] Is that even possible, or do I have to write specific code every time? [...]
You can use a class wrapper to make a class for each data type, since the interface is the same. Then you could make an 'unflatten from string' method that handles each of the data types. In the end you could have a VI that looks close to what you are trying to get (no case structure). The downside is that it is not really better code (and it is more work); it just turns the case structure into a class method.
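For illustration only, here is roughly what that idea looks like in a text language (Python standing in for LabVIEW classes; the class and method names are made up): each data type gets its own class sharing one interface, and dynamic dispatch replaces the case structure.

```python
# Sketch of the class-wrapper idea: one class per data type, all sharing the
# same "unflatten from string" interface, so the caller never branches.
# Class and method names are illustrative, not an existing library.
import struct
from abc import ABC, abstractmethod

class Telemetry(ABC):
    @abstractmethod
    def unflatten_from_string(self, raw: bytes):
        """Decode this telemetry's value from the binary string."""

class U16Telemetry(Telemetry):
    def unflatten_from_string(self, raw: bytes):
        return struct.unpack(">H", raw[:2])[0]

class DblTelemetry(Telemetry):
    def unflatten_from_string(self, raw: bytes):
        return struct.unpack(">d", raw[:8])[0]

# The caller just invokes the method; dynamic dispatch picks the right decode.
def decode(item: Telemetry, raw: bytes):
    return item.unflatten_from_string(raw)
```

The trade-off is the one noted above: the branching still exists, it has just moved into the class hierarchy.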
01-17-2024 12:34 PM - edited 01-17-2024 12:35 PM
It depends on the range of types you want to support and how complex your type descriptor in your CSV can be. I see 3 options:
1. Keeping your "DataType" enum with pre-defined types, for example "Double", "Array of U16", "Error Cluster", "String", "Path", "Time Stamp", "My Class", "My Other Class", etc… Only you can make the link between the enum item name and the actual LabVIEW data type, so you are left with your current solution. You may want to make a subVI out of your case structure to be able to reuse it.
2. Using a more complex type descriptor like the "type string" given by function "Variant To Flattened String". Then with the help of "Flattened String To Variant", you can recompose the variant. This solution is for very generic use cases, as it works for any type:
3. If you are only interested in numeric types, there may be a hybrid solution. Since the type strings of all numeric types are basically the same, except for a single byte, you can easily rebuild one from scratch and then use "Flattened String To Variant", followed by a cast to the biggest numeric type you want to support (for example DBL, EXT, CDB or CXT); a rough text sketch of this idea follows below:
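As a rough text analogue of this hybrid idea (Python here, since the original is a LabVIEW snippet; the lookup table is an assumption, not the actual type-descriptor bytes): the per-type difference shrinks to a one-entry lookup, and every value is returned as the widest type you support.

```python
# Python analogue of option 3: the per-type difference is reduced to a tiny
# lookup (one format character), and every numeric value is returned as the
# "biggest" supported type (here: complex, standing in for CDB/CXT).
# The table below is an assumption, not the LabVIEW type-descriptor bytes.
import struct

NUMERIC_FORMATS = {        # CSV type name -> struct format character
    "I8": "b", "U8": "B",
    "I16": "h", "U16": "H",
    "I32": "i", "U32": "I",
    "SGL": "f", "DBL": "d",
}

def read_numeric(type_name: str, raw: bytes) -> complex:
    fmt = ">" + NUMERIC_FORMATS[type_name]          # big-endian, one entry per type
    value = struct.unpack(fmt, raw[:struct.calcsize(fmt)])[0]
    return complex(value)   # cast everything to the widest supported type
```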
Note, I was helped by this documentation about LabVIEW type descriptors.
Regards,
Raphaël.
01-17-2024 03:58 PM
EDIT:
Solution 3 cannot be applied as is, because the function "Flattened String to Variant" is missing a "remaining string" output (please kudo this idea to change that). So here is a fix that requires some additional (but a bit silly) code to measure the size of the current data chunk, so the string can be parsed iteratively:
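In text terms, the workaround boils down to computing each chunk's size yourself so you can advance through the input string by hand. A minimal Python sketch of that loop, with an assumed size table standing in for the type-descriptor logic:

```python
# Sketch of the iterative-parsing workaround: since the decode step does not
# report how many bytes it consumed, measure each chunk's size from the type
# and advance an offset manually. The formats/sizes here are assumptions.
import struct

FORMATS = {"U8": ">B", "U16": ">H", "I32": ">i", "DBL": ">d"}

def parse_stream(type_names, raw: bytes):
    """Decode a sequence of values whose types come from the CSV config."""
    values, offset = [], 0
    for name in type_names:
        fmt = FORMATS[name]
        size = struct.calcsize(fmt)              # "measure the current data chunk"
        values.append(struct.unpack(fmt, raw[offset:offset + size])[0])
        offset += size                           # advance to the remaining string
    return values
```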
01-18-2024 02:31 AM
Thanks a lot Raph,
I've been experimenting a bit with type descriptors and the OpenG Variant tools, but it was more random trial and error than anything else. I'll have a look at the documentation you linked!
And yes, for #3 I also noticed that it would require measuring the size of the remaining chunk.
01-24-2024 11:50 AM - edited 01-24-2024 11:56 AM
Hi,
Sooooo, I've done some work. Honestly, I have no idea whether what I came up with is a good or bad solution, but it is one that probably needs to be improved.
As a comparison, here is more or less my current situation, which I want to get rid of in the future:
When a payload has 30 telemetries, you can imagine how painful that is to debug and maintain whenever there is a change in the configuration (which happens).
So after some trial and error with the OpenG Variant tools, I came up with the attached solution (LV20).
I am aware it is not perfect, especially for the conversion phase:
But I think it is already a better, more scalable solution. I would love to get some feedback on it.
Thanks in advance.
Vincent.
01-24-2024 04:40 PM
Hi Vincent,
It appears you moved away a bit from your original requirements. Where is the CSV configuration for the elements to decode? Now it seems you have a cluster with the data types known at compile time. And you still have a case structure (plus some convoluted code with variants).
Btw you should try to answer these questions first:
1. Are the data types known at compile time or should they be parsed according to a configuration?
2. How has the input data been flattened in the first place? By LabVIEW (Flatten to String, Variant to Flattened String, Write Binary File, …)? Or by an external device/application? If external, what is the protocol?
3. What are the possible data types encoded in the input data?
4. In which format do you want to store the data once it is parsed. All doubles? A cluster with each element having its specific type?