08-08-2023 07:48 AM
Hello,
I have a very big JSON file with 64k rows that I need to process in LabVIEW.
But when using Unflatten From JSON, I need to wire a cluster to the "type and defaults" input, and the cluster would be too big to create manually.
Is there a possibility to process such a big JSON file without creating a big cluster for it?
Thank you!
08-08-2023 08:00 AM
Hi EBeg,
@EBeg wrote:
Is there a possibility to process such a big JSON file without creating a big cluster for it?
How would that work, when your big JSON structure necessarily results in a big cluster to hold its data?
08-08-2023 08:02 AM
Can you describe what each of the rows in the file represents? Can you give two or three rows as an example?
If each row is one instance of your data object, you first split the file into individual rows, then pass each row into the unflatten function.
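LabVIEW code is graphical and can't be shown inline here, so here is the same idea as a minimal Python sketch. It assumes the file is newline-delimited (one JSON record per line); the file name "data.json" and the print() call are placeholders for your own per-row handling.

import json

def parse_rows(path):
    # JSON text is Unicode, so read explicitly as UTF-8
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            yield json.loads(line)  # one small record at a time

# Placeholder usage: no single giant structure ("big cluster")
# is ever built; each row is handled on its own.
for record in parse_rows("data.json"):
    print(record)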
08-08-2023 08:51 AM - edited 08-08-2023 09:06 AM
Was the 64k-row JSON file created via LabVIEW?
I suppose not.
Is your .json file 64k rows when you open it in a text editor?
Or do you store an array in your JSON file? How big is your JSON file in bytes?
@EBeg wrote:
Is there a possibility to process such a big JSON file without creating a big cluster for it?
Could you read/write the JSON file as plain text?
Be careful: JSON data always uses the Unicode character set, so watch out for encoding errors.
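As a minimal Python sketch of the plain-text approach (again, "data.json" is a placeholder name): the "utf-8-sig" codec also strips a UTF-8 byte-order mark if one is present, and a UnicodeDecodeError is exactly the kind of encoding error to watch out for.

def read_json_text(path):
    try:
        with open(path, "r", encoding="utf-8-sig") as f:
            return f.read()
    except UnicodeDecodeError as err:
        raise ValueError(f"{path} is not valid UTF-8: {err}") from err

text = read_json_text("data.json")  # placeholder file name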
Moreover, the JDP Science JSONtext toolkit can be helpful to cross-check and debug. But I think you don't want to parse the JSON file into a LabVIEW cluster?
08-08-2023 10:38 AM
@alexderjuengere wrote:
Moreover, the JDP Science JSONtext toolkit can be helpful to cross-check and debug. But I think you don't want to parse the JSON file into a LabVIEW cluster?
This is a great library. I highly recommend it.
08-08-2023 11:17 AM
@crossrulz wrote:
@alexderjuengere wrote:
Moreover, the JDP Science JSONtext toolkit can be helpful to cross-check and debug. But I think you don't want to parse the JSON file into a LabVIEW cluster?
This is a great library. I highly recommend it.
The JDP Science library is one I have had great luck with for decoding and encoding clusters that I have type definitions for. It is better than the built-in functions: if there are changes or slight differences in the format, you get a mix of real data and default values, rather than just zeroed-out data and an unhelpful error.
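That "data mixed with defaults" behaviour can be sketched outside LabVIEW too. Here is a minimal Python analogue, where the Measurement type and its fields are hypothetical stand-ins for a LabVIEW type definition: fields present in the JSON are filled in, missing ones fall back to their defaults, and unknown keys are ignored instead of failing the whole parse.

import json
from dataclasses import dataclass

@dataclass
class Measurement:           # hypothetical stand-in for a type def
    channel: str = ""
    value: float = 0.0
    unit: str = "V"

def from_json(text):
    raw = json.loads(text)
    # keep only the keys the type knows about; the rest default
    known = {k: v for k, v in raw.items()
             if k in Measurement.__dataclass_fields__}
    return Measurement(**known)

print(from_json('{"channel": "AI0", "value": 3.3, "extra": true}'))
# -> Measurement(channel='AI0', value=3.3, unit='V')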
That said, if I need to parse arbitrary JSON whose format can't be defined as a type ahead of time, I have had better luck with this library: