09-09-2010 09:51 AM
One of the Known Issues published by NI for LabVIEW versions 8.x, 2009, & 2010 is that datalog files are limited to 2 GB. What is not explained is that Write Datalog will dutifully append records to a datalog file beyond the 2 GB limit without generating an error but then Read Datalog returns error code 4 (end of file encountered) when you try to access those same records. Unfortunately, this problem went undetected in one of my applications for several weeks and I now have several hundred Production Quality Assurance records which are inaccessible. This puts me in a very bad position with my Manufacturing clients as you can imagine.
Has anyone already developed a solution for recovering datalog records written beyond the 2 GB limit?
A strategy I have been pursuing is to split this datalog file into two parts, with each part being less than 2 GB in size, using the binary file IO vi's which readily read past the 2 GB limit. I can successfully extract the bytes for each variable-length record because I can recognize my sequential Record IDs in the first field of the record clusters, but I have not successfully reconstituted the datalog headers for the files yet due to my ignorance of the datalog file specification. The first reconstituted file is readable using datalog file IO when I simply use the original datalog header and change byte location 0008:000B to hold the revised number of records although I know parts of the header are still "not quite right", But I am receiving error 116 (Unflatten or byte stream read operation failed due to corrupt, unexpected, or truncated data.) when I use this same strategy with the second file containing the latter records. Specifically, I believe I also need to revise byte location 000C:000F which seems to contain information which is a function of the file size. Hopefully I don't have to touch bytes 002C:0227 containing the record type information.
I also thought I might create an empty datalog file (of the correct type) first and then save the recovered records using Write Datalog after applying Unflatten from String or Type Cast to the binary data using my custom record type but this consistently generated error 116.
Is there a better strategy to solve my problem? If not, can an AE or anyone else provide some more details on the datalog file format so that I can properly reconstitute the datalog file header information.?
Thanks in advance.
Larry
Solved! Go to Solution.
09-09-2010 10:06 AM
just sharing an idea (that may or may not help).
The file still gets written because the OS has the file pointer in the proper form.
I THINK you can just read one byte at a time and let the OS track the pointer for you.
Ben
09-09-2010 10:46 AM
Yes, you are right Ben. I can use binary file IO to read the datalog file (> 2 GB in size) one byte or one chunk of bytes at a time, and in fact this is what I have done to recover records as a byte stream, but my goal now is to re-incorporate these bytes into two validly formatted datalog files (each < 2 GB in size) so that the records are once again readable using the datalog file IO in my application. My hang-up is in creating a valid datalog header without knowing the details of the datalog file format specification.
Or if I could convert the array of bytes I read for each record into a valid cluster matching my custom typedef, then I could simply use the LabVIEW datalog file IO routines to build the pair of datalog files. I was dismayed when I received an incompatibility error (116?) when I flattened the byte array to a string and then tried unflattening the string into my custom cluster typedef. When I did this I believe I had wired FALSE constants to "prepend array or string size" and "data includes array or string size". Maybe I should try prepending the byte array with its I32 array size and wiring TRUE constants instead.
09-09-2010 02:48 PM
I found a solution.
These are the steps I took to split a large datalog file (> 2 GB) into two smaller datalog files (each < 2 GB) so that all records could once again be accessed by an application that employs datalog file IO.
Using the above procedure, I was able to circumvent the need to create my own datalog file headers so I did not need an understanding of the datalog file format specifications.