06-28-2012 03:03 PM
No, that would not be simple.
Databases use an index file to find records of interest without having to read the entire file. If you had another file that is written every time you write more data to the data file, it would contain the indexes that you need.
If you wrote to the file 50 times, it would have 50 index values, where an "index value" is the file position.
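Expressed outside LabVIEW, the idea Ben describes can be sketched in Python (the file names and the int64 offset format here are illustrative assumptions, not anything from the thread): every append to the data file records its starting byte offset in a parallel index file.

```python
import struct

def append_record(data_path, index_path, payload: bytes):
    """Append payload to the data file and its starting offset to the index file."""
    with open(data_path, "ab") as data, open(index_path, "ab") as idx:
        offset = data.tell()                   # file position before this write
        data.write(payload)
        idx.write(struct.pack("<q", offset))   # store the offset as a little-endian int64

def read_offsets(index_path):
    """Return the list of record offsets stored in the index file."""
    with open(index_path, "rb") as idx:
        raw = idx.read()
    return [struct.unpack_from("<q", raw, i)[0] for i in range(0, len(raw), 8)]
```

With the offsets in hand, any record can then be reached directly with a single seek instead of scanning the whole data file.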
I do not think I can find anything fast to back me up, but the file size is not the same thing as the file marker. Your file can be larger than the amount of data it contains. File size speaks to the amount of space that is allocated for the file on disk.
Rather than use the file size, set the marker to the end (offset 0 relative to the end) and then find the marker value relative to the beginning. Look at the help for "Set File Size".
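The same seek-to-end-then-report-position trick exists for any byte-stream file API. A minimal Python analogue (not LabVIEW, just the same idea in stream terms):

```python
import os

def logical_end(path):
    """Seek to offset 0 relative to the end, then report the
    resulting position relative to the beginning of the file."""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)   # marker at the end of the data
        return f.tell()          # marker value from the beginning
```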
and while I am at it...
I think if you open the file as a data log file you can use a datalog file read.
Also double check that empty array as Christian posted.
LV has some weird definitions of an empty array that could be part of the answer.
Ben
06-28-2012 03:11 PM
I added all the proper opens and closes for both the write and the read. I know that is the correct way to do it, but I had tried that earlier and it had not seemed to make any difference. I went through it again, however. Even with a fresh file open and set position for both the write and read VIs, I still get an error if I feed the refnum into the Read Binary.
I set the endianness to little-endian, though I really have no clue whether that has an effect. It doesn't seem to matter either way, no matter what else is going on.
Starting to struggle........
Doug
06-28-2012 03:13 PM
Didn't even think about the typedef(s)
My bad
06-28-2012 03:31 PM
Datalog file... That may hold promise. I will be on that first thing in the morning; after 9 straight hours, my eyes are fuzzy.
will report back with results
Thanks.... Doug
06-29-2012 06:57 AM
Whoo Hoo, the datalogging file format works flawlessly. It seems like it would sit a little higher in the menu tree. I usually take it that the deeper in the tree you go, the more complex or specialized the functions. I would expect to see this datalogging menu as a selection under the top-level File I/O menu, but that's just me.
End result, I'm off to the next task.
Thanks for all the feedback on this.
Doug
06-29-2012 07:32 AM
@dacad wrote:
Whoo Hoo, the datalogging file format works flawlessly. It seems like it would sit a little higher in the menu tree. I usually take it that the deeper in the tree you go, the more complex or specialized the functions. I would expect to see this datalogging menu as a selection under the top-level File I/O menu, but that's just me.
End result, I'm off to the next task.
Thanks for all the feedback on this.
Doug
Pssttt....
Just between me and you.
NI and LV started out on the Mac and, like the Mac, push all of that dirty techno stuff down and out of sight in the hope it will not scare people off.
But that point aside, I wrote something years ago to the effect of:
"Learning LV is like wandering in a beautiful wood, with mysteries and wonders hiding under every rock."
Get into the habit of flipping over those rocks and poking the old logs. You will be surprised what comes crawling out.
Thanks for the update!
Ben
06-29-2012 07:35 AM - edited 06-29-2012 07:40 AM
Actually, the datalog interface to the File I/O functions is in fact a relic from the very early days of LabVIEW. While it may seem to work for your specific case, it is quite troublesome, limited, and rather awkward to use. For one thing, it does not support random access to the records. If you want to read the last records in a file, LabVIEW always has to seek through the entire file, since the stored data format does not have fixed-size records. This can become VERY slow once your files accumulate a lot of records. If the LabVIEW developers had a say, it would have been gone several versions ago, but once introduced, documented functionality is quite holy in LabVIEW and can't just be removed unless it causes more trouble than dropping it would.
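The sequential-scan cost described above is easy to see with a toy format. This Python sketch (an illustrative length-prefixed format, not LabVIEW's actual datalog layout) shows that without fixed-size records, finding the last record forces a walk over every record header from the start of the file:

```python
import struct

def read_last_record(path):
    """Walk a file of length-prefixed, variable-sized records.
    Because record sizes vary, there is no way to jump straight
    to the last record; every header must be visited in order."""
    last = None
    with open(path, "rb") as f:
        while True:
            header = f.read(4)
            if len(header) < 4:
                break                                # end of file
            (size,) = struct.unpack("<i", header)    # this record's length
            last = f.read(size)                      # skip over the payload
    return last
```

With fixed-size records (or a separate index of offsets), the same lookup would be a single seek.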
Having looked a little more closely at your VI, and having cleaned it up:
While the specific data set you have as default data in the cluster always produces exactly 6 numbers for each of the 4 arrays (and therefore 52 (6 * 8 + 4) bytes per array), are you sure that you always passed data sets to this VI that produced the same number of elements per array? The 8-byte difference you see is exactly one double-precision floating-point value. It would not seem difficult to pass data to this VI that does not produce fixed-size array blocks.
With the cleaned-up VI, repeatedly calling it with the default data, I got a steady and constant increase of 52 bytes for every file and each execution.
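The 52-byte arithmetic above can be reproduced with a quick sketch. Assuming LabVIEW's flat layout for a 1D array with a prepended size (a big-endian int32 element count followed by the elements), six doubles come to 4 + 6 × 8 = 52 bytes:

```python
import struct

# Six double-precision values, as in the default cluster data.
values = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

# int32 element count, then the flattened elements (big-endian,
# LabVIEW's default byte order).
blob = struct.pack(">i", len(values)) + struct.pack(">6d", *values)

assert len(blob) == 4 + 6 * 8 == 52
```

An array with one element fewer would flatten to 44 bytes, which is exactly the 8-byte difference discussed in the thread.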
06-29-2012 07:45 AM - edited 06-29-2012 07:46 AM
@Ben wrote:Also double check that empty array as Christian posted.
Empty arrays written to disk as a binary stream are n * 4 bytes long, with n being the number of dimensions of the array. Each of those 4-byte groups is an int32 holding the size of one dimension, and those sizes do not all need to be 0 for multidimensional arrays. A 4*0 2D array is fully legal in LabVIEW, resulting in 0 data elements but still a size of 4 for the first dimension. So physically it is empty, but logically it isn't.
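This header layout can be illustrated in Python (a sketch of the sizes described above, not LabVIEW code): the flattened form of an empty array is just its dimension sizes, one int32 per dimension, so a 4 × 0 2D array still occupies 8 bytes on disk even though it carries no data elements.

```python
import struct

# Empty 1D array: a single int32 dimension size of 0 -> 4 bytes.
empty_1d = struct.pack(">i", 0)

# "Empty" 4x0 2D array: two int32 dimension sizes (4 and 0) -> 8 bytes,
# with a nonzero first dimension despite holding no elements.
empty_2d = struct.pack(">ii", 4, 0)

assert len(empty_1d) == 1 * 4
assert len(empty_2d) == 2 * 4
```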
06-29-2012 07:59 AM
I will take a little time to analyze more closely exactly how much data goes into the write each time. Though I had set it up to write the exact same number of values each time, I did see something that made me wonder if I was getting a different size group of data somewhere along the way.
At this point, the datalog approach is working well and meets my needs for this application but I want to be able to understand why the other approach didn't work as it should have for future applications.
File size is something I will need to monitor and if it becomes too cumbersome, I will have to address that eventually.
Again, thanks for all the feedback.
Doug
06-29-2012 08:07 AM
@rolfk wrote:
@Ben wrote:Also double check that empty array as Christian posted.
Empty arrays written to disk as a binary stream are n * 4 bytes long, with n being the number of dimensions of the array. Each of those 4-byte groups is an int32 holding the size of one dimension, and those sizes do not all need to be 0 for multidimensional arrays. A 4*0 2D array is fully legal in LabVIEW, resulting in 0 data elements but still a size of 4 for the first dimension. So physically it is empty, but logically it isn't.
So calls with an empty array could also explain the difference between 44 and 52.
If deciphering files didn't take so long, I'd try to confirm with the posted data files, but my coffee isn't fully engaged yet, so I will pass.
Ben