07-18-2019 02:29 PM
Hi!
I am a bit new to using TDMS files with queues and producer/consumer parallel loop structures, and I wanted to see if there is a way I can get some help with solving my problem.
I have a circular buffer modified from the version posted with the white paper (http://www.ni.com/tutorial/3330/en/), which reads and writes raw binary files directly. The point of this buffer is to let the user save a specified amount of data before and after an event window.
I have a data acquisition scheme with multiple channels, different acquisition rates, and multiple tasks. As I understand it, using TDMS files to log this data is better not only for the end user but for other reasons as well.
So far, I have modified the circular buffer to write TDMS files directly instead of raw binary, along with some other changes to suit the scheme. However, I am running into a problem: the buffer files (buffer1.tdms/buffer2.tdms) keep growing, even though I want them to stay a fixed size, holding only a specific number of samples per channel that gets overwritten on each iteration.
So, my problem is that I can't find an elegant way to delete or overwrite the data within the buffer files. If possible, I'd like to avoid the Advanced TDMS functions.
- Do I need to close and re-open the files within each processing loop in the code?
- Is there a way to move the write pointer on the TDMS file such that it overwrites the already written data?
- Is there a way I can use raw binary for the buffer files and then convert the pertinent data to TDMS? Is there a high risk of data corruption during that conversion?
I think there have been other threads on this subject, or at least similar ones, but I never found a satisfactory solution. Any help is appreciated. Thanks.
07-19-2019 08:48 AM
07-19-2019 09:21 AM
Welcome to the world of TDMS. I'm a big fan of the file format and have presented on it a few times. The latest I think is here.
I don't have an example of it posted online, but one thing I've used TDMS for is a circular buffer. In one application I wanted a 5 minute log of everything that happened up to a specific event, so I log to a new TDMS file in a temporary folder.
C:\temp\TDMS Buffer\File 1.tdms
After one minute of logging I will close that TDMS file, and create a new one.
C:\temp\TDMS Buffer\File 2.tdms
I continue this until there are 5 files in that folder.
C:\temp\TDMS Buffer\File 1.tdms
C:\temp\TDMS Buffer\File 2.tdms
C:\temp\TDMS Buffer\File 3.tdms
C:\temp\TDMS Buffer\File 4.tdms
C:\temp\TDMS Buffer\File 5.tdms
When I create the next file I check whether there are already 5 files in the folder, and if there are, I delete the oldest one. Now there are files 2 through 6.
C:\temp\TDMS Buffer\File 2.tdms
C:\temp\TDMS Buffer\File 3.tdms
C:\temp\TDMS Buffer\File 4.tdms
C:\temp\TDMS Buffer\File 5.tdms
C:\temp\TDMS Buffer\File 6.tdms
If an event happens and I want the 5 minutes of data leading up to it, I just finish logging to the current file, merge all the TDMS files in that folder, and then move that single merged file somewhere the user can access it, like <My Documents>\Test Data\Merged Log.tdms. TDMS is like a text file in this respect: you can append one file to another and get a valid file with the data continuing.
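Since a LabVIEW diagram can't be shown inline here, below is a rough Python stand-in for the file-rotation scheme described above. The folder layout, "File N.tdms" naming, and five-file limit come from the post; the function names and the glob-based indexing are my own illustration. The byte-level merge relies on the post's claim that appending one complete TDMS file to another yields a valid file.

```python
from pathlib import Path
import shutil

def start_new_buffer_file(buffer_dir: Path, max_files: int = 5) -> Path:
    """Return the path of the next numbered buffer file, deleting the
    oldest file first once max_files already exist."""
    buffer_dir.mkdir(parents=True, exist_ok=True)
    existing = sorted(buffer_dir.glob("File *.tdms"),
                      key=lambda p: int(p.stem.split()[-1]))
    if len(existing) >= max_files:
        existing[0].unlink()          # drop the oldest buffer file
        existing = existing[1:]
    next_index = int(existing[-1].stem.split()[-1]) + 1 if existing else 1
    return buffer_dir / f"File {next_index}.tdms"

def merge_buffers(buffer_dir: Path, dest: Path) -> None:
    """Concatenate the buffer files, oldest first, into a single file.
    Per the post, appending one complete TDMS file to another yields
    a valid TDMS file with the data continuing."""
    files = sorted(buffer_dir.glob("File *.tdms"),
                   key=lambda p: int(p.stem.split()[-1]))
    with open(dest, "wb") as out:
        for f in files:
            with open(f, "rb") as src:
                shutil.copyfileobj(src, out)
```

In LabVIEW this would correspond to simple File I/O palette operations (List Folder, Delete, Copy/Append), with the one-minute rollover driven by the logging loop's timer.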
I do believe there is a better way to do this through the Advanced TDMS functions, but as much as I know about TDMS, that part of the API is something I've done little with. I'm fairly sure you could write a set of subVIs that creates a buffer, keeps track of the write location, and shifts the data after the test is done. I don't know how to do that, but I do know how to do the file manipulation I described above.
17 Part Blog on Automotive CAN bus. - Hooovahh - LabVIEW Overlord
07-22-2019 04:50 PM
Hi @Cy_Rybicki,
Thanks so much for the quick response. I really do appreciate it.
07-22-2019 04:57 PM
Hi @Hooovahh,
Thank you for your response. Do you by chance have an example of your code that I could perhaps look at and modify for my needs?
For my particular project, we are replicating the procedure from the white paper (http://www.ni.com/tutorial/3330/en/): extracting a specific number of samples before and after an event from the two buffer files, which are written to alternately. We don't necessarily want to merge files, just pull out some data and overwrite the buffers, in particular a specific number of samples per channel from each file, depending on when the event occurred relative to the size of the file.
Thanks again!
07-23-2019 08:23 AM
The behavior you described (buffer file continually growing in size) makes sense if you are not changing the Set Next Write Position property after reading out the data in the buffer: without resetting it, new data is appended to the end of the file rather than overwriting the previously written data. If you do not want to use the advanced TDMS functions, then the other option is to close the reference to the buffer file and then overwrite or delete the file before re-opening it. This is the process Hooovahh described, just with more than two buffer files, and is probably the simplest approach. In my opinion, the white paper you are following is overly complicated for what you are trying to achieve, based on my understanding.
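The two options in this reply (resetting the write position vs. closing and recreating the file) can be sketched with plain binary files as a stand-in, since the TDMS-specific property node only exists in LabVIEW. Both function names here are hypothetical illustrations, not any real TDMS API:

```python
def overwrite_in_place(path: str, data: bytes) -> None:
    """Analogue of resetting the next-write position: rewind the write
    pointer to the start, overwrite, and truncate any leftover bytes."""
    with open(path, "r+b") as f:
        f.seek(0)        # move the write pointer back to the beginning
        f.write(data)
        f.truncate()     # discard old data past the new end

def recreate(path: str, data: bytes) -> None:
    """Analogue of closing the reference and recreating the file:
    mode 'wb' truncates the file on open, so the buffer never grows."""
    with open(path, "wb") as f:
        f.write(data)
```

The truncate step is the part the growing-buffer symptom suggests is missing: writing from the start without discarding the tail leaves the file at its old, larger size.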
Another point I would make is to be sure you actually need to use hard disk buffering. In your response you mentioned saving 5000 samples/channel as an example, but if that is the sample count you need and there are not an absurd number of channels then this approach may be overkill. Unless you are limited by RAM or are concerned about losing the data in the case of an unexpected shutdown, it is much more straightforward to keep an array buffer of your data in a shift register and then write that out to disk once your event occurs. Hard to say for sure without knowing the details of your application but something to consider.
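The in-RAM alternative suggested above (an array buffer in a shift register, written out only when the event fires) maps naturally onto a fixed-length deque in Python. This is a conceptual sketch, not LabVIEW code; the 5000-sample figure is the count cited in the reply, and the function names are my own:

```python
from collections import deque

# Fixed-size RAM history: a deque with maxlen silently discards the
# oldest sample once full, like a shift-register history buffer.
PRE_EVENT_SAMPLES = 5000          # sample count cited in the reply

history = deque(maxlen=PRE_EVENT_SAMPLES)

def on_sample(value):
    history.append(value)         # oldest sample drops out once full

def on_event():
    """Snapshot the most recent samples, ready to write to disk."""
    return list(history)
```

As the reply notes, this only works if the pre-event window fits comfortably in RAM and losing it on an unexpected shutdown is acceptable; otherwise the disk-backed buffer is the safer choice.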
08-21-2019 03:54 PM
Hi Cy_Rybicki,
Thanks so much for the response and I apologize for the lateness in mine.
As for the purpose of the hard-disk data acquisition circular buffer (HD-DAQCB), we use it so that we can, at some point in the future, increase the number of channels (currently 8) and/or the number of samples logged per write to the buffer files (currently 5556). We are also quite "concerned about losing data in case of an unexpected shutdown", as you mentioned in your comment.
I have gone ahead and implemented a version of your approach that uses simulated signals for logging (see attached zip file). I have also incorporated the advanced TDMS functions so that the write position is reset each time the buffers are toggled and the next one is written to again. The buffer file sizes now remain constant.
However, a new problem has arisen: I can no longer archive the pertinent information from the current read buffer file into the newly created archive file. The only causes I can think of are a race condition between the loops or a read-position issue in the second loop's VIs. I am hoping to get this resolved soon, but any help is much appreciated.
Thanks again for all the help.
08-21-2019 04:24 PM
Just throwing this out there. (Probably harder to implement than I am realizing.)
mcduff
08-22-2019 08:33 AM
08-22-2019 11:19 AM
Hi Cy_Rybicki,
Thank you so much for looking into it and finding the solution. I realized the error last night while fiddling with it. Thank you also for the "wait" recommendation; I will go ahead and implement that ASAP.
It is still reporting false information, either in the number of events or in whether an actual event occurred, but that is something I will have to figure out. I really appreciate all your help on this. Thanks again.