TDMS file in Circular Buffer

Hi!

 

I am fairly new to using TDMS files with queues and producer/consumer parallel loop structures, and I was hoping to get some help solving my problem.

 

I have a circular buffer that I modified from the version posted with the white paper (http://www.ni.com/tutorial/3330/en/), in which data is read from and written to binary files directly. The point of the buffer is to let the user save a specified amount of data from before and after an event window.

 

I have a data acquisition scheme with multiple channels, acquisition rates, and tasks, and as I understand it, logging this data to TDMS files is better not only for the end user but for other reasons as well.

 

So far, I have modified the circular buffer to write TDMS files directly instead of raw binary, along with some other changes to suit the scheme. However, I am running into the problem that the buffer files (buffer1/2.tdms) keep growing in size, even though I would like them to stay a fixed size, i.e. to hold only a specific number of samples per channel on each iteration and be overwritten.

 

So, my problem is that I can't find an elegant way to delete or overwrite the data within the buffer files. If possible, I would prefer to avoid the Advanced TDMS functions.

       -  Do I need to close and re-open the files within each processing loop in the code?

       -  Is there a way to move the write pointer on the TDMS file such that it overwrites the already written data?

       -  Is there a way I can use direct binary for the buffer files and then convert the pertinent data to TDMS files? Is there a high risk of data corruption during that conversion?

 

I think there have been other attempts on this subject or at least similar ones, but I never found a satisfactory solution. Any help is appreciated. Thanks.

Message 1 of 17
  1. You do not need to close and re-open the buffer files; that just adds unnecessary overhead. Just as with the example code you posted, you should open a reference to each file when the application launches and leave it open until shutdown. The exception is the archive file, in which case yes, you should close the reference after writing the data since you will not be using it again.
  2. Use TDMS Set Next Write Position in the Advanced TDMS palette to move the write pointer. Why do you want to stay away from the advanced functions? 
  3. You can certainly keep using binary files for the buffer and then write to TDMS for your archive file. However, you will need to do some manipulation of the data, since it starts off as a double (waveform), is converted to I16, and is then read from the buffer as U8. You will need to perform the inverse of that process to get it back to a double, which I assume is how you want it stored in the TDMS file.
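
To make point 3 concrete in text form, here is a rough sketch of that round trip in Python/NumPy rather than LabVIEW (the scale factor is only a placeholder, not anything taken from your VI):

import numpy as np

# Acquired waveform: double-precision samples, as returned by the DAQ read.
waveform = np.array([0.0, 0.123, -0.456, 1.5], dtype=np.float64)

# Scale to the I16 range before writing to the binary buffer file.
# The +/-10 V full-scale assumption here is just an example.
SCALE = 32767.0 / 10.0
as_i16 = np.round(waveform * SCALE).astype(np.int16)

# The buffer file holds raw bytes, so the I16 data is viewed as U8 when read back.
as_u8 = as_i16.view(np.uint8)

# Later, when archiving to TDMS, reverse the process: U8 -> I16 -> scaled double.
recovered_i16 = as_u8.view(np.int16)
recovered = recovered_i16.astype(np.float64) / SCALE

print(np.allclose(waveform, recovered, atol=1.0 / SCALE))  # True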
Message 2 of 17

Welcome to the world of TDMS.  I'm a big fan of the file format and have presented on it a few times.  The latest I think is here.

 

I don't have an example of it posted online, but one thing I've used TDMS for is a circular buffer. I have an application where I want a 5-minute log of everything that happened up to a specific event, so what I do is log to a new TDMS file in a temporary folder.

 

C:\temp\TDMS Buffer\File 1.tdms

 

After one minute of logging I will close that TDMS file, and create a new one.

 

C:\temp\TDMS Buffer\File 2.tdms

 

I continue this until there are 5 files in that folder.

 

C:\temp\TDMS Buffer\File 1.tdms

C:\temp\TDMS Buffer\File 2.tdms

C:\temp\TDMS Buffer\File 3.tdms

C:\temp\TDMS Buffer\File 4.tdms

C:\temp\TDMS Buffer\File 5.tdms

 

When I create the next file I check whether there are already 5 files in the folder, and if there are, I delete the oldest one.  Now there are files 2 through 6.

 

C:\temp\TDMS Buffer\File 2.tdms

C:\temp\TDMS Buffer\File 3.tdms

C:\temp\TDMS Buffer\File 4.tdms

C:\temp\TDMS Buffer\File 5.tdms

C:\temp\TDMS Buffer\File 6.tdms

 

If an event happens and I want the 5 minutes of data before it, I just finish logging to the current file, merge all the TDMS files in that folder, and then move that single merged file to somewhere the user can access it, like <My Documents>\Test Data\Merged Log.tdms.  TDMS is like a text file in the sense that you can append one file to another and still get a valid file with the data continuing.
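
In text form (Python rather than LabVIEW, with the folder and file names just copied from the example above), the bookkeeping looks roughly like this; whether a plain byte-level concatenation is good enough for your files is worth verifying before relying on it:

import os
import shutil

BUFFER_DIR = r"C:\temp\TDMS Buffer"    # temporary buffer folder from the example
MAX_FILES = 5                          # keep only the newest five one-minute files

def start_new_buffer_file(index):
    # Create the next buffer file path, deleting the oldest file if five already exist.
    existing = sorted(
        (os.path.join(BUFFER_DIR, f) for f in os.listdir(BUFFER_DIR) if f.endswith(".tdms")),
        key=os.path.getmtime,
    )
    if len(existing) >= MAX_FILES:
        os.remove(existing[0])                 # drop the oldest minute of data
    return os.path.join(BUFFER_DIR, "File %d.tdms" % index)

def merge_buffer(dest_path):
    # On an event, append the buffered files (oldest first) into one merged log.
    parts = sorted(
        (os.path.join(BUFFER_DIR, f) for f in os.listdir(BUFFER_DIR) if f.endswith(".tdms")),
        key=os.path.getmtime,
    )
    with open(dest_path, "wb") as dest:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, dest)  # TDMS segments can follow one another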

 

I do believe there is a better way to do this through the Advanced TDMS functions, but for as much as I know about TDMS, that part of the API is something I've done little with.  I'm pretty sure you could write a set of subVIs that creates a buffer, keeps track of the write location, and then shifts the data around after the test is done.  I don't know how to do that, but I do know how to do the file manipulation I mentioned earlier.

Message 3 of 17

Hi @Cy_Rybicki,

 

Thanks so much for the quick response.

  1. In my version I am keeping the buffer files open and closing the archive file references once the pertinent data has been logged; however, the problem of the buffer files growing in size persists, while the archive files each have a set size. I want the buffer files to keep a set size just like the archive files; for example, if I log 5000 samples/Ch per file, I would like the buffer files to stay that size as well.
  2. The reason I am opting to stay away from the Advanced TDMS functions is that in the LabVIEW Core training on file types it was recommended to use only standard TDMS functions unless absolutely necessary, to avoid corrupting data. If you have a different experience or different advice, please let me know 🙂
  3. I don't plan on storing the data as anything but the waveform information itself. I am worried that using direct binary files might be a bit involved, so I was wondering how one would make sure that the information that is easily stored in TDMS can also be stored in a binary file and then converted to TDMS. Is there an example that you might be able to share?

Thanks again for your response. I really do appreciate it.

Message 4 of 17

Hi @Hooovahh,

 

Thank you for your response. Do you by chance have an example of your code that I could perhaps look at and modify for my needs?

 

For my particular project, we are replicating the procedure described in the white paper (http://www.ni.com/tutorial/3330/en/). We take a specific number of samples from before and after an event out of the two buffer files, which are written to one after the other. We don't necessarily want to merge files, just extract some information and then overwrite them; in particular, a specific number of samples/channel from each file depending on when the event occurred relative to the size of the file.

Thanks again!

Message 5 of 17

The behavior you described (the buffer file continually growing in size) makes sense if you are not resetting the next write position after reading the data out of the buffer; otherwise the data is appended to the end of the file rather than overwriting the previously written data. If you do not want to use the advanced TDMS functions, the other option is to close the reference to the buffer file and then overwrite/delete the file before re-opening it. This is the process Hooovahh described, just with more than two buffer files, and it is probably the simplest approach. In my opinion, the white paper you are following is overly complicated for what you are trying to achieve, based on my understanding.
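
As a rough illustration of the first option, here is what "move the write pointer back and overwrite in place" looks like with an ordinary binary file in Python, where seek() plays the role that TDMS Set Next Write Position plays in LabVIEW (the file name, channel count, and block size are placeholders):

import numpy as np

BUFFER_PATH = "buffer1.bin"      # placeholder buffer file name
SAMPLES_PER_BLOCK = 5000         # samples per channel written each iteration
CHANNELS = 8                     # placeholder channel count
BLOCK_BYTES = SAMPLES_PER_BLOCK * CHANNELS * 2   # I16 samples

# Open the buffer once at startup and keep the reference, as discussed above.
buf = open(BUFFER_PATH, "wb+")

def write_block(block):
    # Rewind to the start so each iteration overwrites the previous block;
    # the file never grows beyond BLOCK_BYTES.
    buf.seek(0)
    buf.write(block.astype(np.int16).tobytes())
    buf.flush()

for _ in range(3):
    write_block(np.random.randint(-1000, 1000, size=SAMPLES_PER_BLOCK * CHANNELS))

buf.close()   # close once at shutdown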

 

Another point I would make is to be sure you actually need hard-disk buffering. In your response you mentioned saving 5000 samples/channel as an example; if that is the sample count you need and there is not an absurd number of channels, then this approach may be overkill. Unless you are limited by RAM or are concerned about losing the data in the case of an unexpected shutdown, it is much more straightforward to keep an array buffer of your data in a shift register and then write it out to disk once your event occurs. It's hard to say for sure without knowing the details of your application, but it is something to consider.
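
For a rough sense of scale (the numbers here are only illustrative): 5000 samples/channel of DBL data is 5000 x 8 bytes = 40 KB per channel, so even a few dozen channels of pre-trigger history would occupy well under a few megabytes of RAM, which is why an in-memory buffer is usually sufficient at these sample counts.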

Message 6 of 17

Hi Cy_Rybicki,

 

Thanks so much for the response, and I apologize for the lateness of mine.

 

As for the purpose of the hard-disk data acquisition circular buffer (HD-DAQCB), it is there so that we can at some point in the future increase the number of channels (currently 8) and/or the number of samples logged each time the buffer files are written to (currently 5556). We are also quite "concerned about losing data in case of an unexpected shutdown," as you mentioned in your comment.

 

I have gone ahead and implemented a version that uses simulated signals for the logging so you can run it (see the attached zip file). I have also incorporated the advanced TDMS functions so that the write position can be reset each time the buffers are toggled and the next one is written to again. As of now, the buffer file sizes are remaining constant.

 

However, a new problem has arisen: I can no longer archive the pertinent information from the current read buffer file into the created archive file. It could be caused by any number of things, but the only ones I can think of are a race condition between the loops or a read-condition issue in the second loop's VIs. I am hoping to get this resolved soon, but any help is much appreciated.

 

Thanks again for all the help.

Message 7 of 17

Just throwing this out there. (Probably harder to implement than I am realizing.)

 

  1. Make a fixed-size queue.
  2. Insert data with a lossy enqueue.
  3. Flush the queue when you have your event and save it to disk (a rough text sketch of the idea follows below).
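
In text form the idea looks roughly like this (Python rather than LabVIEW; collections.deque with maxlen behaves like a fixed-size queue with a lossy enqueue, and the block count is just a placeholder):

from collections import deque
import numpy as np

PRETRIGGER_BLOCKS = 60                      # e.g. 60 one-second blocks of history
history = deque(maxlen=PRETRIGGER_BLOCKS)   # oldest block is dropped automatically

def on_new_block(block):
    # Producer side: enqueue every acquired block; lossy once the queue is full.
    history.append(block)

def on_event(path):
    # Consumer side: flush the buffered history to disk when the event fires.
    data = np.concatenate(list(history)) if history else np.empty(0)
    np.save(path, data)                     # stand-in for writing to a TDMS file
    history.clear()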

mcduff

Message 8 of 17
I ran your example, and the problem looks to be that you are not setting the AT and BC/Klyst Samples/Cycle/Channel values in your DAQCB cluster. The OffsetSampleCyclesLeft VI uses these values to calculate the number of samples to read from your buffer, and since they were zero you were not actually reading any data from your buffers. Setting these two parameters to nonzero values results in data being written to your archive file. Also, I would recommend putting a wait in your Acquisition & Buffer Management Loop; otherwise your VI hits the consecutive-events limit immediately.
Message 9 of 17

Hi Cy_Rybicki,

 

Thank you so much for looking into it and for finding the solution; I realized the error last night while fiddling with it. Thank you also for the suggestion about adding a wait; I will implement that ASAP.

 

It is still reporting false information about either the number of events or whether an actual event has occurred, but that is something I will have to figure out. I really appreciate all your help on this. Thanks again.

Message 10 of 17