
Why does "Reading a fixed number of Samples" help with error -200279?


Hi Everybody,

 

This is a question about a tip found in the explanation of DAQ Error -200279. That explanation says: "...reading a fixed number of samples instead of all available samples might fix this problem..."

As far as I know, the buffer of the DAQ system is a ring buffer, so it can be considered to have a writing position and a reading position "moving" along the ring. As long as the reading position keeps up with the writing position, everything is shiny. But if the writing catches up with the reading, samples will be overwritten and thus -> error -200279.
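
To make my mental model concrete, here is a minimal plain-Python sketch of that ring buffer (the buffer size and sample counts are made up):

```python
# Toy model of the DAQ ring buffer: 'write_pos' advances as the hardware
# acquires samples, 'read_pos' advances as the application reads them.
# If the writer gets more than one full buffer ahead of the reader,
# unread samples are overwritten -- the situation behind error -200279.

BUFFER_SIZE = 1000   # made-up buffer size in samples


class RingBuffer:
    def __init__(self, size):
        self.size = size
        self.write_pos = 0   # total samples written by the hardware
        self.read_pos = 0    # total samples consumed by the application

    def write(self, n):
        self.write_pos += n
        if self.write_pos - self.read_pos > self.size:
            raise RuntimeError("unread samples overwritten (error -200279)")

    def available(self):
        return self.write_pos - self.read_pos

    def read(self, n):
        taken = min(n, self.available())
        self.read_pos += taken
        return taken


buf = RingBuffer(BUFFER_SIZE)

# Reading "all available" keeps up as long as less than one full buffer
# arrives between two reads ...
for _ in range(50):
    buf.write(100)
    buf.read(buf.available())

# ... and only fails if a single gap between reads fills the whole buffer.
try:
    buf.write(BUFFER_SIZE + 1)
except RuntimeError as exc:
    print(exc)
```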

 

So here is my question:

I do not understand why reading a fixed number of samples would help with this error. I do understand why the other tips (increasing the buffer size or the reading frequency) would help, as they reduce the likelihood of the reading position being outrun by the writing position.

But if I read everything that is available every time I read, the only way for the buffer to fill up should be that the time between two reads takes longer than it takes to fill the complete buffer.

Or, in other words: with each read I will read at least as many elements as have been written since the last read, so the only way to fill the buffer would be to fill it completely at once between two reads.

 

That should be much safer than reading a fixed amount, because in that case I would get a problem whenever I read fewer elements than are written in the same time, regardless of how many elements that is and regardless of the buffer size.

 

What do I not understand here?

 

Thank you and regards,

Sebastian

Message 1 of 9

My guess: Memory Allocation.

 

By reading a fixed number of samples, the same memory buffer can be used each time the DAQmx Read is called. When you read all available samples, the array size will be different each time, so memory has to be allocated again. Allocating memory takes time. So by using a fixed number of samples, you keep the memory allocation to a minimum, which increases your loop rate, which in turn decreases the chances of getting the buffer overflow error.

 

Simply put, having a fixed number of samples also helps with the "read more often" advice. Of course, this is assuming there is nothing else in your loop slowing you down.
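
As a rough illustration of that allocation cost (nothing DAQmx-specific here, the sizes are arbitrary), compare allocating a fresh, differently-sized array on every loop iteration with reusing one pre-allocated buffer:

```python
import time

import numpy as np

ITERATIONS = 2000
SAMPLES = 10_000          # arbitrary "samples per read"

# Variable-size reads: every iteration produces an array with a different
# length, so a new block of memory is allocated (and later freed) each pass.
start = time.perf_counter()
for i in range(ITERATIONS):
    chunk = np.zeros(SAMPLES + (i % 100))   # size changes -> fresh allocation
variable_time = time.perf_counter() - start

# Fixed-size reads: one buffer allocated up front and reused every iteration.
buffer = np.zeros(SAMPLES)
start = time.perf_counter()
for _ in range(ITERATIONS):
    buffer.fill(0.0)                        # same memory reused each pass
fixed_time = time.perf_counter() - start

print(f"fresh allocation each pass: {variable_time:.4f} s")
print(f"reused fixed-size buffer:   {fixed_time:.4f} s")
```

The same idea applies in LabVIEW: a constant read size lets the output array of DAQmx Read be reused instead of requesting a new buffer from the memory manager on every iteration.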


Message 2 of 9

Hi Sebastian,

you have the explanation here: http://digital.ni.com/public.nsf/allkb/B2AFF1A3F3E1675C86257DFF0052B47B#numSamples

 

Apart from this, it is generally much more efficient to work with arrays that have fixed sizes. In an application dealing with big sets of data, constant reallocations can cause serious delays.

 

Message 3 of 9

My guess: Memory Allocation

 

You were faster 🙂

Message 4 of 9

 

 

Hi Again,

 

Thanks for your answers so far. The memory allocation could be the problem. I did not think of that.

 

My loop does not contain anything besides the DAQmx Read VI. The data is sent away via a queue and handled elsewhere.

Unfortunately I am not sure whether I can switch to fixed amounts of data without bigger changes to my code, because of the way the data is handled later on.

 

I am thinking about doing it this way now:

http://digital.ni.com/public.nsf/allkb/AB7D4CA85967804586257380006F0E62#numSamples

This page is almost the same as the one suggested by stockson, but with slightly different content.

 

I am not sure why NI suggests this, because at first sight it looks the same as wiring "-1" to the Read VI.

The only thing I can think of is that wiring "-1" leads to multiple allocations during one read, while a variable yet known number of elements leads to only one allocation.
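
Just to write the KB approach down in text form, here is a rough sketch using the nidaqmx Python package instead of LabVIEW (the device name "Dev1", the rate, the loop count and the buffer size are placeholders): query how many samples are currently available, then request exactly that many, so the number of samples is fixed before the read starts.

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(
        rate=10_000,                          # placeholder sample rate
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=100_000,               # requested buffer size
    )
    task.start()

    for _ in range(100):
        # How far the writer is ahead of the reader right now.
        available = task.in_stream.avail_samp_per_chan
        if available:
            # Read exactly the backlog instead of wiring -1 ("all available"):
            # the sample count is known before the read call is made.
            data = task.read(number_of_samples_per_channel=available)
            # ... enqueue 'data' for the consumer loop here ...
```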

 

Sebastian

 

Message 5 of 9

SBach wrote: Unfortunately I am not sure whether I can switch to fixed amounts of data without bigger changes to my code, because of the way the data is handled later on.

I would really like to see that code to understand why it would matter. If your code already handles variable-sized arrays, why would it care about fixed-size arrays? And the fixed sizes will actually help the rest of your code with memory allocations as well.


Message 6 of 9

Hi Crossrulz,

Unfortunately I am not allowed to show you the code because it contains proprietary parts, but you are right: I took a look at the downstream code and it should be able to handle fixed-size chunks of data.

 

There is just one more thing that might be a problem then. The DAQ part is encapsulated in its own subVI. This VI is a queued state machine; the reading is done in the timeout case, and the other cases are used as commands to init, shut down, and modify the DAQ from outside.

As I cannot tell when such commands will be triggered, the time between two reads might not be fixed, so a strictly fixed amount of data might lead to a filling buffer.

 

I think I'll just have to check the amount of data ready to read. If it is bigger than 1.5 times the amount that accumulates during a normal timeout, I read that bigger amount of data; otherwise I read 1.5 times the amount of one timeout. That way I get a fixed size most of the time.

Or I read 2 times the normal amount every time, so any backlog that builds up during the command handling is reduced with each read until the normal fill level is reached again.
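
In pseudo-Python, the two read-size strategies I have in mind would look roughly like this (NOMINAL is a placeholder for the number of samples that accumulate during one normal timeout):

```python
NOMINAL = 1000   # placeholder: samples expected during one normal timeout


def read_size_check_backlog(available, nominal=NOMINAL):
    """Strategy 1: read 1.5x the nominal amount, unless the backlog is
    already larger than that -- then read the whole backlog at once."""
    target = int(1.5 * nominal)
    return available if available > target else target


def read_size_always_oversized(nominal=NOMINAL):
    """Strategy 2: always read 2x the nominal amount, so any backlog that
    builds up during command handling shrinks a little with every read."""
    return 2 * nominal


# Example: after a long command sequence, 3500 samples have piled up.
print(read_size_check_backlog(3500))    # -> 3500 (drain the backlog in one go)
print(read_size_check_backlog(800))     # -> 1500 (the usual fixed size)
print(read_size_always_oversized())     # -> 2000 every time
```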

 

Regards,

Sebastian

Message 7 of 9
Solution accepted by topic author SBach

SBach wrote: This VI is a queued state machine; the reading is done in the timeout case, and the other cases are used as commands to init, shut down, and modify the DAQ from outside.

As I cannot tell when such commands will be triggered, the time between two reads might not be fixed, so a strictly fixed amount of data might lead to a filling buffer.


Init command: You definitely do not have a running task.

Shutdown: You are stopping the task, so who cares if you get an overrun there.

Modify task: You will likely need to stop the task anyway before you can alter it, so back to not caring.

 

The way I see it, the other commands need to stop the task anyway. So you can just let the timeout do its job and not do a bunch of extra stuff.

 

And here are a couple of alternatives:

1. Use a shift register to keep track of how long your timeout should be. In the read case, make the timeout whatever you are using now. In all of the other cases, make the timeout 0 so that a read is performed immediately (once the queue is empty).

2. When you do your check for 1.5x the timeout's worth of data and there is too much of it, enqueue a read at the front of the queue so that the read will happen next. The idea is that you really want to keep the array sizes the same to keep memory from being allocated, so just do reads more often.
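
Both ideas slot into the normal QSM structure. For what it's worth, here is a rough plain-Python sketch of that control flow (daq_read, available_samples, the command names, and all numbers are placeholders standing in for the LabVIEW pieces):

```python
import collections
import time

NOMINAL = 1000                    # placeholder: samples expected per timeout
READ_TIMEOUT = 0.05               # placeholder: normal dequeue timeout in s
commands = collections.deque()    # the state machine's command queue


def daq_read(n):
    """Placeholder for the fixed-size DAQmx Read."""
    return [0.0] * n


def available_samples():
    """Placeholder for the 'available samples per channel' query."""
    return 0


def run_state_machine(iterations=20):
    timeout = READ_TIMEOUT
    for _ in range(iterations):
        if commands:                       # a command was "dequeued"
            command = commands.popleft()
            if command == "read":          # extra read pushed to the front
                daq_read(NOMINAL)
            elif command == "shutdown":
                break                      # task stops, overrun is irrelevant
            else:                          # init / modify / ...
                timeout = 0.0              # idea 1: read again right away
        else:                              # "dequeue timed out" -> read case
            time.sleep(timeout)            # stand-in for the queue timeout
            daq_read(NOMINAL)              # same fixed size on every read
            timeout = READ_TIMEOUT         # restore the normal timeout
            if available_samples() > 1.5 * NOMINAL:
                commands.appendleft("read")  # idea 2: read next, ahead of
                                             # any waiting commands


run_state_machine()
```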


Message 8 of 9

Hi Crossrulz,

 

Thank you for your ideas.

 

I implemented both of them (the adaptation of the timeout and the re-queuing of the read in case of too many elements in the input buffer).

So far everything works fine, so this topic is solved.

 

Thanks again.

 

Regards,

Sebastian

Message 9 of 9