07-23-2013 10:42 AM
Hello everyone!
I am using the RT module for neurofeedback: Analyzing brain activity and then giving the user auditory feedback about it.
My RT target is a desktop pc and I use the typical deterministic + nondeterministic loop structure.
What I want is to continuously output a pure tone. The amplitude of the tone is calculated in real time and depends on some feature of the user's brain activity. It should vary smoothly to avoid click-like artifacts.
Since there are no sound VIs in the RT module, I use the DAQmx write VI (analog output) with regeneration enabled. The sample mode is continuous.
Before the first iteration, I generate a sine wave of an appropriate length (some hundred samples) and amplitude 1. Then, on each iteration, I compute the new value of the amplitude and multiply the sine wave by a linear ramp so that I get a smooth amplitude modulation. In the same iteration I write this new piece of data with the DAQmx Write VI.
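The idea, sketched in Python/NumPy for clarity (the frequency, sample rate and chunk length below are illustrative assumptions, not values from the original VI):

```python
import numpy as np

def tone_chunk(freq_hz, fs, n_samples, amp_start, amp_end, phase=0.0):
    """One buffer's worth of a pure tone whose amplitude ramps linearly
    from amp_start to amp_end, so consecutive chunks join without clicks.
    Returns the chunk and the phase to pass to the next call."""
    t = np.arange(n_samples) / fs
    ramp = np.linspace(amp_start, amp_end, n_samples)  # smooth amplitude change
    chunk = ramp * np.sin(2 * np.pi * freq_hz * t + phase)
    # carry the phase forward so the next chunk continues the sine seamlessly
    next_phase = (phase + 2 * np.pi * freq_hz * n_samples / fs) % (2 * np.pi)
    return chunk, next_phase
```

Each loop iteration would compute the new target amplitude, call this with `amp_start` set to the previous target, and hand the resulting chunk to DAQmx Write.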
With this approach my timed loop gets delayed periodically, even though the CPU load stays below 20%. It seems that continuously rewriting the device's buffer is the problem.
Is there a cleverer way of doing what I want? For instance: could I write the pure tone only once at the beginning and then control the amplitude by setting a gain? I have seen that a gain property exists ( http://zone.ni.com/reference/en-XX/help/370469AA-01/daqmxprop/attr118/ ) but I can't find it. I am using LV 9.
I will be very grateful for any ideas!
Thank you!
Solved!
07-24-2013 03:15 AM
ok, the short version of the above:
I have an AO task in continuous sample mode. Regeneration is enabled. In a timed loop of period ~10 ms I use the DAQmx write VI to update the buffer on each iteration. This doesn't produce noticeable CPU loads, yet the timed loop is periodically delayed.
Is there a better/faster/alternative way to update the AO buffer?
Hopefully someone has an idea... Thanks!
07-24-2013 03:48 AM
... and a simplified VI reproducing the problem:
After about 5s the loop shows its first delay.
07-30-2013 09:36 AM
Hi
I think the problem is the buffer on the DAQ device.
From LV - Help:
"
For generations, the amount of data you write before starting a generation determines the size of the buffer. The first call to a Multiple Samples version of the Write function/VI creates a buffer and determines its size.
You also can use the Output Buffer Config function/VI to create an output buffer. If you use this function/VI, you must use it before writing any data.
The samples per channel attribute/property on the Timing function/VI does not determine the buffer size for output. Instead it is the total number of samples to generate. If n is your buffer size, setting samples per channel to 3×n generates the data in the buffer exactly three times. To generate the data exactly once, set samples per channel to n.
NI-DAQmx does not create a buffer when the sample mode on the Timing function/VI is set to Hardware Timed Single Point.
"
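The n vs. 3×n relationship from the quoted help, as a quick sanity check (the numbers are illustrative):

```python
def generation_count(samples_per_channel, buffer_size):
    """How many times the buffer contents are generated when the
    samples-per-channel setting is a multiple of the buffer size
    (per the DAQmx help quoted above)."""
    assert samples_per_channel % buffer_size == 0
    return samples_per_channel // buffer_size
```

So with a 300-sample buffer, requesting 900 samples per channel plays the buffer exactly three times; requesting 300 plays it once.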
Regarding your example: I would connect the timing source from the DAQ device to the timed loop, so that both the DAQ device and the timed loop share the same source.
I also allocated the buffer manually. In your first example the buffer was only 300 samples; I made it six times larger.
07-30-2013 12:35 PM - edited 07-30-2013 12:38 PM
Hey Rennhofer, thanks for the helpful suggestions!
My guess was that a small buffer should suffice, since I would never write more than those 300 points. Could you explain to me what is the advantage of a bigger buffer?
I get a problem opening your vi, it tells me it is version 13.0 (didn't know it was out already!) Could you upload it again in version 12? Thanks!
07-30-2013 01:02 PM
Hi sls,
The advantage of a larger buffer size is a lower frequency of buffer updates required for the same time interval. (Of course, the downside is a reduction in the granularity of control over the waveform.)
If you are experiencing performance problems, a larger buffer size is a good place to start (assuming your application can tolerate a slower change of the waveform output). While processor speed is a good indicator of system throughput, it isn't always the only consideration. Loading the AO buffer repeatedly may not max out your average CPU throughput, but there may be a burst throughput requirement that you are not meeting with your current configuration.
If I were you, I would start by bumping up the buffer size and seeing if that helps. If so, then you can inch it back down to find the best throughput/performance compromise.
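To make the tradeoff concrete (the 30 kS/s rate below is an assumed figure, chosen so that 300 samples correspond to the ~10 ms loop period mentioned earlier):

```python
def buffer_play_time_s(buffer_size, sample_rate_hz):
    """Time the device needs to play one buffer: the longest you can go
    between refills (with regeneration off), and also roughly the
    worst-case delay before a newly written amplitude is heard."""
    return buffer_size / sample_rate_hz
```

At 30 kS/s a 300-sample buffer gives only 10 ms of slack per write, while a 6× buffer (1800 samples) gives 60 ms, at the cost of coarser control over when amplitude changes reach the output.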
BTW, I once worked on a high-speed robot & vision application installed on a custom-built quad-core Xeon system running RT. Although the 4 processors were running at less than 40% most of the time, we were never able to meet the throughput requirements. After upgrading to 8 cores, processor load dropped quite dramatically. Peak load was occasionally around 10% on some processors, but generally less than 5% most of the time. After the upgrade, we were able to meet the throughput requirements, and even had enough overhead to "beef up" the analysis algorithms. The moral of the story: processor load is an "averaged" indication of what is going on under the hood. If your system cannot keep up with the peak loads required by your application, you may experience difficulties. (At least, that was my take-away from the experience...)
Anyway, best of luck sorting out this issue.
--Dave
08-09-2013 12:07 PM - edited 08-09-2013 12:09 PM
@Seeker: Nice to know that I am not the only one experiencing strange behaviours!
Thanks both of you for the helpful comments!
Expanding the buffer has indeed solved the problem. It works when using a buffer 6-10 times larger than the signal size and setting the "Do not allow regeneration" property. Allowing regeneration slowly consumes the free space. Yeah! Now the available buffer space oscillates at high values - thanks for the slide, Rennhofer.
... But I don't get why it works. The update frequency is the same as before, so I would expect the same load due to writing. I guess the write VI does something extra when the buffer is getting full?
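One way to see why the extra headroom matters is a toy occupancy model (all numbers below are illustrative, assuming ~300 samples are consumed per loop period):

```python
def min_headroom(buffer_size, chunk, drain_per_period, n_periods,
                 delayed_period=-1, extra_drain=0):
    """Toy model of the AO buffer with regeneration disabled.
    The device drains drain_per_period samples each loop period, then the
    loop writes `chunk` samples (capped at buffer_size). One period may
    drain extra_drain additional samples to model a stalled write.
    Returns the minimum occupancy seen, or -1 on underflow."""
    occupancy = buffer_size            # buffer primed full before start
    lowest = occupancy
    for i in range(n_periods):
        occupancy -= drain_per_period + (extra_drain if i == delayed_period else 0)
        if occupancy < 0:
            return -1                  # underflow: glitch / delayed loop
        lowest = min(lowest, occupancy)
        occupancy = min(buffer_size, occupancy + chunk)  # the write tops it up
    return lowest
```

With a signal-sized buffer, a single stalled period underflows immediately; with a 6× buffer the same stall only eats into the headroom and the output never starves.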
I have another question! Later in my original code I need to stop the signal generation and restart the task after a while. If I try that, I get an error -200290 (generation stopped) when I try to write to the buffer again. It doesn't happen if I do allow regeneration - but then my first problem would be back. Any suggestions on how to deal with this?
(I can think of two workarounds that I would prefer to avoid: Outputting an array of zeros while the program pauses or using a huge buffer size with regeneration on.)
08-10-2013 02:13 PM
Hi sls
I'm back in the office on Monday, so I will take care of your question and VI then. If I find a solution I will write back.
Sorry about my first VI - it was uploaded in LV 2013. I'm an AE at NI, so I have a prerelease of LabVIEW.
Kind regards
Bernhard
08-10-2013 05:57 PM
Hi sls_
Your comment:
"I guess that the write VI does something extra when the buffer is getting full?"
My guess is that multiple calls are being made to the memory manager, requesting more memory. As I understand it, LV asks the memory manager to allocate a chunk that is "expected to be adequate". If it needs more, it has to go back again.
If you have allocated a buffer 6-10 times larger than the signal size, and you aren't regenerating, I'm guessing you are avoiding subsequent memory manager calls (which can be slow). When regenerating, it appears the memory manager may be called repeatedly.
This would explain what you are seeing -- although only the LabVIEW development team knows for sure...
Best of luck!
-- Dave
08-12-2013 02:55 AM
Hi,
I agree with Seeker's arguments, but to be sure we would have to check with the NI R&D team.
Regarding error -200290: you could use regeneration mode with the following settings:
With these settings the data in the onboard memory is always kept filled, and you still get enough speed because the transfer goes directly over DMA. But be careful: you only have 8 DMA channels on a PC system.
kind regards
Bernhard