LabVIEW

trigger timed loop for analog output

I'm stumped, so I turn to you for help & inspiration.  I'm trying to develop a LabVIEW program (using v8.0) with a USB-6251 to accomplish the following biomedical application:
  (1) continuous analog input of three channels (e.g., ECG, pressures) at 2 kHz (nominal) with pseudo-realtime charting of these (possibly decimated) data streams (chart update frequency could be as low as 20-40 Hz);
  (2) availability of a 0 --> 12V trigger signal (duration @ 12V = 150 ms) indicating the timing of each heartbeat (interbeat intervals are typically between 500-1000 ms);
  (3) upon user initiation (via a 'start' button on the front panel), the VI generates a finite AO pulse train meeting the following specific requirements: (3a) the timing of the first AO pulse of the train is to occur X ms (+/- 1 ms) after the next detected heartbeat; (3b) the timing between AO pulses in the train must be Y ms (+/- 1 ms); (3c) after every Nth AO pulse (nominally, 4th or 5th), the AO pulse amplitude is to be decreased by a specified amount; (3d) the AO pulse train ends either when the pulse amplitude reaches zero or when the user presses a 'stop' button on the front panel; and
  (4) to add to the complexity, during the entire AO train sequence, all acquired AI data needs to be saved to disk.
 
My original approach to accomplishing these tasks has been to:
  (1) use a timed loop for the continuous AI and charting, where the loop period would be the desired chart update period;
  (2) acquire the heartbeat trigger signal via continuous DI and use a 'detect change' event to capture each heartbeat event;
  (3) use 'button down' events to detect the user pressing the 'start' and/or 'stop' buttons;
  (4) use the AO waveform buffer on the 6251 to store a single AO pulse;
  (5) use a timed loop to generate the train of AO pulses with period Y ms;
  (6) within the AO timed loop, modify/rewrite the AO waveform buffer data after every Nth pulse to decrease the pulse amplitude for subsequent pulses.
 
So, first of all, does this approach make any sense?  Or perhaps someone could suggest a better/simpler method?  With respect to the approach outlined above, I'm specifically struggling with how to trigger the start of the AO timed loop so that the first AO pulse occurs exactly X ms after the next detected heartbeat.  Any suggestions?  How can I trigger the start of a timed loop...or alternatively(?) dynamically reset the phase of a continuously running timed loop to match the desired X ms offset?
 
Thanks for any help anyone can provide!
Message 1 of 4
mfishler,

Hopefully I can provide some insight into how you could proceed.  Your approach is very close to how I would do it, with a few important exceptions.  I'll go through the six implementation points you listed:

1)  This is exactly how I would recommend doing the acquisition.  If the AO is running in a separate loop, you can use a local variable to tell the AI loop when to save data to file.  Using the loop period to control the chart update rate is fine because charting is a purely software event.  (A rough sketch of this AI loop is given after point 6 below.)

2)  If you connect the trigger to a PFI line (many of which are also DI lines), you can use it to trigger the hardware to start the AO.  You may want to add some signal conditioning for the 12V trigger, because 12V is well above the recommended maximum voltage for the DI lines; it should not damage the board, but it is not recommended.  The PFI line can then start the AO directly, with no additional software interaction, by configuring a digital-edge start trigger for the AO task with the DAQmx Trigger VI.  The AO task will begin as soon as the trigger edge is detected: when the user presses the button you start (arm) the task, and the next heartbeat pulse triggers the analog output.  (See the second sketch after point 6 below.)

3) Using an event structure to capture user interaction is a good way to approach this.

4) I would actually store the whole series of pulses, from the largest amplitude to the smallest, in the buffer at the beginning (as in the second sketch below).  This prevents you from having to update the buffer during output.  However, if that would require too many samples, you may want to split the series of pulses into several pieces to be written to the buffer.

5) Using the loop to control the time between pulses can be problematic because that timing is a physical event being controlled by software.  Windows gives the loop processing resources when it can, but because Windows is a non-deterministic environment those resources may not arrive in a timely manner (e.g., loop iteration time can skyrocket while you are moving windows around on the screen).  Using the onboard sample clock to control the time between pulses is much more reliable, and the sample clock can also provide the delay before the pulse train begins.  To illustrate, suppose the sample clock rate is 1 kHz and you want a train that is delayed by 3 ms, outputs a 1 ms pulse of 1 V amplitude, waits 5 ms, and then outputs another 1 ms pulse of 1 V.  You would place the following samples in the buffer: 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0.  The exact pulse train you are looking for is then output on the channel.

6) As described in point 4, I would store the entire series of pulses in the buffer before starting the output.  This removes the need to use loop timing to determine when the AO buffer should be output.
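
To make point 1 concrete, here is a minimal sketch of the continuous-AI loop.  It uses the nidaqmx Python API purely because text is easier to post than a block diagram; in LabVIEW 8.0 the same sequence is DAQmx Create Virtual Channel -> DAQmx Timing -> DAQmx Start Task, with DAQmx Read inside your loop.  The device name, channel list, decimation factor and file handling are placeholders, not values from your post.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    SAMPLE_RATE = 2000                      # 2 kHz per channel
    CHART_PERIOD_S = 0.05                   # ~20 Hz chart update
    SAMPLES_PER_READ = int(SAMPLE_RATE * CHART_PERIOD_S)

    with nidaqmx.Task() as ai:
        ai.ai_channels.add_ai_voltage_chan("Dev1/ai0:2")   # ECG + two pressures
        ai.timing.cfg_samp_clk_timing(SAMPLE_RATE,
                                      sample_mode=AcquisitionType.CONTINUOUS)
        ai.start()
        saving = False                      # set True while the AO train is running
        while True:                         # replace with your real stop condition
            block = ai.read(number_of_samples_per_channel=SAMPLES_PER_READ)
            chart_data = [ch[::10] for ch in block]        # decimate for display
            if saving:
                pass                        # append the full-rate block to the data file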
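
And to make points 2, 4 and 5 concrete, here is a rough sketch of the AO side, again in the nidaqmx Python API.  The pulse parameters (X, Y, N, amplitudes), the device name and the PFI line are placeholders; in LabVIEW you would use DAQmx Create Virtual Channel, DAQmx Timing, DAQmx Trigger, DAQmx Write and DAQmx Start Task.

    import numpy as np
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, Edge

    RATE = 1000                          # AO sample clock in Hz: 1 sample = 1 ms
    X_MS, Y_MS, PULSE_MS = 20, 800, 1    # placeholder values for X, Y and pulse width
    START_AMP, STEP, EVERY_N = 5.0, 1.0, 4

    # Point 4: build the entire train, largest amplitude first, as one array.
    # Point 5: at 1 kHz, a delay of d ms is simply d zero-valued samples.
    samples = [0.0] * X_MS               # X ms of zeros between the trigger and pulse 1
    amp, n_pulses = START_AMP, 0
    while amp > 0:
        samples += [amp] * PULSE_MS + [0.0] * (Y_MS - PULSE_MS)
        n_pulses += 1
        if n_pulses % EVERY_N == 0:
            amp -= STEP                  # drop the amplitude after every Nth pulse
    waveform = np.asarray(samples)

    with nidaqmx.Task() as ao:
        ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        ao.timing.cfg_samp_clk_timing(RATE,
                                      sample_mode=AcquisitionType.FINITE,
                                      samps_per_chan=len(waveform))
        # Point 2: start the finite output on the rising edge of the (conditioned)
        # heartbeat trigger wired to a PFI line.
        ao.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0",
                                                          trigger_edge=Edge.RISING)
        ao.write(waveform, auto_start=False)
        ao.start()                       # armed: output begins on the next heartbeat
        ao.wait_until_done(timeout=60)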

Hopefully my suggestions are helpful in understanding how you can use the features of the USB-6251 to create a more robust application.  If you have any further questions please let me know.

Regards,

Neil S.
Applications Engineer
National Instruments
Message 2 of 4

Hi Neil,

Thanks for the reply.  Regarding placing the entire AO pulse stream in the AO buffer: this would certainly be the optimal option; however, my pulse stream would far exceed the 8k AO FIFO buffer size on the USB-6251.  Based on a 1 kHz sample clock (the slowest I could get away with), my pulse stream could be on the order of 50k points long (assuming ~50 pulses at 1 pulse/sec, with each pulse possibly as short as 1 ms).  That's why I was originally thinking of using a Timed Loop to control the output of each pulse, since a single pulse easily fits within the AO buffer.  But your point about loop timing being at the mercy of the OS is well taken (and of concern, since interpulse timing is a critical parameter for me).

So, I've been brainstorming for alternative approaches.  What about either one of these two possibilities:

  (1) What would happen if I try to write the entire pre-defined 50k-point pulse train array to the AO buffer (with "DAQmx Write", I assume?), and then just start my AO (with appropriate triggering, of course)?  This would be the simplest approach, but would it work?  Would I get an error about a buffer overflow?  If not, how would the extra 42k points of data get transferred into the AO buffer?  Should I be concerned about (i.e., what is the probability of) the buffer running out of data faster than it can be replenished?

  (2) In this hare-brained scenario, I envision using a "CO Pulse Time" task to generate a continuous output containing the high-low timing of the pulse train (but obviously with no embedded amplitude information).  The pulse amplitudes would get stored into the AO buffer as single-point values, with intervening single-point zero values placed between them.  I would then use the CO output as my sample clock source for AO generation, wherein the "DAQmx Timing (change detection)" instance with both rising and falling edge detection of the CO pulse train is used to control AO clocking.  In this way, the rising edge of the CO pulse would trigger AO to output a single pulse value and hold it for the desired high time, while the subsequent falling edge of the CO pulse would trigger AO to output the next point in the buffer...a zero...for the desired low time.  Well...I was pursuing this approach a bit until I read the Help for "DAQmx Timing (change detection)"...it implies that the change detection instance is only for data acquisition, not for data generation.  Can someone confirm this?  If so, then I don't know how to salvage this approach.  Any ideas?  (Of course, if Option 1 would work, then I would definitely prefer that approach.)

Thanks for your continued input and suggestions!

Message 3 of 4
mfishler,

Option 1 should actually work.  There are two different buffers involved in this sort of task: one is on the board itself and is limited to 8k samples, but samples are also buffered in your computer's RAM.  The driver automatically moves samples from RAM down to the board as needed (i.e., as the onboard FIFO is depleted).  You can control when this data transfer takes place, as well as what to do with data that may already exist in the buffer, through the DAQmx Write property node; you may need to experiment with different buffer settings.  One caution: if the whole 50k buffer is not output each time the analog output is triggered, the buffer will hold leftover (incorrect) values when the analog output is triggered again.  It may be easiest to stop and restart the counter and analog output tasks each time an analog output sequence has completed.  Let me know if you have more questions.
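
As a rough text-form illustration of Option 1 (again using the nidaqmx Python API rather than the LabVIEW VIs; the device, channel and trigger names, and the 50k size, are placeholders): the whole train is written once with DAQmx Write before the task starts, the driver refills the 8k onboard FIFO from the PC-side buffer as it drains, and the task is stopped and restarted between runs.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, Edge

    waveform = [0.0] * 50000            # placeholder for the real 50k-point pulse train

    ao = nidaqmx.Task()
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ao.timing.cfg_samp_clk_timing(1000,
                                  sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=len(waveform))
    ao.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0",
                                                      trigger_edge=Edge.RISING)
    ao.write(waveform, auto_start=False)   # fills the PC-side buffer, not the 8k FIFO
    ao.start()                             # armed; the FIFO is refilled from PC memory
    ao.wait_until_done(timeout=120)        # ~50 s of output once the trigger arrives
    ao.stop()                              # stop and restart the task before the next run
    ao.close()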

Regards,

Neil S.
Applications Engineer
National Instruments
Message 4 of 4