
Error -200018 DAC conversion attempted before data to be converted was available

Hello,

 

I'm currently using a 6062E card to generate analog output waveforms.  The application cycles through different waveforms depending on the iteration.  I was trying to solve a missing-trigger problem and realized that the hardware doesn't support the retriggerable property for analog output tasks, so I went the route of using the following workaround:

 

https://forums.ni.com/t5/Example-Code/Finite-Retriggerable-Analog-Input-Using-LabVIEW-with-DAQmx/ta-...

 

There is also a similar setup in one of the old example VIs: 

Multi-Function-Ctr Retrigg Pulse Train Generation for AI Sample Clock.vi

 

I have attached my VI that pretty closely represents one of my attempts, but I am now running into the following error intermittently:

Error -200018:  DAC conversion attempted before data to be converted was available

I'm in a situation where changing hardware isn't an option, so I'm trying to determine if there is any possible solution.


Thanks!

Ryan

Message 1 of 10

Without having looked at your code: check whether adding a wait between the data-fetching VI and the start of the acquisition is an option. 

Actor Framework
Message 2 of 10

Essentially, your error is telling you that your device is running out of data in the midst of generation because your system and DAQ driver aren't delivering it fast enough to keep up.

 

So both the good news and bad news is that there's not a lot of room for improvement in the posted code.  You could take the waveform array and graph indicators out of the loop, but your loop iteration rate is slow enough (270k samples / 300 kHz = 0.9 sec) that I doubt they're the problem.

 

A little more likely is just the limitations of your system as a whole.  You're using a very (very very) old E-series device, which almost certainly means you're on PCI rather than PCIe.  Bus bandwidth is *shared* on PCI, so another possible problem could be interference from a bandwidth-hogging device on the PCI bus, such as a video card.

 

A quick look at the device specs shows a 2048-sample FIFO for AO.  So the device's FIFO gets used up every ~6.8 msec (2048 samples / 300 kHz), meaning that the driver & PCI bus must keep delivering a full FIFO's worth of samples every ~6.8 msec, perpetually.   If it fails to do that even once, boom, you get a buffer underflow error like -200018.  I kinda suspect that's where the problem is, and I'm not sure that code alone can fix it.
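For anyone following along, here's the back-of-envelope arithmetic in plain Python (not LabVIEW, just to show the numbers; the 2-byte-per-sample figure is my assumption that 12-bit DAC codes travel as 16-bit integers):

```python
# How long the 6062E's 2048-sample AO FIFO lasts at a 300 kHz update rate.
fifo_samples = 2048
sample_rate_hz = 300_000

drain_time_ms = fifo_samples / sample_rate_hz * 1000
print(f"FIFO drains every {drain_time_ms:.1f} ms")  # ~6.8 ms

# The driver + bus must therefore sustain roughly this average throughput:
bytes_per_sample = 2  # assumption: 12-bit codes transferred as 16-bit ints
throughput_kb_s = sample_rate_hz * bytes_per_sample / 1000
print(f"Sustained transfer needed: {throughput_kb_s:.0f} kB/s")  # 600 kB/s
```

That 600 kB/s is tiny by PCIe standards but has to be delivered on a *shared* PCI bus with hard ~6.8 ms deadlines, which is why a single burst of competing traffic can trigger -200018.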

 

Since you say you can't change hardware, about all I can suggest is to try to reduce the system's PCI bandwidth burden.  I'm no expert on that, but I'd be trying troubleshooting steps like reducing monitor resolution down to 640x480 or something like that.

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 3 of 10

Quiztus2,


Thanks for the reply, but not really sure what you mean.


Thanks,

Ryan

Message 4 of 10

Kevin,

 

Thanks for the reply.  I was thinking the same thing: there isn't much else I can do code-wise.  One thought a colleague of mine brought up was to manually check the buffer at a chosen rate within a while loop, filling in samples behind what has just been output, to try to keep up.

 

I agree with you on the array and graph indicators not making much of a difference, but I'll give that a shot.  I'll also try the monitor resolution idea and try to come up with any other ways to reduce the system's PCI bandwidth burden.

 

My other thought is that the initial issue was with attempting to avoid missing triggers with this setup.  I feel like I've made it worse.  I had previously tried things like manually setting the task state to committed prior to the loop to try to reduce hardware setup overhead since the device isn't retriggerable.

 

Thanks,

Ryan

Message 5 of 10

@rbarth wrote:

One thought that a colleague of mine brought up was to manually check the buffer at a chosen rate within a while loop to try to fill in samples behind what has just been outputted to try to keep up.

I don't think that'll help any.  That would address problems with your app delivering data to the task buffer fast enough, but you seem to be set up just fine there.   The problem is more likely that the system & driver can't move data from the task buffer over PCI to the device FIFO fast enough, and there's very little you can do about that from the app side.  The only possible thing I can think of is a deep-down DAQmx property that's *probably* already set by default -- the "data transfer request condition".  The best setting for your issue is "less than full".  I think that's normally the default setting anyway, but perhaps not in the case of old old E-series devices?   On a related note, there's also a property for doing transfers via DMA rather than interrupts, but that should already default to DMA.
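In text form, the two properties look roughly like this.  This is a hedged sketch using NI's nidaqmx Python package (in LabVIEW the same settings live on a DAQmx Channel property node); the device name is a placeholder, and the property/enum names are from the Python API, not verified against an E-series driver:

```python
import nidaqmx
from nidaqmx.constants import (AcquisitionType, DataTransferActiveTransferMode,
                               OutputDataTransferCondition)

with nidaqmx.Task() as task:
    # "Dev1/ao0" is a placeholder for the actual 6062E AO channel.
    ch = task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    task.timing.cfg_samp_clk_timing(300_000,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=270_000)
    # Ask the driver to top the FIFO off whenever it is less than full,
    # which minimizes exposure to a -200018 underflow.
    ch.ao_data_xfer_req_cond = OutputDataTransferCondition.ON_BOARD_MEMORY_LESS_THAN_FULL
    # Prefer DMA if the device/bus supports it; older devices may reject this
    # and force interrupts instead.
    ch.ao_data_xfer_mech = DataTransferActiveTransferMode.DMA
```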

 

My other thought is that the initial issue was with attempting to avoid missing triggers with this setup.  I feel like I've made it worse.  I had previously tried things like manually setting the task state to committed prior to the loop to try to reduce hardware setup overhead since the device isn't retriggerable.

Can you expand on this?  It sounds like you've been doing a fairly recent change on a (likely) very old system and seem to be hitting a dead end.  I don't know your system & requirements, but maybe there's a whole different kind of approach to consider?  Your reference to task commit is a relatively advanced topic so I don't really expect to find an easy answer here, but hey, you never know.

 

 

-Kevin P

Message 6 of 10

Thanks for the reply.

 

I tried changing the data transfer request condition to "less than full"; I had been using "half full or less".  However, the change made the UI and everything on the computer extremely slow.  Moving the application window was super choppy.  It appeared to have some potential with the output generation, but eventually also failed.  It's tough to tell the effectiveness of a change because it's totally unpredictable how long it'll run before hitting an error.

 

I've also tried changing the property for data transfer mechanism.  When I create a constant for the property, it defaults to DMA.  However, if I explicitly set it to DMA with that constant, I get an error that says it isn't allowed due to another property being set.  I can't figure out what that property is that could be causing the issue.  If I don't set the data transfer and read the property, it says it's set to Programmed I/O.  I tried Interrupts, and that also appeared to have some potential, but again eventually failed.

 

One thing (among other things) that I don't have a grasp on is how the samples are written from the buffer to the device.  Does the computer say, "here's your array of doubles to use for your output generation", and then pass it to the device as fast as possible?  I assume it isn't driven by the sample rate that the device is going to be outputting at?  I haven't tried this, but could I write an array of singles instead of doubles to attempt to have a smaller footprint memory-wise to speed things up?

 

Regarding your last point about the original problem:  I was using an approach where the analog output was using a task that I was starting, waiting until done, and then stopping.  It was being triggered with an external clock once per second.  It's running on a laptop with shared video memory, and I was running into instances where I would miss a trigger.  This would happen intermittently without any user interaction.  However, I could pretty easily make it happen repeatedly by moving windows around on the laptop. This is what sent me down the road of trying to use a retriggerable counter, which has appeared to help with that issue.  I can move the windows around and do other things on the laptop without seeing an immediate failure of the output generation.

 

Sorry for being so wordy.  I'm wondering if I'm getting to the point where this just isn't possible with the setup that I have.


Thanks again for the help,

Ryan

Message 7 of 10

Ok, lots of ground to cover...

 

Here's my working understanding of some implications of the data transfer request property:

- "Less than Full".  Frequent bus access, small amount of data per transfer, maximal latency by keeping device FIFO full, least prone to buffer underflow errors

- "Half Full".  Least frequent bus access, larger amount of data per transfer, medium latency by keeping device FIFO half full, medium prone-ness to buffer underflow errors

- "Empty".  Frequent bus access, small amount of data per transfer, minimal latency by aiming to get data to the FIFO just barely in time, most prone to buffer underflow errors
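The tradeoff in rough numbers (plain Python; the actual per-transfer chunk sizes are driver internals, so the "less than full" chunk below is purely illustrative):

```python
fifo = 2048          # 6062E AO FIFO depth, in samples
rate = 300_000       # AO update rate, samples/s

# "Half Full": the driver waits until the FIFO drops to half, then refills it,
# so each bus transaction moves ~1024 samples.
half_chunk = fifo // 2
half_interval_ms = half_chunk / rate * 1000
print(f"Half Full: ~{half_chunk} samples roughly every {half_interval_ms:.1f} ms")

# "Less than Full": the driver refills whenever any space opens up.
# Chunk size is driver-dependent; 64 is just a stand-in for "small".
small_chunk = 64
small_interval_ms = small_chunk / rate * 1000
print(f"Less than Full: ~{small_chunk} samples roughly every {small_interval_ms:.2f} ms")
```

So "less than full" hits the bus an order of magnitude more often, which matches the UI choppiness reported above when a video card is competing for the same bus.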

 

Your symptoms seem consistent with DAQmx and your video card needing to share access to the PCI bus.  In your particular case, "Half Full" seems to be the better tradeoff due to less frequent bus interactions.

 

Even if you can't change out your PCI DAQ device, can you consider putting it into a PC that uses PCIe for video?  It's possible that motherboard-based video could give relief too, but that's an area I have no special expertise in.  It might just use the PCI bus in a hardwired way without a modular slot.

 

When you queried the transfer property and it told you "Programmed IO", odds are the task hadn't yet been started and this property simply hadn't yet been properly configured.  I've never explicitly asked for "Programmed IO", but it appears to be the default behavior when you don't use a clock or buffer -- a mode I've commonly referred to as "on-demand" or "software-timed".   But since you *are* using a clock and a buffer, I think it must be the case that you queried the task before it had set up that aspect of its behavior, and thus simply returned a default value.

 

I don't know every low-level detail of how DAQmx negotiates with actual devices to transfer data.  Any byte-saving such as SGL instead of DBL is not available to you at the app programming level (at least not for the transfers between the task buffer and the device).  I suspect that in *most* cases, DAQmx transfers integer values based on the bitness of the device's converter.  It will generally also do a one-time query to the device about its calibration parameters, so it knows how to compensate those integers such that they translate into correct calibrated voltages.
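As a rough illustration of why SGL vs. DBL in the app buffer doesn't change what crosses the bus: the 6062E's AO is 12-bit, so each sample travels as a small integer regardless.  The function below is an idealized, uncalibrated scaling I made up for illustration, not DAQmx's actual conversion:

```python
def volts_to_code(volts, full_scale=10.0, bits=12):
    """Idealized bipolar DAC code: +/-10 V maps onto a signed 12-bit range.
    Real DAQmx scaling also folds in per-device calibration constants."""
    max_code = 2 ** (bits - 1) - 1                   # 2047 for 12 bits
    code = round(volts / full_scale * max_code)
    return max(-max_code - 1, min(max_code, code))   # clamp to -2048..2047

print(volts_to_code(10.0))   # 2047 (positive full scale)
print(volts_to_code(0.0))    # 0
# Whether the app buffer held DBLs or SGLs, each sample crosses the bus
# as a ~2-byte integer like the ones above.
```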

 

As to the original problem, I just now realized (due to your reference to a laptop) that you probably aren't using a PCI device at all.  A quick search makes me suspect you have an old PCMCIA card and you should be aware of this caution.

 

Beyond that, I have less confidence in what I can suggest.  I don't think I ever used a PCMCIA card so I have no experience or awareness of its special characteristics or limitations.   The interaction between data transfer settings and general PC responsiveness suggests that you're bumping into some kind of system limitation, which I *suspect* you may be stuck with but don't know enough about laptop PCMCIA stuff to say with any confidence.

 

 

-Kevin P

Message 8 of 10

Kevin,

 

You were correct about seeing "Programmed IO".  When I moved where I queried the task, it was set to Interrupts.

 

I agree that I am probably bumping into a system limitation.

 

I appreciate the help!

 

-Ryan

Message 9 of 10

Sounds like maybe DMA won't be an available option, but let's make sure.

 

Take a simple shipping example for AO that uses a clock and buffer and try to explicitly set the "data transfer mechanism" to DMA before starting the task.  If that throws an error, post the error # and description here.  Odds are, it'll be due to a true limitation of the PCMCIA device and you'll be stuck with interrupts and the corresponding speed limitations.  But maybe someone else that's had experience with laptop-dedicated devices will have some ideas or suggestions?

 

 

-Kevin P

Message 10 of 10