Event timeout timing

LabVIEW 2015 (for backward compatibility to Windows XP machines)

Windows 10 (development laptop)

USB-6212 DAQ

Single analog input

Single digital input

Single analog output

 

I am struggling some with event loop timing.

I use a proprietary C-code DLL to generate an array of output values to write to the analog out.

When the user clicks 'start', I call the DLL and get a 1-dimensional array of DBL (based on user settings).

In the event structure (inside a while loop), there is a timeout event.

 

In the first test, I set the event to timeout at 1 mSec.

During the timeout event, I read the analog in, read the digital in, and write the next analog out.

Unfortunately, the loop doesn't go fast enough.

I logged the 'event time' (1-dim array of DBL indicator) and found the event timeout was regularly 2 mSec, sometimes 1 mSec, and sometimes 3 mSec.

 

So, I decimated the array, kept only 1 in 5 entries, and changed the timeout to 5 mSec.

Things are better: most event timeouts happen in 5 mSec, but every 10 or so there is a 6 mSec timeout thrown in there.

 

So, I figured that things don't have to go so fast.

I changed the 'array diet plan' and selected 1 out of 10 of the original entries, and changed the timeout to 10 mSec.

Things still aren't consistent.  Most of the events happen in 10 mSec, but every 10 or so, there is an 11 mSec one thrown in there.

 

In one example, I need the loop to run consistently at 120 cycles per minute.

With the 10 mSec delay, and the 1/10 array size, I get 110 cycles per minute.

 

Any thoughts on getting a consistent number of cycles per minute?

Message 1 of 8

Don't use a desktop PC 😉

 

The timeout (and similar tools, e.g. Wait (ms) or High-Res Polling Wait) effectively specifies a "minimum" time.

Timeout (on an Event Structure) in particular is unreliable for timing, because any other event will reset the timeout. That may not be likely for you, since you may only have the one (Timeout) event, but it is still worth knowing. It's also usually best to avoid Event Structures whose only purpose is to time out - if you have no other events, just use a While loop.

 

Reading the context help for "Wait (ms)" gives:

Waits the specified number of milliseconds and returns the value of the millisecond timer. (Windows) The actual wait time may be up to 1 ms shorter than the requested wait time.

 

So your variation can be anywhere from -1 ms to +inf (I think...), because Windows might choose to schedule something else in the time when it would otherwise have updated.

 

If the goal is to reliably output values at a specific rate, use hardware timing (via DAQmx Timing.vi, Sample Clock etc) to control the sampling and write an array of points into a buffer.

You can tie in the acquisition with the output to get simultaneous sampling in most cases.
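As a rough illustration (not LabVIEW, and the device name, rate and waveform below are just placeholders, not values from your post), the same call sequence sketched with the nidaqmx Python API looks like this:

```python
# Sketch: hardware-timed, buffered, continuously regenerating AO.
# "Dev1/ao0", the 1 kHz rate and the sine waveform are placeholder values.
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

waveform = np.sin(2 * np.pi * np.arange(1000) / 1000)  # stand-in for the DLL-generated array

with nidaqmx.Task() as ao_task:
    ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    # The device's sample clock paces the output; Windows only has to keep the buffer fed.
    ao_task.timing.cfg_samp_clk_timing(
        rate=1000.0,
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=len(waveform),
    )
    ao_task.write(waveform)   # load the buffer before starting
    ao_task.start()
    input("Generating... press Enter to stop.")
    ao_task.stop()
```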


Message 2 of 8

Any thoughts on getting a consistent number of cycles per minute?

Short answer (and I'm not just being a wise guy here): target your code to a true Real-Time OS instead of running under Windows.   

 

There must be somewhere between 1000 and 44 million prior threads around here about timing inconsistencies under Windows.  Yours is not a new discovery.  All the rest of us have long been stuck with it too.

 

On the other hand, many DAQ devices offer methods to acquire or generate samples with very consistent timing at rates that far exceed 1 kHz.  Buffers are made and timing can be set with hardware clocks, outside of Windows' inconsistent timing tyranny.

 

Then again, what you describe sounds like a control loop, where you must run code that makes decisions or calculations in response to inputs before determining the next desired output.  Now you're back *inside* Windows again where both speed and consistency are compromised.

 

It's time to re-negotiate your demands.  If you're still stuck supporting XP machines, it seems highly doubtful you have a realistic option of moving into LabVIEW Real-Time.  So you're probably stuck with Windows.  Now what?

 

<time passes, while I briefly look at some of the code>

 

Well, good news, my earlier guess was wrong!  You do not seem to have a control loop after all!  The output signal sequence appears to be predetermined and you are merely capturing (apparently) a response to that predetermined output waveform.

 

So NOW the problem is that "you're doing it wrong."  There's no need to rely on Windows here.  You could (and definitely *should*) configure your AO task to use hardware-clocked, buffered output.  Same for the AI task.  While you're at it, why not sync them in hardware by using a shared sample clock?  And once you've gone that far, share the same sample clock with the DI task so it can be hardware-clocked too.  (Digital tasks on M-series devices like the 6212 can be hardware-clocked, but they have to "borrow" it from elsewhere.  They can't make one of their own.)

 

Consider the following partially-relevant example as a head-start for how to accomplish this kind of sharing.  You could configure your AO task to *also* use that counter output as its own sample clock, much like AI and DI are doing.   Or you could just substitute your AO for the illustrated CO, and query DAQmx properties for its sample clock terminal.

 

The main thing is, you'd get sampling defined by hardware clocks which would make it very precise and consistent.
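For reference only, here's a rough, untested text sketch of that call sequence using the nidaqmx Python API rather than G (the attached LabVIEW example is the real head-start; the "Dev1" channel names and the 1 kHz rate below are placeholders).  AO owns the sample clock and AI and DI borrow it:

```python
# Sketch: AO, AI, and DI tasks sharing one hardware sample clock (nidaqmx Python API).
# "Dev1" channel names and the 1 kHz rate are placeholders, not values from the post.
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 1000.0
waveform = np.sin(2 * np.pi * np.arange(1000) / 1000)

ao = nidaqmx.Task()
ai = nidaqmx.Task()
di = nidaqmx.Task()

ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
ao.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS,
                              samps_per_chan=len(waveform))

# AI and DI borrow the AO sample clock, so all three tasks tick together in hardware.
ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
ai.timing.cfg_samp_clk_timing(RATE, source="/Dev1/ao/SampleClock",
                              sample_mode=AcquisitionType.CONTINUOUS)

di.di_channels.add_di_chan("Dev1/port0/line0")
di.timing.cfg_samp_clk_timing(RATE, source="/Dev1/ao/SampleClock",
                              sample_mode=AcquisitionType.CONTINUOUS)

ao.write(waveform)
ai.start()          # start the borrowing tasks first, so they don't miss clock edges...
di.start()
ao.start()          # ...then start the task that produces the clock

analog = ai.read(number_of_samples_per_channel=100)
digital = di.read(number_of_samples_per_channel=100)

for t in (ao, ai, di):
    t.stop()
    t.close()
```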

 

 

 

-Kevin P

Message 3 of 8

Hi Cbutcher,

 

Thank you for the reply.

I've always known that events were a 'minimum time'; I was just hoping there had been some improvements over the years so that it would be better at the timing thing.

 

Anyway, in my 'production' app, I calculate the waveform the way I want it, and tell the DAQ to replay it until I tell it to stop.  The timing is very reliable there.

 

This is a research project, and I need to be able to adjust the output value, and values that follow, based on the digital signal input.

Once I figure out what we need to do I can worry about optimizing it.

Even when I run 1 in 10 with a 10 mSec event timeout, the performance hit is noticeable to users accustomed to the hardware timing of the production application.

 

Thanks,
Jeff

Message 4 of 8

Hi Kevin,

 

Thank you for the time and careful consideration of my quandary.

 

Unfortunately, I do need to adjust my output based on the digital signal input, in real time.

And, since this is a research project, I know MOST of what the user wants, but they haven't really decided what the response will be yet.

Once I get this running in a 'reasonable' manner, the user will decide what they want, and then I'll worry about optimization.

 

The user will notice the lack of performance on the Windows platform, and I was hoping something had changed over the years so that there was better control over events under Windows.

 

Oh well, I'll demo the solution, they'll talk about it, and then I'll worry about improving things.

 

Thanks,

Jeff

 

Message 5 of 8

Hi Jeff,

 

I've always known that events were a 'minimum time'; I was just hoping there had been some improvements over the years so that it would be better at the timing thing.

 

In 2018, "High Resolution Polling Wait.vi" (HRPW) was added to the Timing Palette.

HRPW can give better timing but doesn't solve the Windows issue (turning off services can help to a degree).

It also comes with a performance penalty - the "Polling" happens at high speed.

Note that it claims to be compatible with Win XP (see comments inside).
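(For anyone who hasn't looked inside it: a polling wait just spins on a high-resolution clock instead of handing control to the OS scheduler.  Roughly this idea, sketched in Python rather than G, with a placeholder 5 mSec period:)

```python
# Sketch of the polling-wait idea: busy-spin on a high-resolution timer
# instead of asking the OS to wake you up. Accurate, but it pins a CPU core.
import time

def polling_wait_until(deadline: float) -> None:
    """Spin until time.perf_counter() reaches the deadline (seconds)."""
    while time.perf_counter() < deadline:
        pass  # deliberately no sleep - sleeping hands control back to the scheduler

period = 0.005  # 5 mSec loop period, a placeholder rate
next_tick = time.perf_counter()
for _ in range(100):
    next_tick += period
    polling_wait_until(next_tick)
    # ... do the per-iteration work here (read AI/DI, write AO, etc.) ...
```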

 

I made a Windows-only version of HRPW and back-saved it to LV15.

This file and a test program using it are attached.

Note that the "ms Timeout" is set to run at high priority.

 

steve

Message 6 of 8

Ok, I gotcha.  The code showed software timing to deliver an invariant analog output, so I figured we were in newbie territory.  Now I get that you're not one; that was just your starting point to focus on the timing alone.

 

I've not used the High-Res Polling Wait, but it sounds like one of the things you should try.  You might ought to put it in a separate loop and then fire a Notification or something to signal your other code when to send its latest known output or whatever.   The reason for the separate loop is to give LabVIEW the opportunity to dedicate one of your CPU cores to the thread that loop is in, because HRPW will be quite CPU-intensive.  Having it in a separate loop could help protect the rest of your app from its side effects.  Can't say for sure though, haven't really given it a spin myself.
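Something like this shape, sketched here in Python rather than G just to show the structure (the queue stands in for the Notifier, and the 5 mSec period is a placeholder):

```python
# Sketch of the two-loop idea: a dedicated timing loop that does nothing but keep
# time, and a worker loop it signals. The queue plays the role of a LabVIEW Notifier.
import threading
import queue
import time

ticks: "queue.Queue[float]" = queue.Queue()
stop = threading.Event()

def timing_loop(period_s: float) -> None:
    next_tick = time.perf_counter()
    while not stop.is_set():
        next_tick += period_s
        while time.perf_counter() < next_tick:   # CPU-intensive polling wait
            pass
        ticks.put(next_tick)                     # "fire the notification"

def worker_loop() -> None:
    while not stop.is_set():
        try:
            tick_time = ticks.get(timeout=0.1)
        except queue.Empty:
            continue
        # ... on each tick: read AI/DI, decide, write the latest known AO value ...

threading.Thread(target=timing_loop, args=(0.005,), daemon=True).start()
threading.Thread(target=worker_loop, daemon=True).start()
time.sleep(1.0)
stop.set()
```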

 

Additionally, I've done a little bit of *dabbling* with DAQmx methods to get fast and mostly-pretty-consistent loop timing in Windows.  I didn't want to burden a newbie with it, but now I know you aren't one.  Please note the emphasis on *dabbling*.  I haven't built up a whole app around any of these things, it's just stuff where I made MWE's (minimal working examples) in the course of prior forum threads.

   Unfortunately, your use of a USB device will *probably* prevent you from replicating (or perhaps even *approaching*) the results I'm about to link you to, where I did my dabbling with a PCIe X-series board.

 

First, here's a thread where I made slight mods to a shipping example and produced rather consistent 5 kHz AO update loops under Windows.  Note that the rather nice-looking results in msg #9 over there were fairly typical, but there *were* some runs that produced significantly bigger timing outliers. 

   Unfortunately, your USB device likely won't support that technique, which uses Hardware-Timed Single Point DAQ mode and uses a special DAQmx function to wait for the next sample clock.  Both will probably produce errors with a USB device.  But maybe this gives some incentive to try out desktop DAQ options, hint, hint.

 

Then, over in this thread over here, I have a simple control loop that manages about 10 kHz effective rate most of the time, but with more excursions down to the realm of 2-5 kHz effective rate (and somewhat rarely, significantly slower).  The attached example uses the pipelining "cheat" such that the *effective* control loop rate is half the measured loop rate.  The thread explains it.

   Once again, it relies on the same DAQmx mode and function as the previous one, so once again it probably won't work with a USB device.  HINT, HINT.   (I've got a fairly long history of strongly preferring desktop DAQ to USB DAQ, *especially* when it comes to high-performance limit-pushing work.  There are some nice conveniences with USB and there are a lot of devices suitable for moderately-demanding apps.  But when you're really up against it, USB is not gonna be your best option.)

 

Finally here's a very brief thread with no LabVIEW code examples, just a verbal report (trust me?).   This one focused only on data acq input latency, with performance results somewhere in the realm of ~35-100 microsec (10-30 kHz).  Once again, you guessed it, desktop board not USB.

 

 

-Kevin P

 

 

Message 7 of 8

Hi Jeff,

 

You mention this is a research project, and Kevin has suggested a different DAQ device might give you better performance.

Are you able to use (buy?) different hardware, or is the 6212 a fixed item that can't be reconsidered?

 

Would a cRIO be an option for you?

I'm not sure what the relative cost of the cRIO vs a PCIe DAQ card would be (I think this is what Kevin means?) but I'd guess the cRIO is more expensive in general. However, being able to put code on an FPGA and a Real-Time host gives you a lot more options for control loops and timing, etc.

If you consider this, check that your DLL can be compiled for an appropriate target - I believe most (all?) of the modern cRIO systems are 64-bit Linux, so you'd have to recompile your DLL for a new platform. Switching to a PCIe card would be more straightforward, but you'd still have to contend with the vagaries of Windows in that case...


Message 8 of 8