
Robust and Reliable Digital IO with DAQmx

This is not a newbie question.  I have a large LVOOP application with around 3,500 VIs, and I'm replacing a digital IO card from another vendor with the PCI-6515 from NI.  The implementation must be robust to future hardware changes, and the IO must be fast so it does not slow the process thread: we're running a machining process, so an extra 1 ms per operation multiplies up to hundreds of minutes per year.  Most importantly, it must be reliable.  It will run 24/7 and failures are very expensive for us, so it must report true/false accurately and never return an error, for years on end.

 

How should I be implementing it?  The existing system uses a simple Express node, and there are many similar digital IO VIs dotted throughout the project.

 

NI-6515digitalIO.JPG

Message 1 of 11

@bmann2000 wrote:

This is not a newbie question.  NI-6515digitalIO.JPG


Then don't use the Dreaded DAQ Assistant, and especially don't use its Evil Twin, the Dynamic Data Wire.  Read the excellent NI White Paper "Learn 10 Functions in NI-DAQmx and Solve 80 Percent of your Data Acquisition Applications".  You should be able to use 4-5 simple DAQmx functions along with Task Constants that you define in the Project file (my second method), in MAX (my first method), or "from first principles" (what I'm currently doing, using DAQmx functions to "find" a connected device).  It's simple and it's "visible" (and hence quickly debugged, if necessary), yet it can be "hidden" by wrapping it in a sub-VI that you write (like a Create Timer VI that configures a counter/timer to deliver a specified pulse train on a specified Ctr Out line with a specified Trigger In line).
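In text form, that handful of calls looks roughly like the sketch below, written with the nidaqmx Python API (which maps almost one-to-one onto the LabVIEW DAQmx palette).  The device and line names are placeholders, not anything from this thread.

```python
import nidaqmx
from nidaqmx.constants import LineGrouping

# Create a task with one digital output line, start it, write a couple of
# states, then stop it.  "Dev1/port0/line0" is a placeholder line name.
with nidaqmx.Task() as do_task:
    do_task.do_channels.add_do_chan(
        "Dev1/port0/line0", line_grouping=LineGrouping.CHAN_PER_LINE
    )
    do_task.start()        # DAQmx Start Task
    do_task.write(True)    # DAQmx Write (single boolean sample)
    do_task.write(False)
    do_task.stop()         # DAQmx Stop Task
# Leaving the 'with' block clears the task (DAQmx Clear Task).
```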

 

Bob Schor

Message 2 of 11

100 minutes per year is only about 0.02% of the total number of minutes in a year (assuming 24/7 operation).  So while hundreds of minutes sounds like a long time, it really isn't much when you put it in perspective.

 

What hardware are you running this code on?  Is it a Windows PC?  A device with a real-time OS?

 

If millisecond timing is that critical, I wouldn't use a regular Windows PC.  They are known for taking breaks on the order of seconds from time to time to do virus scans or other background work.  Even when it isn't taking breaks, the resolution of the internal timing isn't reliably as good as a millisecond.

 

Be sure to use a real-time OS on a PC, or better yet, use an FPGA for reliability.

Message 3 of 11

With a big code architecture in place, you probably need to balance *execution* efficiency (where every msec counts) against *code change* risk (where you consider the magnitude and complexity of changes, and how far-reaching their effects are).

 

For the sake of execution efficiency alone, you'd probably want to create a distinct task for each digital line that needs to be controlled.  These tasks would all be started once when the program initializes (and stopped during shutdown).  Thereafter you'd be writing digital states to tasks that are already running, which is much more efficient than configuring or even just starting a task each time you want to write a state.  Using things like the DAQ Assistant or the "auto-start" feature of DAQmx Write tends to add extra overhead.
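As a rough text-form illustration of that pattern (a sketch in Python with nidaqmx; the line names are placeholders): configure and start every task once at start-up, keep the task references around, and only call Write on the already-running tasks in the time-critical path.

```python
import nidaqmx
from nidaqmx.constants import LineGrouping

# Placeholder names: one task per digital output line, all created and
# started once at application start-up.
LINES = ["Dev1/port0/line0", "Dev1/port0/line1", "Dev1/port0/line2"]

tasks = {}
for line in LINES:
    t = nidaqmx.Task()
    t.do_channels.add_do_chan(line, line_grouping=LineGrouping.CHAN_PER_LINE)
    t.start()              # started up front, not per write
    tasks[line] = t

# Hot path: write states to tasks that are already running (no configure,
# no auto-start, no per-call task creation).
tasks["Dev1/port0/line0"].write(True)
tasks["Dev1/port0/line1"].write(False)

# Shutdown path: stop and clear each task exactly once.
for t in tasks.values():
    t.stop()
    t.close()
```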

 

I don't know if such an implementation is very *practical* for your existing code base, though.  You'd need to propagate task refnum info to the various nooks and crannies of the code where DIO occurs.  Your code may not be structured to propagate such "global" info all over the place cleanly, and it may not be worth the code risk.

 

Suggestions like Bob_Schor's to create predefined named tasks in MAX can help you be robust to future hardware changes.  They provide a layer of indirection between your released code (which refers to the task by *name*) and the hardware used to implement the functionality (which is defined when you configure the task in MAX).
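In text form, the indirection looks something like this.  The sketch assumes the nidaqmx-python persisted-task API, and "MyDOTask" is a placeholder for whatever name the task was given in MAX.

```python
import nidaqmx.system

# Load a task that was predefined (by name) in MAX / Data Neighborhood.
# The released code only knows the name, so swapping hardware later means
# editing the MAX task, not the code.
system = nidaqmx.system.System.local()
do_task = system.tasks["MyDOTask"].load()   # returns an nidaqmx.Task

do_task.start()
do_task.write(True)
do_task.stop()
do_task.close()
```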

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 4 of 11

Maybe I should re-phrase the question.  Is it safe to do this?  Code snippet below.

Tasks are all defined in MAX Data Neighborhood.  These VIs are peppered all through a large application.  Will it consume resources that lead to memory-leak-type crashes?

No start/stop/clear type code is used anywhere else, just what you see below.

 

NI-6515digitalIO-code.JPG

Message 5 of 11

That should work, and will not "consume resources".  For a single Digital Read, it is probably "safe" to assume that the Task has auto-started (particularly if it seems to work!).  However, I think I'd add a "Start Task" function "just to be sure".  Is this called multiple times?  You might consider carrying the Task Wire in a Shift Register, doing Start Task "early", doing "Stop Task" and perhaps "Clear Task" during shutdown.

 

Bob Schor

Message 6 of 11

It does work when you press true/false in the VI; how it will behave in the full application over the long term is the worry.  I did think about creating a VI that starts every task, to put in the initialisation sequence, with a stop/clear in the close sequence.

I have around 30 digital lines; they toggle high/low tens of thousands of times over months of continuous operation.

The reason I'm reluctant to add the other VIs is that I do not want to over-complicate it and add a failure mechanism.  For example: the application is stopped by an error from another part of the code, so the close sequence never runs; when the app is restarted, the task is already active, the Start that tries to execute returns a "resource in use" error, and the app stops.  You only discover this after the application is released to production, and then you need code to handle that error case, more testing, a new release of the app, and so on.

 

I want simple, reliable code without the hassle of discovering all the possible failure modes.

Message 7 of 11

The key thing, then, is that if you are going to close the app somewhere due to an error, you execute a subVI that stops and clears the tasks before you end the app.
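In text form the idea is simply "always run the cleanup, even on the error path".  A minimal sketch, again assuming the nidaqmx Python API with placeholder names:

```python
import nidaqmx
from nidaqmx.constants import LineGrouping

def run_application(tasks):
    """Placeholder for the main application; may raise on an error."""
    tasks["Dev1/port0/line0"].write(True)

tasks = {}
try:
    # Initialisation sequence: create and start every DO task once.
    for line in ["Dev1/port0/line0", "Dev1/port0/line1"]:
        t = nidaqmx.Task()
        t.do_channels.add_do_chan(line, line_grouping=LineGrouping.CHAN_PER_LINE)
        t.start()
        tasks[line] = t
    run_application(tasks)
finally:
    # Close sequence: runs on the normal exit *and* on the error path,
    # so no task is left reserved for the next start-up.
    for t in tasks.values():
        t.stop()
        t.close()
```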

Message 8 of 11

Hi bmann,

 

These VIs are peppered all through a large application.

Instead of calling DAQmxRead (to read just one sample) "all over the large app" I would create a (parallel running) subVI to read that DAQmx task at regular intervals and provide the current state of your DO(s) in a notifier…
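A rough text-form sketch of that pattern, with a background poller standing in for the parallel VI and a locked dictionary standing in for the notifier (Python with nidaqmx; the line names are placeholders):

```python
import threading
import time
import nidaqmx
from nidaqmx.constants import LineGrouping

latest_states = {}               # stands in for the notifier's payload
latest_lock = threading.Lock()

def di_poller(lines, stop_event, period_s=0.01):
    """Read all DI lines in one task at a regular interval and cache them."""
    with nidaqmx.Task() as task:
        for line in lines:
            task.di_channels.add_di_chan(
                line, line_grouping=LineGrouping.CHAN_PER_LINE
            )
        task.start()
        while not stop_event.is_set():
            values = task.read()         # one boolean per channel
            with latest_lock:
                latest_states.update(dict(zip(lines, values)))
            time.sleep(period_s)

stop = threading.Event()
lines = ["Dev1/port0/line0", "Dev1/port0/line1"]     # placeholders
threading.Thread(target=di_poller, args=(lines, stop), daemon=True).start()

# Consumers elsewhere in the app just look up the cached state.
time.sleep(0.1)
with latest_lock:
    print(latest_states.get("Dev1/port0/line0"))

stop.set()
```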

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 9 of 11

I have a Re-Use Library that was developed to replace DSC tag reads.  It runs daemons that repeatedly read the DI states and keep Action Engines updated with the current value.  In the application that automates this factory...

Tesla_Coil.jpg  Breaker.jpg

 

there are about 300 DIO lines spread across about 6 Ethernet cDAQ chassis, and they run 24/7.

 

So I would like to suggest you consider a background task that repeatedly reads the DI lines and keeps an AE (Action Engine) updated with the current state of all of them.  AEs can be wicked fast.  You can squeeze even more performance out of them if you wrap the AE in a re-entrant wrapper that caches the previous value, with the AE call marked as "Skip if Busy".
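In text form, the "cache the previous value, skip if busy" idea looks roughly like this (a Python sketch; the non-blocking lock stands in for the busy AE call, and all names are placeholders):

```python
import threading

class CachedDIReader:
    """Return the shared DI state if it can be fetched without waiting;
    otherwise fall back to the value cached on the previous call."""

    def __init__(self, shared_states, shared_lock):
        self._states = shared_states   # dict kept fresh by a background poller
        self._lock = shared_lock       # stands in for the (possibly busy) AE call
        self._cached = {}

    def read(self, line):
        # "Skip if Busy": don't wait if the poller/AE currently holds the lock.
        if self._lock.acquire(blocking=False):
            try:
                if line in self._states:
                    self._cached[line] = self._states[line]
            finally:
                self._lock.release()
        return self._cached.get(line)
```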

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper
Message 10 of 11