Data Acquisition Idea Exchange


Just what the title says!

I would like to see a version of the 4-slot cDAQ-9185, but with the four slots arranged lengthwise, so that the whole thing fits in a 1U slot of a standard 19" rack.

Hello,

 

How often have you built LabVIEW applications using simulated DAQmx boards?

And how often were you limited by the default behaviour of simulated boards (a sine wave for analog inputs, a square counter signal for digital inputs, and so on)?

 

It would be nice if DAQmx simulated boards integrated the ability to modify the default behaviour of simulated inputs through dedicated popups.

 

It would be nice, for each task linked to a simulated DAQmx board, to be able to launch a popup window:

 

  • For digital input, give the ability to modify the current binary value of each configured channel.
  • For analog input, give the ability to choose between a fixed value, a sine wave, a square signal, white noise, etc.
  • For digital output, give the ability to view the currently set values.
  • For analog output, give the ability to view the current simulated output value on a waveform chart.

 

A more powerful tool could also integrate a simulated channel-switching mechanism: a simulated output could be linked to a simulated input.

 

This feature would be a good way to create an application that simulates a complete process; such an application could then be used to validate a complete system

(a kind of SIL architecture).

 

Another idea: a complete DAQmx simulation API.

 

  • Creation of an API that could instantiate a simulated DAQmx board (which would be visible via MAX)
    • It would take the place of the current, limited DAQmx simulated boards
  • This device could then be accessed by other applications through DAQmx
  • The API would have access to all channels of the simulated device
  • The API could programmatically force the values of the simulated input channels according to a realistic process model

 

Something like this ...

 

 

 DaqMxSimulatedAPI.PNG
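
In C terms, a hypothetical API surface might look roughly like this (every name below is invented for illustration; nothing like it exists in NIDAQmx.h today):

#include <NIDAQmx.h>

/* Hypothetical simulation API -- a sketch of the idea, not a real header. */
typedef void *SimDeviceHandle;

/* Instantiate a simulated board that would also be visible in MAX. */
int32 SimCreateDevice(const char productType[], const char deviceName[],
                      SimDeviceHandle *device);

/* Force the value of a simulated analog input channel from a process model. */
int32 SimSetAIValue(SimDeviceHandle device, const char physicalChannel[],
                    float64 value);

/* Force the state of a simulated digital input line. */
int32 SimSetDILine(SimDeviceHandle device, const char line[], bool32 state);

/* Route a simulated output channel back to a simulated input channel. */
int32 SimConnectChannels(SimDeviceHandle device, const char outputChannel[],
                         const char inputChannel[]);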

 

I've been in many threads and seen many, many more where the root issue stems from confusion about the way DAQmx Timing and DAQmx Read interpret the meaning of "# samples" very differently for Finite Sampling vs. Continuous Sampling mode.   (For example, here's just one of the times I tried to address that confusion.)

 

First, here's what causes the confusion:

 

  • The 'samples per channel' input to DAQmx Timing is *crucial* for Finite Sampling tasks and usually *ignored* for Continuous Sampling tasks.
  • The 'number of samples per channel' input to DAQmx Read has a default value of -1 when left unwired.  However, the *meaning* of this default value is *VERY* different, resulting in very different behavior depending on whether the task is configured for Finite or Continuous sampling.  (See the first link I referenced, and the sketch below.)
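
To make the difference concrete, here's a minimal C sketch (DAQmx ANSI C API; error handling omitted, "Dev1/ai0" is a placeholder). The only difference between the two configurations is the sample-mode constant, yet the unwired default of -1 at read time behaves very differently:

#include <NIDAQmx.h>

void demo(void)
{
    TaskHandle task;
    float64    data[1000];
    int32      nRead;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);

    /* Finite: the final 1000 defines the total acquisition size.
       A read of -1 waits for and returns ALL 1000 samples. */
    DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, 1000);

    /* Continuous: the final 1000 is only a buffer-size suggestion and is
       otherwise ignored. A read of -1 returns whatever is available at
       that instant -- possibly zero samples, possibly a bufferful. */
    /* DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                             DAQmx_Val_ContSamps, 1000); */

    DAQmxStartTask(task);
    DAQmxReadAnalogF64(task, -1, 10.0, DAQmx_Val_GroupByChannel,
                       data, 1000, &nRead, NULL);
    DAQmxClearTask(task);
}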

While the relevant info is findable in the help, it also often clearly remains unfound.  I got to wondering whether some changes in the DAQmx API could help.

 

I'll describe one approach, but would definitely be open to better solutions.  The goal is simply to find *some* way to reduce the likelihood of confusion for rookie DAQmx users.

 

I picture adding more polymorphic instances to both the DAQmx Timing and DAQmx Read VIs, so there can be distinct instances for Finite vs. Continuous sampling.

 

Further, I picture the task refnum carrying enough type info about this timing config that downstream DAQmx functions can "know" what kind of Timing was set up -- Finite, Continuous, on-demand (the default if DAQmx Timing was never called at all), etc.

 

Then when that task refnum is wired into DAQmx Read, the most appropriate instance of DAQmx Read would show up.  And the corresponding input parameter names, help, default values, and default value *behavior* can all be tailored to that particular instance of DAQmx Read.  For example, perhaps the "# samples" input should become a *required* input for Continuous Sampling tasks, to force a decision and encourage further inspection of the advanced help.

 

Don't know how feasible something like this is, but it's definitely something that regularly trips up newcomers to DAQmx.

 

 

-Kevin P

We really need a hard drive cRIO module for long-term monitoring and reliably storing large amounts of data remotely.

 

Hard-Drive-Module-Concept.png

 

Options:

 

1. Solid-state drive: fast, reliable, and durable, with extremely high data rates. It would be a high-priced module, but it could be made to handle extreme temperatures and harsh conditions. It should be available in different capacities, varying in price.

 

2. Conventional hard drive: this would give any user the ability to store large amounts of data, on the order of hundreds of gigabytes. This type should also come in varying storage capacities.

 

For this to be usable:

 

1. It would need to support a file system other than FATxx. The risk of data corruption due to power loss/cycling during recording makes anything that uses this file system completely unreliable and utterly useless for long-term monitoring. You can record for two months straight, then something goes wrong and you have nothing but a dead USB drive. Any other file system that is not so susceptible to corruption/damage due to power loss would be fine: Reliance, NTFS, etc.

 

2. You should be able to plug in multiple modules and RAID them together for redundancy. This would ensure data security and increase the usability of the cRIO for long-term remote monitoring in almost any situation.

 

 

Current cRIO storage issues:

We use NI products primarily in our lab and LabVIEW is awesome. I hope that in being very forward about our issues, we will not upset anyone or turn anyone away from any NI products. However, attempting to use a cRIO device for long-term remote monitoring has brought the current storage shortfalls to the forefront, and data loss has cost us dearly. These new hard drive modules would solve all the shortfalls of the current storage solutions for the cRIO. The biggest limitation of the cRIO for long-term monitoring at the moment is that it does not support a reliable file system on any external storage. The SD card module has extremely fast data transfer rates, but if power is lost while the SD card is mounted, not only is all the data lost, the card also needs to be physically removed from the device and reformatted with a PC. Even with the best UPS, this module is not suitable for long-term monitoring. USB drives have a much slower data transfer rate and are susceptible to the same corruption due to power loss.

 

When we have brought up these issues in the past, the solution offered was to set up a reliable power backup system. It seems that those suggesting this have never tried to use the device with a large application in a situation where they have no physical access to it, say 500 miles away. Unfortunately, the cRIO is susceptible to freezing or hanging and becoming so unresponsive over the network that it cannot be rebooted remotely at all (yes, even with the setting about halting all processes if TCP becomes unresponsive). We would have to send someone all the way out to the device to hit the reset button or cycle power. Programs freeze, OSes freeze or crash, drivers crash, stuff happens. This should not put the data being stored at risk.

 

I would put money on something like this already having been developed by NI. I hope you think the module is a good idea, even if you don't agree with all the problems I brought up. I searched around for an idea like this; my apologies if this is a repost.

 

 

I frequently need to check for an overtemperature condition, or any other health parameter, of the NI PCIe-6323 card in my application.

I am using the DAQmx ANSI C library for my application.

 

I have tried using the DAQmxGetReadOvertemperatureChansExist function, but found that it only applies to C Series devices.
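
Roughly what I tried, for reference (error handling abbreviated; myTask is assumed to be a running AI task on the PCIe-6323):

#include <NIDAQmx.h>
#include <stdio.h>

void checkOvertemp(TaskHandle myTask)
{
    bool32 overtemp = 0;
    if (DAQmxGetReadOvertemperatureChansExist(myTask, &overtemp) < 0) {
        /* On the PCIe-6323 this errors out: the attribute only applies
           to C Series devices. */
        char err[2048];
        DAQmxGetExtendedErrorInfo(err, sizeof(err));
        printf("%s\n", err);
    }
}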

 

Please help me out.

For me it is very hard to spot the connected channel in the MAX schematic below:

 

image.png

 

By changing the color of the connected channel, it is much easier to spot:

 

image.png

Just ran into a situation where I need to stream a lot of data to TDMS.  The only problem is that I need to store additional metadata with the channels.  I could go through all of the generated TDMS files and insert it after the fact, but this is kind of tedious.  I propose a way to add metadata to the channel.  My first thought was to use a variant input on Create DAQmx Channel, but some of the polymorphic instances already have really full connector panes.  So I am now thinking of just adding a property to the Channel Property Node that is a variant.  When logging to TDMS, the variant attributes can be put in the metadata of the channel.  Something similar could be done for the group, so that we can have additional group metadata.

 

Metadata that I'm currently thinking about includes sensor serial numbers and calibration data.  I'm sure there is plenty of other information we would like to store with the TDMS file.
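
To make it concrete in C-API terms: today's TDMS logging hookup on a task has no place to hang custom channel metadata; the commented call below is the kind of thing I'm asking for (that function is invented, it does not exist):

#include <NIDAQmx.h>

void configureLogging(TaskHandle task)
{
    /* This call exists today: stream the task straight to TDMS. */
    DAQmxConfigureLogging(task, "C:\\data\\run.tdms", DAQmx_Val_LogAndRead,
                          "MyGroup", DAQmx_Val_OpenOrCreate);

    /* Wished-for (hypothetical, does not exist today): attach arbitrary
       attributes that land in the TDMS channel metadata.
    DAQmxSetChannelMetadata(task, "Dev1/ai0",
                            "SensorSerialNumber", "SN-00123");
    */
}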

Hello,

 

For those of us who develop using DAQmx all the time, this might seem silly.  Nonetheless, I'm finding that users of my software are repeatedly having a tough time figuring out how to select multiple physical channels for applications that use DAQmx.  Here's what I'm talking about:

DAQmxChannels.png

 

Typically, a user of my universal logger application wishes to acquire from ai0:7, for example.  They attempt to hold down shift and select multiple channels, only to conclude that just one channel at a time may be acquired.  For some odd reason, nearly everyone fears the "Browse" option because they don't know what it does.
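
For reference, what those users are trying to produce is just DAQmx's range syntax in the physical channel string. In the C API (with "Dev1" as a placeholder) the whole selection is simply:

#include <NIDAQmx.h>

void addChannels(TaskHandle task)
{
    /* One string creates all eight channels, ai0 through ai7. */
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0:7", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
}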

 

 

While, as a developer, I have no problem whatsoever knowing to "Browse" in order to accomplish this, I was just asked how to do this by a user for literally the fifth time.  Thus, I'm faced with three choices: keep answering the same question repeatedly, develop my own channel selection interface, or ask if the stock NI interface may be improved.

 

I'm not sure of the best way to improve the interface, but the least painful way might be to simply display the "Browse" dialog on first click rather than displaying the drop-down menu.

 

Please, everyone, by all means feel free to offer better ideas.  What I do know for certain, though, is that average users around here continually have a tough time with this.

 

Thanks very much,

 

Jim

 

NI terminal block layout should be designed so that wiring can be done straight from the terminal to the wire trunking.

 

For example, the TBX-68 has 68 wire terminals aligned toward the inside of the terminal block. This means each wire has to make a tight curve to reach the wire trunking. Another problem with the TBX-68 is that the wires overlap heavily because of the terminal alignment.

 

Also, the cables from the terminal block to the DAQ device should be aligned to go directly into the wire trunking (not straight up).

 

terminalBlocks.jpg

 

 

Many CAN protocols require a byte in a cyclic message to be incremented each time the message is sent (this is often byte 0). I might have read somewhere that this is possible with VeriStand, but I am not using it. So when using only LabVIEW and the NI-XNET API, the only way to achieve this is to call the XNET Write function to manually set the value of this byte. But having to call the API each time the message should be sent removes all the benefits of cyclic messages. Moreover, LabVIEW can't guarantee the same level of speed and determinism (if the message is to be sent every 5 ms, for example).

Being able to configure a signal as an auto-incrementing counter would be a huge improvement. To me, this is a must-have, not a nice-to-have.
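
The workaround logic itself is trivial; the problem is where it has to run. A sketch in C (the frame type and send routine below are placeholders, not the NI-XNET API):

#include <stdint.h>

/* Placeholder frame type and send routine -- NOT the NI-XNET API. The
   point is only the counter arithmetic that the hardware/driver could
   do for us on every cyclic transmission. */
typedef struct { uint8_t data[8]; } CanFrame;
extern void send_frame(const CanFrame *frame);   /* hypothetical */

void send_cyclic(CanFrame *frame)
{
    /* Byte 0 is the rolling counter; wraps at 0xFF (some protocols wrap
       at 0x0F -- mask accordingly). */
    frame->data[0] = (uint8_t)(frame->data[0] + 1u);
    send_frame(frame);
    /* Doing this from application code every 5 ms defeats the purpose
       of hardware-timed cyclic messages. */
}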

I often use one DAQ device to test the basic functionality of another device and like to be able to quickly do this through test panels.  Unfortunately, MAX does not allow the user to open more than a single test panel at once.  The current workaround for this is to launch the test panels outside of MAX (see this KB).

 

It would be nice to have the same functionality when opening test panels in MAX.  Specifically, I would like to be able to do the following with a Test Panel open:

1.  Be able to navigate through MAX to do things like check device pinouts, calibration date, etc.

 

2.  Be able to move and/or resize the original MAX Window (it always seems to be blocking other applications that I want to view alongside the Test Panel)

 

3.  Open a test panel for a second (or third...) device.

 

It is nice that a workaround is already in place, but MAX really should have this behavior to begin with.

Based on this question, I would like to see a new category of events added to LabVIEW: MAX events.

 

This category could contain the following events:

- Hardware added

- Hardware removed

- Configuration changed

    - Scales

    - Channels

    - Tasks

 

If you know other events, please post them.

After consulting the community and raising a ticket with NI's support team, we have determined that there is no good way to programmatically remove old or disconnected remote systems.

DeleteDisconnectedSystems.png

I propose that the NI System Configuration palette be expanded to include a "Delete Disconnected Systems" VI. This would clear MAX's cache of disconnected remote systems, just like the manual method available through MAX.

Hello,

 

I recently discovered that the SCXI-1600 is not supported in 64-bit Windows.  From what NI has told me, it is possible for the hardware to be supported, but NI has chosen not to create a device driver for it.

 

I'm a bit perplexed by this position, since I have become accustomed to my NI hardware just working.  It's not like NI to just abandon support for a piece of hardware like this -- especially one that is still for sale on their website.

 

Please vote if you have an SCXI-1600 and might want to use it in a 64-bit OS at some time in the future.

 

Thanks,

Doug

 

 

It would be great if the full DAQmx library supported all NI data acquisition products on Windows, Mac OS X and Linux. The situation right now is too much of a hodge-podge of diverse drivers with too many limitations. There's an old, full DAQmx library that supports older devices on older Linux systems, but it doesn't look like it's been updated for years.  DAQmx Base is available for more current Linux and Mac OS systems, but doesn't support all NI devices (especially newer products).  DAQmx Base is also quite limited, and can't do a number of things the full DAQmx library can.  It's also fairly bloated and slow compared to DAQmx.  While I got my own application working under both Linux and Windows, there's a number of things about the Linux version that just aren't as nice as the Windows version right now.  I've seen complaints in the forums from others who have abandoned their efforts to port their applications from Windows to Mac OS or Linux because they don't see DAQmx Base as solid or "commercial-grade" enough.

 

I'd really like to be able to develop my application and be able to easily port it to any current Windows, Mac or Linux system, and have it support any current NI multi-function DAQ device, with a fast, capable and consistent C/C++ API.

 

Anyone else see this as a priority for NI R&D?

I'd like to know if a DAQmx Task has been triggered.  Perhaps via a task property?

 

triggered.png
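
In the meantime, the closest workaround I know of is polling the acquired-sample count: for a task that only acquires samples after its Start trigger, a nonzero count implies the trigger has fired (C API sketch; error handling omitted):

#include <NIDAQmx.h>

bool32 hasTriggered(TaskHandle task)
{
    uInt64 total = 0;
    DAQmxGetReadTotalSampPerChanAcquired(task, &total);
    return total > 0;   /* samples acquired => the Start trigger occurred */
}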

 

Thanks,

 

Steve K

I love simulated devices, but one major drawback is the static nature of the simulated data.  It would be awesome to have the ability to import real-world data for playback in simulated devices.  Essentially, analog input channels could take a waveform or a waveform file, and digital input could take a digital file; even a popup probe for the input, where the user can toggle digital lines or turn a knob on an analog line, would be very nice to have.  This would allow the programmer to capture real data that a system might expect to receive and then run the DAQ application in simulation, using DAQmx simulated devices, with the exact real-world data.
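
In API terms, I picture something like this hypothetical call (invented purely to illustrate the idea; nothing like it exists today, and the types follow NIDAQmx.h conventions):

/* Hypothetical: loop a prerecorded waveform file into a simulated
   device's analog input channel. */
int32 SimSetAIPlaybackFile(const char physicalChannel[],
                           const char waveformFilePath[],
                           bool32 loopPlayback);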

 

 

When I drop a DAQmx Task constant on my LabVIEW block diagram, I have the right-click menu option Generate Code >> Configuration.

 

This scripts out a subVI that creates the Task by adding each channel, with its channel-specific properties, sequentially, one channel at a time, through repeated calls to DAQmx Configure Channel.vi.

 

I have a hard time saving that VI output!  I have not really learned how to name it properly without using "BLEEP_BLEEP_BLEEP_Configuration_that_cannot_be_BLEEPING_scaled.vi".

 

I believe that an autoindexing loop would be much nicer.

 

My grandmother thanks you for improving my manner of speech.
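
In C-API terms, the equivalent of that autoindexing loop is configuring from a channel table instead of one scripted call per channel (all names and ranges below are placeholders):

#include <NIDAQmx.h>

typedef struct { const char *chan; float64 min, max; } ChanSpec;

static const ChanSpec specs[] = {
    { "Dev1/ai0", -10.0, 10.0 },
    { "Dev1/ai1",  -5.0,  5.0 },
    { "Dev1/ai2",  -1.0,  1.0 },
};

void buildTask(TaskHandle task)
{
    /* One loop handles any channel count; adding a channel means adding
       a table row, not rescripting the configuration VI. */
    for (int i = 0; i < (int)(sizeof specs / sizeof specs[0]); i++) {
        DAQmxCreateAIVoltageChan(task, specs[i].chan, "",
                                 DAQmx_Val_Cfg_Default,
                                 specs[i].min, specs[i].max,
                                 DAQmx_Val_Volts, NULL);
    }
}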

Hi!


The PCI Express bus is becoming more and more popular.
PCIe has a new type of interrupt: MSI/MSI-X.
Today the VISA driver is only able to work with old-style (legacy) interrupts.

 

Let me explain the main difference:

Legacy interrupts use the INTA/B/C/D signals (these signals existed on the PCI bus, but do not exist on the PCIe bus).
These signals (on PCIe, actually Message packets asserting INTA/B/C/D) are shared between all PCIe devices,
so the VISA driver spends a lot of time working out which device produced the interrupt.

 

An MSI interrupt is actually a Memory Write packet to a preallocated address.
In this case VISA already knows which device produced the interrupt.
An MSI interrupt can also carry different interrupt vectors inside the Memory Write packet, so it would be very helpful to get access to the vector value too.
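
For contrast, this is what interrupt handling through VISA looks like today (standard VISA C calls; the resource name is a placeholder and error handling is omitted). What's missing is a way to request MSI/MSI-X delivery and to read the vector value:

#include <visa.h>

void waitForInterrupt(void)
{
    ViSession   rm, vi;
    ViEventType type;
    ViEvent     event;

    viOpenDefaultRM(&rm);
    viOpen(rm, "PXI0::1::INSTR", VI_NULL, VI_NULL, &vi);

    /* Legacy-style: queue PXI interrupt events and block until one arrives. */
    viEnableEvent(vi, VI_EVENT_PXI_INTR, VI_QUEUE, VI_NULL);
    viWaitOnEvent(vi, VI_EVENT_PXI_INTR, 5000, &type, &event);
    /* ...no attribute today exposes an MSI/MSI-X vector for this event. */

    viClose(event);
    viClose(vi);
    viClose(rm);
}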

 

Requesting MSI/MSI-X interrupt support in the VISA driver.

 

Thanks a lot!