
dropping packets when hard drive light starts, NI Vision

I'm testing a 10 GigE camera using NI-MAX version 14 with NI Vision 14.5. I'm running at ~80 frames per second, 2048x2044 pixels, 10-bit depth. It runs fine for about 4 minutes; then the hard drive light comes on, packets are lost, the image goes to pieces, and after a few more seconds communication is lost and the program crashes. I've disabled paging without any effect. The same problem is not observed when using the vendor's test software (SpinView, from Point Grey, now FLIR). Any ideas? It looks like some sort of memory/buffering problem, but it doesn't happen when the vendor's software runs the camera.

Message 1 of 12

Hey Jim,

 

NI MAX isn't meant for long-term use. It works fine for setting up, configuring, and discovering hardware, but it isn't designed for extended acquisitions. I'm not saying 4 minutes is long, but compared to a couple seconds of grab or a few snaps, running for 4 minutes is most likely just pushing NI MAX beyond the way it's intended to run.

 

I'd recommend trying the same test within LabVIEW or VBAI, running a simple Grab.vi example from the Example Finder. If that reproduces the issue, it could be a few things. Most likely, LabVIEW is running out of memory if you're keeping the images in memory and not using 64-bit. It could be a networking issue. It could also be software corruption of some kind, but that's a low chance. Again, the simplest explanation is that you're using NI MAX in a way it isn't intended for, and you need to set up an application that meets your needs.
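
(Since the Grab example itself is graphical G code, here's a minimal textual sketch of the same test against the NI-IMAQdx and NI Vision C APIs. This is an illustration, not NI's shipped example: the camera name "cam0" and the frame count are placeholder assumptions, and error checking is omitted for brevity.)

/* Minimal continuous-grab test, roughly equivalent to the LabVIEW Grab example.
 * "cam0" is a placeholder camera name; a real test should check every
 * IMAQdxError return value. Build against the NI-IMAQdx and NI Vision SDKs. */
#include <stdio.h>
#include "niimaqdx.h"
#include "nivision.h"

int main(void)
{
    IMAQdxSession session;
    uInt32 bufNum;
    /* 10/12-bit mono data fits in an unsigned 16-bit image */
    Image *image = imaqCreateImage(IMAQ_IMAGE_U16, 0);

    IMAQdxOpenCamera("cam0", IMAQdxCameraControlModeController, &session);
    IMAQdxConfigureGrab(session);

    for (int i = 0; i < 80 * 60 * 4; i++) {   /* ~4 minutes at 80 fps */
        IMAQdxGrab(session, image, 1 /* wait for next buffer */, &bufNum);
        /* Process or discard the frame here; any sustained stall in this
         * loop is where packets start dropping. */
    }

    IMAQdxCloseCamera(session);
    imaqDispose(image);
    return 0;
}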

 

-Bear

Regards,

Ben Johnson
ʕง•ᴥ•ʔง
Message 2 of 12

Thanks for the reply. I run into the same problem when using LabVIEW. I was using NI MAX to troubleshoot, thinking it was one less layer of code (i.e., if the bug were in my LabVIEW code, using NI-MAX would take my code out of the equation). I'm running on a 64-bit machine with a solid state drive. The camera's full frame is 2048 x 2448 pixels with a 12-bit ADC. If I try to run full frame at ~80 fps, the hard drive lights up in about 2 minutes, I lose packets, and then I lose all communication with the camera. If I reduce the frame size to 348 x 2448, it will run indefinitely at 200 fps. At 356 x 2448 it loses communication within minutes. My working hypothesis is that some buffer in the underlying camera code fills up, it goes to write to disk, and the delay caused by the slow write operation causes the code to lose track of the network traffic. At the lower data transfer rate this doesn't happen. But I don't know how the underlying code works. When I try the same thing using the camera vendor's software, I don't have this problem, but that software never appears to write to the drive.

Message 3 of 12

Sounds like a performance/resource issue to me. Checking in NI MAX first was a good idea, though! Do you have jumbo frames enabled?

 

Are you connected directly to the camera, to limit the network's influence?

 

-Bear

Regards,

Ben Johnson
ʕง•ᴥ•ʔง
Message 4 of 12

Hi Jim,

 

I'm a software engineer for Vision R&D and have successfully tested one of FLIR's 10 GigE Oryx cameras with our software.

 

My first recommendation is that you absolutely need to upgrade Vision Acquisition Software, preferably to the latest version (18.0), or at least to 16.1 or newer. In the 16.1 release we did a complete overhaul of our universal driver (the only one that will work with 10 GigE NICs) to dramatically improve performance. Without this driver update, I would not expect that the 14.5 VAS driver can sustain an acquisition with a 10 GigE camera. It works for you when you decrease the frame size because you're not transmitting as much data. That older driver was just not made to deal with > 1 GigE cameras, and struggles to keep up with higher data rates.

 

With the update I think things should work on par with SpinView without you having to jump through any extra hoops. Be sure you also follow their setup tips (like how to configure your NIC for optimal performance) for your system, which I found to be super helpful. Also note that all the parameters it shows in SpinView should also be accessible in MAX, so you can just take the same steps there instead.

 

If you update and you're still having trouble sustaining an acquisition, please respond here with more details like:

  • Overall CPU load
  • Which NIC you're using and its settings

I'd like to echo Bear's post above that MAX isn't really intended for long term acquisition pushing that kind of data rate. There are some parameters that are fixed for MAX (it's using a Grab with only 10 internal buffers) that are not optimal for the kind of acquisition you're describing. It's good for configuration, but if you're testing the viability of using the driver with the camera, you should try to sustain the acquisition in more ideal settings. Namely:

  • Ring acquisition (if possible based on your pixel format needs)
    • In this acquisition type, the user allocates the internal buffers IMAQdx uses and can then access them directly without copying
    • Because Ring avoids that extra copy, it saves valuable CPU resources, which are really the bottleneck for 10 GigE acquisitions
    • See the Low-Level Ring example for how to use it (a rough C-API sketch follows this list)
    • The pixel format must be one that the driver doesn't have to decode or unpack
      • Mono, BGRa, or Bayer with decode disabled
  • Lots of buffers - start with a couple seconds' worth based on your frame rate
    • Having a low number of buffers (like the 10 that MAX uses) can starve the acquisition if the driver gets behind and isn't recycling buffers fast enough for the camera to acquire into
    • You just need enough to account for worst-case processing jitter in your application
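
(As a rough illustration of the Ring setup described above, here is a minimal sketch against the NI-IMAQdx C API rather than the LabVIEW VIs. The camera name, buffer count, pixel format, and frame size are placeholder assumptions; a real application should query the frame size and check every IMAQdxError return.)

/* Sketch of a low-level Ring-style acquisition with a deep buffer pool. */
#include <stdio.h>
#include <stdlib.h>
#include "niimaqdx.h"

#define NUM_BUFFERS 160  /* ~2 seconds' worth at 80 fps, per the advice above */

int main(void)
{
    IMAQdxSession session;
    uInt32 actualBuffer;

    IMAQdxOpenCamera("cam0", IMAQdxCameraControlModeController, &session);

    /* Continuous acquisition with a deep ring of internal buffers */
    IMAQdxConfigureAcquisition(session, 1 /* continuous */, NUM_BUFFERS);
    IMAQdxStartAcquisition(session);

    /* Assumed frame size: 2048 x 2448, 12-bit mono unpacked to 16 bits */
    uInt32 bufferSize = 2048 * 2448 * 2;
    void *frame = malloc(bufferSize);

    for (uInt32 i = 0; i < 10000; i++) {
        /* Copy the requested buffer out of the ring; with the LabVIEW
         * Low-Level Ring you can instead work with the buffers in place */
        IMAQdxGetImageData(session, frame, bufferSize,
                           IMAQdxBufferNumberModeBufferNumber, i, &actualBuffer);
        if (actualBuffer != i)
            printf("fell behind: wanted %u, got %u\n", i, actualBuffer);
        /* ...hand the frame to a consumer loop; don't write to disk inline... */
    }

    IMAQdxStopAcquisition(session);
    IMAQdxUnconfigureAcquisition(session);
    IMAQdxCloseCamera(session);
    free(frame);
    return 0;
}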

Let me know if you have any questions -- hope this helps.

-Katie

Message 5 of 12

I didn't study your code, but are you using a Producer/Consumer Design Pattern to put the data acquisition from the camera and the data writing to disk in separate, parallel While Loops?
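
(For what that pattern looks like outside of G code, here is a minimal runnable sketch in C with POSIX threads. The acquisition and disk-write calls are stand-in stubs, not IMAQdx calls; the point is only the bounded queue that decouples a fast producer loop from a slow consumer loop.)

#include <pthread.h>
#include <stdlib.h>

#define QUEUE_DEPTH 256
#define NUM_FRAMES  1000

/* Stand-ins for the real acquisition (e.g. a grab) and the disk write */
static void *acquire_frame(void) { return malloc(16); }
static void  write_frame_to_disk(void *f) { free(f); }

static void *queue_buf[QUEUE_DEPTH];
static int head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

static void enqueue(void *frame) {
    pthread_mutex_lock(&lock);
    while (count == QUEUE_DEPTH)              /* if the producer ever blocks  */
        pthread_cond_wait(&not_full, &lock);  /* here, the queue is too small */
    queue_buf[tail] = frame;
    tail = (tail + 1) % QUEUE_DEPTH;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

static void *dequeue(void) {
    pthread_mutex_lock(&lock);
    while (count == 0)
        pthread_cond_wait(&not_empty, &lock);
    void *frame = queue_buf[head];
    head = (head + 1) % QUEUE_DEPTH;
    count--;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&lock);
    return frame;
}

static void *producer(void *arg) {            /* fast acquisition loop */
    for (int i = 0; i < NUM_FRAMES; i++) enqueue(acquire_frame());
    return NULL;
}

static void *consumer(void *arg) {            /* slow disk-writing loop */
    for (int i = 0; i < NUM_FRAMES; i++) write_frame_to_disk(dequeue());
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}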

 

The other thing is the transfer rate. 2048 x 2448 x 80 x 12 (assuming your transmission is only 12, not 16, bits wide) is almost 5 Gb/s! I hope you're not trying to push that through ordinary gigabit Ethernet ... Incidentally, even 348 x 2448 x 200 x 12 is over 2 Gb/s.
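
(Spelling out that arithmetic as a quick check:)

/* Back-of-the-envelope data-rate check for the two cases above */
#include <stdio.h>

int main(void)
{
    double full = 2048.0 * 2448 * 80 * 12;   /* full frame, 80 fps, 12 bits/pixel */
    double roi  = 348.0  * 2448 * 200 * 12;  /* reduced frame, 200 fps */
    printf("full frame:  %.2f Gb/s\n", full / 1e9);  /* ~4.81 Gb/s */
    printf("reduced ROI: %.2f Gb/s\n", roi  / 1e9);  /* ~2.04 Gb/s */
    return 0;
}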

 

If the Camera is set up to DMA into memory, then the bandwidth calculations might not matter.  It then becomes a task of handling the buffers expeditiously -- Producer/Consumer can be your friend ...

 

Bob Schor

Message 6 of 12

No luck. I've updated to NI Vision 17.5. Ring buffers were already implemented. No change in behavior: it still drops packets, then disconnects.

 

Windows 7, 64-bit, Xeon E5-2650 v4, 2.2 GHz

64 GB RAM

solid state drive

no other network connection

no firewalls

jumbo packets enabled

Tehuti NIC TN9710P

saw same behavior with a StarTech ST10GPEXNPI NIC

 

Same behavior using NI-MAX or custom LabVIEW interface

 

Message 7 of 12

Hi Jim,

 

That's too bad you're still having the issue.

 

Did you update the Rx Buffer Size to the max value in the Tehuti NIC settings?

 

What does your CPU utilization look like? If you look in Performance Monitor do you see a spike in CPU utilization that correlates to your dropped packets? Are you seeing that you don't have issues at all (no dropped packets) but then some event happens and things go haywire, or is it more that you're dropping packets the whole time and eventually things grind to a halt? If things work well in SpinView, does the Performance Monitor graph look much different in that case? Do you see any packet loss or packet resends in SpinView?

 

What are your power settings? I'm wondering if there's some power saving mechanism happening that is triggering this issue. Try setting your plan to high performance if it's not already. If that doesn't help, try going to your advanced settings for power options and tweaking the processor power management settings.

 

Hope this helps,

Katie

Message 8 of 12

Hi Katie,

 

You wrote:

'My first recommendation is that you absolutely need to upgrade Vision Acquisition Software, preferably to the latest version (18.0), or at least to 16.1 or newer. In the 16.1 release we did a complete overhaul of our universal driver (the only one that will work with 10 GigE NICs) to dramatically improve performance.'

 

What is the name of the Vision Acquisition Software universal driver? Does it work with any 10 GigE NIC, or does it need a particular card? This is for a Windows 10 system with the latest Vision Acquisition Software as of 7/11/2023...

Thanks for your time and efforts.

Message 9 of 12

I believe the phrase (by NI) "our universal driver that will work with 10 GigE NICs" refers to IMAQdx, the driver that works with third-party cameras that (more or less) follow the GenICam standard. This includes the camera in your PC and most webcams. It works over USB, TCP/IP, and FireWire (or did a few years ago).
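
(IMAQdx also ships a C API alongside the LabVIEW VIs. As a hedged sketch — the IMAQdxCameraInformation field names below are from memory of niimaqdx.h, so verify them against your installed header — enumerating whatever cameras the driver can see looks roughly like this:)

/* List the cameras IMAQdx can see, whatever the bus (GigE, USB, etc.) */
#include <stdio.h>
#include "niimaqdx.h"

int main(void)
{
    uInt32 count = 0;

    /* First call with a NULL array to get the camera count */
    IMAQdxEnumerateCameras(NULL, &count, 1 /* connected cameras only */);

    IMAQdxCameraInformation info[64];
    if (count > 64)
        count = 64;
    IMAQdxEnumerateCameras(info, &count, 1);

    for (uInt32 i = 0; i < count; i++)
        printf("%s: %s %s\n", info[i].InterfaceName,
               info[i].VendorName, info[i].ModelName);
    return 0;
}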

 

Bob Schor

Message 10 of 12