04-29-2020 08:36 PM
I recently wanted to buy an Nvidia graphics card for FFT acceleration. Does anyone know which models the LabVIEW GPU Analysis Toolkit supports? The picture shows the workstation configuration I chose; it has two Nvidia graphics cards in it.
04-29-2020 10:25 PM
The Quadro P2200 appears to be a good GPU for CUDA, as NVIDIA rates it pretty highly on its "Compute Score" for CUDA. But it is not clear to me that the LabVIEW GPU Analysis Toolkit is even really supported anymore. I put in a support request to NI on the subject and got no response. After trying to get it working for a week or so, I have switched to trying to learn to write C++ CUDA code instead.
04-30-2020 01:42 AM - edited 04-30-2020 01:44 AM
From what I've learned, you should be able to use any hardware that came out after 2016; Nvidia made their GPUs backwards compatible with earlier software.
04-30-2020 02:01 AM
Hi,
If you have Python programming skills, you can use the various GPU-accelerated libraries from the Python ecosystem and call the Python script from LabVIEW (available since LabVIEW 2018).
I have had decent results with the Numba library (http://numba.pydata.org/), which is quite well documented. You can find quite a few examples on GitHub.
It is probably not as fast as C++ CUDA, but it can be handy for prototyping.
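For instance, here is a minimal sketch of the kind of script you could call from the LabVIEW Python Node (untested; the file name gpu_scale.py and the function scale_array are placeholder names I made up, but the numba.cuda calls are the library's real API):

# gpu_scale.py -- hypothetical module; call scale_array() from the
# LabVIEW Python Node (LabVIEW 2018+). Requires numba and a CUDA-capable GPU.
import numpy as np
from numba import cuda

@cuda.jit
def _scale_kernel(data, factor, out):
    # One thread per element; guard against threads past the end of the array.
    i = cuda.grid(1)
    if i < data.size:
        out[i] = data[i] * factor

def scale_array(data, factor):
    """Multiply a 1D array by a scalar on the GPU and return the result."""
    data = np.ascontiguousarray(data, dtype=np.float32)
    d_data = cuda.to_device(data)            # host -> device copy
    d_out = cuda.device_array_like(d_data)   # output buffer on the GPU
    threads = 256
    blocks = (data.size + threads - 1) // threads
    _scale_kernel[blocks, threads](d_data, factor, d_out)
    return d_out.copy_to_host()              # device -> host copy

if __name__ == "__main__":
    x = np.arange(8, dtype=np.float32)
    print(scale_array(x, 2.0))               # [ 0.  2.  4. ... 14.]

The Python Node can pass a numeric array into scale_array and get the result back as an array.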
04-30-2020 08:44 AM
This overlaps with some discussion I had over here:
The P2200 is a good card. I think it's great for getting you started.
In considering a new card, you'll first want to think about how much GPU memory you'll need. The CUDA FFT documentation has some details here. (However, you can have the input/output data reside in page-locked CPU RAM, and/or set up the FFT plan work areas in CPU RAM, and/or use multiple GPU cards. I've personally been able to use the first two workarounds to process large 3D FFTs for image volume deconvolution.)
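As a rough sizing sketch (my own back-of-envelope arithmetic, not numbers from the cuFFT docs): a single-precision complex 3D FFT needs at least the input and output arrays on the GPU, plus a cuFFT work area that I'm assuming here is on the order of the data size. The real figure comes from cufftEstimate3d / cufftGetSize for your actual plan.

# Back-of-envelope GPU memory estimate for a single-precision 3D FFT.
# The "work area roughly equal to one copy of the data" factor is an assumption.
nx, ny, nz = 1024, 1024, 512          # example volume dimensions
bytes_per_sample = 8                  # complex64: 4-byte real + 4-byte imag
data_bytes = nx * ny * nz * bytes_per_sample
total_bytes = data_bytes * 3          # input + output + ~1x work area
print(f"~{total_bytes / 2**30:.1f} GiB of GPU memory")   # ~12.0 GiB here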
Secondly, you'll want to think about what version of CUDA you'll need (older cards won't have the latest features, or be supported by the latest version of CUDA), which means looking at what Compute Capability you need. Especially important for FFT work will be floating-point performance.
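If Python is already in the mix, checking what a card reports is quick (this uses the real numba.cuda API; the output values are just examples):

# Query the attached GPU's name and Compute Capability via Numba.
from numba import cuda

dev = cuda.get_current_device()
print(dev.name)                  # e.g. b'Quadro P2200'
print(dev.compute_capability)    # e.g. (6, 1) for Pascal-generation cards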
In general, the TITAN cards are specifically designed for heavy CUDA workloads, but they are also expensive. Clock speeds and processing power for different data types are listed here. Also factor in some overhead for transferring the data to the card and transferring it back.
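That transfer overhead is easy to measure for yourself; here's a rough sketch using Numba (wall-clock timing for simplicity, so treat the numbers as ballpark):

# Rough measurement of host<->device transfer time for a large array.
import time
import numpy as np
from numba import cuda

x = np.random.rand(64 * 1024 * 1024).astype(np.float32)   # 256 MB of data

t0 = time.perf_counter()
d_x = cuda.to_device(x)          # host -> device
cuda.synchronize()               # wait for the copy to finish
t1 = time.perf_counter()
_ = d_x.copy_to_host()           # device -> host
cuda.synchronize()
t2 = time.perf_counter()

print(f"upload:   {t1 - t0:.3f} s")
print(f"download: {t2 - t1:.3f} s")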
Thirdly, you may want to make sure you can use a higher-level library (like that Python library), if you'd rather abstract all of the CUDA calls into something easier to start with.