GPU Computing

Medical Image Processing on a GPU from LabVIEW?

Hi Mathguy,

I would like to see if you would mind commenting on a use of the GPU Analysis Toolkit 2012 with LabVIEW 2012 for medical image processing: http://choi.bli.uci.edu/software/realtime_lsi.html

I have used this program extensively, but modified for hardware other than what it was originally programmed for.  Unfortunately, the authors would not divulge the source code for the DLL that runs the image processing kernel on the GPU, so I now have an excellent opportunity to work my way toward 1) understanding how they wrote the DLL and 2) learning how to use the GPU toolkit to accomplish the same thing.  Really, learning how to perform ANY simple operation on the image matrix using the GPU would be outstanding at this point.

In the article they wrote that generally covers what they did, they talk about doing a memory allocation, a memory copy, and then a memory deallocation.  Andrey Dmitriev seems to cover a similar approach here using LabVIEW CVI, but some of the code is compiled in CUDA and some is not, and it is an extension of the CUDA sample code.  Not being a C programmer, I have found digging into that somewhat challenging.  I have at least learned how to compile a DLL in VS2010 here and then call it from LabVIEW, so I am making some progress.
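
To check my understanding of that allocate / copy / compute / copy back / free sequence, here is the kind of minimal DLL entry point I am hoping to build and call from the Call Library Function Node. This is not the authors' code; the kernel, the exported function name, and the scale factor are placeholders I made up just to illustrate the call sequence:

#include <cuda_runtime.h>

/* Trivial elementwise kernel: multiply every pixel by a constant. */
__global__ void scalePixels(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

/* Exported with C linkage so LabVIEW's Call Library Function Node can call it. */
extern "C" __declspec(dllexport)
int ProcessImage(float *hostPixels, int width, int height, float factor)
{
    int n = width * height;
    size_t bytes = n * sizeof(float);
    float *devPixels = NULL;

    if (cudaMalloc((void **)&devPixels, bytes) != cudaSuccess)
        return -1;                                    /* allocation failed */

    cudaMemcpy(devPixels, hostPixels, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scalePixels<<<blocks, threads>>>(devPixels, n, factor);

    cudaMemcpy(hostPixels, devPixels, bytes, cudaMemcpyDeviceToHost);
    cudaFree(devPixels);

    return (int)cudaGetLastError();                   /* 0 means success */
}

If I build that with nvcc into a DLL and configure the CLN to pass an array data pointer for hostPixels, is that roughly the shape of what their DLL is doing?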

One very nice benefit of the image processing code from the MTI lab is that there is no need for hardware to be installed, as a random speckle image can be selected to test whether everything works.

Thanks so much for your time.  If I make some progress I will be sure to post it here.

Message 1 of 2

I moved your original post here as others have asked this question too. I'll comment on the specifics of your post first and then add my own ideas/recommendations. For those who aren't familiar w/ my background, I worked on the NI Vision team for several years and am (co)author on at least 10 of NI's image processing patents. For the last decade, I've worked on LabVIEW's core development team concentrating on numerical computation and mathematical solutions.

- While I'm not familiar with MTILab's real-time solution, I would need to know more about the performance of their kernel(s) to offer a firm opinion on what they might be using internally. Knowing the frames/s they can process w/ given hardware could give some indication of what is being done.

- Their reference to memory management is insufficient, as image processing on a GPU is done optimally w/ pixel data (integer or single-precision). Pixels are managed using textures, and that determines (a) how the memory is managed between host and device and (b) how blocks of operations are performed. In the case of (b), more complicated numerics can be accelerated by formulating the expression in terms of intrinsic operators which can be incredibly fast for textures but relatively slow on normal data buffers.

- To date, the solutions I've seen using LabVIEW CVI cannot be reproduced in LabVIEW G w/out using the GPU Analysis Toolkit. This is because of how GPU kernels exported from external libraries are called from the environment. In CVI, you have full control over which thread is used to call the kernel. This is critical for NVIDIA CUDA, as the device's context must be active on that thread for the device call to work.

In LabVIEW (using the Call Library Function node, or CLN), the function call can be made on any one of a number of threads and you have almost no control over that. The few choices you do have (e.g. configuring the CLN to use the UI thread) carry dramatic performance implications that are not easily uncovered w/out extensive, careful benchmarking. This is a primary reason why the GPU Analysis Toolkit is valuable: the LVGPU SDK, which is part of the toolkit, offers an architecture for making GPU kernel calls safe and reliable from a G diagram.
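
To make the threading issue concrete, here is one defensive pattern a DLL author can use with the CUDA driver API so an exported call works no matter which thread LabVIEW happens to pick. This is only a sketch of the general idea, not how the LVGPU SDK handles it internally; 'g_ctx' and the function name are placeholders:

#include <cuda.h>

static CUcontext g_ctx;   /* created once at DLL initialization with cuCtxCreate() */

extern "C" __declspec(dllexport)
int RunKernelFromAnyThread(float *hostData, int n)
{
    CUcontext popped;

    /* Attach the context to whichever thread LabVIEW used for this call. */
    if (cuCtxPushCurrent(g_ctx) != CUDA_SUCCESS)
        return -1;

    /* ... device allocation, host-to-device copy, kernel launch, and
       device-to-host copy would go here ... */

    /* Detach so the context isn't left bound to a thread LabVIEW may
       never use for this DLL again. */
    cuCtxPopCurrent(&popped);
    return 0;
}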

Now for my more general comments:

  • The texture is the key to efficient and effective image processing on a GPU. Without it, you just won't get the real performance. Textures use designated hardware inside the GPU chip to optimize performance and they are very good at what they're designed for.
  • The GPU Analysis Toolkit does not supply out-of-the-box support for a texture data type. The reason is indirectly tied to the CUDA array data type, which does not support double-precision floating-point. The most common floating-point type used in LabVIEW is DBL. (see below for more details)
  • A texture is not a block of memory but rather an abstraction for how to interpret a given block of memory. You can bind a texture to any device memory buffer, but it is more efficient to use a CUDA array. As textures exist to service image data, there's no reason to have a double-precision texture or CUDA array. (See the sketch after this list.)
  • It is possible using the LVGPU SDK to add a texture data type to LabVIEW and any image processing functions (either from CUDA libraries or your own custom CUDA functions). Since all of the CUDA function wrappers for FFT and BLAS functions are examples of how to call a 'custom' GPU function, the more difficult task is adding the texture data type.
  • The KB Customizing GPU Computing Using the LabVIEW GPU Analysis Toolkit (http://digital.ni.com/public.nsf/allkb/82D5EDC19AF79F0786257A4C007417B1) includes support documentation and template code to help add new functions and/or data types using the toolkit.
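
To show what binding a texture to a CUDA array looks like, here is a rough host-side sketch using the CUDA runtime's texture object API, plus a trivial kernel that reads through it. This is not code from the toolkit or the LVGPU SDK; the function names, the clamp/point-filter settings, and the row-averaging kernel are assumptions chosen only to show the mechanics with single-precision (SGL) pixel data:

#include <cuda_runtime.h>

/* Copy SGL pixel data into a CUDA array and create a texture object over it.
   The CUDA array is the storage layout the texture hardware is designed for. */
cudaTextureObject_t makePixelTexture(const float *hostPixels,
                                     int width, int height,
                                     cudaArray_t *outArray)
{
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaMallocArray(outArray, &desc, width, height);
    cudaMemcpy2DToArray(*outArray, 0, 0, hostPixels,
                        width * sizeof(float),   /* source pitch in bytes */
                        width * sizeof(float),   /* row width in bytes    */
                        height, cudaMemcpyHostToDevice);

    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypeArray;
    resDesc.res.array.array = *outArray;

    cudaTextureDesc texDesc = {};
    texDesc.addressMode[0] = cudaAddressModeClamp;  /* hardware handles borders */
    texDesc.addressMode[1] = cudaAddressModeClamp;
    texDesc.filterMode     = cudaFilterModePoint;   /* no interpolation */
    texDesc.readMode       = cudaReadModeElementType;

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &resDesc, &texDesc, NULL);
    return tex;
}

/* Trivial kernel that fetches neighboring pixels through the texture; the
   0.5f offsets address pixel centers and out-of-range reads are clamped. */
__global__ void averageRows(cudaTextureObject_t tex, float *out,
                            int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float left   = tex2D<float>(tex, x - 0.5f, y + 0.5f);
    float center = tex2D<float>(tex, x + 0.5f, y + 0.5f);
    float right  = tex2D<float>(tex, x + 1.5f, y + 0.5f);
    out[y * width + x] = (left + center + right) / 3.0f;
}

Older CUDA releases expose the same idea through texture references bound with cudaBindTextureToArray rather than texture objects, and the array and texture should be released afterward (cudaDestroyTextureObject, cudaFreeArray). Either way, the point is the same: the pixel data lives in a CUDA array and the texture is only the view the hardware uses to read it.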

You mention that you are not a C programmer. For CUDA, this is a big disadvantage because you cannot source a GPU function in G. This is not because CUDA is complicated for a C programmer; it is in fact very well suited to those who understand C. Your hurdle is the initial C implementation, which can/will significantly impact the deployed performance once the code is compiled for a GPU.

I wish you luck!

Darren

Message 2 of 2