GPU Computing


How to perform an easy 100+100=200 with LabVIEW GPU?

Hello:

I am very interested in LabVIEW GPU computing, but I do not know how to get started.

Could somebody tell me (or give me an example) how to perform the easiest task, 100+100=200, on the GPU?

Any reply would be greatly appreciated.

My version is LabVIEW 2012 with CUDA 5.0.

Best wishes.

Message 1 of 3

Although your question is clear, the answer is quite involved. I'll attempt to summarize:

  1. Processors like the GPU are not suited to operations that involve small amounts of data. For this reason, users typically perform computations on GPUs whose inputs are large arrays (e.g., a million elements). The computations on the data must be both substantial and able to run in parallel. This means that algorithms requiring random access to data elements are not suitable, or must be redesigned, to deploy on a many-core processor like a GPU.

  2. The GPU Analysis Toolkit is designed to facilitate calling an *existing* GPU function exported from an external library. It is not possible to define a function in G that gets compiled to run on a GPU.

  3. You can learn about creating a GPU function for execution on an NVIDIA GPU by visiting NVIDIA's website (http://www.nvidia.com/cuda). NVIDIA's software/hardware architecture for doing this is called CUDA. It includes OS binaries, code examples, and documentation for doing GPU computing on their hardware. (A minimal kernel sketch appears after this list.)

  4. The examples that ship with the GPU Analysis Toolkit perform computations on NVIDIA GPUs using CUDA. To run the LabVIEW GPU examples, you will need CUDA installed on your system. The website linked above will help with this as well.

  5. The GPU Analysis Toolkit does not include wrappers for every function in NVIDIA's publicly available libraries. Currently the toolkit ships with support for the BLAS Level 3 functions in CUBLAS and the 1D & 2D FFT functions from CUFFT. Each of these function wrappers is an example of how to call a GPU function in an external library. So, if you wanted to call a function in CUBLAS that was not already available, you could use the wrapper for a similar function as a template to build your own.
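To make #3 concrete, here is a minimal sketch of what a custom CUDA kernel source file looks like. Everything in it (the file name, the kernel name AddArraysKernel) is illustrative rather than part of any NI or NVIDIA library:

    // add_arrays.cu -- illustrative sketch, not a shipping example.
    #include <cuda_runtime.h>

    // Each GPU thread adds one pair of elements; this is why large
    // input arrays are needed to keep thousands of threads busy (see #1).
    __global__ void AddArraysKernel(const float* a, const float* b,
                                    float* c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)              // threads past the array end do nothing
            c[i] = a[i] + b[i];
    }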

This covers the more general information related to your question. Now let's get a bit more specific. To simplify the discussion, let's suppose your question was how to add two arrays of numeric data on an NVIDIA GPU. Because array addition is not already available as a function in a CUDA library (at least, not for floating-point data types), it would be considered a custom GPU function.

You would need to write that function in CUDA (see #3 above). Once you had the function exported from a library and callable from LabVIEW (i.e., through a C function call API), you could write a function wrapper in G to call it from a LabVIEW diagram. A sketch of what such an exported function might look like follows.
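The exported entry point below launches the kernel from the earlier sketch using only standard CUDA runtime calls (cudaMalloc, cudaMemcpy, cudaFree). The name AddArrays and its signature are hypothetical, and the __declspec(dllexport) line assumes a Windows DLL; a LabVIEW diagram would reach the function through a Call Library Function Node configured with the matching C prototype:

    // Continues add_arrays.cu. Build on Windows with, e.g.:
    //   nvcc -shared -o AddArrays.dll add_arrays.cu
    extern "C" __declspec(dllexport)
    int AddArrays(const float* a, const float* b, float* c, int n)
    {
        float *dA, *dB, *dC;
        size_t bytes = (size_t)n * sizeof(float);

        // Allocate GPU memory and copy the inputs to the device.
        cudaMalloc(&dA, bytes);
        cudaMalloc(&dB, bytes);
        cudaMalloc(&dC, bytes);
        cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);

        // Launch one thread per element, 256 threads per block.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        AddArraysKernel<<<blocks, threads>>>(dA, dB, dC, n);

        // Copy the result back and release GPU memory.
        cudaMemcpy(c, dC, bytes, cudaMemcpyDeviceToHost);
        cudaFree(dA);
        cudaFree(dB);
        cudaFree(dC);
        return (int)cudaGetLastError();
    }

On the LabVIEW side, you would wire your two input arrays and a pre-allocated output array to the Call Library Function Node; the wrappers that ship with the toolkit illustrate this same calling pattern for the CUBLAS and CUFFT functions.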

Documentation for how to do this is posted online. The discussion thread https://decibel.ni.com/content/thread/13771 links to the documents you would need and includes other useful information on the topic.

I certainly don't mean to discourage you from attempting GPU computing, but keep in mind what the GPU Analysis Toolkit does: it helps you integrate your GPU function into a LabVIEW application. It does not let you create a GPU function and deploy it exclusively from G.

Good luck!

Message 2 of 3

Thank you very much for your kind reply. I will figure it out.

Message 3 of 3