04-12-2013 05:17 AM
Hello:
I am very interested in LabVIEW GPU computing, but I do not know how to get started.
Could somebody tell me (or give me an example) how to perform the easiest task, 100 + 100 = 200, on the GPU?
Any reply would be greatly appreciated.
My version is LabVIEW 2012, CUDA 5.0...
Best wishes...
04-18-2013 12:56 PM
Although your question is clear, the answer is quite involved. I'll attempt to summarize:
This covers the more general information related to your question. Now let's get a bit more specific. To simplify the discussion, let's suppose your question was how to add two arrays of numeric data on an NVIDIA GPU. Because array addition is not already available as a function in a CUDA library (at least, not for floating-point data types), it would be considered a custom GPU function.
You would need to write the function based on CUDA (see #3 above). Once you had your function exported from a library and callable from LabVIEW (i.e., via a C function call API), you could then write a wrapper function in G to call the new function from a LabVIEW diagram.
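To give you a feel for the CUDA side, here is a minimal sketch of what such a custom function might look like. This is not code from the toolkit or its documentation: the kernel and the exported AddArrays entry point are hypothetical names I made up for illustration, and the __declspec(dllexport) keyword assumes you are building a Windows DLL with nvcc (on Linux you would build a shared library and drop that keyword).

// add_arrays.cu -- hypothetical example of a custom GPU function
#include <cuda_runtime.h>

// Kernel: element-wise addition of two single-precision arrays
__global__ void addKernel(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

// Exported host function with a plain C interface so it can be
// called from LabVIEW once compiled into a DLL/shared library.
extern "C" __declspec(dllexport)
int AddArrays(const float *a, const float *b, float *c, int n)
{
    float *dA = NULL, *dB = NULL, *dC = NULL;
    size_t bytes = (size_t)n * sizeof(float);

    // Allocate device memory and copy the inputs to the GPU
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);
    cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    addKernel<<<blocks, threads>>>(dA, dB, dC, n);

    // Copy the result back and release device memory
    cudaMemcpy(c, dC, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dA);
    cudaFree(dB);
    cudaFree(dC);

    // Report any CUDA error as a plain integer status code
    return (int)cudaGetLastError();
}

A flat C signature like this (array pointers plus a length, returning an integer status) is the kind of interface you can then wrap in G and call from the diagram, which is where the toolkit and the documentation linked below come in.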
Documentation for how to do this is posted online. The discussion thread https://decibel.ni.com/content/thread/13771 provides links to the documentation you would need and includes useful information on the topic.
I certainly don't mean to discourage you from attempting GPU computing, but be aware that the GPU Analysis Toolkit helps you integrate your function into a LabVIEW application. It does not let you create a GPU function and deploy it exclusively from G.
Good luck!
04-18-2013 08:49 PM
Thank you very much for your kind reply, I will figure it out.