10-17-2016 05:19 AM
Hi,
I am starting a project which involves least-squares fitting using LabVIEW 2016.
It is important that I use GPU-enabled operations, since there are several million points that need to be fitted simultaneously.
(I record around 40 images, each around 2000x2000 pixels, and the consecutive images then need to be fitted on a pixel-by-pixel basis.)
I had the idea that I could use the LVCUDA toolkit, since the fit itself is a straightforward (but tedious) sequence of basic operations.
However, after looking through the documentation, I do not think I can compute Hadamard products (needed to calculate x^2, x^3, etc. element-wise).
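For reference, the Hadamard product itself is a trivial element-wise loop; the sketch below (plain C++, function names my own) shows the operation that would need to become a one-line CUDA kernel, with one thread per index. As a possible workaround worth checking, cuBLAS has `cublasSdgmm`, which computes C = diag(x)*A and reduces to an element-wise vector product when A is a single column, though I don't know whether LVCUDA exposes it.

```cpp
#include <vector>
#include <cstddef>

// Element-wise (Hadamard) product: out[i] = a[i] * b[i].
// On the GPU this becomes a one-line kernel body, with each
// thread handling one index i.
std::vector<float> hadamard(const std::vector<float>& a,
                            const std::vector<float>& b)
{
    std::vector<float> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] * b[i];
    return out;
}

// The powers x^2, x^3, ... needed for a polynomial fit are just
// repeated Hadamard products of x with itself.
std::vector<float> elementwise_pow(const std::vector<float>& x, int p)
{
    std::vector<float> out(x.size(), 1.0f);
    for (int k = 0; k < p; ++k)
        out = hadamard(out, x);
    return out;
}
```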
So, I believe I need to go down the route suggested by Andrey back in 2009.
So far, I can successfully query the GPU card, but I do not have access to CVI, so I am trying to proceed using Visual Studio.
Could anyone please give some advice on how to go about this?
As far as I can see, I need to write the program in C++ and build it as a DLL.
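In outline that is my understanding too: mark the functions LabVIEW should see with extern "C" and, on Windows, __declspec(dllexport); build the project as a DLL; then call the exports from a Call Library Function Node. A minimal sketch of the C++ side (the function name and signature are placeholders of my own, not from LVCUDA):

```cpp
#include <cstdint>

// LabVIEW's Call Library Function Node needs plain C linkage:
// extern "C" prevents C++ name mangling, and on Windows
// __declspec(dllexport) puts the symbol in the DLL's export table.
#ifdef _WIN32
#define LV_EXPORT extern "C" __declspec(dllexport)
#else
#define LV_EXPORT extern "C"   // so the sketch also compiles elsewhere
#endif

// Hypothetical entry point: scale an array in place.
// In LabVIEW, configure the first parameter as an
// "Array Data Pointer" and pass the length explicitly.
LV_EXPORT int32_t ScaleArray(float* data, int32_t n, float factor)
{
    if (data == nullptr || n < 0)
        return -1;               // simple error code back to LabVIEW
    for (int32_t i = 0; i < n; ++i)
        data[i] *= factor;
    return 0;                    // success
}
```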
Thank you in advance!
Carl
10-17-2016 09:29 AM
During my PhD, we followed the method of Andrey Dmitriev, and published our code here: https://engineering.ucsb.edu/~saleh/
If you have specific questions or get stuck, don't hesitate to ask (no guarantees that I'll be able to answer, though). There is a lot of wrangling to get the Visual Studio side to work; I remember digging through guides of various kinds online. It might be quicker to just take our published code, delete all the functions, and keep only the project settings, to save time. That is, if you can get it to compile in the first place. Then there will also be wrangling to make sure the DLL call in LabVIEW is expecting the right number and types of parameters.
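To echo that last point: the classic crash comes from a mismatch between the C signature and the Call Library Function Node configuration (calling convention, integer widths, how arrays are passed). A sketch of the kind of export I would aim for, doing one closed-form linear fit per pixel across the image stack (the names and layout are illustrative, not from our published code):

```cpp
#include <cstdint>
#include <cstddef>

#ifdef _WIN32
#define LV_EXPORT extern "C" __declspec(dllexport)
#else
#define LV_EXPORT extern "C"
#endif

// One linear fit y ~ a*t + b per pixel across n_frames images,
// with frames stored frame-by-frame: frames[t * n_pixels + p].
// In the Call Library Function Node, each float* would be an
// "Array Data Pointer" (4-byte float) and each int32_t a signed
// 32-bit numeric; the calling convention must match the DLL.
LV_EXPORT int32_t FitLinearPerPixel(const float* frames,
                                    int32_t n_frames,
                                    int32_t n_pixels,
                                    float* slope,      // n_pixels outputs
                                    float* intercept)  // n_pixels outputs
{
    if (!frames || !slope || !intercept || n_frames < 2 || n_pixels < 1)
        return -1;

    // Least squares with t = 0, 1, ..., n_frames-1 is the same for
    // every pixel, so the sums over t can be precomputed once.
    const float n = static_cast<float>(n_frames);
    float sx = 0.0f, sxx = 0.0f;
    for (int32_t t = 0; t < n_frames; ++t) {
        sx  += static_cast<float>(t);
        sxx += static_cast<float>(t) * static_cast<float>(t);
    }
    const float denom = n * sxx - sx * sx;

    for (int32_t p = 0; p < n_pixels; ++p) {
        float sy = 0.0f, sxy = 0.0f;
        for (int32_t t = 0; t < n_frames; ++t) {
            const float y = frames[static_cast<std::size_t>(t) * n_pixels + p];
            sy  += y;
            sxy += static_cast<float>(t) * y;
        }
        slope[p]     = (n * sxy - sx * sy) / denom;
        intercept[p] = (sy - slope[p] * sx) / n;
    }
    return 0;
}
```

Each pixel's fit is independent, which is exactly why this maps well onto a CUDA kernel with one thread per pixel.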
Good luck!