GPU Computing


Considering GPU Computing - "What is the core efficiency gained?"

Hi All,

I have a project that involves intensive image analysis on numeric arrays.  I'm considering the benefits of using GPU processing within the LabVIEW code.  Can anyone assist with the following questions?

1) What core-to-core CPU vs. GPU efficiency ratio could I expect?

     a) my current CPU is: Intel Core i7-4800MQ @ 2.70 GHz (32 GB of memory)

     b) my current graphics board is: NVIDIA Quadro K2100M (405 MHz clock, 2048 MB GDDR5 memory)

2) Does GPU computing in LabVIEW scale?

3) Can LabVIEW be set up to use multiple GPUs?

The project is in the early design stages, but some initial processing tests are very slow (i.e. minutes, where I need under one second).

Any input or experience would be very much appreciated before I buy the toolkit.

Message 1 of 7

I think #1 is a bit more complicated; it depends on what your application is and how you implement it.

As far as #3 goes, yes, multiple GPUs can be targeted.

Not sure what you mean by #2?

Message 2 of 7

To add some information on question #1: our design has a requirement to statistically analyze subsets of large arrays of data.  These are simple standard deviation calculations inside nested For Loops, with iteration counts in the thousands at each of the two levels.  We are looking for "the needle in the haystack", i.e. any subset whose standard deviation exceeds a limit.  The process takes an X, Y, Z approach, where Z is the important measurand for groups of continuous X and Y sets of various sizes.
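For concreteness, here is a rough sequential sketch of that structure in C/CUDA-style code (the data layout, the names, and the single-pass variance formula are my illustrative assumptions, not our actual code):

// Rough sequential sketch of the nested-loop scan described above.
// Layout assumption: the (X, Y) groups are flattened into numSubsets
// contiguous runs of subsetLen Z samples each.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Returns the indices of subsets whose standard deviation exceeds limit.
std::vector<std::size_t> findOutliers(const float* data,
                                      std::size_t numSubsets,  // thousands of (X, Y) groups
                                      std::size_t subsetLen,   // thousands of Z samples each
                                      float limit)
{
    std::vector<std::size_t> hits;
    for (std::size_t s = 0; s < numSubsets; ++s) {        // outer loop over groups
        const float* z = data + s * subsetLen;
        double sum = 0.0, sumSq = 0.0;
        for (std::size_t i = 0; i < subsetLen; ++i) {     // inner loop over Z samples
            sum   += z[i];
            sumSq += z[i] * z[i];
        }
        // Single-pass (sum of squares) formula; a two-pass version is more
        // numerically stable, but this keeps the sketch short.
        double mean = sum / subsetLen;
        double var  = sumSq / subsetLen - mean * mean;
        if (std::sqrt(std::max(var, 0.0)) > limit)
            hits.push_back(s);
    }
    return hits;
}

The key point is that each subset's calculation is independent of every other subset's.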

To clarify #2: a better wording is, can LabVIEW utilize multiple GPU processors?

Thank you!

Message 3 of 7

Yes, in that case GPU computing does scale. For example, the K2100M has 576 CUDA cores that can be accessed.
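If it helps, you can see what a given board exposes by querying the CUDA runtime directly (a minimal sketch; LabVIEW's GPU Analysis Toolkit sits on top of this same runtime):

// Query each CUDA device's compute resources via the runtime API.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        // Cores per SM depend on the architecture (192 for Kepler boards
        // like the K2100M, so 3 SMs * 192 = 576 cores).
        printf("Device %d: %s, %d SMs, %.0f MHz, %zu MB global memory\n",
               d, prop.name, prop.multiProcessorCount,
               prop.clockRate / 1000.0, prop.totalGlobalMem >> 20);
    }
    return 0;
}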

Message 4 of 7

I would not expect that much gain from a GPU versus a multithreaded CPU here. The structure you describe contains a lot of "reductions", which are essential for the statistical calculations you need, and those algorithms are not that efficient on the GPU.

Nested For Loops also translate poorly from CPU to GPU, unless you have a way to unroll them.

However, if you can change the algorithms that find your extremes, and go for a nice new Pascal GPU, you will be surprised by the gains possible: it may map into the 100x range if your data can be partitioned into thousands of subsets that can be processed individually, in parallel.
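To give a feel for that pattern, here is a minimal CUDA sketch (the names are illustrative, and it assumes each subset fits in one block whose size is a power of two up to 1024): each block reduces its own subset's sum and sum of squares in shared memory, so no block ever waits on another.

// Minimal CUDA sketch: one thread block per subset. Assumes
// blockDim.x == subsetLen and subsetLen is a power of two <= 1024.
__global__ void stddevPerSubset(const float* data, float* stddevOut,
                                int subsetLen)
{
    extern __shared__ float sh[];        // dynamic: 2 * blockDim.x floats
    float* sum   = sh;
    float* sumSq = sh + blockDim.x;

    int tid = threadIdx.x;
    float v = data[(size_t)blockIdx.x * subsetLen + tid];
    sum[tid]   = v;
    sumSq[tid] = v * v;
    __syncthreads();

    // Tree reduction within the block; blocks never wait on each other.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) {
            sum[tid]   += sum[tid + stride];
            sumSq[tid] += sumSq[tid + stride];
        }
        __syncthreads();
    }

    if (tid == 0) {
        float mean = sum[0] / subsetLen;
        float var  = sumSq[0] / subsetLen - mean * mean;
        stddevOut[blockIdx.x] = sqrtf(var > 0.0f ? var : 0.0f);
    }
}

A launch such as stddevPerSubset<<<numSubsets, subsetLen, 2 * subsetLen * sizeof(float)>>>(dData, dOut, subsetLen); then runs thousands of independent block reductions concurrently.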

Message 5 of 7

Thank you.  That is also helpful.

Message 6 of 7

This is a good point. The more parallelized your application is, the more gains you will see.

If a calculation depends on the results of other calculations, the GPU has to wait before processing, and that is slower than a CPU. If tasks can be parallelized (made independent of each other), they can be performed in batches, with hundreds of GPU cores working at the same time. This is where GPUs are advantageous.
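To make the batch idea concrete, and to tie it back to the multiple-GPU question, here is a hedged host-side sketch that splits independent subsets across every visible device. stddevPerSubset is the kernel from the earlier sketch; everything else is illustrative.

// Host-side sketch: split the independent subsets across all visible GPUs.
#include <algorithm>
#include <cuda_runtime.h>
#include <vector>

__global__ void stddevPerSubset(const float* data, float* stddevOut,
                                int subsetLen);   // defined in the sketch above

void runOnAllGpus(const float* hData, float* hOut,
                  int numSubsets, int subsetLen)
{
    int numDevices = 0;
    cudaGetDeviceCount(&numDevices);

    std::vector<float*> dData(numDevices, nullptr), dOut(numDevices, nullptr);
    int perDevice = (numSubsets + numDevices - 1) / numDevices;

    // Launch each device's share; nothing below waits on another device.
    // (With pageable host memory these copies are effectively synchronous;
    // pinned memory would be needed for full overlap.)
    for (int d = 0; d < numDevices; ++d) {
        int first = d * perDevice;
        int count = std::min(perDevice, numSubsets - first);
        if (count <= 0) break;

        cudaSetDevice(d);
        cudaMalloc(&dData[d], (size_t)count * subsetLen * sizeof(float));
        cudaMalloc(&dOut[d],  (size_t)count * sizeof(float));
        cudaMemcpyAsync(dData[d], hData + (size_t)first * subsetLen,
                        (size_t)count * subsetLen * sizeof(float),
                        cudaMemcpyHostToDevice);
        stddevPerSubset<<<count, subsetLen, 2 * subsetLen * sizeof(float)>>>(
            dData[d], dOut[d], subsetLen);
        cudaMemcpyAsync(hOut + first, dOut[d], (size_t)count * sizeof(float),
                        cudaMemcpyDeviceToHost);
    }

    // Wait for every device to finish, then release its buffers.
    for (int d = 0; d < numDevices; ++d) {
        if (dData[d] == nullptr) continue;
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(dData[d]);
        cudaFree(dOut[d]);
    }
}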

Message 7 of 7