GPU Computing


Is there a way to load a huge matrix, which cannot be created in LabVIEW due to memory limits, into the GPU?

Hi there,

I think this community is really helpful for users who want to use CUDA with LabVIEW.

After installing CUDA (from the NVIDIA site) and the GPU Analysis Toolkit (from the NI site), I checked my GPU information with deviceQuery.

My GPU information is as below:

-------------------------------------

> NVS 160M

> CUDA Capability Major/Minor version number: 1.1

> 8 CUDA Cores(1 multi processor x 8 CUDA cores/MP)

> Max texture dimension size (x, y, z)

  1D: 8192

  2D: 65536, 32768

  3D: 2048, 2048, 2048

> Max layered texture size (dim x layers)

  1D: 8192 x 512

  2D: 8192 x 8192 x 512

> Max sizes of each dim. of a block: 512 x 512 x 64

> Max sizes of each dim. of a grid: 65535 x 65535 x 1

-------------------------------------

From LabVIEW, it is not possible to create a huge 2D matrix such as 10,000 x 10,000 in SGL because of memory overflow.
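(For reference, a 10,000 x 10,000 SGL array needs 10,000 x 10,000 x 4 B ≈ 400 MB, which a 32-bit LabVIEW process typically cannot allocate as a single contiguous block.)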

However, it may be possible to download the huge matrix into the GPU column by column (using a for loop or something similar), since the maximum 2D texture size is large enough to hold more than 10,000 x 10,000 elements; see the sketch below the attached snippet.

[Attached image: 43243.png – LabVIEW block diagram snippet of the VIs in question]

The VIs shown in the snippet above might make this possible in LabVIEW, but it is hard to find an example or any documentation that covers it.
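To make the idea concrete, here is a minimal sketch in plain CUDA C (not LabVIEW) of the column-by-column upload I have in mind. The matrix size and the column-fill step are only placeholders; in LabVIEW, the cudaMemcpy call would correspond to whatever download/copy VI the toolkit provides, called inside a for loop:

/* Sketch only: upload an N x N SGL matrix to the GPU one column at a time,
 * so the host never has to hold the full array. */
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

#define N 10000  /* hypothetical matrix dimension (N x N, single precision) */

int main(void)
{
    float *d_matrix = NULL;                     /* full matrix lives only on the GPU */
    size_t col_bytes = (size_t)N * sizeof(float);

    if (cudaMalloc((void **)&d_matrix, (size_t)N * col_bytes) != cudaSuccess) {
        fprintf(stderr, "device allocation failed\n");
        return 1;
    }

    float *h_col = (float *)malloc(col_bytes);  /* host buffer for ONE column only */
    if (h_col == NULL) { cudaFree(d_matrix); return 1; }

    for (int c = 0; c < N; ++c) {
        /* produce or read column c on the host (placeholder fill) */
        for (int r = 0; r < N; ++r)
            h_col[r] = (float)(r + c);

        /* copy this column into its slot in the device matrix
         * (column-major layout: column c starts at element offset c * N) */
        cudaMemcpy(d_matrix + (size_t)c * N, h_col, col_bytes,
                   cudaMemcpyHostToDevice);
    }

    /* ... launch kernels on d_matrix here ... */

    free(h_col);
    cudaFree(d_matrix);
    return 0;
}

The point of the loop is that only one column (about 40 KB here) ever exists on the host at a time, while the GPU-side buffer is allocated once at full size.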

My questions are

1) Is it possible to download the huge matrix data to the GPU in pieces like this using the LabVIEW GPU Analysis Toolkit?

2) If it is possible, are the VIs shown above the right ones to do it?

Thank you in advance.

Albert
