05-19-2012 10:55 AM
I work in an optics lab. The ultimate goal is to take images of the eye and build a 3D volume. The way we do this is by taking multiple 1D arrays and stitching them together to form a 2D area, then stacking those 2D areas to form a 3D volume. Right now the code takes a 2D area and transforms it into the appropriate 2D area. I feel it would be quicker to take each 1D array, transform it into another 1D array, and then stitch everything together at the end.
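To illustrate the stitching idea, here is a rough NumPy sketch (not our actual code; the names, sizes, and counts are just illustrative):

import numpy as np

n_pixels = 2048   # samples in one 1D line (illustrative sizes/counts)
n_lines = 601     # 1D lines stitched into one 2D area
n_areas = 100     # 2D areas stacked into the 3D volume

def acquire_line():
    # Stand-in for grabbing one 1D line from the camera.
    return np.random.rand(n_pixels)

def build_area():
    # Stitch 1D lines side by side into a 2D area, shape (2048, 601).
    return np.column_stack([acquire_line() for _ in range(n_lines)])

# Stack the 2D areas into a 3D volume, shape (2048, 601, 100).
volume = np.stack([build_area() for _ in range(n_areas)], axis=2)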
The problem is that we would like to see what's happening in order to debug the hardware before putting actual subjects into the system.
Robert
05-21-2012 09:07 AM
Hi,
If you are using an area scan camera, then the images are already coming in as 2D arrays, so you should just work with them in that form. If you are using a line scan camera, then the images could come in as 1D arrays.
What do you mean when you say "you want to see what's happening"?
Regards,
Greg H.
05-21-2012 09:41 AM
I believe we are using an area scan camera, but we can set the DCF to give us a line scan. Also, the scanners produce a line scan image (i.e., they only give data in one line of pixels).
When taking in raw spectral data, we cannot tell whether what we are trying to image is actually being imaged. This could be due to the hardware setup or many other reasons. Therefore we process that spectral data into something meaningful, and that is what we would like to see as close to real time as possible.
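As a rough illustration only (a simplified stand-in, not our actual processing chain), turning one raw spectral line into a displayable depth profile could look something like this:

import numpy as np

def line_to_profile(spectrum):
    # One raw 1D spectral line -> one displayable depth profile.
    # Simplified stand-in: a real chain would also need steps like
    # background subtraction, resampling, dispersion correction, etc.
    windowed = spectrum * np.hanning(spectrum.size)  # taper to reduce FFT sidelobes
    depth = np.abs(np.fft.rfft(windowed))            # magnitude of the transformed line
    return 20.0 * np.log10(depth + 1e-12)            # log scale for display

# Each processed line would then become one column of the live 2D image.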
Rob
05-22-2012 03:24 PM
Hi,
How fast are you trying to take images? Also, the less image manipulation you do, the faster your code will execute, so the more pixels you get from your camera at one time, the better. If you have to take several arrays and piece them back together, your code is going to run slower.
Regards,
Greg H.
05-23-2012 10:03 AM
We are taking images at approximately 60 fps. We are currently getting a 2048x601 array from the camera.
Is that last part really true? Wouldn't it be faster to process multiple 2048x1 arrays than to process one 2048x601 array? I didn't think stitching them together would be that memory intensive.
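As a sanity check, I could time the two approaches with stand-in processing; a rough sketch (the FFT here is just a placeholder for whatever manipulation we actually end up doing):

import time
import numpy as np

frame = np.random.rand(2048, 601)   # one camera frame

def process(a):
    # Placeholder for the real per-line manipulation.
    return np.abs(np.fft.fft(a, axis=0))

# Option A: manipulate the whole 2048x601 frame at once.
t0 = time.perf_counter()
whole = process(frame)
t_whole = time.perf_counter() - t0

# Option B: manipulate 601 separate 2048x1 lines, then stitch them.
t0 = time.perf_counter()
stitched = np.column_stack([process(frame[:, i]) for i in range(frame.shape[1])])
t_lines = time.perf_counter() - t0

# At 60 fps there are only about 16.7 ms per frame to spend.
print(t_whole, t_lines)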
05-24-2012 10:53 AM
Hi,
What I meant is that it would be faster to acquire one 2048x601 image than to acquire 601 2048x1 images and then put together 601 arrays of 2048 elements each.
Greg H.
05-24-2012 10:57 AM
Hello,
OK, I agree it's faster to acquire the image that way. The question is whether manipulating that huge array is faster than manipulating and stitching the smaller arrays.
I do believe, though, that I have to start completely from scratch with my code. At this point, I'm just trying to plan what I would have to implement.
Rob
05-25-2012 09:02 AM
Rob,
I still think it is going to be faster to process the entire picture as opposed to one line at a time, unless you are able to acquire and process at the same time; but even then you can be acquiring the next frame while processing the entire image. Putting the arrays back together is just adding another step.
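To be concrete about acquiring while processing, a simple two-thread producer/consumer arrangement is one way to do it; here is a rough sketch with stand-in acquire and process steps (not tied to any particular camera API):

import queue
import threading
import numpy as np

frames = queue.Queue(maxsize=4)   # small buffer between acquisition and processing

def acquire_loop(n_frames):
    for _ in range(n_frames):
        frame = np.random.rand(2048, 601)  # stand-in for grabbing one full frame
        frames.put(frame)
    frames.put(None)                       # tell the consumer we are done

def process_loop():
    while True:
        frame = frames.get()
        if frame is None:
            break
        np.abs(np.fft.fft(frame, axis=0))  # stand-in for the real processing/display

grabber = threading.Thread(target=acquire_loop, args=(60,))
grabber.start()
process_loop()   # runs while the grabber keeps filling the queue
grabber.join()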
Greg H.