Machine Vision


3D position from stereo x,y and disparity


Hi all.

 

I wonder if someone familiar with the stereo vision toolkit can help? I have a calibrated stereo system (calibrated using the 'Stereo Vision Example.vi' that ships with the toolkit, then saved the calibration to re-load in my application). The next 'conventional' step in stereo vision seems to be to generate the disparity image for the scene as a whole, by feature matching between the left and right images. This is highly processor intensive, and not necessary for my application, since I know what I'm looking for in each image: it's a circular feature.

 

 

So, my code currently does this:

1. Initialise - load the stereo calibration (IMAQ Read Binocular Stereo file)

2. Grab the image pair from the cameras

3. Rectify the images using IMAQ Get Rectified Image From Stereo

4. Locate the circular feature in each rectified image

 

I then have the (x,y) position of the feature in each image, and I have what I think is the disparity of that feature (the difference in the x positions between the left and right images). The feature lies at the same y value in both images, as expected for a correctly calibrated system.
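That per-feature disparity can be sketched in a few lines, with the matching y values as a rectification sanity check. The coordinates and y-tolerance below are hypothetical examples, not values from a real system:

```python
# Minimal sketch of the per-feature disparity described above.
# Coordinates and tolerance are hypothetical examples.
def feature_disparity(left_pt, right_pt, y_tol=1.0):
    """Disparity d = xL - xR for a feature found in both rectified
    images; the y values should agree if rectification is good."""
    (xl, yl), (xr, yr) = left_pt, right_pt
    if abs(yl - yr) > y_tol:
        raise ValueError("y positions differ: check the rectification")
    return xl - xr

d = feature_disparity((150.0, 200.0), (147.0, 200.0))
print(d)  # 3.0
```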

 

The question is how to convert this information to real-world coordinates? The documentation for 'IMAQ Get Binocular Stereo Calibration Info' mentions the 'Q matrix', which I can get using that VI, and says that "Q Matrix can be used to convert pixel coordinates along with the disparity value into real-world 3-D points", but gives no further information. Is the Q matrix relevant? Does anyone know how to use it, or what it is? My googling is drawing a blank.

 

Thanks for reading!

Regards,
Chris Vann
Certified LabVIEW Architect
Message 1 of 10
Solution
Accepted by topic author Chris_V1

Hello,

 

to reproject a point to 3D space you need its disparity value, its coordinates on the disparity image, and the perspective transformation matrix (Q matrix).

 

Please take a look at this explanation:

 

http://answers.opencv.org/question/4379/from-3d-point-cloud-to-disparity-map/

 

and "reprojectImageTo3D":

 

http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
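For a single point, the per-pixel operation that reprojectImageTo3D performs can be sketched directly with the Q matrix: form the homogeneous vector [x, y, d, 1], multiply by Q, and divide by the resulting W. The Q matrix below is illustrative only (the canonical rectified-camera form with assumed focal length, baseline, and principal point); a real system should use the Q matrix returned by 'IMAQ Get Binocular Stereo Calibration Info':

```python
import numpy as np

# Single-point equivalent of what reprojectImageTo3D does per pixel:
# [X, Y, Z, W]^T = Q @ [x, y, d, 1]^T, then divide by W.
def reproject_point(Q, x, y, d):
    X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
    return np.array([X, Y, Z]) / W

# Illustrative Q in the canonical rectified-camera form:
# f = focal length (px), Tx = baseline, (cx, cy) = left principal point.
f, Tx, cx, cy = 700.0, 0.12, 320.0, 240.0
Q = np.array([[1.0, 0.0, 0.0,    -cx],
              [0.0, 1.0, 0.0,    -cy],
              [0.0, 0.0, 0.0,      f],
              [0.0, 0.0, 1.0/Tx, 0.0]])

print(reproject_point(Q, 150.0, 200.0, 3.0))
```

Note that with this form of Q the depth reduces to the familiar triangulation result Z = f·Tx/d.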

 

Hope it helps in some way.

 

Best regards,

K


https://decibel.ni.com/content/blogs/kl3m3n



"Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."
Message 2 of 10

Many thanks. This seems to be working nicely!

 

Chris

Regards,
Chris Vann
Certified LabVIEW Architect
Message 3 of 10

Hello,

 

no problem. I am happy it helped you.

 

Just one question: you are using the images from the left and right cameras and searching for your feature (circular object) in both, and from this you then calculate the disparity. For the position of your feature on the disparity image, do you search the left rectified image (I think this is the default coordinate setup)?

 

Best regards,

K





"Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."
0 Kudos
Message 4 of 10

Hi Klemen. Sorry - I missed your reply. Yes, I am searching the left image for the (x,y) value. Thanks for checking - that could be a gotcha!

 

Regards,
Chris Vann
Certified LabVIEW Architect
Message 5 of 10

Hi guys,

 

Maybe I'm being a bit slow, but how can I get Ix and Iy? Like Chris, I'm trying to understand how to obtain the 3D coordinates from a stereo pair of images for only one specified point in each image. I have some basic doubts:

 

After obtaining the rectified images from "Get Rectified Image from Stereo" for both the left and right cameras, is the disparity value measured in pixels? At that stage I don't know the scaling factor (mm/pixel) for the vertical and horizontal directions. Assuming the disparity d is defined by xR - xL, measured in pixels, what I understood is that I need the point's Ix and Iy from the disparity image, and then with the Q matrix obtained from the stereo calibration I can compute the 3D coordinates. But how can I compute the disparity-image position of the analysed point? I don't know how to get the disparity image.

Message 6 of 10

Hello,

 

you need to calibrate your stereo system first. Then rectify the images from the left and right cameras to obtain the so-called "horizontal scan lines" setup. This means that the epipolar lines fall along the horizontal scan lines of the images.

 

Your images are now aligned in such a way that the pixel difference between the same object on both images manifests only in the horizontal direction - disparity.

 

Next, find the same object (pixel) on both images and calculate the difference in the horizontal direction, i.e. the disparity. You can then use the Q matrix to reproject the 2D point to 3D space. LabVIEW uses the left image as the default coordinate system.

 

If you want the disparity image, LabVIEW has a VI for this.

 

Best regards,

K





"Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."
0 Kudos
Message 7 of 10

Thanks for your reply. I got the system calibrated (quality 0.88 - not very good, but...). I have the rectified images and I have the disparity. I know there is a VI to compute the disparity image, but since I'm looking for a specific point, easy to find by filtering the image, I don't really want to compute the whole disparity image. The values Ix and Iy seem to be the pixel coordinates of the point in the disparity image.

 

The question is: Can I determine the Ix and Iy values if I know the points (xR,yR) and (xL,yL), in pixels, of the rectified images?

 

i.e. my point is located at these coordinates (pixels) in the two rectified images: left (150, 200), right (147, 200). The disparity will be d = 3, but what are Ix and Iy? Maybe Iy is 200? And Ix?

 

Best regards

Message 8 of 10

Hello,

 

the equation for 3D reprojection can be written (in homogeneous coordinates) as:

 

[X, Y, Z, W]^T = Q · [x, y, d(x,y), 1]^T

 

So, you need the Q matrix (calculated from calibration), the pixel coordinates (x,y) and the corresponding disparity d(x,y).

 

Remember that your left image holds your reference system!

 

The steps you need to take:

 

1. Calibrate stereo system (obtain perspective transformation matrix - Q matrix),

2. Rectify left and right image,

3. Find the same object (pixel coordinate) on both rectified images,

4. Calculate the disparity d(x,y), where (x,y) is the point (pixel location) on the left rectified image,

5. Take the equation above and calculate the 3D point at (x,y).

You need to normalize the values (X/W, Y/W, Z/W, 1) in order to obtain the global 3D measurements.
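The steps above can be sketched numerically. Using the point from the earlier post (left (150, 200), right (147, 200)) and an illustrative Q matrix (the real one comes from your calibration, so the numeric results here are only examples), Ix and Iy are simply the left-image pixel coordinates:

```python
import numpy as np

# Worked example of steps 4-5 with the point from the earlier post.
# With the left image as the reference, (Ix, Iy) = left-image pixel
# coordinates. The Q matrix values below are illustrative only.
xL, yL = 150.0, 200.0
xR, yR = 147.0, 200.0
d = xL - xR                                   # step 4: disparity = 3.0

f, Tx, cx, cy = 700.0, 0.12, 320.0, 240.0     # assumed intrinsics
Q = np.array([[1.0, 0.0, 0.0,    -cx],
              [0.0, 1.0, 0.0,    -cy],
              [0.0, 0.0, 0.0,      f],
              [0.0, 0.0, 1.0/Tx, 0.0]])

X, Y, Z, W = Q @ np.array([xL, yL, d, 1.0])   # step 5
point_3d = np.array([X, Y, Z]) / W            # normalize by W
print(d, point_3d)
```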

 

Calibration quality - National Instruments states that if the quality is less than 0.7, the system should be recalibrated.

 

Hope this helps.

 

Best regards,

K

 





"Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."
0 Kudos
Message 9 of 10

How can I get real-world coordinates using image calibration?

I can get the cursor position from Vision Acquisition, but I couldn't take it out of the Express VI. Is there any way to get the coordinates out?

I want to get the coordinates of the dust present in the image.

Please help me.

Message 10 of 10