Machine Vision

Calibration of the robot relative to the camera's field of view

Hello everyone

 

I work with a 2D vision system and a robot: a Universal Robots arm.
I have a workspace where parts appear; the camera recognizes them and the position of the part to be picked is sent to the robot.
The work area is not parallel to the plane the robot is mounted on.
The first thing I do is calibrate the camera relative to the work area - this I have already done.
The next step is to calibrate the robot relative to the work area.
Unfortunately, I do not know how to do this mathematically. My idea is to define 3 points (a plane) that the camera can see, save the camera coordinates of these 3 points (x_cam, y_cam), then move the robot (really the tip of the gripper mounted on the robot) to each of these points and record the tool center point coordinates (x, y, z).
At the moment I do not know what to do next ...
I have the correspondence between the 3 camera points and the 3 robot points, but how do I use it?
The vision system will return the position of the found part, but I do not know how to transform it into the robot's coordinate system.
Has anyone ever worked on such a problem?
Does anyone have suggestions about the method I am using, or should I change it?

 

Thank you in advance for your help!


Hi MirekEK,

As far as I understand, what you want to achieve is to transform coordinates from the camera frame into the robot frame.

In LabVIEW you have the Geometry VIs palette, which you can use for that:
https://zone.ni.com/reference/en-XX/help/371361R-01/gmath/geometry_vis/
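
Mathematically, the three point pairs you already collected are exactly what you need: they define an affine mapping from the camera plane to the work-area plane expressed in robot coordinates, and because each robot point includes Z, the mapping also handles the tilt of the work area. Below is a minimal sketch of that calculation in Python/NumPy (all point values are made-up placeholders for illustration); the same linear algebra can also be built with the Linear Algebra VIs in LabVIEW:

    import numpy as np

    # Camera coordinates (x_cam, y_cam) of the three calibration points.
    # All values below are made-up placeholders.
    cam_pts = np.array([[120.0,  80.0],
                        [400.0,  95.0],
                        [250.0, 300.0]])

    # Robot TCP positions (x, y, z) recorded at the same three points.
    # Z varies because the work plane is not parallel to the robot base.
    rob_pts = np.array([[0.512, -0.130, 0.084],
                        [0.498,  0.210, 0.091],
                        [0.305,  0.045, 0.102]])

    # Solve  [x_cam  y_cam  1] @ M = [x  y  z]  for the 3x3 matrix M.
    # With exactly three non-collinear points the solution is exact.
    A = np.hstack([cam_pts, np.ones((3, 1))])
    M = np.linalg.solve(A, rob_pts)

    def cam_to_robot(x_cam, y_cam):
        """Map a camera coordinate of a detected part to a robot TCP position."""
        return np.array([x_cam, y_cam, 1.0]) @ M

    # Example: transform a detection reported by the vision system.
    print(cam_to_robot(310.0, 150.0))   # -> approximate pick position (x, y, z)

If you later teach more than three calibration points, use a least-squares solution (np.linalg.lstsq) instead of an exact solve; that averages out the small errors from jogging the gripper tip onto the points by hand.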

If I didn't understand your question correctly, feel free to explain further.

Hope this helps.

Regards,

Patrik
CTA, CLA
Helping (sharing) is caring!
