NI Vision Assistant Transformation Matrix

Two questions, as I am quite new to NI Vision Assistant and just got the distortion model calibration working.

 

1. Is there a way to retrieve the transformation matrix between the original and final images from NI Vision Assistant?

 

Here is why I am asking -- alternative approach suggestions welcome!

 

1. An image is taken with a camera and lens system

2. The image is corrected using the distortion model grid calibration technique (I may need to move to the camera model in the future if I can get it to work)

3. Output image is a corrected image used for processing

4. The processed image pixels need to be correlated back to the original (pre-correction) pixels -- see the sketch after this list

5. Stages need to align the sample under a laser system to perform procedures
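
For step 4, what I am really after is the inverse of the distortion correction. As a minimal sketch of the idea (assuming the common single-coefficient radial polynomial form, which I have NOT confirmed matches NI's internal model, and with placeholder values for K1 and the distortion center):

```python
import numpy as np

# Assumed model (hypothetical; the common single-coefficient radial
# polynomial, not confirmed against NI's implementation):
#   original = center + (corrected - center) * (1 + K1 * r^2)
# where r is the corrected point's radial distance from the distortion
# center, in pixels. If NI defines the mapping in the other direction,
# the inverse would need a small iterative solve instead.

K1 = 3.1e-8                          # placeholder coefficient (1/px^2)
CENTER = np.array([1024.0, 768.0])   # assumed distortion center (px)

def corrected_to_original(pt):
    """Map a pixel in the corrected image back to the original image."""
    d = np.asarray(pt, dtype=float) - CENTER
    r2 = float(d @ d)
    return CENTER + d * (1.0 + K1 * r2)

print(corrected_to_original([1500.0, 900.0]))
```

If Vision Assistant exposed its transformation matrix (or the fitted coefficients and distortion center), I could drop them straight into something like this.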

 

I believe the corrected image should be the most representative of "real locations" for the stage movements if the calibration is done well. However, in case it is not, I would like to start thinking about an alternative method to correlate pre- and post-calibration pixel locations.
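
One alternative I have been considering, sketched below under the same assumed model and placeholder constants as above: precompute a dense lookup table over the corrected image so that any processed pixel indexes directly to its original coordinates, with no analytic inverse needed.

```python
import numpy as np

# Same assumed radial model and placeholder constants as the sketch above.
K1 = 3.1e-8
CENTER = np.array([1024.0, 768.0])
H, W = 1536, 2048  # assumed corrected-image size (rows, cols)

# lut[y, x] holds the original-image (x, y) for corrected pixel (x, y).
xs, ys = np.meshgrid(np.arange(W, dtype=float), np.arange(H, dtype=float))
d = np.stack([xs, ys], axis=-1) - CENTER    # offsets from center, (H, W, 2)
r2 = np.sum(d * d, axis=-1, keepdims=True)  # squared radii, (H, W, 1)
lut = CENTER + d * (1.0 + K1 * r2)          # (H, W, 2)

orig_x, orig_y = lut[900, 1500]  # original coords of corrected pixel (1500, 900)
```

The same table could also be built empirically, without knowing the model at all, by running a grid of known pixel coordinates through whatever correction step produces the corrected image.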

 

Both the laser and camera will be unavoidably off axis (the camera will be as close to perpendicular as it can be, while allowing the laser to hit the sample within the FOV).

 

2. Is point or grid calibration more desirable in the case described above?

 

My grid is rather large compared to the pixel size of my camera (each dot is roughly 100 x 100 pixels) and cannot be made smaller due to the resolution of the printer (the NI-provided grid PDF printed at 10% scaling). The dot edges are fuzzy due to printing. The Vision distortion model polynomial (K1) gives a % Distortion of 3.11463. How "good" is that percentage if I am aiming for a minimum accuracy of 0.1 mm? Going above K1 gives unreasonable error values (9+ mm) -- making me think this is a poor calibration for the algorithm.
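
To sanity-check what 3.11% means against a 0.1 mm target, here is the back-of-envelope arithmetic I have been doing. I am assuming (unverified) that % Distortion is the maximum radial displacement at the image corner as a fraction of the corner's distance from the center; the resolution and mm-per-pixel scale are placeholders for my setup:

```python
# Back-of-envelope: what does 3.11 % distortion mean in px and mm?
# Assumption (unverified): % Distortion = max radial displacement at the
# image corner, as a fraction of the corner's distance from the center.
pct_distortion = 3.11463 / 100.0

w_px, h_px = 2048, 1536   # placeholder sensor resolution
mm_per_px = 0.02          # placeholder scale (20 um/px), setup-specific

r_corner_px = ((w_px / 2) ** 2 + (h_px / 2) ** 2) ** 0.5   # 1280 px here
max_shift_px = pct_distortion * r_corner_px
print(f"corner displacement: {max_shift_px:.1f} px "
      f"= {max_shift_px * mm_per_px:.2f} mm")
# ~39.9 px, ~0.80 mm with these placeholders: the raw percentage is the
# distortion to be removed; hitting 0.1 mm depends on the residual
# (Mean/Max error) after the fit, not on the percentage itself.
```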

 

The unreasonable Max and Mean error values at higher orders make me concerned about the practicality of this calibration method. I am also concerned about properly setting the calibration axis to match the movement axes of my XY stages; a sketch of how I might check that empirically follows. The only improvement I can reasonably make to this calibration is a more consistent focal length, and thus an imaging plane more parallel to the sample surface. The image would be crisper, but still limited by the fuzzy-edged dots.
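
For the axis concern, the minimal empirical check I have in mind (the pixel positions below are made-up placeholders; in practice they would come from a template match before and after a pure +X stage move):

```python
import numpy as np

# Hypothetical numbers: feature pixel positions before/after a pure +X
# stage move (in practice these would come from a template match).
p0 = np.array([812.4, 655.1])    # feature location before the move (px)
p1 = np.array([1211.7, 648.3])   # feature location after the move (px)

dx, dy = p1 - p0
angle_deg = np.degrees(np.arctan2(dy, dx))
print(f"stage +X maps to {angle_deg:.2f} deg in image coordinates")
# This is the angle the calibration axis would need, so the calibrated
# X direction matches the stage's actual X travel.
```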

 

[Attached image: CatDoe_0-1667940871402.png]

 

I do have a geometric calibration marker (a cross with width and a center point) that could be used for point calibration. I should get a crisper edge than the dots, but I would not have nearly as many points distributed through the entire ROI: the majority of calibration points would lie along the X and Y axes, with no points along the +45 and -45 degree directions. I cannot rotate the calibration marker at this time. Would point calibration be the better technique in this case? Or is there some way to incorporate two images if I modify my marker to allow rotation to 45 degrees? I have not finished upgrading the system to try this calibration marker.
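
Here is my worry about the axis-only point distribution, made concrete with a toy fit. I do not know which basis NI's point calibration actually fits, so the bilinear cross term below is just a stand-in for any term of that type: with points only on the X and Y axes, x*y is identically zero, so that term is completely unconstrained, and a single 45-degree arm restores it.

```python
import numpy as np

def design(points):
    """Design matrix for a bilinear fit x' = a*x + b*y + c*x*y + d."""
    p = np.asarray(points, dtype=float)
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([x, y, x * y, np.ones(len(p))])

# Hypothetical cross-marker points along the X and Y axes only (mm).
axis_pts = [(x, 0) for x in range(-5, 6)] + [(0, y) for y in range(-5, 6)]
# The same points plus one arm rotated to 45 degrees.
diag_pts = axis_pts + [(d, d) for d in range(-5, 6) if d != 0]

# On the axes, x*y == 0 everywhere, so the cross term is unobservable.
print("rank, axes only:  ", np.linalg.matrix_rank(design(axis_pts)))  # -> 3
print("rank, with 45 deg:", np.linalg.matrix_rank(design(diag_pts)))  # -> 4
```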

 

This algorithm will be put into a subVI and used as part of a distributable.
