Seeing the post about image stitching for ICs has me thinking about an automated image-stitching rig. Use lasers at points across the face to evaluate the surface shape and produce a more seamless automated stitch. Thoughts on how this would work?
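One way the laser idea could work: sample the surface height at a few points and fit a surface model, so each tile's tilt can be corrected before stitching. A minimal sketch, assuming a simple planar fit and made-up sample data (a real die might need a higher-order surface for the micro-warps):

```python
import numpy as np

def fit_plane(xy, z):
    """Least-squares fit of z = a*x + b*y + c; returns (a, b, c)."""
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Simulated laser height samples from a slightly tilted surface
# (the tilt values 0.02 / -0.01 and offset 5.0 are arbitrary examples).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(20, 2))
z = 0.02 * xy[:, 0] - 0.01 * xy[:, 1] + 5.0 + rng.normal(0, 1e-4, 20)

a, b, c = fit_plane(xy, z)
print(round(a, 3), round(b, 3), round(c, 2))
```

With the fitted plane, each tile gets a known tilt and height that the stitcher can fold into its transform instead of re-estimating from image content alone.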
I get the feeling this is a problem we can solve with (better) code. Planar stitching isn't very hard. Might want a calibration target to determine the lens distortion parameters first and then it should be easy.
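For concreteness, here's what the calibration target buys you: coefficients for the standard radial distortion model, which you can then invert to undistort each tile. A hedged sketch with made-up k1/k2 values (not from any real lens) and coordinates normalized around the optical center:

```python
import numpy as np

def distort(pts, k1, k2):
    """x_d = x_u * (1 + k1*r^2 + k2*r^4), r measured from the center."""
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts * (1 + k1 * r2 + k2 * r2**2)

def undistort(pts, k1, k2, iters=25):
    """Invert the model by fixed-point iteration (fine for mild distortion)."""
    u = pts.copy()
    for _ in range(iters):
        r2 = np.sum(u**2, axis=1, keepdims=True)
        u = pts / (1 + k1 * r2 + k2 * r2**2)
    return u

# Round-trip check on a few example points.
grid = np.array([[0.3, 0.2], [-0.5, 0.4], [0.1, -0.6], [0.0, 0.0]])
warped = distort(grid, k1=-0.1, k2=0.02)
restored = undistort(warped, k1=-0.1, k2=0.02)
print(np.allclose(restored, grid, atol=1e-8))  # True
```

A calibration target (chessboard, dot grid) gives you known straight lines/known spacing, from which k1 and k2 are fitted; after that, stitching really does reduce to a near-planar problem.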
That's what Hugin attempts to do, but there are still a lot more variables than you'd think. Consistent movement in the x-y plane would help a lot, but you also have micro-warps in the surface, etc., which can be serious issues for image stitching.
Hugin takes two images that you tell it are adjacent and uses a set of matched pairs of key points to distort, deblur, etc. using the excess information. You can get somewhat decent results, but not better than what you'd get with a hi-res scanner, which should absolutely be doable.
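The matched point pairs are what make this solvable: with four or more correspondences between adjacent tiles you can solve directly for the 3x3 homography that warps one tile onto the other. A minimal sketch of the direct linear transform (the general idea, not Hugin's actual solver), using an arbitrary example homography:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: solve H (up to scale) from >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply a homography to 2-D points (homogeneous divide)."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Example "ground truth" transform and control points (made up).
true_H = np.array([[1.0, 0.02, 30.0], [-0.01, 1.0, 5.0], [1e-4, 0.0, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40.0]])
dst = apply_h(true_H, src)

H = estimate_homography(src, dst)
print(np.allclose(apply_h(H, src), dst, atol=1e-6))  # True
```

Real control points are noisy, so in practice you'd use many pairs plus an outlier-rejection step (RANSAC-style) rather than an exact solve.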
New conversation
Agree that lens calibration is a thing. Also determine the angle of the camera to the stage axes. I ignore lens calibration by cropping the borders (so pincushion isn't so bad) and just search for a reasonable perspective transform for the camera angle.
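The cropping trick works because radial distortion displacement grows roughly as k1 * r^3, so trimming the edges cuts the worst-case error sharply. Back-of-envelope with assumed example numbers (k1 and the crop fraction are illustrative, not measured):

```python
# Worst-case radial distortion displacement ~ |k1| * r^3 at radius r.
k1 = -0.05        # example radial coefficient, normalized coordinates
half_diag = 1.0   # frame corner at r = 1 in normalized units
crop = 0.7        # keep the central 70% of the field

err_full = abs(k1) * half_diag**3
err_crop = abs(k1) * (crop * half_diag)**3
print(f"corner error full: {err_full:.4f}, cropped: {err_crop:.4f}")
# cropping to 70% leaves 0.7^3 ~ 34% of the full-frame worst-case error
```

So even a modest crop makes the leftover distortion small enough that a single perspective transform per tile is a reasonable approximation.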
I think a lot of this could be solved by writing a better control point auto finder for Hugin. Hugin knows how to optimize lens parameters, but its default auto finder does a poor job on ICs.
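IC shots are flat and repetitive, which trips up generic feature detectors; but for a known x-y stage motion, a brute-force normalized cross-correlation search over a small window can find control points reliably. A toy sketch (not Hugin's finder; the tile data is synthetic):

```python
import numpy as np

def ncc_locate(patch, image):
    """Find where patch best matches image by normalized cross-correlation.
    Brute force; fine when the stage motion bounds the search window."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-12)
    best, best_xy = -2.0, (0, 0)
    for y in range(image.shape[0] - ph + 1):
        for x in range(image.shape[1] - pw + 1):
            w = image[y:y + ph, x:x + pw]
            wn = (w - w.mean()) / (w.std() + 1e-12)
            score = float(np.mean(p * wn))
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best

# Synthetic tile; pretend the patch came from the adjacent tile's overlap.
rng = np.random.default_rng(1)
tile_b = rng.uniform(size=(40, 40))
patch = tile_b[10:18, 14:22]

(x, y), score = ncc_locate(patch, tile_b)
print(x, y)  # 14 10
```

Emitting a grid of such matches as Hugin control points (restricted to the expected overlap region) would sidestep the weak default auto-finder entirely.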