Calibration is essential for accuracy in 3D reconstruction; in particular, it is a critical task for stereo vision analysis.
Calibrating stereo cameras is usually dealt with by calibrating each camera
independently and then applying a geometric transformation of the external
parameters to find out the geometry of the stereo setting.
Existing camera calibration methods can be roughly divided into two classes: methods based on known positions of points in the world, and methods that require no such knowledge.
A well known method for calibrating a camera has been proposed by Tsai (1986-88).
The method is based on the knowledge of the position of some points in the world
and their corresponding projections on the image.
It requires the camera to be pointed at a nonplanar calibration grid
(which must be accurately prepared).
The first step is to solve a linear equation to find the external parameters R, T (except for Tz) and the scale factor ax; then a nonlinear optimization is performed over Tz, f, k1. Finally, Ox, Oy are determined with another nonlinear optimization step.
The procedure can be iterated in order to improve accuracy.
Every calibration technique for radial distortion that is based on the knowledge
of the position of points in the world needs to search for
the overall set of parameters at the same time, even
though it is possible to separate the parameters into two groups, so that
nonlinear optimization is performed only on a subset of the parameters [Tsai].
The calibration procedure therefore requires the user to accurately prepare a calibration pattern whose points have known positions in the world.
Another class of calibration methods includes those that do not need any knowledge
about the position of points in the world.
In this case it is possible to separate internal calibration from external calibration,
and the optimization step can be performed for instance
over the four internal parameters k1, Ox, Oy, ax.
An example of this method for internal calibration has been proposed by Devernay and Faugeras (1995). It is based on minimizing the curvature of segments in the world determined with edge detection and polynomial approximation from scenes of structured environments.
The optimization step can be very time consuming, as the solution space has dimension 4, so the time needed to compute the distortion of an image must be as low as possible and a good search strategy is required.
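One possible search strategy for the 4-dimensional solution space is a coarse-to-fine grid search. The sketch below is a generic illustration (the method does not prescribe this particular scheme; the cost function, names, and sample values are ours):

```python
import itertools

def coarse_to_fine_search(cost, center, widths, steps=3, rounds=3):
    """Minimize cost(k1, ox, oy, ax) with a shrinking 4-D grid search.

    Each round evaluates a steps^4 grid around the current best point,
    then shrinks the per-parameter search widths before the next round.
    """
    best = tuple(center)
    for _ in range(rounds):
        axes = [[c - w + 2 * w * i / (steps - 1) for i in range(steps)]
                for c, w in zip(best, widths)]
        best = min(itertools.product(*axes), key=lambda p: cost(*p))
        widths = [w / steps for w in widths]
    return best

# Toy cost with minimum at k1 = 0.9, ox = oy = 0, ax = 1 (illustrative only):
found = coarse_to_fine_search(
    lambda k1, ox, oy, ax: (k1 - 0.9) ** 2 + ox ** 2 + oy ** 2 + (ax - 1) ** 2,
    center=(0.0, 0.0, 0.0, 1.0), widths=(1.0, 1.0, 1.0, 0.5))
```

In practice the cost would be the image-based distortion measure (e.g. segment curvature), which is expensive to evaluate; shrinking grids keep the number of evaluations bounded.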
The technique we propose here allows for the calibration of the internal
parameters of the camera (k1, Ox, Oy, ax) by simply pointing the camera at a
drawing containing only straight lines.
The method has been used for calibrating a pair of stereo cameras for a mobile robot. Disparity and range from stereo have been used for evaluating calibration accuracy.
The undistorted image coordinates are

  x = ( id - Ox ) * ax * Px * ( 1 + k1 * rd^2 )
  y = ( jd - Oy ) * Py * ( 1 + k1 * rd^2 )

where (id, jd) are the frame coordinates of the distorted image, and

  rd^2 = ((id - Ox) * ax * Px)^2 + ((jd - Oy) * Py)^2

In the ideal case they would be

  x = ( i - Ox ) * Px     (1)
  y = ( j - Oy ) * Py

Therefore the relation between distorted and undistorted frame coordinates is

  i = ( id - Ox ) * ax * ( 1 + k1 * rd^2 ) + Ox
  j = ( jd - Oy ) * ( 1 + k1 * rd^2 ) + Oy

Knowing k1, Ox, Oy, ax (and Px, Py), it is possible to undistort the image so that it can be considered as coming from a pinhole camera, and equations (1) are valid.
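As a concrete sketch of this mapping (function and variable names are ours; the default pixel sizes mirror the Px = Py = 0.01 mm used later in the error analysis):

```python
def undistort_point(id_, jd, k1, ox, oy, ax, px=0.01, py=0.01):
    """Map distorted frame coordinates (id_, jd) to undistorted (i, j).

    Implements:
        rd^2 = ((id - Ox) * ax * Px)^2 + ((jd - Oy) * Py)^2
        i = (id - Ox) * ax * (1 + k1 * rd^2) + Ox
        j = (jd - Oy) * (1 + k1 * rd^2) + Oy
    """
    xd = (id_ - ox) * ax * px
    yd = (jd - oy) * py
    rd2 = xd * xd + yd * yd
    scale = 1.0 + k1 * rd2
    i = (id_ - ox) * ax * scale + ox
    j = (jd - oy) * scale + oy
    return i, j

# With k1 = 0 and ax = 1 the mapping is the identity, as expected
# from equations (1):
print(undistort_point(100.0, 80.0, k1=0.0, ox=128.0, oy=128.0, ax=1.0))
```

To undistort a whole image one would apply this mapping (or its inverse, with interpolation) to every pixel.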
The following pictures show the distorted and undistorted images of the left and right camera.
How to find the calibration parameters?
Typical values for the internal parameters are
Determining the internal calibration parameters with good precision
is a difficult task, and we found that the disparities are very sensitive to them
(even a small modification leads to bad disparities).
Furthermore, evaluating calibration accuracy is not easy: accuracy in camera calibration is often measured by how well the system can measure the 3D world.
In the following, k1, Ox, Oy, ax and k1', Ox', Oy', ax' are the optimal parameters of the left and
the right camera respectively, while k1~, Ox~, Oy~, ax~ and k1'~, Ox'~, Oy'~, ax'~
are the corresponding approximate parameters.
When yd = 0, the absolute disparity error between an approximate and the ideal solution is

  eD = D~ - D = ax~ * id + (1 - ax~) * Ox~ + k1~ * ax~^3 * Px^2 * (id - Ox~)^3
              - ax'~ * id' - (1 - ax'~) * Ox'~ - k1'~ * ax'~^3 * Px^2 * (id' - Ox'~)^3
              - ax * id - (1 - ax) * Ox - k1 * ax^3 * Px^2 * (id - Ox)^3
              + ax' * id' + (1 - ax') * Ox' + k1' * ax'^3 * Px^2 * (id' - Ox')^3

Now suppose that the only error is in the determination of k1 and k1', that is, ax~ = ax, Ox~ = Ox, ax'~ = ax', Ox'~ = Ox', and, to simplify the formulas, that ax = ax' = 1 and Ox = Ox'. Then
eD = Px^2 * [ a * c^3 - b * (c + D)^3 ]
Range relative error is

         Z~ - Z        Px^2 * [ a * c^3 - b * (c + D)^3 ]
  eZ  = --------  = - -------------------------------------
            Z          Px^2 * [ a * c^3 - b * (c + D)^3 ] + D
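This expression can be evaluated directly. A minimal sketch (function and argument names are ours; the default pixel size matches the Px = 0.01 mm used in the table below):

```python
def range_relative_error(a, b, c, d, px=0.01):
    """Relative range error eZ = (Z~ - Z) / Z for the expression above.

    a, b are the distortion-coefficient error terms, c = id - Ox (pixels),
    d is the ideal disparity (pixels), px the pixel size in mm.
    """
    e = px * px * (a * c ** 3 - b * (c + d) ** 3)
    return -e / (e + d)

# With no coefficient error (a = b = 0) the range error vanishes:
print(range_relative_error(0.0, 0.0, 100.0, 50.0))
```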
Theoretical error analysis generates the following table, using the values f = 5 mm, b = 100 mm, Px = 0.01 mm, ax = 1:

  | Z [m] | D [pix] | a | b | c [pix] | eZ [%] | Z~ - Z [mm] |
The table shows that significant errors (up to 10%) are present when close objects are at the image border. However, if the distortion coefficients are known to an approximation within 10%, then this error becomes less than 1%.
Moreover, error analysis on eD and experimentation have shown different kinds of distortion,
depending on which parameters are badly calibrated.
Suppose that the only error is in the determination of k1 (that is, Ox~ = Ox, ax'~ = ax', Ox'~ = Ox', and k1'~ = k1'); then the error will be

  D~ - D = Px^2 * (ax * k1 - ax~ * k1~) * (id - Ox)^3

resulting in a cubic distortion over id.
An error in determining Ox (with ax~ = ax, k1~ = k1, ax'~ = ax', Ox'~ = Ox', and k1'~ = k1') corresponds instead to a quadratic distortion:

  D~ - D = Px^2 * ax * k1 * (Ox~ - Ox) * [ 3 * id^2 - 3 * (Ox + Ox~) * id + Ox^2 + Ox * Ox~ + Ox~^2 ]
In this example the camera has been placed in front of a door, as shown by the
first two (undistorted) images below.
The other images represent the range data mapped onto the Saphira environment, in the following situations:
In this example the relative error in the determination of the distance from the door (which was 1.5 m) was above 30% at the borders of the distorted images, while even an approximate correction (within 10%) gave an error of around 5%.
Let D = i - i' be the ideal disparity (computed with the optimal parameters) and D~ = i~ - i'~ the disparity computed with the approximate parameters. An error function can then be defined as

  eD = SUM_i~ ( (D~ - D)^2 )

If the stereo camera is perpendicular to a straight wall and sufficiently far away from it, and Ox~ =~ Ox, then D can be approximated by the value of D~ at i~ = Ox~, and eD can actually be computed.
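A minimal sketch of this error function for the flat-wall case (list handling, names, and sample values are ours):

```python
def disparity_error(d_approx, d_ideal):
    """eD = sum over sampled positions of (D~ - D)^2."""
    return sum((da - di) ** 2 for da, di in zip(d_approx, d_ideal))

# Flat-wall case: the ideal disparity D is approximated by the value
# of D~ sampled at i~ = Ox~, i.e. a constant over the wall.
d_tilde = [10.2, 10.0, 9.7, 10.1]
e = disparity_error(d_tilde, [10.0] * len(d_tilde))
```

The refinement step would then minimize `e` over small perturbations of the internal parameters.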
Summarizing: after approximately good values for the internal parameters have been found independently for the left and the right camera, a further step refines these values by adding constraints on properties of the environment seen through the stereo camera.
As the variations of the parameters in this second step are usually very small, there is no need (in general) to come back to the first step.
In the following we assume a stereo camera in the standard setting (i.e. with parallel optical axes and coplanar image planes), and that the small-angle approximation is valid (the angles determining R are small).
We choose the world reference system to be the left camera one, and the parameters to be found are the translation vector T and rotation matrix R of the right camera with respect to the left one.
Usually a measurement of the distance between the centers of the cameras is sufficient for determining Tx (which equals the baseline b).
As for Ty and Tz, we take them to be 0.
Rotation around X axis (phi)
The angle phi can easily be found by looking at the uniformity of
the disparity map. Indeed, as the matching algorithm looks for corresponding points
in the same rows of the two images, when the images are not vertically aligned
(phi != 0) there is significant noise in the disparity map.
An automatic procedure for finding the vertical displacement Yoff between the two images has been implemented; phi can easily be derived from Yoff.
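The conversion from Yoff to phi depends on the focal length and vertical pixel size. Assuming a pinhole model (the exact procedure used is not detailed here), a plausible small-angle relation is phi ~= atan(Yoff * Py / f):

```python
import math

def phi_from_yoff(yoff_pixels, py_mm, f_mm):
    """Rotation about the X axis from the vertical image displacement.

    Pinhole-model assumption: a vertical shift of Yoff pixels corresponds
    to an angle atan(Yoff * Py / f), roughly Yoff * Py / f for small angles.
    """
    return math.atan(yoff_pixels * py_mm / f_mm)

# e.g. a 3-pixel vertical offset with Py = 0.01 mm and f = 5 mm:
print(math.degrees(phi_from_yoff(3, 0.01, 5.0)))
```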
Rotation around Y axis (theta)
The angle theta can be derived from the Z component of the fixation point (Z0).
Z0 can be found by imposing that the disparity of a point at infinity be zero.
This can easily be obtained by looking at a far object (as in the picture below) and horizontally shifting one image with respect to the other until the disparity of that point is zero.
Z0 and theta are then computed from the horizontal displacement (Xoff).
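As with phi, the conversion from Xoff to theta depends on the optics. Under a pinhole-model assumption (this particular formula is ours, not stated in the text), the shift that zeroes the disparity of a point at infinity corresponds to theta ~= atan(Xoff * Px / f):

```python
import math

def theta_from_xoff(xoff_pixels, px_mm, f_mm):
    """Rotation about the Y axis from the horizontal displacement Xoff.

    Pinhole-model assumption: a point at infinity projects with an offset
    of f * tan(theta) / Px pixels, so theta = atan(Xoff * Px / f).
    """
    return math.atan(xoff_pixels * px_mm / f_mm)

# e.g. a 10-pixel horizontal offset with Px = 0.01 mm and f = 5 mm:
print(math.degrees(theta_from_xoff(10, 0.01, 5.0)))
```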
Moreover, when looking at a corridor (with parallel walls), bad values of Z0 produce disparities that make the reconstructed walls converge (or diverge). The correct value of Z0 is the one for which the walls are reconstructed as parallel lines.
Rotation around Z axis (psi)
The angle psi can again be found by looking at the uniformity of
the disparity map: rotating one image with respect to the other
leads to significant noise in the disparity map.
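The selection of psi can thus be phrased as a one-dimensional search over candidate roll angles, picking the one whose disparity map is least noisy. A trivial sketch (the noise measure is assumed, e.g. local disparity variance):

```python
def best_psi(noise, candidates):
    """Return the candidate roll angle whose disparity map is least noisy.

    `noise` is assumed to map a candidate psi to a scalar noisiness
    measure of the disparity map obtained after rotating one image
    by that angle (e.g. local variance of the disparities).
    """
    return min(candidates, key=noise)

# Toy noise model with a minimum at psi = 0.2 rad (illustrative only):
print(best_psi(lambda psi: (psi - 0.2) ** 2, [0.0, 0.1, 0.2, 0.3]))
```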
F. Devernay and O. Faugeras. Automatic Calibration and Removal of Distortion from Scenes of Structured Environments. In Proc. of SPIE'95. 1995.
R. Tsai. An Efficient and Accurate Camera Calibration Technique for 3D Machine Vision. In Proc. of CVPR'86. 1986.
R. Lenz and R. Tsai. Techniques for Calibration of the Scale Factor and Image Center for High Accuracy 3-D Machine Vision Metrology. IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 10, N. 5. 1988.