Camera calibration background - julian-steiner/Waveshare-Stereo-Camera GitHub Wiki

Camera Calibration

In the process of snapping a picture, the 3D world is projected onto a 2D image plane. This projection can be described mathematically, and the model can then be refined to minimize the distortion introduced by the lens. The parameters are divided into intrinsic and extrinsic ones.

Camera parameters

Intrinsics

The intrinsics are tied directly to the camera and lens you are using; two cameras of the same model ideally have identical intrinsic matrices. The intrinsic parameters are the camera matrix and the distortion coefficients. The camera matrix consists of the focal length and the displacement of the image center (the principal point). The distortion coefficients are optional but capture effects such as the strong edge distortion of fisheye lenses or the distortion of macro lenses. They are represented as a vector of four or more coefficients, depending on the lens model.

In this project the same intrinsic coefficients are used for both cameras because they are the same model.

if (leftError <= rightError)
{
    // Keep the left calibration (lower reprojection error) for both cameras
    computeNewCameraMatrix(intrinsics.left, imageSize);
    intrinsics.right = intrinsics.left;
}
else
{
    // Keep the right calibration (lower reprojection error) for both cameras
    computeNewCameraMatrix(intrinsics.right, imageSize);
    intrinsics.left = intrinsics.right;
}

Extrinsics

The extrinsic factors of a stereo camera are the displacement and rotation of the right camera relative to the left one. This information is essential for computing disparities and, later, depth.

Implementation

Calibrating a stereo camera involves more than just computing the calibration matrices. Computing the rectification transform and the stereo maps is a crucial step as well.

void CalibrationAssistant::computeCalibrationMatrices(const std::string& outputFilePath)
{
    // Variable definitions (skipped in this snippet)

    // Generating the image and object points for the calibration
    loadImages(config, images);
    generateDefaultObjectPoint(config, objp);
    generateImagePoints(imagePoints, objectPoints, images, config, objp);

    // calibrating the camera
    double error = calibrateStereoCameraSetup(objectPoints, imagePoints, intrinsics, extrinsics, images.at(0).image1.size(), flags);
    computeStereoRectification(intrinsics, extrinsics, rect, images.at(0).image1.size());
    computeStereoMap(intrinsics, rect, images.at(0).image1.size(), stereoMap);
    saveCalibrationData(outputFilePath, intrinsics, stereoMap, rect);
}

The first step of the calibration is to load all the images from the specified folder. After this, you have to generate the image and object points for the calibration (see the OpenCV documentation for a more detailed explanation). With them you can compute the projection matrices. The next step is to calibrate the camera setup.

double CalibrationAssistant::calibrateStereoCameraSetup(const StereoImageObjectPoints& objectPoints, const StereoImageImagePoints& imagePoints, StereoCameraIntrinsics& intrinsics, StereoCameraExtrinsics& extrinsics, const cv::Size& imageSize, const int& flags)
{
    // Calibrate both cameras
    double leftError = calibrateIntrinsics(objectPoints, imagePoints.imagePointsLeft, intrinsics.left, extrinsics.left, imageSize, flags);
    double rightError = calibrateIntrinsics(objectPoints, imagePoints.imagePointsRight, intrinsics.right, extrinsics.right, imageSize, flags);

    // Generate new, refined camera matrices.
    // Choose the calibration of the camera with the lower error for the whole
    // setup, because the cameras are identical (see the snippet further up the page).

    // Calibrate the camera as a stereo setup
    double error = calibrateStereoCamera(objectPoints, imagePoints, intrinsics, extrinsics, imageSize, stereoFlags, stereoCriteria);

    return error;
}

The first step is to calibrate the intrinsics of both cameras. After this you can calibrate the whole setup; in this step only the extrinsic parameters of the right camera relative to the left one are determined.

Then you can compute the rectification transform and the stereo maps, which are used to rectify and undistort both images.