GWT - qtec/build-qt5022-core GitHub Wiki

Introduction

Qtechnology provides an easy-to-use interface for working with its cameras.

The camera interface is a web-based solution built using GWT. It is accessible on port 80 (the default web port) at the camera IP address. The camera runs a web server, and any client on the same network can connect to it using a web browser.

The camera interface allows the user to:

  • View a live stream from the camera
  • Download JPEG, PPM and RAW images from the camera
  • Access all Video4Linux controls
  • Easily set up different settings such as framerate, exposure time, image size (and cropping area), white balance, etc.

It also has some utilities for analyzing the images: histogram, color mapping, sharpness (focus) measurements, etc.

Moreover, it also allows for some simple image manipulations like contrast enhancement and image subtraction.

Accessing the camera interface

Open a browser and type the camera IP address on the address bar.

The Qtec cameras have a default setup which expects a DHCP address at ETH port A and have a fixed IP address (10.100.10.100) at ETH port B.

The following page will load: GWT Camera Interface

Interface Overview

The camera interface is divided into tabs/pages which group certain functions together.

Camera Interface Overview: highlighting the different areas of the interface

Header area

The name of the current tab is displayed in the middle of the header area. The available tabs are displayed in the menu bar below the header area, and the sub-tabs of the selected tab are displayed below the menu bar (in the sub-menu bar).

General information about the camera is displayed on the right side of the header area:

  • Camera name
  • Sensor Type: for example, in the picture above, 4Mpx CMOS with Bayer filter
  • MAC address
  • IP address

Lastly, on the right side of the menu bar there is a box for the selection of the video device to be used (relevant for cameras with multiple image sensors).

Tabs/Pages

As mentioned before the different tabs/pages group certain functions together. They are meant to provide an easier interface for adjusting different camera parameters (instead of having to operate directly on Video4Linux controls/settings).

Most of the tabs/pages available have a similar layout:

  • An Image Viewer area to the left: showing a live stream from the camera
  • And a control panel to the right: where different camera settings can be adjusted (the available controls depend on the chosen tab)

IMPORTANT

Note that all tabs fetch the current values of the Video4Linux controls when loading; however, they do not update themselves automatically. Therefore, if multiple tabs are open simultaneously, changes made to Video4Linux controls in one tab won't be reflected in the other tabs unless the browser page is reloaded (by pressing F5 on the keyboard or the browser reload button).

Also note that the Video4Linux standard dictates that a capturing video device can only be opened by one process at a time. The GWT camera interface handles this restriction on the server level, allowing several tabs to be open (and streaming images) simultaneously. However, external (user) applications won't be able to read frames out of the video device while any tab with an Image Viewer is open (since it claims the video device). The camera interface will release the video device a short time (~5 second timeout) after the last browser window with an Image Viewer is closed.

Image Viewer Area

Shows a live feed from the camera and has some built in functionality:

  • Zoom control bar: allows the user to zoom in and out of the image; it also has 2 buttons for quickly setting the zoom level to either 100% or a view fit (which scales the image so that it completely fills the allocated area)

  • Frame number counter: shows the current frame number

  • Color mapping bar: allows the user to enable color mapping of the image. It substitutes the existing colors with an “artificial” color encoding based on the pixel intensity (grayscale values). Areas with higher intensity will be reddish and lower intensities will be blueish (with orange, yellow, green and cyan in between). It can be used to highlight certain features in the images. It is possible to adjust the limit values (the minimum which will be blue and the maximum which will be red) manually or automatically (based on the image limits).

Color Mapping example: Red and Blue values automatically adjusted, based on the minimum and maximum pixel intensities of the image, by using the "Get Range" button

  • Download button: allows saving a PPM file (uncompressed image format) of the image
    • Converted to either RGB 24 bit (P6) or Grayscale (P5), in 8 or 16 bits

  • Channel mapping bar: allows the user to change the mapping of the different image channels (red, green and blue). Individual channels can, for example, be isolated if desired.

Channel Mapping example: isolating the blue channel

  • Image manipulation bar: allows for some simple image manipulations
  • Image subtraction: allows subtracting a reference image from the current camera image
  • Image division: allows dividing the current camera image by a reference image
  • Contrast Stretching: allows doing contrast enhancement by different methods:
    • Linear methods (image normalization)
      • Absolute: the minimum pixel intensity of the image will be shifted to zero and the maximum to 255 (values in between are linearly interpolated).
      • Histogram based methods: use a histogram to discard certain pixels, so that outliers don't have too big an influence
        • Extremes cutoff: similar to Absolute, except that it discards the 5% brightest and 5% darkest pixels
        • Histogram peak cutoff: similar to Absolute, except that it discards pixels below 5% of the histogram peak
    • Non-linear methods (image equalization)
      • CDF: uses the cumulative distribution function in order to equalize the image
      • SQRT: similar to CDF, except it uses the square root of the histogram values (gives a smoother equalization)
Contrast Stretch: Off Contrast Stretch: Extremes Cutoff
Contrast Stretch example: without contrast stretching vs with the "Extremes Cutoff" contrast stretching method
  • Histogram button: opens a histogram window
    • Can be set up to update the histogram manually or automatically
    • Can show the histogram of the different color channels
    • Choose between linear scale, log scale or cumulative distribution
    • Selectable number of bins (must be a multiple of 2)
    • Information about the peak value
    • The histogram box can be moved around freely

Histogram example

Camera Features

The Qtec cameras have a lot of special features which allow for advanced functionality.

Pixel Formats

The cameras support a variety of pixel formats for outputting the images (different applications require/benefit from using different formats):

  • Grayscale 8 or 16 bit (Big/Little Endian)
  • If using a sensor with a Bayer filter:
    • Decimated color formats (output images with half the sensor resolution by picking the color pixels directly from the Bayer filter, no interpolation)
      • RGB and BGR 24 bit (8 bit per color)
      • RGBA and BGRA 32 bit (8 bit per color + alpha channel)
    • RAW Bayer formats (can then be interpolated in order to generate full resolution color images)
      • BGBG/GRGR
      • GBGB/RGRG
      • GRGR/BGBG
      • RGRG/GBGB
  • UYVY and YUYV 4:2:2
  • HSV color space formats: Hue, Saturation, Value (8 bit per channel) instead of RGB (very useful for image processing)

The pixel format can be easily changed through the Camera Settings Page.

Image Cropping

Smaller images allow for a higher output framerate (as well as reducing the load on image processing applications).

Therefore it is possible to select a smaller area of the sensor from which the images should be produced; this is referred to as cropping. With Qtec cameras the user is able to select multiple cropping areas to compose the image: up to eight regions with the same width but varying heights.

See Image Size Tab for more information.

Line skipping

It is also possible to periodically skip a certain number of image lines: read one line and then skip 1, 2, 3, ... lines.

See Image Size Tab for more information.

Binning

The cameras also allow the user to perform binning, which can be thought of as a sort of downscaling. When binning is activated a certain number of pixels will be added together (note that they are by default added, not averaged). It is possible to select binning in both the horizontal and vertical directions.

Note that cropping and line skipping both produce smaller images while increasing the framerate (since less data needs to be read out of the sensor). Binning, on the other hand, will produce smaller images but won't improve the framerate (since all the data still needs to be read out of the sensor in order for it to be added together). The advantage of binning is that, since the pixels are added and not averaged, it will increase the "light level" (pixel intensity). Therefore it can be useful in situations where the light level is low.

If the resulting pixel intensity after binning is too high (and causes saturation) it is possible to use the "output scaler", which scales the intensity of the input pixels by multiplying them by a specified value.

Binning can currently only be performed by interacting directly with the relevant Video4Linux controls:

  • Horizontal Binning: how many pixels should be binned together in the horizontal direction
  • Vertical Binning: how many pixels should be binned together in the vertical direction
  • Output Scaler: multiply each input pixel intensity by a value (in order to avoid saturation)
  • Range: from 0 to 32767, where 16384=1x (so 8192=0.5x and 32767=2x)

![Binning controls](img/gwt/v4l2_params%20binning.png) Binning controls in V4L Parameters Tab

Note that in order for the binning to be accepted the image size must be first properly set using "Frame Width" and "Frame Height" through the Video4Linux interface.

![Image Size controls](img/gwt/v4l2_params%20image%20size.png) Image Size controls in V4L Parameters Tab

Note that cropping values are affected by binning. Therefore, when setting binning, it is necessary to start with a full image (whole sensor, no cropping), adjust the binning, and then perform cropping as the last step.

See V4L Parameters Tab for more information on interacting directly with Video4Linux controls.

Example

Setting binning to 3x2:

  • Make sure the Binning controls are set to their defaults (no binning)
  • Go to V4L Parameters Tab
    • set Horizontal Binning to 1
    • set Vertical Binning to 1
    • set Output Scaler to 16384 (1x multiplier)
  • Make sure the image covers the whole sensor
  • Go to Image Size Tab
    • click on the "Full sensor size" button

![Before binning](img/gwt/before binning.png) Before binning: full image

  • Adjust the image size so that it fits with the desired binning
  • Go to V4L Parameters Tab
    • divide Frame Width by the desired horizontal binning
      • In this case the value was 1024 and my desired horizontal binning is 3. So I set it to 1024/3 = 341
    • divide Frame Height by the desired vertical binning
      • In this case the value was 1024 and my desired vertical binning is 2. So I set it to 1024/2 = 512
    • set Horizontal Binning to 3 (or otherwise desired value)
    • set Vertical Binning to 2 (or otherwise desired value)
    • set Output Scaler to 16384/(horizontal_binning x vertical_binning) = 16384/(3 x 2) = 2731
      • this ensures that the pixels have in fact been averaged (we have divided their intensities by 6) so there should be no saturation (unless saturation was present in the original image)
      • now the Output Scaler can be adjusted in order to reach the desired pixel intensity (higher values = brighter images)
        • in this case I have set it to 4800
Output Scaler = 2731 Output Scaler = 4800
<img src="img/gwt/binning3x2%20output_scaler=2731.png" width="100%" alt="3x2 binning, output_scaler=2731"> <img src="img/gwt/binning3x2%20output_scaler=4800.png" width="100%" alt="3x2 binning, output_scaler=4800">
Adjusting the Output Scaler: 2731 vs 4800
Before Cropping After Cropping
<img src="img/gwt/binning3x2%20before%20crop.png" width="100%" alt="3x2 binning, before cropping"> <img src="img/gwt/binning3x2%20after%20crop.png" width="100%" alt="3x2 binning, after cropping">
Binning + Cropping: before cropping vs after cropping

Trigger and Flash

The camera has 2 very useful IO (input/output) signals: input trigger and output flash. The trigger is related to frame generation while the flash can be used to drive external light systems (or synchronize several cameras together).

Trigger

The camera allows for advanced control of the frame generation timings. This will be referred to as the "trigger mode".

In the default mode (Self timed) the camera will produce frames based on the selected frame rate and exposure time.

It is possible, however, to make the camera produce images based on an external signal (input trigger signal).

This is useful, for example, when it is desired to synchronize several cameras together. In this case one camera will be the master: it will run in the "self timed" mode and generate a trigger signal for the slave cameras. The slave cameras then have to be set up to use one of the 2 external trigger modes: "External trigger" or "External exposure".

If "External trigger" is selected the slave camera will produce frames based on the input trigger signal and the selected (in the slave camera) exposure time. In this way the slave camera will start a frame capture at the same time as the master but with the possibility of a shorter or longer exposure time.

If "External exposure" is selected the slave camera will produce frames based on the input trigger signal and the signal duration will be used to govern the exposure time (instead of using the slave camera exposure time). This will effectively cause the slave camera to completely "mimic" the master camera.

It is also possible to add a delay to the frame generation ("External trigger delay") when using one of the 2 external trigger modes.

The default expected input trigger signal is +24V active-low. But it is possible to invert its polarity by using "Invert Trigger Polarity".

Moreover, it is also possible to set the trigger mode to "Idle", which will prevent the camera from producing images. This mode can be used together with the "Manual Trigger" Video4Linux control in order to trigger a single frame capture.

The trigger settings can be easily adjusted in the "Trigger and Flash sub-panel" inside the Camera Settings Tab.

Output Flash

The flash is an output signal the camera generates whenever it is producing a frame. It can therefore be used to synchronize external hardware: for example driving an external light system (so that the light is only on during frame capture) or synchronizing the frame capture of several cameras together.

The default output flash signal is +24V active-low. But it is possible to invert its polarity by using "Invert Flash Polarity".

It is also possible to disable the signal by using "Disable Flash".

The flash settings can be easily adjusted in the "Trigger and Flash sub-panel" inside the Camera Settings Tab.

Electrical connections

See Camera backside connections under Setup

The input trigger and output flash signals are available in the IO and PWR connectors located in the back of the camera. The input trigger is pin 4 in the PWR connector and the output flash is pin 2 in both the IO and the PWR connectors.

In order to synchronize 2 cameras together connect the output flash signal of the master camera to the input trigger signal of the slave camera. Remember to add a pull-up resistor (between the +24V and the flash) in order to guarantee that the signal will be a stable +24V unless the master camera drives it low (when generating a frame). Otherwise the signal will be sensitive to noise.

Camera back plate

Xform

Qtec cameras also offer an interface which allows for image transformation kernels implemented directly in the internal FPGA. This allows for very fast image manipulation. This interface is referred to as Xform.

There are 3 types of image transformation currently available: pixel re-mapping, gain mapping and colorspace transformation.

Note that this functionality is targeted at very advanced users.

Xform Gain Map

A simple image transformation where a per-pixel gain is applied in the FPGA, after the image is read from the sensor and before it is available to the application.

It consists of a per-pixel 8-bit gain (0-255 = 0-1x), plus a general (not per-pixel) 8-bit offset (0-255 = 0-1x) and an extra multiplier (not per-pixel) (4.12 format => 0-262144 = 0-16x, where 16384 = 1x) which is applied to gain+offset.

final_pixel_intensity[x,y] = original_pixel_intensity[x,y] * (gain_map[x,y] + offset) * Multiplier

Usage

A generated gain map file (of size 8-bit x img_width x img_height) with the per-pixel gain values must be copied into the xform gain device.

  • Command line:

cat xform_gain_map > /dev/qt5023_xform_gain0

or

  • C program:
#include <stdio.h>
#include <stdlib.h>

char gain_file[64] = "xform_gain_map";
char gain_device[64] = "/dev/qt5023_xform_gain0";

// open gain map file
FILE* src_file = fopen(gain_file, "rb");

// obtain file size:
fseek(src_file, 0, SEEK_END);
long lSize = ftell(src_file);
rewind(src_file);

// allocate memory to contain the whole file:
char* buffer = (char*) malloc(lSize);

// copy the file into the buffer:
size_t result = fread(buffer, 1, lSize, src_file);

fclose(src_file);

// open gain map device
FILE* device_file = fopen(gain_device, "wb");

// write to device
result = fwrite(buffer, 1, lSize, device_file);

fclose(device_file);
free(buffer);

The xform gain offset and multiplier must be set up using the v4l2 controls (in V4L Parameters Tab):

  • "Extra Gain for Gain Map" (offset, range: 0-255 = 0-1x)
  • "Xform Gain Scale" (multiplier, range 0-262140 = 0-16x). Note that the multiplier is applied to both the gain map and the gain offset.

Now the xform gain map can be enabled by activating the checkbox v4l2 control called "Gain Map" (in V4L Parameters Tab).

Note that if the size of the gain map file and the size of the xform gain device (which is equal to the image size of the corresponding video device) don't match, the gain map won't be accepted. Also note that in case down-scaling is used (through the distortion map), the size of the gain map needs to reflect the final size (size after up/down-scaling).

IMPORTANT: if both distortion and gain map are desired it is necessary to first apply the distortion map (Since the distortion map is applied first in the image pipeline), then take a picture with it active, and use this picture to calculate the necessary gain map.

Also note that the video device should not be in STREAM_ON (there should be no pages with an Image Viewer open) during the copy (it might work in some situations, but this is not guaranteed).

Lastly, note that the xform gain device is write-only; it is not possible to read it back. In case read-back is desired, a read-only device can be accessed: /dev/qt5023_xform_gain_readback0

Xform Gain Generator

Tool for generating an xform gain map that "corrects" a given input image. You can install it by running:

apt-get install xform-tools

It basically "inverts" the input image in order to generate the gain mapping that will result in uniform pixel intensity.

In practice it takes care of a lot of details like finding the optimal scale and offset.

It analyzes the input image, measuring some metrics (min, max, avg, stddev, ratio), and tries to raise all pixels to the intensity of the max pixel.

xform-gain-generator [-o offset] [-s scale] ref_img out_file [result_img]

Remember to use white-balanced images which are not over-saturated.

Xform Distortion Map

Pixel remapping function applied in the FPGA, after the image is read from the sensor and before it is available to the application (applied before the gain map).

It was originally created in order to correct lens distortion (barrel and pincushion), therefore the name, but it can be used for any desired pixel remapping: rotation, perspective, stereo rectification, down-scaling etc.

The way it works is that the resulting image is "built" by interpolating 4 neighboring pixels from the original image.

E.g.: pixel (x=5,y=10) of the resulting image is built using pixel (x=20.3,y=30.7) of the original image.

Note however that there is a limit to how much the pixels can be moved (it is not possible to make a vertical flip unless the image is very small). The amount of buffered lines can be calculated by looking at the "Distortion Buffer Size" v4l2 control and the image width (in V4L Parameters Tab).

E.g.: a buffer size of 94208 and an image width of 1024 give 92 lines of buffer. But since the first and last lines can't be used (they are being written into), we end up with 90 lines of buffer.

Moreover there is also a constraint (necessary for memory optimization) on how much the pixels can move in relation to the previous pixel: [x-4, x+3] and [y-4, y+3].

E.g.: if pixel (x=5,y=10) of the resulting image was built using pixel (x=20.3,y=30.7) of the original image, the pixel (x=6,y=10) of the resulting image must come from the region ([x=20.3-4, x=20.3+3],[y=30.7-4, y=30.7+3]) of the original image (so we can't do completely random images).

Note however that the first pixel of each line is an exception to this rule and can come from anywhere as long as it respects the amount of buffered lines.

Down-scaling

The xform dist also allows for down-scaling. This can be done by telling the xform to skip some lines (by setting both x and y to -1).

Example
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <linux/qtec/qtec_video.h>

void use(char *name){
	fprintf(stderr,"%s in_cols in_lines out_cols out_lines\n",name);
	return;
}


int main(int argc ,char *argv[]){
	unsigned int in_lines,in_cols,out_lines,out_cols;
	int i,j,next_line,next_col;
	struct qtec_distortion dist;

	if (argc!=5){
		use(argv[0]);
		return -1;
	}

	in_cols = atoi(argv[1]);
	in_lines = atoi(argv[2]);
	out_cols = atoi(argv[3]);
	out_lines = atoi(argv[4]);

	if (out_lines>in_lines || out_cols>in_cols){
		fprintf(stderr,"Xform cannot scale up images\n");
		return -1;
	}

	if (out_cols % 4){
		fprintf(stderr,"Xform needs output cols multiple of 4\n");
		return -1;
	}

	next_line = 0;
	for (i=0;i<in_lines;i++){
		long aux = (1L * next_line * in_lines * 0x10000) / out_lines;
		dist.line = aux>>16;
		dist.line_res = aux&0xffff;
		if (aux < i*0x10000 || next_line>=out_lines)
			dist.line = -1;
		else
			next_line ++;

		next_col = 0;
		for(j=0;j<in_cols;j++){
			long aux = (1L * next_col * in_cols * 0x10000) / out_cols;
			dist.col = aux>>16;
			dist.col_res = aux&0xffff;
			if (aux < j*0x10000 || next_col>=out_cols)
				dist.col = -1;
			else
				next_col ++;
			fwrite(&dist,sizeof(dist),1,stdout);
		}
	}

	return 0;
}

Usage

A generated distortion map file (of size 2 x 32-bit x img_width x img_height) containing each pixel's coordinates must be copied into the xform dist device.

  • Command Line:

cat xform_dist_map > /dev/qt5023_xform_dist0

  • C program:
#include <stdio.h>
#include <stdlib.h>

char dist_file[64] = "xform_dist_map";
char dist_device[64] = "/dev/qt5023_xform_dist0";

// open distortion map file
FILE* src_file = fopen(dist_file, "rb");

// obtain file size:
fseek(src_file, 0, SEEK_END);
long lSize = ftell(src_file);
rewind(src_file);

// allocate memory to contain the whole file:
char* buffer = (char*) malloc(lSize);

// copy the file into the buffer:
size_t result = fread(buffer, 1, lSize, src_file);

fclose(src_file);

// open distortion map device
FILE* device_file = fopen(dist_device, "wb");

// write to device
result = fwrite(buffer, 1, lSize, device_file);

fclose(device_file);
free(buffer);

Now the xform distortion map can be enabled by activating the checkbox v4l2 control called "Distortion Map" (in V4L Parameters Tab).

Note that if the size of the distortion map file and the size of the xform distortion device (which is equal to the image size of the corresponding video device) don't match, the distortion map won't be accepted.

Also note that in case down-scaling is used, the size of the gain map needs to reflect the final size (size after up/down-scaling).

Also note that the video device should not be in STREAM_ON (there should be no pages with an Image Viewer open) during the copy (it might work in some situations, but this is not guaranteed).

Lastly, note that the xform dist device is write-only; it is not possible to read it back. In case read-back is desired, a read-only device can be accessed: /dev/qt5023_xform_dist_readback0.

Xform Distortion Map Generation

Note: OpenCV has pixel remapping (and camera calibration) functions which can be used to generate xform distortion maps.

Distortion Map format

2 x 32-bit (16.16) for x and y coordinates

Note: when creating an xform dist map, use the struct qtec_distortion

Xform Colorspace

Performs colorspace conversions: RGB->HSV/RGBH

Transforms the colorspace from RGB to HSV or RGB+Hue.

Wikipedia link on the HSV colorspace

Channel limits: H(0-179 = 0-359 degrees), S(0-255 = 0-100%), V(0-255 = 0-100%)

Note that in the future there will be the option to have H from 0-255 or 0-179.

The HSV colorspace is very useful for image processing.

Usage

This transformation is activated by choosing HSV as the Pixel Format.

Camera Interface Tabs/Pages

Overview of the different available pages and their functions

Camera Settings Tab

The default tab is called Camera Settings and it is meant to handle most of the basic camera functions. It is made of an Image Viewer and a control area.

Control Area

The controls present in the control area of the Camera Settings Tab are the ones related to basic camera functionality:

  • Pixel Mode: the format of the output images.

    • Note that, independent of the pixel format, the images are always rendered as 24-bit RGB (converted to JPEG) in the live image viewer

  • Sensor bit mode: 10 or 12 bits

  • Image Size: shows the current image size

    • Contains a shortcut button to the Image Size Tab where it is possible to adjust the desired cropping area inside the sensor

  • Frame rate: in frames per second (limited by the image size)

  • Exposure time: in microseconds (limited by the frame rate)

  • Analog gain: the CMOS sensor allows for adding an analog gain to the images

  • Image flipping: allows for flipping the image horizontally/vertically (built-in CMOS sensor functionality)

  • Color Settings: shortcut button to the Color Calibration Tab where white balance can be done

  • Trigger and Flash sub-panel: allows for adjusting the trigger and output flash signal functionality

  • Trigger functionality: governs the frame generation

    • Trigger Mode
      • Self timed: will produce frames based on the selected frame rate and exposure time
      • External trigger: produce frames based on the input trigger signal and selected exposure time
      • External exposure: produce frames based on the input trigger signal using the signal duration to govern exposure time
      • Idle: don’t produce images
    • Invert Trigger Polarity
    • External trigger delay: delays the frame generation by a desired number of microseconds (when using one of the external trigger modes)
    • Manual trigger button: causes the sensor to produce a frame
  • Output Flash signal functionality: the camera generates an output flash signal when it generates a frame. This is normally used for driving a light system or synchronizing different cameras together

    • Invert Flash polarity
    • Disable Flash

Image Size Tab

Present as a sub-menu of "Calibrations".

Allows the user to adjust the desired cropping area inside the sensor as well as using line skipping.

It is made of an Image Viewer and a control area.

The image viewer contains the camera image, which might be padded (default setting) or not. A padded image will have the size of the full sensor and the "unused area" will be filled with gray. Padding the image is useful so that the user can visually see where in the sensor the image is originated from.

With padding Without padding
Image Cropping: with and without padding

One or more rectangles are overlaid on top of the image (one rectangle per desired cropping area, with a maximum of 8, selected using the "Number of image areas" drop-down box) in order to show the user the selected cropping areas (adjusted using the available sliders). The currently active rectangle (selected using the "Selected image area" drop-down box) is shown in red while the others are shown in green.

![Image cropping, multiple areas](img/gwt/cropping%20multiple%20areas.png) Image cropping: multiple areas selected (not yet applied)

The desired selection then needs to be applied ("Apply Changes" button) in order to take effect.

![Image cropping, multiple areas, after applying](img/gwt/cropping%20multiple%20areas%20applied.png) Image cropping: multiple areas applied

Note that the different cropping areas are allowed to have different heights but must have the same width and horizontal placement. Note that the regions must also be ordered according to their vertical placement in the sensor (for example region 1 must be placed higher in the sensor than region 2).

It is possible to quickly revert to either the full sensor size or the max active sensor area by using the "Full sensor size" and "Max active area" buttons. Note that the max active sensor area might be smaller than the full sensor size if the sensor contains special areas like for example black level columns/lines.

Moreover there are also controls present for line skipping.

![Image cropping, multiple areas + line skipping, after applying](img/gwt/cropping%20line%20skipping.png) Image cropping: line skipping applied

Control Area

The controls present in the control area of the Image Size Tab are the ones related to image cropping:

  • Pad image checkbox: controls if the image shown in the image viewer area should be padded or not. When active the image will be padded with gray areas, so that the user can visually see where in the sensor the image is placed. Otherwise only the resulting (cropped) image is shown.

  • Frame information area: contains information about the current maximum framerate and an estimation of what the maximum framerate will be in case the selected (not yet applied) cropping area is applied (since as previously discussed smaller images allow for higher framerates).

  • Cropping Areas Selection panel:

    • Center selected image area checkbox: when active it will automatically adjust the values of Start X and Y, based on the selected Width and Height, in order to ensure that the resulting cropping area is in the center of the sensor.

    • Start X slider: selects the starting horizontal point of the cropping area

    • Width slider: selects the desired width of the cropping area

    • Area Selection panel: adjusts how many separate cropping areas are desired and which one is currently being modified with the position sliders.

      • Number of image areas drop-down box: adjusts how many separate cropping areas are desired (between 1 and 8)
        • One rectangle will be overlaid on the image representing the placement of each cropping area
          • the currently active rectangle is drawn in red while the other ones are drawn in green
      • Selected image area drop-down box: selects which of the cropping regions is currently being modified with the position sliders (its corresponding rectangle is drawn in red while the other ones are drawn in green).
        • Note that this only refers to the Start Y and Height sliders, as the Start X and Width sliders are common to all cropping regions (since the different regions can have different heights but must have the same horizontal position and width).

    • Start Y slider: selects the starting vertical point of the selected cropping area

    • Height slider: selects the desired height of the selected cropping area

  • Image subsampling panel: contains the line skipping controls

    • Number of lines to skip box: tells the sensor to only use every [x+1]th line.

      • 0 will use the whole image
      • 1 will skip every other line
      • 2 will use one line, then skip two, and so on
    • Resulting height: shows the resulting image height after line skipping

  • Apply changes button: applies the desired cropping areas to the sensor

  • Max active area button: reverts back to the maximum active sensor area

  • Full sensor size button: reverts back to the full sensor size
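The arithmetic behind the "Center selected image area" checkbox and the "Resulting height" field can be sketched as follows (a minimal sketch; the sensor dimensions and function names below are assumed example values, not part of the camera API):

```python
# Assumed example sensor dimensions (e.g. a 2048 x 2048 sensor); the real
# values depend on the sensor fitted in the camera.
SENSOR_WIDTH = 2048
SENSOR_HEIGHT = 2048

def centered_start(sensor_size: int, crop_size: int) -> int:
    """Start offset computed by the 'Center selected image area' checkbox."""
    return (sensor_size - crop_size) // 2

def resulting_height(crop_height: int, lines_to_skip: int) -> int:
    """Image height after line skipping: keep 1 line out of every (skip + 1)."""
    step = lines_to_skip + 1
    # Ceiling division: lines at offsets 0, step, 2*step, ... are kept.
    return (crop_height + step - 1) // step

print(centered_start(SENSOR_WIDTH, 1024))  # Start X for a centered 1024 px crop
print(resulting_height(480, 1))            # skip every other line
```

With a 480-line crop and "Number of lines to skip" set to 1, the resulting height shown by the interface would be 240.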

Color Calibration Tab

Present as a sub-menu of "Calibrations".

Allows the user to adjust the color settings of the camera. Mostly used in order to achieve white balanced images.

It is made of an Image Viewer and a control area.

The image viewer contains the camera image and an overlayed red rectangle. This rectangle defines a measuring area in the image. The size and position of the measuring area can be adjusted by using the "Box Controls" collapsible panel.

![Color Calibration, Box Controls](img/gwt/color calibration box controls.png) Color Calibration: Box Controls

This measuring area is used to derive several useful measurements:

  • the minimum, maximum and average values of each color channel (shown in the table in the control area)
  • a color histogram
  • an intensity profile, in either the horizontal or the vertical direction (useful for analyzing the light distribution across the image)
  • automatic white balancing (if a neutral gray target covers the entire measuring area)
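As a sketch of how such measurements can be computed, assuming the pixels inside the red rectangle are available as an RGB array (the `region` array below is synthetic example data, not camera output):

```python
import numpy as np

# Synthetic stand-in for the pixels inside the measuring area (H x W x 3, RGB).
rng = np.random.default_rng(0)
region = rng.integers(0, 256, size=(40, 60, 3), dtype=np.uint8)

# Per-channel minimum, maximum and average (the table in the control area).
for i, name in enumerate(("red", "green", "blue")):
    channel = region[..., i]
    print(name, channel.min(), channel.max(), channel.mean())

# Color histogram: 256 bins for one channel.
hist_red = np.bincount(region[..., 0].ravel(), minlength=256)

# Intensity profile in the horizontal direction: one averaged value per column.
profile_x = region.mean(axis=(0, 2))
```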

![Color Calibration, Histogram](img/gwt/color calibration histogram.png) Color Calibration: Histogram

![Color Calibration, Intensity Profile](img/gwt/color calibration intensity profile.png) Color Calibration: Intensity Profile (horizontal direction: x-axis)

Control Area

The controls present in the control area of the Color Calibration Tab are the ones related to color settings:

  • White Balance Controls: adjust the video4linux parameters related to color settings (used to achieve white balancing in the image)
  • Analog Gain: the CMOS sensor allows adding an analog gain to the images (applied to all channels; unlike digital gain, it should not increase the noise level)
  • Red Digital Gain: red channel gain (digital, will increase noise level)
  • Green Digital Gain: green channel gain (digital, will increase noise level)
  • Blue Digital Gain: blue channel gain (digital, will increase noise level)
  • Auto White Balance button: performs automatic white balance (will adjust the color sliders above) by using the color measurements for the selected area (requires a neutral grey target to cover the entire measuring area).
  • Auto White Balance settings: select which value the white balance function should use as the target
    • "Use red channel as Auto WB target value" box: will automatically fill "Auto WB target value" with the red channel average value
    • "Auto WB target value": manually adjust the target value for the white balance function

Performing White Balancing

White balance is the process of removing unrealistic color casts, so that objects which appear white in person are rendered white in an image. Proper camera white balance has to take into account the "color temperature" of a light source, which refers to the relative warmth or coolness of white light.

It is possible to perform white balancing in the camera by adjusting the individual gains of each color channel (in this way compensating for both uneven color response in the sensor itself as well as uneven color distribution of a light source).

In order to perform automatic white balancing:

  • Make sure that a neutral gray target covers the entire measuring area (neutral gray cards are normally available in photography shops).
  • Click on the "Auto White Balance" button.

The values of the 3 color sliders will be automatically adjusted in order to make the average values of each color channel equal.

Note that using color gains below 1x is discouraged, since it can cause "color artifacts" when the image saturates: in a saturated white image each channel should have an intensity of 255, but if, for example, the green channel has a gain below 1x, it ends up "clipped" below 255, making the white image look purplish. For better results it is therefore recommended to adjust the following before running "Auto White Balance":

  • Un-mark the "Use red channel as Auto WB target value" box
  • Set the "Auto WB target value" box to the highest average value (among the 3 color channels) from the measuring table (remember to round up). This ensures that the color gains stay at or above 1x.
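Based on the description above, the gain computation behind "Auto White Balance" can be sketched as follows (an assumption about the implementation, using made-up example channel averages):

```python
import math

# Example per-channel averages as read from the measuring table (made up).
averages = {"red": 180.4, "green": 201.7, "blue": 164.9}

# Recommended target: the highest channel average, rounded up, so that
# every resulting gain stays at or above 1x.
target = math.ceil(max(averages.values()))

# Equalize the averages: each channel gain scales its average to the target.
gains = {channel: target / avg for channel, avg in averages.items()}
print(target, gains)
```

With these numbers the target becomes 202 and all three gains come out at or slightly above 1x, which is exactly what the rounding-up recommendation is meant to guarantee.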

![Color Calibration, before White Balance](img/gwt/color calibration before white balance.png) Color Calibration: before White Balance

![Color Calibration, after White Balance](img/gwt/color calibration after white balance.png) Color Calibration: after White Balance

V4L Parameters Tab

As the camera is based on the Video4Linux standard all camera settings can be adjusted through video4linux controls/settings.

However most of the functionality has been grouped into the different tabs/pages in order to provide an easier interface for the user (since in some cases several video4linux controls/settings need to interact in order to generate the desired result).

This page provides a complete list of all available video4linux controls/settings and must be used to interact with functions which don't have their own tabs.

The functions which have their own dedicated tabs can also be modified from here. The user is, however, responsible for handling any interactions between the different settings.

This tab also provides a "Stop Frame Grabber" button, which makes the camera interface release the video device immediately (instead of waiting for the 5 second timeout), as long as no tab with an "image viewer" is open. See IMPORTANT under Tabs/Pages of the Interface Overview.
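Since the camera is Video4Linux based, the same controls can in principle also be set from a shell on the camera with the standard `v4l2-ctl` utility (from v4l-utils), if it is installed. A small sketch that builds such a command line; the control name `exposure_time_absolute` is only an example here, and `v4l2-ctl --list-ctrls` should be run on the camera to see the control names it actually exposes:

```python
import shlex

def set_ctrl_cmd(device: str, name: str, value) -> str:
    """Build a v4l2-ctl command line that sets a single V4L2 control."""
    return shlex.join(["v4l2-ctl", "-d", device, "--set-ctrl", f"{name}={value}"])

# Example (hypothetical control name; check --list-ctrls on the camera):
print(set_ctrl_cmd("/dev/video0", "exposure_time_absolute", 1000))
# v4l2-ctl -d /dev/video0 --set-ctrl exposure_time_absolute=1000
```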

Utilities Tab

Contains some help tools.

Focus Adjustment Tab

Present as a sub-menu of "Utilities".

Can be used in order to aid manual focus adjustment.

It is made of an Image Viewer and a control area.

The image viewer contains the camera image and an overlayed red rectangle. This rectangle defines a measuring area in the image. The size and position of the measuring area can be adjusted by using the "Box Controls" collapsible panel.

This measuring area is used in order to calculate sharpness, which can be used to determine the focus level. The sharpness measurement is based on contrast, therefore it is important to use a proper target (one with "sharp" features, e.g. text).

The measured sharpness is used to populate the table in the control area as well as a "moving" graph, which helps in determining when maximum focus has been reached (the sharpness measurement is a relative one, dependent on the target used as well as the amount of light present).
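A contrast-based sharpness metric of this kind can be sketched as follows (an illustration of the principle, not necessarily the exact metric the interface uses):

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Mean absolute difference between neighboring pixels, a simple
    contrast-based sharpness score: higher means sharper."""
    dx = np.abs(np.diff(gray.astype(float), axis=1)).mean()
    dy = np.abs(np.diff(gray.astype(float), axis=0)).mean()
    return dx + dy

# A checkerboard (sharp edges everywhere) scores high; a flat patch scores 0.
sharp = np.indices((32, 32)).sum(axis=0) % 2 * 255.0
flat = np.full((32, 32), 128.0)
print(sharpness(sharp), sharpness(flat))
```

This also illustrates why the measurement is relative: the score depends on the target's contrast and the illumination, so only its change while turning the focus ring is meaningful.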

Example

  • Starting with an image which is out of focus
  • Make sure to place an appropriate target inside the measuring box (e.g. text)

![Focus Adjustment, out of focus](img/gwt/focus adjustment, out of focus.png) Focus Adjustment: out of focus

  • Now slowly start adjusting the focus manually on the lens while observing the graph in the control area

  • Once the graph peaks and then drops slightly, you know you have passed the point of maximum focus

  • Now move the focus ring slowly back until you reach the maximum focus value again

  • The camera is now in focus (for the used distance)

![Focus Adjustment, in focus](img/gwt/focus adjustment, in focus.png) Focus Adjustment: in focus

Lens Calculator Tab

Tool which helps calculate the camera's physical setup (mounting height, field of view [FOV] and resolution [px/mm]) based on the sensor used and the focal length.

One of the 3 parameters (mounting height, FOV and resolution) is fixed, and the other 2 are calculated automatically.
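The arithmetic behind such a calculator is the standard thin-lens (pinhole) approximation: the FOV grows linearly with the mounting height and shrinks with the focal length. A sketch with assumed example numbers (the sensor width, pixel count and focal length below are illustrative, not this camera's exact values):

```python
# Assumed example values: a ~2048 px wide sensor with 5.5 um pixels,
# a 25 mm lens, mounted 1 m above the scene.
sensor_width_mm = 11.26
pixels = 2048
focal_length_mm = 25.0
mounting_height_mm = 1000.0

# Pinhole approximation: FOV = mounting height * sensor size / focal length.
fov_mm = mounting_height_mm * sensor_width_mm / focal_length_mm

# Resolution in pixels per millimeter of scene.
resolution_px_per_mm = pixels / fov_mm

print(fov_mm, resolution_px_per_mm)
```

Fixing a different parameter just means solving the same relation for another unknown, e.g. mounting height = FOV * focal length / sensor size.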

![Lens Calculator](img/gwt/lens calculator.png) Lens Calculator: calculating the field of view (FOV) and resolution based on mounting height
