Cameras

JISA provides a unified interface for scientific cameras in the form of Camera<F>. This takes the generic parameter F, which specifies the class used to represent individual frames returned by the camera. Whatever F is, it must extend from (i.e., implement) Frame, so all frame objects share some common functionality.

For instance, Andor3 cameras return U16Frame objects (i.e., unsigned 16-bit integer monochrome pixels), whereas ThorCam.Colour cameras return U16RGBFrame objects (i.e., an unsigned 16-bit integer each for red, green, and blue per pixel).

The interface then provides convenient methods to configure the camera, to acquire individual frames or series of frames synchronously, and to acquire frames continuously.
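
To make this concrete, here is a rough sketch (in Java) of what the generic parameter means in practice; connecting to cameras and the savePNG(...) method are both covered further below:

// Concrete camera classes fix the frame type F for you
Andor3 camera  = new Andor3(0);       // Andor3 implements Camera<U16Frame>...
U16Frame frame = camera.getFrame();   // ...so its frames are U16Frame objects

frame.savePNG("/path/to/frame.png");  // Common Frame functionality, covered below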

Connecting to Cameras

As with any instrument, one must first connect to it by creating an object to represent it. For instance, if we had an Andor SDK3-based camera, we would use the Andor3 class.

Most cameras are controlled through a native library (i.e., a dll/dylib/so file) and are addressed by specifying a numerical index or their serial number. Therefore, most camera classes will accept this directly as a constructor parameter, but to fit with the Address scheme that all instruments are supposed to follow, this value can also be specified wrapped in an IDAddress object.

Below is an example of connecting to an Andor3 camera:


Java

Andor3 camera = new Andor3(0);
// or
Andor3 camera = new Andor3(new IDAddress("0"));

Kotlin

val camera = Andor3(0)
// or
val camera = Andor3(IDAddress("0"))

Python

camera = Andor3(0)
# or
camera = Andor3(IDAddress("0"))

Synchronously Acquiring Frames

Taking a Single Frame

To take just a single frame, which is often useful when a picture needs to be taken as part of a measurement routine, one simply calls getFrame() on the camera object. For instance:


Java

// The type F is determined by the camera class
F frame = camera.getFrame();

Kotlin

val frame = camera.getFrame()

Python

frame = camera.getFrame()

The type of frame this method returns is set by the camera driver you are using (i.e., whatever F is in Camera<F>). For instance, Andor3 cameras specifically return U16Frame objects, where each pixel is represented as an unsigned 16-bit integer (hence the name).

If the camera is currently acquiring frames continuously, this method will simply wait for the next frame to come in and return a copy of it. This prevents synchronous calls such as this from disrupting continuous acquisition and vice versa.
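
For example, a snapshot can be grabbed mid-acquisition without disturbing the continuous stream; a minimal sketch using the Andor3 camera from above:

camera.startAcquisition();               // Continuous acquisition now running

U16Frame snapshot = camera.getFrame();   // Waits for the next frame and returns a copy

/* do something with snapshot here */

camera.stopAcquisition();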

Taking a Series of Frames

Further to this, one can acquire a series of frames by use of the getFrameSeries(...) method. For instance, to get 10 frames:


Java

// The type F is determined by the camera class
List<F> frames = camera.getFrameSeries(10);

Kotlin

val frames = camera.getFrameSeries(10)

Python

frames = camera.getFrameSeries(10)

This will take 10 frames (or whatever number you give it), and then return them all in a list.

If the camera is currently acquiring frames continuously, this method will simply wait for the next n frames to come in and return copies of those. This prevents synchronous calls such as this from disrupting continuous acquisition and vice versa.

Continuous Acquisition

One way or another, all cameras should be capable of continuously acquiring frames, and streaming that data back to you in some sensible manner. To start this, simply call startAcquisition() on your camera object:

camera.startAcquisition();

This will start the camera acquiring frames and launch a thread to manage them as they come in. To access these frames, you have a choice of two overall methods: attaching "frame listeners" to the camera, or opening a "frame queue".

Frame Listeners

Frame listeners are bits of executable code that take a frame object and do something with it. Each time a new frame is acquired in continuous acquisition, each frame listener is offered it. Normally, the frame listener will accept it and run its code using said frame. However, if a frame listener is still executing from a previous frame, then it will reject the frame. Therefore, frame listeners are "lossy" in that they only run as frequently as they are able, skipping any frames that come in between executions. This makes them ideal for applications such as drawing live frames to a GUI element, as a camera will often be supplying them faster than they can be drawn.

To add a frame listener, use the addFrameListener(...) method, giving a lambda function (or method reference) like so:


Java

FrameListener<F> listener = camera.addFrameListener(frame -> {
    /* do something with frame here */
});

Kotlin

val listener = camera.addFrameListener { frame ->
    /* do something with frame here */
}

Python

def useFrame(frame):
    pass  # do something with frame here

listener = camera.addFrameListener(useFrame)

If, later, you wish to stop this frame listener from running, you can detach it from the camera by using removeFrameListener(...) like so:

camera.removeFrameListener(listener);

Frame Queues

In order to losslessly obtain all frames, you should open a frame queue. A queue is essentially a special type of list, whereby items are added to the end of the list, and retrieved/taken/removed from the front. Further to this, frame queues are what's known as "blocking queues", meaning that if we try to retrieve an item from the front of the queue, but it is currently empty, then it will wait until there is something to retrieve.

Therefore, one can open a queue and spawn a loop in which we continuously attempt to retrieve a frame from the front of the queue. This way, all frames are captured, and any that come in faster than we can process them will simply queue up (as the name suggests!).

To open a frame queue, one uses the openFrameQueue(...) method, optionally specifying a limit to the number of frames it can store before it will start rejecting new frames:


Java

FrameQueue<F> queue = camera.openFrameQueue();    // No limit (careful!)
FrameQueue<F> queue = camera.openFrameQueue(100); // Limited to 100 frames

Kotlin

val queue = camera.openFrameQueue()    // No limit (careful!)
val queue = camera.openFrameQueue(100) // Limited to 100 frames

Python

queue = camera.openFrameQueue()    # No limit (careful!)
queue = camera.openFrameQueue(100) # Limited to 100 frames

Now that a queue is open, we can retrieve a single frame from it (removing it from the queue in the process) by calling nextFrame() like so:


Java

F frame = queue.nextFrame();      // Wait forever for next frame
F frame = queue.nextFrame(10000); // Wait at most 10 seconds (10000 ms)

Kotlin

val frame = queue.nextFrame()      // Wait forever for next frame
val frame = queue.nextFrame(10000) // Wait at most 10 seconds (10000 ms)

Python

frame = queue.nextFrame()      # Wait forever for next frame
frame = queue.nextFrame(10000) # Wait at most 10 seconds (10000 ms)

If no frame is currently present in the queue, it will wait. If you also specify a timeout (in milliseconds), it will only wait up to that maximum amount of time before throwing a TimeoutException.
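
For example, a minimal sketch of handling such a timeout (assuming the TimeoutException in question is java.util.concurrent.TimeoutException; check the signature of nextFrame(...) for the exact type in use):

try {
    F frame = queue.nextFrame(10000); // Give up after 10 seconds
    /* do something with frame here */
} catch (TimeoutException e) {
    /* no frame arrived in time, e.g., acquisition has stopped */
}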

To stop the queue from populating, you can close it by calling close() on it:

queue.close();

Normally, one will want to keep trying to acquire frames from the queue so long as it is either still open or still has frames left in it (or both). This can be easily checked by calling isAlive() on the queue. Therefore, one may have a loop that looks something like:


Java

while (queue.isAlive()) {
    F frame = queue.nextFrame();
    /* do something with the frame here */
}

Kotlin

while (queue.isAlive()) {
    val frame = queue.nextFrame()
    /* do something with frame here */
}

Python

while queue.isAlive():
    frame = queue.nextFrame()
    # do something with frame here

Most likely, you'd want this loop to be running on a separate thread to your main program.

Frame Threads and Streaming to File

To help with using frame queues, the concept of a frame thread is provided. This is a separate thread which is given a frame queue, and runs a loop like those demonstrated in the previous section. These can be created by calling startFrameThread(...). This method requires a lambda function or method reference that you want to run on each iteration of the loop.

// Creates, sets up and launches thread in one go
FrameThread<F> thread = camera.startFrameThread(frame -> { /* do something with frame here */ });

/* at some point later */

thread.stop();     // Stop the thread gracefully after it has processed all frames
// or
thread.stopNow();  // Forcibly stop the thread right now

This would be the equivalent of doing the following:

// Open a queue to use
FrameQueue<F> queue = camera.openFrameQueue();

// Create a new thread to continuously retrieve frames from it
Thread thread = new Thread(() -> {

    while (queue.isAlive()) {

        F frame = queue.nextFrame();

        /* do something with frame here */

    }

});

// Launch our new thread
thread.start();

/* at some point later */

queue.close();       // Stop more frames from coming in
thread.interrupt();  // Interrupt in case we are waiting on nextFrame()
thread.join();       // Wait for thread to shutdown

So, as we can hopefully see, launching a frame thread can simplify this process quite a bit. On top of this, a pre-defined frame thread may be launched specifically for streaming frames directly to file. This can be done by calling the streamToFile(...) method like so:

FrameThread<F> thread = camera.streamToFile("/path/to/file.bin");

Now, whenever the camera is acquiring (and before we stop the thread), any frames acquired will be losslessly written directly to disk in a raw binary format. The format of this file is explained in the header of the file, but it can be read back in using a FrameReader. Each camera class should have an openFrameReader(...) static method for this purpose. For instance, if our file was created by a Lumenera camera, one would do:

FrameReader<U16RGBFrame> reader = Lumenera.openFrameReader("/path/to/file.bin");

while (reader.hasFrame()) {
    U16RGBFrame frame = reader.readFrame();
    /* do something with frame here */
}

This way, each frame is read in individually, rather than having to hold them all in RAM at the same time. Frame readers can also be used to convert such a file to an HDF5 file by use of convertToHDF5(...):

Lumenera.openFrameReader("/path/to/file.bin").convertToHDF5("/path/to/output.h5");

HDF5 is not well optimised for sequential writing (it is designed for dumping data already held in RAM into a file in one go, thus often requiring said data to be loaded into RAM in its entirety), so this may take a long time and could potentially use enough RAM to cause an OutOfMemoryError. Be careful! You may find it easier to just work with one frame at a time read in by the frame reader.
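
For instance, per-frame statistics can be computed while only ever holding one frame in memory at a time; a sketch, assuming the file was recorded by an Andor3 camera (so that frames read back as U16Frame objects):

FrameReader<U16Frame> reader = Andor3.openFrameReader("/path/to/file.bin");

while (reader.hasFrame()) {

    U16Frame frame = reader.readFrame();

    // Example: mean pixel intensity of this frame alone
    long total = 0;

    for (int value : frame.getData()) {
        total += value;
    }

    System.out.println(frame.getTimestamp() + ": " + ((double) total / frame.size()));

}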

Frame Objects

The objects a camera returns to represent individual frames all extend from the Frame base class/interface. The reason that different cameras use different frame classes is because different cameras return data in different formats.

For instance, some return 16-bit integers, some return 32-bit integers. Colour cameras need to return three channels per pixel, while monochrome cameras only need to return one. Therefore, each frame defines a different data type for the pixels it contains.

Some common monochrome frame classes are U8Frame, U16Frame, and U32Frame, which represent frames of unsigned 8-, 16-, and 32-bit integers respectively (often referred to as byte, short, and int respectively). For colour cameras, they will likely return either RGBFrame or U16RGBFrame objects depending on their intensity resolution.

Basic properties of the frame can be retrieved by their corresponding getter methods:

int  width     = frame.getWidth();
int  height    = frame.getHeight();
int  size      = frame.size(); // width * height
long timestamp = frame.getTimestamp();

The value of a pixel at given (x, y) co-ordinates can be extracted by using get(x, y) like so:

U16Frame frame1 = ...;
RGBFrame frame2 = ...;

int pixel1 = frame1.get(25, 12); // Monochrome pixel at (25, 12)
RGB pixel2 = frame2.get(15, 17); // RGB pixel at (15, 17)

The red, green, and blue channels can then be extracted individually, or as a single "ARGB" value, from the RGB pixel object like so:

int red   = pixel2.getRed();
int green = pixel2.getGreen();
int blue  = pixel2.getBlue();

int argb  = pixel2.getARGB();

The whole image can also be returned as either a 1- or 2-dimensional array by calling getData() or getImage() respectively like so:

int[]   linear1 = frame1.getData();
int[][] image1  = frame1.getImage();

RGB[]   linear2 = frame2.getData();
RGB[][] image2  = frame2.getImage();

Regardless of what type of data each pixel is represented as, one can always extract the data of the frame as either a 1- or 2-dimensional array of ARGB values representing how to display the frame visually like so:

int[]   argbData1  = frame1.getARGBData();
int[][] argbImage1 = frame1.getARGBImage();

int[]   argbData2  = frame2.getARGBData();
int[][] argbImage2 = frame2.getARGBImage();

ARGB values are integers that represent individual colours, thus regardless of whether the frame is colour or monochrome, its ARGB data is universally interpretable to create a bitmap image. This is taken advantage of to provide the savePNG(...) method, allowing for a frame to be written to disk as a PNG image file.

frame1.savePNG("/path/frame1.png");
frame2.savePNG("/path/frame2.png");

The two PNG files created above from frame1 and frame2 will be monochrome and colour respectively. Similarly, the ImageDisplay GUI element can use this universal feature to draw any frame like so:

ImageDisplay iDisp1 = new ImageDisplay("Frame 1");
ImageDisplay iDisp2 = new ImageDisplay("Frame 2");

iDisp1.drawFrame(frame1);
iDisp2.drawFrame(frame2);

iDisp1.show();
iDisp2.show();

Therefore, if one takes the frame listener feature discussed previously, one can create a live view of the camera by writing the following:

Camera<F>    camera = new ...;
ImageDisplay iDisp  = new ImageDisplay("Live View");

iDisp.show();

camera.addFrameListener(frame -> iDisp.drawFrame(frame));
camera.startAcquisition();

which will keep going until camera.stopAcquisition() is called somewhere.
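
If you want to shut such a live view down programmatically, keep hold of the listener handle returned by addFrameListener(...); a sketch:

FrameListener<F> listener = camera.addFrameListener(iDisp::drawFrame);
camera.startAcquisition();

/* at some point later */

camera.stopAcquisition();
camera.removeFrameListener(listener);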

Example: Live Camera View

The following is a simple programme that takes a Camera object and an ImageDisplay, and makes the display update with the latest frame as fast as it reasonably can, thus giving us a live video stream from the camera.


Java

Camera<?>    camera = new ...(...); // Connect to whatever camera here
ImageDisplay iDisp  = new ImageDisplay("Live View");

iDisp.addToolbarButton("Start", camera::startAcquisition);
iDisp.addToolbarButton("Stop", camera::stopAcquisition);

camera.addFrameListener(iDisp::drawFrame);

iDisp.setExitOnClose(true);
iDisp.show();

Kotlin

val camera = ...(...) // Connect to whatever camera here
val iDisp  = ImageDisplay("Live View")

iDisp.addToolbarButton("Start", camera::startAcquisition)
iDisp.addToolbarButton("Stop", camera::stopAcquisition)

camera.addFrameListener(iDisp::drawFrame)

iDisp.setExitOnClose(true)
iDisp.show()

Python

camera = ...(...) # Connect to whatever camera here
iDisp  = ImageDisplay("Live View")

iDisp.addToolbarButton("Start", camera.startAcquisition)
iDisp.addToolbarButton("Stop", camera.stopAcquisition)

camera.addFrameListener(iDisp.drawFrame)

iDisp.setExitOnClose(True)
iDisp.show()