Tracking
This chapter of the wiki explains how to run the tracking code and describes the variables it creates.
Required Matlab plugins
For the webcam: Image Acquisition Toolbox Support Package for OS Generic Video Interface
Launch the code in Matlab
Run these lines of code
addpath(genpath('C:\Users\MobsHP\Dropbox\Kteam\PrgMatlab'))
rmpath(genpath('C:\Users\MobsHP\Dropbox\Kteam\PrgMatlab\ImageAcq'))
cd('YourDirectory') % don't save on the Dropbox!
Gen_Track_ObjectOrientated_ForTwoCams
Step-by-step guide to the tracking code
**Initialisation: click on the top button and answer the questions**
--> How many cameras
--> IR camera or webcam
--> Frame rate: 10 is good for sleep, 15 is good for behaviour
--> Save format: avi only or avi + Matlab files. Matlab files will slow down tracking and produce big folders, but they allow easier image manipulation in Matlab. For long sessions save avi only, or the files become unmanageably big.
--> Then check the camera position and focus. These can be adjusted with the Logitech menu. Press save and close.
--> Give the Arduino number. If you have more than one Arduino plugged in, check that it is the one sending synchronisation TTLs to the Intan. You can check the Arduino's number by opening the Arduino software (or list the serial ports from Matlab, as in the sketch after this list). In this example the Arduino is the Uno on port 16.
--> Choose the stimulator type
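If you prefer to check from within Matlab rather than the Arduino software, here is a minimal sketch (this assumes a recent Matlab with the serialport interface; it is not part of the tracking code itself):

```matlab
% List the serial ports currently visible to Matlab (R2019b or later).
% The Arduino sending the synchronisation TTLs to the Intan should
% appear here, e.g. on port 16 in the example above.
ports = serialportlist("available");
disp(ports)
```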
Get the reference
The reference picture should be of your environment with no mouse (with the IR camera the mouse can be in the picture).
Make sure the reference is taken under exactly the same lighting as during your experiment.
Make sure the whole environment is visible.
After taking the picture, press save and then close.
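For context, grabbing a single reference frame from a webcam in Matlab relies on the Image Acquisition Toolbox mentioned above. A minimal sketch of the idea (the adaptor name 'winvideo' and device index 1 are assumptions to adapt to your setup; the GUI handles this for you):

```matlab
% Minimal sketch: capture one frame to use as a reference picture.
% 'winvideo' and device index 1 are assumptions - adapt to your camera.
vid = videoinput('winvideo', 1);   % open the OS generic video device
ref = getsnapshot(vid);            % capture a single frame
imshow(ref)                        % check that the whole environment is visible
delete(vid)                        % release the camera
```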
Get the mask
Select the subregion of the image which contains your environment. The mask (turquoise region on the GUI) will be excluded from the tracking: if the mouse goes there it will not be found, and any change of light or moved object in this region will be ignored.
When you're done press save and close.
Reset mask allows you to start over.
The Circle button allows you to toggle between drawing a circle and drawing a shape with lines.
Mask out draws the external lines of the environment. Draw the lines and then double click to validate.
Mask in draws the internal lines - this can be used to exclude a subpart of the environment like in this example.
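Under the hood the mask is simply a binary image that excludes pixels from tracking. A minimal sketch of the idea, using standard Image Processing Toolbox calls rather than the exact code behind the GUI:

```matlab
% Minimal sketch of how a mask excludes regions from tracking.
% roipoly lets you draw a polygon over the reference image; pixels
% outside it are forced to the light background so the (dark) mouse
% can never be detected there.
refGray = rgb2gray(ref);        % greyscale reference (skip if already greyscale)
envMask = roipoly(refGray);     % draw the environment outline, double click to finish
masked  = refGray;
masked(~envMask) = 255;         % outside the mask -> white background
imshow(masked)
```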
Calibrate the pixel to cm conversion
Click on a part of the image whose size you know (for example the distance between two walls), then enter the distance in cm.
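The conversion itself is just the ratio between a clicked pixel distance and the known real-world distance. A minimal sketch (the 40 cm value and variable names are purely illustrative):

```matlab
% Minimal sketch of the pixel-to-cm calibration.
imshow(refGray)
[x, y] = ginput(2);                    % click the two ends of a known distance
pixelDist = hypot(diff(x), diff(y));   % distance in pixels
knownCm   = 40;                        % e.g. 40 cm between two walls (illustrative)
pixRatio  = knownCm / pixelDist;       % multiply pixel distances by this to get cm
```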
Choose your tracking code
Click on 'choose tracking' and open the drop-down menu. A range of codes is available for different tasks and setups; they are all based on the same basic structure. If you want to add a new tracking code for an experiment you have designed, copy the structure of these codes as closely as possible. Here we illustrate with the code 'TrackingSleepSession'. Click on the code of your choice and then press 'track'.
Launch your tracking
Choose your session name, give the number of your mouse and the duration of the session. Recording will stop automatically after this time.
When you click on StartExpe the windows will open but recording does not start yet. Here is a description of the parameters to help you track your mouse.
ImDiff window: immobility measurement
These sliders control variables related to the bottom window of the tracking interface, which shows how much your mouse has been moving over recent time. The blue line gives the value of Imdiff (ImageDifference), the number of pixels that have changed between two successive images: if the mouse is immobile (freezing, sleeping) this value is very low. Note that the number of changing pixels is only counted within a box around the mouse's position (see Fz Srnd Sz below); a minimal sketch of the measurement is given after this list. At the end of the session a period called FreezeEpoch will be calculated from this measurement using the threshold you have set (red line): all values below the red line will be classified as immobility in this epoch.
--> freezing threshold: the threshold used to define immobility. It corresponds to the red line on the ongoing Imdiff value.
--> Yaxis: change the y axis on the Imdiff window
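Here is a minimal sketch of what Imdiff measures and how the threshold defines immobility (variable names and threshold values are illustrative, not the exact code used by the interface):

```matlab
% Minimal sketch: Imdiff as the number of pixels changing between frames.
% prevFrame and currFrame are successive greyscale frames restricted to
% the box around the mouse (see Fz Srnd Sz below).
pixelChangeThresh = 20;                                % intensity change counted as movement
diffImage = abs(double(currFrame) - double(prevFrame));
imDiff    = sum(diffImage(:) > pixelChangeThresh);     % number of changed pixels (blue line)
freezeThresh = 500;                                    % the red line set with the slider
isImmobile   = imDiff < freezeThresh;                  % below threshold -> immobility/freezing
```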
Tracking parameters
These sliders control various parameters related to tracking; a sketch of the corresponding image-processing steps is given after this list.
--> threshold: the black and white image is thresholded (mouse is dark on light background) according to this value. Everything dark enough is a potential location of the mouse.
--> min size: only parts of the image obtained after thresholding which are larger than this size will be considered as a potential location for the mouse. In the example below, you can see the importance of getting this parameter right.
--> erode obj size: this allows you to erode and dilate the objects detected after thresholding, which typically lets you merge two objects that are separated by a small gap. For example, the tail of the mouse is sometimes seen as detached from the body because of changes in shadows; with this parameter you can dilate the two objects so that they join and are tracked as a single object. See the example below.
--> Fz Srnd Sz: this parameter sets the size of the box around the mouse's current position that is used to calculate the number of pixels changing. It should be adjusted so that the dark grey box contains the whole mouse.
--> smoothing: this parameter smooths the original image taken from the camera. It should in general be set to 0, except when filming on a grid (for example in fear conditioning boxes).
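Taken together, these sliders correspond to a standard segmentation pipeline. A minimal sketch of the idea with Image Processing Toolbox calls (parameter values are illustrative and this is not the exact code behind the GUI):

```matlab
% Minimal sketch of the image-processing steps the sliders control.
% frameGray is the current greyscale camera frame; values are illustrative.
smoothed = imgaussfilt(frameGray, 1);          % 'smoothing' (skip if the slider is at 0)
fg = smoothed < 80;                            % 'threshold': dark pixels are candidate mouse
fg = bwareaopen(fg, 200);                      % 'min size': drop objects smaller than 200 px
fg = imdilate(fg, strel('disk', 3));           % 'erode obj size': merge nearby parts (body + tail)
stats = regionprops(fg, 'Centroid', 'Area');   % remaining candidate objects
[~, idx] = max([stats.Area]);                  % keep the largest object as the mouse
mousePos = stats(idx).Centroid;                % tracked position in pixels (x, y)
```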
Session end
The session will cease automatically after the pre-assigned time limit. If you want to stop earlier press 'stop emergency'. At the end of the session an overview figure of the session will be saved along with behavResources.mat and the video and/or individual tracking frames. The content of behavResources.mat is detailed in another article.