DeepLabCut
Installing DeepLabCut on your computer:
CPU only:
Install Anaconda from the web.
Launch the command prompt.
Use Anaconda to create a new environment:
conda create --name dlc-CPU python=3.6
(you can use any name you want instead of dlc-CPU)
NB: you may have issues with the conda command. One solution that worked for me is to run the following line every time you open a new command prompt:
source ~/anaconda*/bin/activate root
Activate the environment (you have to do this every time you plan on using it):
conda activate dlc-CPU
conda install tensorflow
(if you run into problems later, try pinning the version: conda install tensorflow==1.15)
pip install deeplabcut
conda install ipython
conda install wxpython
Launch ipython: ipython
Once ipython is launched, you should see In [1]
Try importing tensorflow to check its installation: import tensorflow
Try importing DeepLabCut: import deeplabcut
Then deeplabcut.launch_dlc()
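As a quick sanity check, you can also print the installed versions from ipython; this is just a minimal sketch, assuming the installs above succeeded:

import tensorflow as tf
import deeplabcut
print(tf.__version__)          # should print the TensorFlow version, e.g. 1.15.x
print(deeplabcut.__version__)  # the DeepLabCut release that pip installed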
With an Nvidia GPU (way faster):
Start by installing CUDA on your computer from the Nvidia website.
Install Anaconda
Launch the command prompt
conda create --name dlc-gpu python=3.6
conda activate dlc-gpu
conda install tensorflow-gpu
pip install deeplabcut
conda install ipython
ipython
import tensorflow
import deeplabcut
deeplabcut.launch_dlc()
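To check that TensorFlow actually sees your GPU, you can run the following in ipython (TensorFlow 1.x API; it returns True only if CUDA and the driver are set up correctly):

import tensorflow as tf
print(tf.test.is_gpu_available())  # True means training will run on the GPU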
Using DeepLabCut on your computer
Now that DeepLabCut is installed and works properly, you are able to open the GUI. From there you can either:
Start a new project:
conda activate dlc-gpu
ipython
import deeplabcut
deeplabcut.launch_dlc()
Manage projects
Create new project
You have to name the project and give the experimenter’s name.
Load videos (select the videos that you want to use to train the neural network)
You can select the directory where the project will be created (for instance, selecting the path to Desktop will result in the project’s folder being created in Desktop/NameOfProject-NameOfExperimenter-Year-Month-Day)
It is recommended to answer yes to “do you want to copy the videos”, so that local copies of the training videos are stored in your project’s folder for reproducibility purposes.
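If you prefer working from ipython rather than the GUI, the same project can be created with DeepLabCut’s create_new_project function; all names and paths below are placeholders to adapt to your setup:

import deeplabcut

# Returns the path to the new project's config.yaml
config_path = deeplabcut.create_new_project(
    "NameOfProject",
    "NameOfExperimenter",
    ["/path/to/video1.avi", "/path/to/video2.avi"],
    working_directory="/path/to/Desktop",
    copy_videos=True,  # keep local copies of the videos, as recommended above
)
print(config_path)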
Edit the config file
You can (and should) use this to change the default parameters for the project. Some of the most important ones are the following (a scripted way of editing them is sketched after this list):
bodyparts: replace the default names with the list of body parts you want to track.
numframes2pick: the number of frames you want to label out of every training video. Make sure that the total number of labeled frames (numframes2pick x number of training videos) exceeds 400 for correct training performance; for example, 20 frames from each of 20 videos gives 400 labeled frames.
dotsize: this is just an aesthetic variable for plotting size. Use a dot size of 2 or 3 pixels if you’re working on sub-HD videos; otherwise the plotted tracking positions will cover the tracked body part entirely.
pcutoff: a threshold on the likelihood associated with each tracked point. If the likelihood is lower than pcutoff, DeepLabCut considers the body part not visible on the image. Increasing pcutoff reduces the number of false positives in tracking (detecting a body part that is obstructed or not in the frame) but increases the number of false negatives, and vice versa.
Don’t forget to save the config.yaml file that you’ve just edited.
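If you prefer to edit these values programmatically, here is a minimal sketch using PyYAML (assuming it is installed in your environment; the path and body part names are placeholders, and note that rewriting the file this way drops its comments, so a text editor remains the simplest option):

import yaml

config_path = "/path/to/project/config.yaml"  # placeholder

with open(config_path) as f:
    cfg = yaml.safe_load(f)

cfg["bodyparts"] = ["snout", "left_ear", "right_ear", "tail_base"]  # example names
cfg["numframes2pick"] = 20  # 20 frames x 20 videos = 400 labeled frames
cfg["dotsize"] = 3          # small dots for sub-HD videos
cfg["pcutoff"] = 0.6        # likelihood threshold for "visible"

with open(config_path, "w") as f:
    yaml.dump(cfg, f, default_flow_style=False)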
Extract frames
Extracts frames to label from each video. Automatic is recommended. The default clustering algorithm is kmeans, which groups the training video’s images into clusters based on the similarities and differences detected, then extracts the ‘numframes2pick’ images evenly among these clusters. This maximises the extraction of training images that are very different (presence of the experimenter’s hand in the video, animal in an unusual position, etc., in addition to ‘normal’ situations), which is great for training.
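The same extraction can be run from ipython in one line (the config path is a placeholder):

import deeplabcut
deeplabcut.extract_frames("/path/to/config.yaml", mode="automatic", algo="kmeans")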
Label frames > Label frames > Load frames
You will find a folder for each video, containing the extracted frames that you should label. Select one of the folders; now you can start labelling. Right click on the image to place the marker for the current body part. You can move a placed marker with a left click & drag. However, if you have mistakenly placed a body part that is not visible on the image, unfortunately you cannot remove it!
Once you have placed all the body parts for the first image, click on Next to label the next one, and so on. If you cannot see a certain body part on an image but want to label another, you can directly click on the white circle corresponding to the body part you want to place (the list of body parts is in the top right corner of the GUI). This lets you skip body parts that are not visible on the image. Once you’ve finished labelling all the frames for this video, click on Save, then quit.
“Do you want to label another data set?”
Yes
Select the folder of the frames extracted from another video that you haven’t labelled yet and proceed with labelling.
Once you are done labelling all of the videos, save and quit.
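Before moving on, it can be worth verifying your annotations with DeepLabCut’s check_labels function, which plots your labels back onto the extracted frames so you can spot misplaced points (the config path is a placeholder):

import deeplabcut
deeplabcut.check_labels("/path/to/config.yaml")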
Create training dataset
This step is completely automated. You can just proceed and click on OK.
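The equivalent call from ipython is a one-liner (the config path is a placeholder):

import deeplabcut
deeplabcut.create_training_dataset("/path/to/config.yaml")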
Train network > “Want to edit pose_cfg.yaml?” > Yes > “Click to open the pose config file”.
You can change several parameters. Notably, if your dataset does NOT contain “left/right polarity” (like right and left ears, or left and right hands) you can enable mirroring in the data augmentation parameters (mirror: true instead of mirror: false).
If DeepLabCut is running on a powerful graphics card, set save_iters to 50000 and display_iters to 1000. If you are running on a CPU or a weak graphics card, change these to 10000 and 100. save_iters dictates after how many training iterations your neural network’s state is saved: the network is only saved at each multiple of save_iters, so if you stop before the next multiple is reached, you lose the progress since the last save. display_iters just controls how often you are updated about the training progress; a low value means more frequent updates, which is recommended when training is slow (on a CPU or a weak graphics card).
Save and close the file.
“Update the parameters”
Ok
The network starts training.
If you want to interrupt it before the end of the 1,060,000 iterations, you can simply close the command prompt. Make sure that you have just passed a multiple of save_iters so that you don’t lose too much progress. For example, if save_iters=50000 and you are at iteration 198,000, the last saved version of the network is the one at 150,000 iterations! If you wait until past 200,000, then the last saved version is the one with 200,000 training iterations. Re-open DeepLabCut (using conda activate …, ipython, import deeplabcut, …)
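Training can also be started from ipython; the keyword arguments below override the corresponding values in pose_cfg.yaml, and maxiters lets you stop earlier than the default schedule (all values shown are examples, and the config path is a placeholder):

import deeplabcut
deeplabcut.train_network(
    "/path/to/config.yaml",
    displayiters=1000,
    saveiters=50000,
    maxiters=200000,
)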
Manage project > Load an existing project
Once you have completed the training once, the next steps are the same whether it’s the first or the hundredth time you’re using your network to analyse videos.
Load an existing project:
Browse
Find your DeepLabCut project folder, open it and select the config.yaml file
Ok
Go directly to:
Analyse videos > Select videos to analyse
You can use any video, not just the ones you’ve used for training (that’s the point of DeepLabCut!)
Want to save result(s) as csv? > Yes
(You want a CSV file containing all of the tracked coordinates and their likelihoods for each frame if you plan on doing further analysis in Matlab or Python)
Want to create labeled video(s)? > Yes
if you want to visualise the results of the tracking to assess how well it worked.
> No
if this is the 100th time you’re using it, you’ve always been satisfied with it, and you don’t want to fill your computer with tracking videos that you will never use.
Now:
RUN
Go to the folder containing the videos you have analysed. You should find a DeepLabCut video showing the tracked positions, and a CSV file with all the coordinates. Congratulations! You are now able to use DeepLabCut properly! If you want to take your usage to the next level, refer to the comprehensive explanations on DeepLabCut’s website, and check their Twitter account regularly for updates.
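For reference, the analysis steps above can also be scripted from ipython; a minimal sketch with placeholder paths:

import deeplabcut

videos = ["/path/to/new_video.avi"]  # any video, not just the training ones
deeplabcut.analyze_videos("/path/to/config.yaml", videos, save_as_csv=True)
deeplabcut.create_labeled_video("/path/to/config.yaml", videos)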
Once you have finished tracking your points of interest in your video, you can export the CSV file to Matlab for further analysis. Start by opening the resulting CSV file to understand the content of each column. Generally, each line corresponds to a different frame of the video. The first column is the frame number, the second column is the x coordinate of the first tracked body part, the third column is its y coordinate, and the fourth column is the likelihood score that DeepLabCut gave for its tracking of the first body part. The fifth column is the x coordinate of the second tracked body part, and so on.
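If you do the analysis in Python rather than Matlab, pandas can parse the file directly; note that the CSV starts with three header rows (scorer, body parts, coordinates) above the data lines described here. The file name and body part name below are placeholders:

import pandas as pd

# header=[0, 1, 2] consumes the three header rows; column 0 is the frame number
df = pd.read_csv("video_DLC_output.csv", header=[0, 1, 2], index_col=0)

scorer = df.columns[0][0]                # network/scorer name from the header
x = df[(scorer, "snout", "x")]           # x coordinates of a body part named "snout"
y = df[(scorer, "snout", "y")]           # y coordinates
p = df[(scorer, "snout", "likelihood")]  # per-frame likelihood score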
Wiki by Othman. Please contact me by email if you find any issues after following this tutorial.