Faster RCNN in EENet - tambetm/caffe GitHub Wiki

How to run Faster R-CNN in EENet

  1. Install prerequisites

EENet has most of the prerequisites already installed; only protobuf and gflags are too old.

To install a recent version of protobuf:

wget https://github.com/google/protobuf/releases/download/v2.6.1/protobuf-2.6.1.tar.gz
tar xzf protobuf-2.6.1.tar.gz
cd protobuf-2.6.1
./configure --prefix=$HOME
make
make install

To install a recent version of gflags:

wget https://github.com/gflags/gflags/archive/v2.1.2.tar.gz -O gflags-2.1.2.tar.gz
tar xzf gflags-2.1.2.tar.gz
cd gflags-2.1.2
mkdir build
cd build
cmake -DBUILD_SHARED_LIBS:BOOL=ON -DCMAKE_INSTALL_PREFIX:STRING=$HOME ..
make
make install

I prefer to have everything installed under $HOME/lib and $HOME/bin, so there is only one extra entry in my LD_LIBRARY_PATH and PATH. If you prefer a different location, replace $HOME above accordingly.

Add those directories to the corresponding search paths in your .bash_profile or .bashrc, before the standard paths:

export PATH=$HOME/bin:$PATH
export LD_LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH:/opt/opencv-2.4.10/lib:/opt/boost-1.55/lib

While we are at it, the lines above add OpenCV and Boost as well. Log out and back in for the changes to take effect.
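The order of these entries matters: the shell and the dynamic linker take the first match, so putting $HOME/bin and $HOME/lib first ensures the freshly built protobuf and gflags shadow the outdated system copies. A toy illustration of the first-match-wins rule (the directories and version numbers below are made up):

```python
# Toy model of how PATH / LD_LIBRARY_PATH lookups resolve:
# directories are scanned in order and the first hit wins.
search_path = ["/home/me/lib", "/usr/local/lib", "/usr/lib"]
installed = {
    "/home/me/lib": {"libprotobuf.so": "2.6.1"},  # our fresh build
    "/usr/lib": {"libprotobuf.so": "2.3.0"},      # outdated system copy
}

def resolve(name, dirs=search_path):
    """Return (directory, version) of the first matching library."""
    for d in dirs:
        if name in installed.get(d, {}):
            return d, installed[d][name]
    raise LookupError(name)

print(resolve("libprotobuf.so"))  # -> ('/home/me/lib', '2.6.1')
```

If /usr/lib came first in the list, the stale 2.3.0 copy would be picked up instead, which is exactly the failure mode the export order above avoids.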

  2. Install Faster R-CNN

git clone --recursive https://github.com/rbgirshick/py-faster-rcnn.git
cd py-faster-rcnn/lib
make
cd ../caffe-faster-rcnn
cp Makefile.config.example Makefile.config

Now change the following lines in Makefile.config:

  1. Uncomment line 5:

    USE_CUDNN := 1
    
  2. Change CUDA home on line 28:

    CUDA_DIR := /opt/cuda-6.0
    
  3. Comment out lines 39-40:

    #             -gencode arch=compute_50,code=sm_50 \
    #             -gencode arch=compute_50,code=compute_50
    
  4. To make use of the Intel Math Kernel Library (MKL) for faster matrix operations, change lines 46 and 50-51:

    BLAS := mkl
    ...
    BLAS_INCLUDE := /opt/intel/mkl/include
    BLAS_LIB := /opt/intel/mkl/lib
    

    Alternatively, you may use the ATLAS library:

    BLAS := atlas
    ...
    BLAS_INCLUDE := /usr/include/atlas-x86_64-sse3/
    BLAS_LIB := /usr/lib64/atlas-sse3/
    

    To my knowledge the difference is minor, but I would be happy to receive some benchmarking results.

  5. Change line 65 to point to the NumPy installation in your virtualenv sandbox:

                   $(HOME)/sandbox/lib64/python2.7/site-packages/numpy/core/include
    
  6. Enable Python layers on line 87:

     WITH_PYTHON_LAYER := 1
    
  7. Add the protobuf, gflags, OpenCV and Boost directories to the include and library paths on lines 90-91:

    INCLUDE_DIRS := $(PYTHON_INCLUDE) $(HOME)/include /usr/local/include /opt/opencv-2.4.10/include /opt/boost-1.55/include
    LIBRARY_DIRS := $(PYTHON_LIB) $(HOME)/lib /usr/local/lib /usr/lib /opt/opencv-2.4.10/lib /opt/boost-1.55/lib
    

    NB! The protobuf and gflags include and library directories must come before /usr/local, otherwise the outdated system versions get picked up first!
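On the benchmarking question from step 4: a rough way to compare MKL and ATLAS is to time a large matrix multiply from NumPy. Note this measures the BLAS that NumPy itself is linked against, which is not necessarily the one Caffe links to, so treat the numbers only as a ballpark:

```python
import time
import numpy as np

# Time a dense matrix multiply; wall-clock time depends heavily on which
# BLAS implementation (MKL, ATLAS, reference) is linked in.
n = 1024
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.time()
c = a.dot(b)
elapsed = time.time() - start

print("%dx%d matmul took %.3f s" % (n, n, elapsed))
```

Running this once under an MKL-linked NumPy and once under an ATLAS-linked one gives a quick, if crude, comparison.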

Then compile as usual:

make all -j8
make pycaffe

The Faster R-CNN test scripts are broken, so these don't seem to work:

make test -j8
make runtest

  3. Testing the installation

Activate the Python virtualenv and install the dependencies:

source ~/sandbox/bin/activate
pip install cython
pip install easydict

Then run the demo on one of the GPU nodes:

srun --partition=gpu --gres=gpu:1 --constraint=K20 ./tools/demo.py

The demo runs through, but doesn't open the windows with the detection results, presumably because there is no display attached to the compute nodes. The code has to be modified to save the results to a file instead.
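One way to do that is to switch matplotlib to a non-interactive backend and save each figure instead of showing it. A minimal sketch of the idea (the output filename is an assumption; in tools/demo.py you would apply the same pattern to the figures it draws and drop the final plt.show()):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; must be selected before pyplot is imported
import matplotlib.pyplot as plt

# Stand-in for the detection plot the demo draws; replacing plt.show()
# with fig.savefig(...) along these lines writes the result to disk.
fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
ax.set_title("detections")
fig.savefig("detections.png")
plt.close(fig)
```

The saved PNGs can then be copied off the cluster and inspected locally.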