ncnn - YingkunZhou/EdgeTransformerBench GitHub Wiki
How to build on Linux/Android

```bash
# for linux
sudo apt install libvulkan-dev
# for android (Termux)
pkg install vulkan-tools vulkan-headers
git clone https://github.com/Tencent/ncnn.git #--depth=1
cd ncnn
# tested at commit 14b000d2b739bd0f169a9ccfeb042da06fa0a84a
git submodule update --init
mkdir build && cd build

# build for linux or android
# for linux: build outside any conda env so the system vulkan lib is used
# clang seems to work better than gcc?
export CC=clang-16
export CXX=clang++-16
cmake .. -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=../install \
  -D NCNN_SHARED_LIB=ON -D NCNN_VULKAN=ON -D NCNN_BUILD_BENCHMARK=OFF

# ------------------------
# cross build for android
# https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-android
export ANDROID_NDK=$PWD/android-ndk-r22b
cmake -DCMAKE_TOOLCHAIN_FILE="$ANDROID_NDK/build/cmake/android.toolchain.cmake" \
  -DANDROID_ABI="arm64-v8a" \
  -DANDROID_PLATFORM=android-24 -DNCNN_VULKAN=ON .. \
  -D CMAKE_INSTALL_PREFIX=../install -D NCNN_SHARED_LIB=ON

make install -j32
```
```bash
#export OPENCV_LIB=$HOME/miniforge3/envs/py3.8/lib
#export OPENCV_INC=$HOME/miniforge3/envs/py3.8/include/opencv4
export NCNN_LIB=$HOME/work/ncnn/install/lib
export NCNN_INC=$HOME/work/ncnn/install/include/ncnn
#g++ -O3 -o ncnn_perf ncnn_perf.cpp utils.cpp -std=c++17 \
#  -I$NCNN_INC -I$OPENCV_INC -L$NCNN_LIB -L$OPENCV_LIB \
#  -lncnn -lopencv_imgproc -lopencv_imgcodecs -lopencv_core -lopencv_dnn
sudo apt -y install libopencv-dev
g++ -O3 -o ncnn_perf ncnn_perf.cpp utils.cpp -std=c++17 -I$NCNN_INC -L$NCNN_LIB -lncnn `pkg-config --cflags --libs opencv4`
g++ -O3 -DTEST -o ncnn_perf-test ncnn_perf.cpp utils.cpp -std=c++17 -I$NCNN_INC -L$NCNN_LIB -lncnn `pkg-config --cflags --libs opencv4`
g++ -g -DDEBUG -o ncnn_perf-debug ncnn_perf.cpp utils.cpp -std=c++17 -I$NCNN_INC -L$NCNN_LIB -lncnn `pkg-config --cflags --libs opencv4`

# export LD_LIBRARY_PATH=$OPENCV_LIB:$NCNN_LIB
export LD_LIBRARY_PATH=$NCNN_LIB
./ncnn_perf [--only-test xxx] [--backend=?]
# or
LD_LIBRARY_PATH=$NCNN_LIB ./ncnn_perf [--only-test xxx] [--backend=?]
```
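For context, `ncnn_perf` drives ncnn through its C++ API. A minimal stand-alone sketch (not the actual `ncnn_perf` code; the file and blob names `model.ncnn.param`, `in0`, `out0` are pnnx-style placeholders):

```cpp
#include <cstdio>
#include "net.h"  // ncnn

int main() {
    ncnn::Net net;
    net.opt.use_vulkan_compute = true;  // Vulkan GPU backend; set false for CPU
    if (net.load_param("model.ncnn.param") || net.load_model("model.ncnn.bin")) {
        fprintf(stderr, "failed to load model\n");
        return -1;
    }
    ncnn::Mat in(224, 224, 3);  // w, h, c
    in.fill(0.5f);              // dummy input; real code fills preprocessed pixels
    ncnn::Extractor ex = net.create_extractor();
    ex.input("in0", in);
    ncnn::Mat out;
    ex.extract("out0", out);
    printf("output elements: %d\n", out.w);
    return 0;
}
```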
Convert tool: how to build and run

```bash
# use a conda env
cd tools/pnnx
# pip install torch
# remove the protobuf & libprotobuf packages first
mkdir build && cd build
cmake ..
make -j32
```
There are two conversion paths (a pnnx usage sketch follows this list):

- pytorch --pnnx-> ncnn: currently the recommended path; the PyTorch model is consumed directly, straight from the source ✅
- pytorch -> onnx -> ncnn: covers only a subset of what pnnx can convert, so there is no reason to use it any more ❌
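A sketch of the pnnx path, assuming a timm model and a 224x224 input (the file names are illustrative):

```bash
# trace a PyTorch model to TorchScript, then convert it with pnnx
python - <<'EOF'
import torch, timm
m = timm.create_model('resnet50', pretrained=True).eval()
torch.jit.trace(m, torch.rand(1, 3, 224, 224)).save('resnet50.pt')
EOF
./pnnx resnet50.pt inputshape=[1,3,224,224]
# emits resnet50.ncnn.param / resnet50.ncnn.bin (plus *.pnnx.* intermediates)
```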
The whole conversion framework is driven by recursive invocation!

ncnn visibly changes model accuracy! That is because ncnn enables fp16 by default (see the snippet below for how to turn it off).
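If bit-exact fp32 results are needed for an accuracy comparison, the fp16 paths can be switched off per net; a minimal sketch using ncnn's public `Option` fields:

```cpp
#include "net.h"

ncnn::Net net;
// disable the default fp16 storage/arithmetic so inference stays in fp32
net.opt.use_fp16_packed = false;
net.opt.use_fp16_storage = false;
net.opt.use_fp16_arithmetic = false;
// load param/bin and run as usual afterwards
```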
ncnn has very little support for the newer SOTA CV models:
- efficientformerv2 ✅
- SwiftFormer ❌ runtime error

```
layer load_model 35 normalize_16 failed
Segmentation fault (core dumped)
############# pass_ncnn
unsupported normalize for 2-rank tensor with axis 1   (x8)
```
- EMO ❌ runtime error

```
Segmentation fault (core dumped)
############# pass_ncnn
fallback batch axis 233 for operand 58
... (same message repeated for 112 operands, 58 through 314)
fallback batch axis 233 for operand pnnx_expr_2751_mul(65,2.236068e-01)
... (same message for 6 more pnnx_expr_*_mul operands)
unbind along batch axis 0 is not supported   (x2)
reshape to 6-rank tensor is not supported yet!   (x7)
permute 5-rank tensor is not supported yet!   (x34)
permute 6-rank tensor is not supported yet!   (x7)
ignore Slice unbind_7 param dim=0
ignore Slice unbind_8 param dim=0
```
- edgenext ❌ runtime error

```
layer load_model 104 normalize_42 failed
Segmentation fault (core dumped)
############# pass_ncnn
force batch axis 233 for operand 2
... (same message for operands 15, 29, 39, 40, 83, 93, 103, 113, 123, 133, 134, 179, 189, 190)
binaryop broadcast across batch axis 233 and 0 is not supported   (x41)
insert_reshape_linear 4   (x24)
unsupported normalize for 3-rank tensor with axis 2   (x6)
reshape tensor with batch index 1 is not supported yet!   (x12)
```
- mobilevitv2 ✅
- mobilevit ❌ runtime error

```
Creating ncnn net: mobilevit_xx_small   # on Jetson Orin, Linux
(index: 999, score: -nan), (index: 998, score: -nan), (index: 997, score: -nan),
Creating ncnn net: mobilevit_xx_small   # on Exynos 990, Android
(index: 579, score: 9.695312), (index: 264, score: 9.234375), (index: 937, score: 8.140625),
-------------
fallback batch axis 233 for operand pnnx_expr_345_mul(41,2.500000e-01)
fallback batch axis 233 for operand pnnx_expr_331_add(38,74)
... (same message for 25 more pnnx_expr_* operands, many with deeply nested add/mul chains)
insert_reshape_global_pooling_forward torch.flatten_69 478
```
- LeViT ❌ runtime error

Before the fix:

```
layer torch.flatten not exists or registered
Segmentation fault (core dumped)
############# pass_ncnn
slice with step 2 is not supported   (x2)
ignore torch.flatten torch.flatten_46 param end_dim=1
ignore torch.flatten torch.flatten_46 param start_dim=0
... (the same pair of messages repeats for torch.flatten_47 through torch.flatten_91)
```
Replace torch.flatten with an equivalent formulation:

```diff
diff --git a/levit.py b/levit.py
index 7fa515d..eb6a451 100644
--- a/levit.py
+++ b/levit.py
@@ -148,6 +148,7 @@ class Linear_BN(torch.nn.Sequential):
     def forward(self, x):
         l, bn = self._modules.values()
         x = l(x)
+        return bn(x.view(-1, *x.shape[2:])).view(*x.shape)
         return bn(x.flatten(0, 1)).reshape_as(x)
@@ -465,7 +466,8 @@ class LeViT(torch.nn.Module):
     def forward(self, x):
         x = self.patch_embed(x)
-        x = x.flatten(2).transpose(1, 2)
+        # x = x.flatten(2).transpose(1, 2)
+        x = x.view(*x.shape[:2], -1).transpose(1, 2)
         x = self.blocks(x)
         x = x.mean(1)
         if self.distillation:
```
After the fix:

```
Segmentation fault (core dumped)
############# pass_ncnn
slice with step 2 is not supported   (x2)
```
The ncnn library uses int8 inference automatically; nothing needs to change in your code.
To represent the same number, bf16 uses half the memory of fp32. A phone CPU's cache is only so big, so halving the footprint is a big deal! Even though fp32 arithmetic then requires a shift-based conversion, the fewer reads and the higher cache hit rate are still a net win! However, CPUs that compute directly in bf16 have not shipped yet (they are coming soon), so for now the data has to be read as bf16, converted to fp32 for the actual computation, and the outputs converted back to bf16 for storage.
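For intuition, the bf16-to-fp32 conversion the paragraph refers to is just a 16-bit shift, since bf16 keeps the top half of the IEEE-754 fp32 bit pattern; a minimal sketch:

```cpp
#include <cstdint>
#include <cstring>

// bf16 keeps the sign, exponent and top 7 mantissa bits of fp32,
// so converting is a 16-bit shift of the bit pattern.
static inline float bf16_to_fp32(uint16_t b) {
    uint32_t u = (uint32_t)b << 16;
    float f;
    std::memcpy(&f, &u, sizeof f);
    return f;
}

static inline uint16_t fp32_to_bf16(float f) {
    uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return (uint16_t)(u >> 16);  // truncation; a round-to-nearest step could be added
}
```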
Because the ImageNet-1k validation set is large and ncnn has not released a Vulkan int8 version, only part of the test set (4000/50000 images) is used.
The ncnn 20210507 release brings a major int8 quantized-inference optimization, over 500% faster. New int8 quantized-inference features:
- conv/convdw/fc quantized inference can fuse an arbitrary activation layer
- int8 feature data is automatically converted to the elempack=8 memory layout for better memory-access efficiency
- optimized int8 sgemm kernels for all packings (pack1/pack1to4/pack4/pack8to4, etc.)
- optimized int8 winograd-f43 kernel
- runtime detection of the armv8.2 dot-product instructions, dispatching to the optimized kernel
- with fp16/bf16 enabled, layers other than conv/convdw/fc automatically fall back to fp16/bf16 instead of fp32
A5: EQ's scale search is still slow for large models like vgg16 (each inference pass costs a lot of time). We are preparing some small models such as MobileNet so the results can be reproduced quickly. Caching the FP32 model's intermediate results can indeed cut part of the search time. Other options include processing the weight channels in parallel, and applying early stop at the layer being optimized during inference. We will keep looking into further speedups.
- https://github.com/YingkunZhou/EdgeTransformerPerf/blob/main/python/ppq-quant.py
- WIP: ncnn ViT int8 #154
- mmdeploy int8 quantization for ncnn ViT, part 1
- mmdeploy int8 quantization for ncnn ViT, part 2
- How to configure fp16 computation
The latest version already implements CPU fp16 storage and arithmetic; it is enabled automatically by default and needs no configuration.
This year will be the first year that low-bit quantized models ship in deep-learning computer-vision products. After all, every NPU vendor has adopted int8 quantization/compression: Arm China (I insist on not just saying ARM...) has launched the new Cortex-A76/A55 and the Mali-G76 with full int8 dot-instruction support, Qualcomm's formidable DSP int8 tensor units are eyeing the market, Intel's AVX512 watches quietly from the sidelines, NVIDIA's Jensen Huang stands in his leather jacket holding TensorRT and says "I'm not singling anyone out...", and the big three training frameworks (TensorFlow, PyTorch, MXNet) have started releasing their own int8 model-training schemes. Hopefully by year's end int8 models can meet the accuracy requirements of all current tasks.
By the looks of it, ncnn's Vulkan GPU backend does not support int8 at all!!!
- fp16 and int8 support for vulkan backend
- https://polariszhao.github.io/2020/09/17/ncnn%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90-3/
CPU/GPU FP32/FP16 (the ncnn model format defaults to fp16 storage)
Model | Top-1 | Top-1 //20 est. | Top-1 //50 est. | #params | GMACs |
---|---|---|---|---|---|
efficientformerv2_s0 | - | 76.5(G16)/76.3/75.8(C16) | 76.0 | 3.5M | 0.40G |
efficientformerv2_s1 | - | 78.5 | 79.6 | 6.1M | 0.65G |
efficientformerv2_s2 | - | 82.0 | 82.1 | 12.6M | 1.25G |
mobilevitv2_050 | - | 70.0(G16)/69.7 | 66.4 | 1.4M | 0.5G |
mobilevitv2_075 | - | 75.0 | 74.4 | 2.9M | 1.0G |
mobilevitv2_100 | - | 77.9 | 76.8 | 4.9M | 1.8G |
mobilevitv2_125 | - | 79.2 | 80.5 | 7.5M | 2.8G |
mobilevitv2_150 | - | 81.0 | 81.5 | 10.6M | 4.0G |
mobilevitv2_175 | - | 80.9 | 81.0 | 14.3M | 5.5G |
mobilevitv2_200 | - | 82.0 | 83.0 | 18.4M | 7.2G |
resnet50 | - | 79.9 | 81.4 | 25.6M | 4.1G |
mobilenetv3_large_100 | - | 75.6/75.1(C16) | 75.4/75.0(G16) | 5.5M | 0.29G |
tf_efficientnetv2_b0 | - | 78.4 | 76.9 | 7.1M | 0.72G |
tf_efficientnetv2_b1 | - | 79.2 | 79.3 | 8.1M | 1.2G |
tf_efficientnetv2_b2 | - | 81.6 | 80.4 | 10.1M | 1.7G |
tf_efficientnetv2_b3 | - | 81.6 | 82.2 | 14.4M | 3.0G |
CPU/GPU INT8
- mod means using my modification patch for calibration-dataset image pre-processing
- Here is the original usage command
- the kl method is used by default (see the command sketch below)
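For reference, the stock ncnn post-training quantization flow with ncnn's quantization tools; the mean/norm/shape values below are placeholders that must match the model's pre-processing:

```bash
# 1. generate the calibration table from a list of images
./ncnn2table model.param model.bin imagelist.txt model.table \
    mean=[123.675,116.28,103.53] norm=[0.017,0.017,0.017] \
    shape=[224,224,3] pixel=RGB thread=8 method=kl   # method: kl / aciq / eq
# 2. convert the fp32 model to int8 using the table
./ncnn2int8 model.param model.bin model-int8.param model-int8.bin model.table
```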
Model | Top-1 | Top-1 //20 est. | Top-1 //50 est. | #params | GMACs |
---|---|---|---|---|---|
efficientformerv2_s0 | - | 28.3 | 19.3 | 3.5M | 0.40G |
efficientformerv2_s0-mod | - | 35.8 | 27.4 | 3.5M | 0.40G |
efficientformerv2_s0/eq-mod | - | 64.9 | 60.6 | 3.5M | 0.40G |
efficientformerv2_s1 | - | 0 | 0 | 6.1M | 0.65G |
efficientformerv2_s1/eq-mod | - | 43.9 | 35.5 | 6.1M | 0.65G |
efficientformerv2_s2 | - | 19.2 | 13.4 | 12.6M | 1.25G |
efficientformerv2_s2-mod | - | 74.5 | 69.7 | 12.6M | 1.25G |
efficientformerv2_s2-eq-mod | - | 78.8 | 77.5 | 12.6M | 1.25G |
mobilevitv2_050 | - | 26.2 | 21.3 | 1.4M | 0.5G |
mobilevitv2_050/eq-mod | - | 47.9 | 41.7 | 1.4M | 0.5G |
mobilevitv2_050/eq-mod^^^ | - | 65.3 | 60.3 | 1.4M | 0.5G |
mobilevitv2_075 | - | 63.9 | 60.5 | 2.9M | 1.0G |
mobilevitv2_075/eq-mod | - | 68.4 | 66.4 | 2.9M | 1.0G |
mobilevitv2_075/eq-mod^^^ | - | 72.7 | 71.2 | 2.9M | 1.0G |
mobilevitv2_100 | - | 65.8 | 57.8 | 4.9M | 1.8G |
mobilevitv2_100/eq-mod | - | 69.0 | 67.6 | 4.9M | 1.8G |
mobilevitv2_100/eq-mod^^^ | - | 74.8 | 74.1 | 4.9M | 1.8G |
mobilevitv2_125 | - | 66.3 | 61.8 | 7.5M | 2.8G |
mobilevitv2_125/eq-mod | - | 74.8 | 73.3 | 7.5M | 2.8G |
mobilevitv2_125/eq-mod^^^ | - | 76.7 | 77.9 | 7.5M | 2.8G |
mobilevitv2_150 | - | 60.3 | 50.0 | 10.6M | 4.0G |
mobilevitv2_150/eq-mod | - | 74.9 | 75.2 | 10.6M | 4.0G |
mobilevitv2_150/eq-mod^^^ | - | 78.8 | 80.6 | 10.6M | 4.0G |
mobilevitv2_175 | - | 70.0 | 68.7 | 14.3M | 5.5G |
mobilevitv2_175-mod | - | 73.2 | 60.2 | 14.3M | 5.5G |
mobilevitv2_175-eq-mod^^^ | - | 79.2 | 81.9 | 14.3M | 5.5G |
mobilevitv2_200 | - | 66.8 | 63.5 | 18.4M | 7.2G |
mobilevitv2_200-mod | - | 75.5 | 74.8 | 18.4M | 7.2G |
mobilevitv2_200-eq-mod^^^ | - | 80.4 | 82.0 | 18.4M | 7.2G |
resnet50 | - | 75.3 | 75.4 | 25.6M | 4.1G |
resnet50-mod | - | 78.3 | 78.6 | 25.6M | 4.1G |
mobilenetv3_large_100 | - | 1.4 | 1.8 | 5.5M | 0.29G |
mobilenetv3_large_100-mod | - | 68.3 | 66.5 | 5.5M | 0.29G |
mobilenetv3_large_100/eq-mod | - | 70.2 | 69.4 | 5.5M | 0.29G |
mobilenetv3_large_100/eq-mod^^ | - | 72.8 | 72.0 | 5.5M | 0.29G |
tf_efficientnetv2_b0 | - | 73.4 | 73.1 | 7.1M | 0.72G |
tf_efficientnetv2_b0-mod | - | 75.1 | 74.1 | 7.1M | 0.72G |
tf_efficientnetv2_b0-mod^^ | - | 76.4 | 75.4 | 7.1M | 0.72G |
tf_efficientnetv2_b0-eq-mod | - | 78.2 | 76.5 | 7.1M | 0.72G |
tf_efficientnetv2_b1 | - | 31.8 | 29.1 | 8.1M | 1.2G |
tf_efficientnetv2_b1-mod | - | 76.6 | 77.8 | 8.1M | 1.2G |
tf_efficientnetv2_b1-mod^^ | - | 77.0 | 78.2 | 8.1M | 1.2G |
tf_efficientnetv2_b1-eq-mod | - | 78.9 | 77.4 | 8.1M | 1.2G |
tf_efficientnetv2_b2 | - | 33.1 | 28.0 | 10.1M | 1.7G |
tf_efficientnetv2_b2-mod | - | 77.9 | 76.8 | 10.1M | 1.7G |
tf_efficientnetv2_b2-mod^^ | - | 78.7 | 77.5 | 10.1M | 1.7G |
tf_efficientnetv2_b2-eq-mod | - | 80.3 | 79.6 | 10.1M | 1.7G |
tf_efficientnetv2_b3 | - | 80.6 | 81.1 | 14.4M | 3.0G |
tf_efficientnetv2_b3-mod | - | 80.8 | 80.5 | 14.4M | 3.0G |
tf_efficientnetv2_b3-eq-mod | - | 81.0 | 81.1 | 14.4M | 3.0G |
- ^^: the first two layers are not quantized
- ^^^: the first three layers are not quantized
CPU INT8 w/ ppq
Model | Top-1 | Top-1 //20 est. | Top-1 //50 est. | #params | GMACs |
---|---|---|---|---|---|
resnet50 | - | 78.0 | 78.9 | 25.6M | 4.1G |
mobilenetv3_large_100 | - | 72.2 | 70.2 | 5.5M | 0.29G |
tf_efficientnetv2_b0 | - | 77.4 | 75.9 | 7.1M | 0.72G |
tf_efficientnetv2_b1 | - | 78.2 | 77.9 | 8.1M | 1.2G |
tf_efficientnetv2_b2 | - | 80.1 | 79.4 | 10.1M | 1.7G |
tf_efficientnetv2_b3 | - | 80.4 | 81.5 | 14.4M | 3.0G |
- mobilenetv3_large_100 uses the pnnx conversion path, but the last layer is not quantized...