Performance Analysis Method and Results - 17-1-SKKU-OSS/rocksdb GitHub Wiki
RocksDB provides a benchmark tool named db_bench for analyzing performance on flash storage.
Experimental Environment
- OS: Ubuntu 14.04 LTS
- CPU: Intel Core i7-6700
- RAM: 64 GB
- RocksDB: 5.12
- SSD: Samsung 850 PRO 256 GB
Test 1. Bulk Load of keys in Sequential Order
Load 100,000,000 keys in sequential order. To use a different database path, change the --db= option.
```shell
echo "Load 1B keys sequentially into database....."
wbs=134217728; r=100000000; t=1; vs=800; cs=1048576; of=500000
./db_bench --benchmarks=fillseq --num=$r --threads=$t --value_size=$vs \
  --cache_size=$cs --bloom_bits=10 --open_files=$of \
  --db=/data/mysql/leveldb/test --compression_ratio=0.5 \
  --write_buffer_size=$wbs --use_existing_db=0
```
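As a rough sanity check (my own arithmetic, not part of the db_bench output), the data volume implied by these parameters is the --num entry count times the 16-byte default key plus the 800-byte value:

```shell
# Estimate the raw data volume of the fillseq run above (shell arithmetic only)
r=100000000   # --num
ks=16         # db_bench's default key size in bytes
vs=800        # --value_size
echo "raw data: $(( r * (ks + vs) / 1024 / 1024 / 1024 )) GiB"
# With --compression_ratio=0.5, values compress to about half their size
echo "values after compression: ~$(( r * vs / 2 / 1024 / 1024 / 1024 )) GiB"
```

This works out to roughly 75 GiB of raw key/value data, so the working set is far larger than RAM and the run genuinely exercises the SSD.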
Test 2. Random Write
db_bench was run with the following settings:
- Overwrite 1B keys in the database in random order
- The database was first created by sequentially inserting all the 1B keys
- Keys: 16 bytes each
- Values: 800 bytes each
- Entries: 100,000,000
```shell
bpl=10485760; overlap=10; mcz=2; del=300000000; levels=2
ctrig=10000000; delay=10000000; stop=10000000; wbn=30; mbc=20
mb=1073741824; wbs=268435456; dds=1; sync=0; r=100000000; t=1
vs=800; bs=65536; cs=1048576; of=500000; si=1000000
./db_bench --benchmarks=compact --disable_seek_compaction=1 --mmap_read=0 \
  --statistics=1 --histogram=1 --num=$r --threads=$t --value_size=$vs \
  --block_size=$bs --cache_size=$cs --bloom_bits=10 --cache_numshardbits=4 \
  --open_files=$of --verify_checksum=1 --db=/data/mysql/leveldb/test \
  --sync=$sync --disable_wal=1 --compression_type=zlib --stats_interval=$si \
  --compression_ratio=0.5 --disable_data_sync=$dds --write_buffer_size=$wbs \
  --target_file_size_base=$mb --max_write_buffer_number=$wbn \
  --max_background_compactions=$mbc --level0_file_num_compaction_trigger=$ctrig \
  --level0_slowdown_writes_trigger=$delay --level0_stop_writes_trigger=$stop \
  --num_levels=$levels --delete_obsolete_files_period_micros=$del \
  --min_level_to_compress=$mcz --stats_per_interval=1 \
  --max_bytes_for_level_base=$bpl --memtablerep=vector --use_existing_db=1 \
  --disable_auto_compactions=1 --source_compaction_factor=10000000
```
**Result:** 31.011 micros/op, 32246 ops/sec, 25.1 MB/s
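The three numbers are mutually consistent if db_bench counts key+value bytes per operation and defines MB as 2^20 bytes (an assumption about its reporting, but it matches the figures here):

```shell
# Cross-check the reported bandwidth: 32246 ops/sec x (16 B key + 800 B value)
awk 'BEGIN { printf "%.1f MB/s\n", 32246 * (16 + 800) / 1048576 }'
# prints "25.1 MB/s", matching the db_bench result above
```

Likewise, 10^6 / 31.011 micros/op ≈ 32,247 ops/sec, which agrees with the reported throughput for a single thread.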
Test 3. Random Read
- Read 1B keys in the database in random order
- The database was first created by sequentially writing all the 1B keys
- Keys: 16 bytes each
- Values: 800 bytes each
- Entries: 100,000,000
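A note on --bloom_bits=10, which matters most for this random-read test: with b filter bits per key, a well-tuned Bloom filter's false-positive rate is roughly 0.6185^b (the standard textbook approximation, not a db_bench output), so only a small fraction of negative lookups pay for an unnecessary SST file read:

```shell
# Approximate Bloom filter false-positive rate for --bloom_bits=10
awk 'BEGIN { printf "%.4f\n", exp(10 * log(0.6185)) }'
# prints 0.0082, i.e. under 1% of negative lookups cost an extra read
```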
```shell
bpl=10485760; overlap=10; mcz=2; del=300000000; levels=6
ctrig=4; delay=8; stop=12; wbn=3; mbc=20
mb=67108864; wbs=134217728; dds=0; sync=0; r=1000000000; t=32
vs=800; bs=4096; cs=1048576; of=500000; si=1000000
./db_bench --benchmarks=readrandom --disable_seek_compaction=1 --mmap_read=0 \
  --statistics=1 --histogram=1 --num=$r --threads=$t --value_size=$vs \
  --block_size=$bs --cache_size=$cs --bloom_bits=10 --cache_numshardbits=6 \
  --open_files=$of --verify_checksum=1 --db=/data/mysql/leveldb/test \
  --sync=$sync --disable_wal=1 --compression_type=none --stats_interval=$si \
  --compression_ratio=0.5 --disable_data_sync=$dds --write_buffer_size=$wbs \
  --target_file_size_base=$mb --max_write_buffer_number=$wbn \
  --max_background_compactions=$mbc --level0_file_num_compaction_trigger=$ctrig \
  --level0_slowdown_writes_trigger=$delay --level0_stop_writes_trigger=$stop \
  --num_levels=$levels --delete_obsolete_files_period_micros=$del \
  --min_level_to_compress=$mcz --stats_per_interval=1 \
  --max_bytes_for_level_base=$bpl --use_existing_db=1
```
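When reading the output of this run, note that db_bench's micros/op is a per-thread latency (an assumption about its reporting), so with --threads=32 the aggregate throughput is threads × 10^6 / (micros/op). A sketch using a hypothetical 400 µs random-read latency, since no result for this test was recorded above:

```shell
t=32        # --threads
lat=400     # hypothetical micros/op for a cache-miss read; not a measured result
echo "aggregate: $(( t * 1000000 / lat )) ops/sec"
# prints "aggregate: 80000 ops/sec"
```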