Allwinner A20

Using the bootloader from the NAND firmware image cb_a20_ubn_12.04_x-v1.02-dram480.img on a CubieBoard2. The kernel is from https://github.com/cubieboard2/linux-sunxi/tree/sunxi-3.3-cb2 (built with the framebuffer disabled). The cpufreq maximum was set to 1008 MHz before running the benchmark:

echo 1008000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq

Processor       : ARMv7 Processor rev 4 (v7l)
processor       : 0
BogoMIPS        : 2000.99

processor       : 1
BogoMIPS        : 2015.48

Features        : swp half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpv4 idiva idivt 
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xc07
CPU revision    : 4

Hardware        : sun7i
Revision        : 0000
Serial          : 00000000000000000000000000000000

a10-meminfo-static

dram_clk          = 480
dram_type         = 3
dram_rank_num     = 1
dram_chip_density = 4096
dram_io_width     = 16
dram_bus_width    = 32
dram_cas          = 9
dram_zq           = 0x7f
dram_odt_en       = 0
dram_tpr0         = 0x42d899b7
dram_tpr1         = 0xa090
dram_tpr2         = 0x22a00
dram_tpr3         = 0x0
dram_emr1         = 0x4
dram_emr2         = 0x10
dram_emr3         = 0x0
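
For context, these settings give a theoretical peak bandwidth of 480 MHz × 2 transfers per clock (DDR) × 4 bytes (32-bit bus) = 3840 MB/s, so the ~2030 MB/s fill results below reach a little over half of that peak.
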
tinymembench v0.2.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be          ==
==         copied per second (adding together read and written          ==
==         bytes would have produced numbers twice as high)             ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the   ==
==         destination (source -> L1 cache, L1 cache -> destination)    ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
==         brackets                                                     ==
==========================================================================
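
To make Note 3 above concrete: a minimal C sketch of the 2-pass copy pattern, assuming an illustrative 4 KiB bounce buffer (the function name and buffer size are mine, not tinymembench's actual implementation):

```c
#include <string.h>
#include <stddef.h>

#define BOUNCE_SIZE 4096  /* illustrative: small enough to stay in L1 */

/* 2-pass copy: first fetch a chunk of the source into a small,
 * L1-resident bounce buffer, then write it out to the destination
 * (source -> L1 cache, L1 cache -> destination). */
void two_pass_copy(char *dst, const char *src, size_t len)
{
    static char bounce[BOUNCE_SIZE];
    while (len > 0) {
        size_t chunk = len < BOUNCE_SIZE ? len : BOUNCE_SIZE;
        memcpy(bounce, src, chunk);  /* pass 1: source -> L1 */
        memcpy(dst, bounce, chunk);  /* pass 2: L1 -> destination */
        dst += chunk;
        src += chunk;
        len -= chunk;
    }
}
```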

 C copy backwards                                     :    855.5 MB/s (0.3%)
 C copy                                               :    902.6 MB/s
 C copy prefetched (32 bytes step)                    :    871.2 MB/s
 C copy prefetched (64 bytes step)                    :    871.1 MB/s (0.1%)
 C 2-pass copy                                        :    735.7 MB/s (0.6%)
 C 2-pass copy prefetched (32 bytes step)             :    762.6 MB/s (0.6%)
 C 2-pass copy prefetched (64 bytes step)             :    762.6 MB/s (0.4%)
 C fill                                               :   2031.1 MB/s (0.4%)
 ---
 standard memcpy                                      :    931.6 MB/s (0.5%)
 standard memset                                      :   2031.3 MB/s (0.5%)
 ---
 NEON read                                            :   1231.8 MB/s (0.3%)
 NEON read prefetched (32 bytes step)                 :   1325.4 MB/s (0.5%)
 NEON read prefetched (64 bytes step)                 :   1405.4 MB/s (0.4%)
 NEON copy                                            :    904.7 MB/s
 NEON copy prefetched (32 bytes step)                 :    916.5 MB/s
 NEON copy prefetched (64 bytes step)                 :    949.1 MB/s
 NEON unrolled copy                                   :    924.3 MB/s
 NEON unrolled copy prefetched (32 bytes step)        :    845.6 MB/s (0.1%)
 NEON unrolled copy prefetched (64 bytes step)        :    878.5 MB/s
 NEON copy backwards                                  :    857.1 MB/s
 NEON copy backwards prefetched (32 bytes step)       :    935.6 MB/s (0.4%)
 NEON copy backwards prefetched (64 bytes step)       :    927.1 MB/s
 NEON 2-pass copy                                     :    760.8 MB/s (0.4%)
 NEON 2-pass copy prefetched (32 bytes step)          :    798.3 MB/s (0.3%)
 NEON 2-pass copy prefetched (64 bytes step)          :    808.2 MB/s (0.4%)
 NEON unrolled 2-pass copy                            :    677.7 MB/s (0.3%)
 NEON unrolled 2-pass copy prefetched (32 bytes step) :    634.7 MB/s (0.2%)
 NEON unrolled 2-pass copy prefetched (64 bytes step) :    687.0 MB/s (0.3%)
 NEON fill                                            :   2031.8 MB/s (0.4%)
 NEON fill backwards                                  :   2032.0 MB/s (0.4%)
 ARM fill (STRD)                                      :   2012.5 MB/s
 ARM fill (STM with 8 registers)                      :   2031.4 MB/s (0.3%)
 ARM fill (STM with 4 registers)                      :   2031.0 MB/s (0.4%)
 ARM copy prefetched (incr pld)                       :    920.4 MB/s
 ARM copy prefetched (wrap pld)                       :    865.7 MB/s (0.1%)
 ARM 2-pass copy prefetched (incr pld)                :    765.6 MB/s (0.3%)
 ARM 2-pass copy prefetched (wrap pld)                :    735.1 MB/s (0.3%)
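
In the results above, "prefetched (N bytes step)" means one software prefetch hint is issued per N bytes copied, some fixed distance ahead of the current position. A minimal C sketch of the idea using GCC's __builtin_prefetch (the benchmark itself uses hand-written NEON/ARM assembly with PLD instructions, and the 256-byte prefetch distance below is only an assumption):

```c
#include <string.h>
#include <stddef.h>

#define STEP           64   /* one prefetch hint per 64 bytes copied */
#define PREFETCH_AHEAD 256  /* assumed distance to fetch ahead       */

void copy_prefetched(char *dst, const char *src, size_t len)
{
    size_t i;
    for (i = 0; i + STEP <= len; i += STEP) {
        /* start loading the data we will need PREFETCH_AHEAD bytes
         * from now, so it arrives from DRAM before the loop does */
        __builtin_prefetch(src + i + PREFETCH_AHEAD);
        memcpy(dst + i, src + i, STEP);
    }
    memcpy(dst + i, src + i, len - i);  /* copy the tail */
}
```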

==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in buffers of    ==
== different sizes. The larger the buffer, the more significant the     ==
== relative contributions of TLB, L1/L2 cache misses and SDRAM          ==
== accesses become. For extremely large buffers we expect to see a      ==
== page table walk with a total of 3 requests to SDRAM for almost       ==
== every memory access (though 64MiB is not large enough to experience  ==
== this effect to its fullest).                                         ==
==                                                                      ==
== Note 1: All the numbers represent extra time that needs to be added  ==
==         to the L1 cache latency. The cycle timings for L1 cache      ==
==         latency can usually be found in the processor documentation. ==
== Note 2: Dual random read means that we simultaneously perform two    ==
==         independent memory accesses at a time. If the memory         ==
==         subsystem can't handle multiple outstanding requests, dual   ==
==         random read has the same timings as two single reads         ==
==         performed one after another.                                 ==
==========================================================================
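
One common way to structure such a test, sketched below in C (names and details are illustrative, not tinymembench's actual code): walk a randomized pointer chain, so each load address depends on the previous load and accesses cannot overlap; the "dual" variant walks two independent chains in the same loop, which a memory subsystem with multiple outstanding requests can overlap:

```c
#include <stdlib.h>
#include <stddef.h>

/* Link buf[0..n-1] into one random cycle (Sattolo's algorithm), so a
 * chase visits every slot in random order. */
void init_chain(void **buf, size_t n)
{
    size_t i;
    if (n < 2)
        return;
    for (i = 0; i < n; i++)
        buf[i] = &buf[i];
    for (i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;  /* 0 <= j < i */
        void *tmp = buf[i];
        buf[i] = buf[j];
        buf[j] = tmp;
    }
}

volatile void *sink;  /* keeps the chases from being optimized out */

/* Single random read: each load depends on the previous one. */
void chase1(void **p, long steps)
{
    while (steps-- > 0)
        p = (void **)*p;
    sink = p;
}

/* Dual random read: two independent chains in flight at once. */
void chase2(void **a, void **b, long steps)
{
    while (steps-- > 0) {
        a = (void **)*a;
        b = (void **)*b;
    }
    sink = a;
    sink = b;
}
```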

block size : read access time (single random read / dual random read)
         2 :    0.0 ns  /     0.0 ns 
         4 :    0.0 ns  /     0.0 ns 
         8 :    0.0 ns  /     0.0 ns 
        16 :    0.0 ns  /     0.0 ns 
        32 :    0.0 ns  /     0.0 ns 
        64 :    0.0 ns  /     0.0 ns 
       128 :    0.0 ns  /     0.0 ns 
       256 :    0.0 ns  /     0.0 ns 
       512 :    0.0 ns  /     0.0 ns 
      1024 :    0.0 ns  /     0.0 ns 
      2048 :    0.0 ns  /     0.0 ns 
      4096 :    0.0 ns  /     0.0 ns 
      8192 :    0.0 ns  /     0.0 ns 
     16384 :    0.0 ns  /     0.0 ns 
     32768 :    0.0 ns  /     0.0 ns 
     65536 :    6.8 ns  /    10.8 ns 
    131072 :   10.1 ns  /    15.1 ns 
    262144 :   13.0 ns  /    18.5 ns 
    524288 :  105.5 ns  /   165.4 ns 
   1048576 :  157.1 ns  /   216.9 ns 
   2097152 :  190.1 ns  /   241.4 ns 
   4194304 :  207.8 ns  /   251.7 ns 
   8388608 :  218.4 ns  /   259.8 ns 
  16777216 :  229.5 ns  /   273.3 ns 
  33554432 :  245.6 ns  /   301.6 ns 
  67108864 :  277.7 ns  /   364.3 ns 

Latency test with huge pages enabled:

echo 100 > /proc/sys/vm/nr_hugepages
mount -t hugetlbfs none /mnt/huge
export LD_PRELOAD=libhugetlbfs.so
export HUGETLB_MORECORE=yes
./tinymembench
block size : read access time (single random read / dual random read)
         2 :    0.0 ns  /     0.0 ns 
         4 :    0.0 ns  /     0.0 ns 
         8 :    0.0 ns  /     0.0 ns 
        16 :    0.0 ns  /     0.0 ns 
        32 :    0.0 ns  /     0.0 ns 
        64 :    0.0 ns  /     0.0 ns 
       128 :    0.0 ns  /     0.0 ns 
       256 :    0.0 ns  /     0.0 ns 
       512 :    0.0 ns  /     0.0 ns 
      1024 :    0.0 ns  /     0.0 ns 
      2048 :    0.0 ns  /     0.0 ns 
      4096 :    0.0 ns  /     0.0 ns 
      8192 :    0.0 ns  /     0.0 ns 
     16384 :    0.0 ns  /     0.0 ns 
     32768 :    0.0 ns  /     0.0 ns 
     65536 :    6.3 ns  /    10.8 ns 
    131072 :   10.2 ns  /    15.1 ns 
    262144 :   13.1 ns  /    18.5 ns 
    524288 :  105.5 ns  /   165.4 ns 
   1048576 :  157.0 ns  /   216.9 ns 
   2097152 :  183.5 ns  /   234.7 ns 
   4194304 :  197.5 ns  /   241.9 ns 
   8388608 :  204.4 ns  /   244.7 ns 
  16777216 :  208.0 ns  /   246.0 ns 
  33554432 :  209.5 ns  /   246.6 ns 
  67108864 :  210.3 ns  /   246.8 ns 
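
The flat ~210 ns / ~247 ns tail suggests that most of the large-buffer penalty in the first run came from TLB misses and page table walks rather than from SDRAM itself. The LD_PRELOAD/HUGETLB_MORECORE commands above make libhugetlbfs transparently back malloc with huge pages; the same effect can be obtained directly with mmap, as in this sketch (assuming a kernel with hugetlbfs support and pages reserved via nr_hugepages, as above):

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>

#define LEN (64UL * 1024 * 1024)  /* 64 MiB, the largest block size tested */

int main(void)
{
    /* Anonymous mapping backed by huge pages: the buffer needs far
     * fewer TLB entries, so random accesses trigger far fewer page
     * table walks. */
    void *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");  /* e.g. not enough reserved pages */
        return EXIT_FAILURE;
    }
    /* ... run the pointer-chasing measurement over buf ... */
    munmap(buf, LEN);
    return EXIT_SUCCESS;
}
```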