Raspberry Pi 4 (BCM2711)
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)
==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding the read and written bytes ==
== together would double the numbers) ==
== Note 3: 2-pass copy means that we first fetch data into a small ==
== temporary buffer, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================
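The two copy strategies from Note 3 look roughly like the sketch below. This is a minimal illustration, not tinymembench's actual code (which uses hand-tuned loops); the 4 KiB scratch size is an assumption chosen to fit comfortably in the Cortex-A72's 32 KiB L1 data cache. Per Note 2, a result such as "standard memcpy: 2244.6 MB/s" means about 2.24 GB/s read plus 2.24 GB/s written, i.e. roughly 4.5 GB/s of combined DRAM traffic.

```c
#include <string.h>

/* 1-pass copy: data streams straight from source to destination. */
static void copy_1pass(char *dst, const char *src, size_t len)
{
    memcpy(dst, src, len);
}

/* 2-pass copy (Note 3): fetch each chunk into a small scratch buffer
 * that stays resident in the L1 cache, then write it out again
 * (source -> L1 cache, L1 cache -> destination). */
static void copy_2pass(char *dst, const char *src, size_t len)
{
    char tmp[4096];                    /* assumed L1-resident scratch */
    while (len) {
        size_t chunk = len < sizeof tmp ? len : sizeof tmp;
        memcpy(tmp, src, chunk);       /* pass 1: source -> L1        */
        memcpy(dst, tmp, chunk);       /* pass 2: L1 -> destination   */
        src += chunk;
        dst += chunk;
        len -= chunk;
    }
}
```

The 2-pass numbers above are lower than the 1-pass ones because every byte crosses the cache twice; the variant exists because on some cores it interacts better with prefetching.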
C copy backwards : 2483.8 MB/s
C copy backwards (32 byte blocks) : 2486.8 MB/s (7.0%)
C copy backwards (64 byte blocks) : 2486.0 MB/s (12.2%)
C copy : 2253.2 MB/s (10.9%)
C copy prefetched (32 bytes step) : 2465.9 MB/s
C copy prefetched (64 bytes step) : 2468.1 MB/s
C 2-pass copy : 1366.9 MB/s (9.6%)
C 2-pass copy prefetched (32 bytes step) : 2078.3 MB/s (9.4%)
C 2-pass copy prefetched (64 bytes step) : 2076.4 MB/s (9.8%)
C fill : 2883.0 MB/s (11.2%)
C fill (shuffle within 16 byte blocks) : 2924.6 MB/s (9.4%)
C fill (shuffle within 32 byte blocks) : 2903.1 MB/s (8.1%)
C fill (shuffle within 64 byte blocks) : 2897.0 MB/s (10.2%)
---
standard memcpy : 2244.6 MB/s (12.3%)
standard memset : 2877.5 MB/s (6.3%)
---
NEON LDP/STP copy : 2470.1 MB/s
NEON LDP/STP copy pldl2strm (32 bytes step) : 2469.0 MB/s
NEON LDP/STP copy pldl2strm (64 bytes step) : 2470.2 MB/s (12.1%)
NEON LDP/STP copy pldl1keep (32 bytes step) : 2469.1 MB/s (11.1%)
NEON LDP/STP copy pldl1keep (64 bytes step) : 2531.0 MB/s (9.5%)
NEON LD1/ST1 copy : 2590.9 MB/s (10.6%)
NEON STP fill : 3122.8 MB/s (11.9%)
NEON STNP fill : 2833.0 MB/s (11.4%)
ARM LDP/STP copy : 2588.5 MB/s (8.8%)
ARM STP fill : 3090.7 MB/s (11.7%)
ARM STNP fill : 2778.6 MB/s (11.2%)
==========================================================================
== Framebuffer read tests ==
== ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled. ==
== Writes to such framebuffers are quite fast, but reads are much ==
== slower and very sensitive to alignment and to the choice of ==
== CPU instructions used for accessing memory. ==
== ==
== Many x86 systems allocate the framebuffer in the GPU memory, ==
== accessible for the CPU via a relatively slow PCI-E bus. Moreover, ==
== PCI-E is asymmetric and handles reads a lot worse than writes. ==
== ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall ==
== performance improvement. For example, the xf86-video-fbturbo DDX ==
== uses this trick. ==
==========================================================================
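A measurement like the "copy (from framebuffer)" rows below can be approximated with a plain mmap of the framebuffer device, as sketched here. Assumptions: /dev/fb0 exists and exposes at least 8 MiB (check with fbset), and opening it may require root. tinymembench itself uses hand-written NEON/ARM assembly loops rather than memcpy, which is how it can compare the LDP/STP and LD1/ST1 variants.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    size_t len = 8 * 1024 * 1024;          /* assumed visible size */
    int fd = open("/dev/fb0", O_RDONLY);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }
    const char *fb = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }
    char *dst = malloc(len);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    memcpy(dst, fb, len);                  /* read-dominated copy  */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MB/s\n", len / s / 1e6);  /* 1 MB = 1000000 bytes */

    free(dst);
    munmap((void *)fb, len);
    close(fd);
    return 0;
}
```

The result is read-dominated because the malloc'd destination is cached, while every source access goes to the uncached, write-combined mapping.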
NEON LDP/STP copy (from framebuffer) : 701.0 MB/s (8.9%)
NEON LDP/STP 2-pass copy (from framebuffer) : 632.9 MB/s (4.6%)
NEON LD1/ST1 copy (from framebuffer) : 775.3 MB/s
NEON LD1/ST1 2-pass copy (from framebuffer) : 651.1 MB/s (9.0%)
ARM LDP/STP copy (from framebuffer) : 521.7 MB/s (7.2%)
ARM LDP/STP 2-pass copy (from framebuffer) : 498.6 MB/s (7.0%)
==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in buffers ==
== of different sizes. The larger the buffer, the more significant ==
== the relative contributions of TLB and L1/L2 cache misses and of ==
== SDRAM accesses become. For extremely large buffers we expect a ==
== page table walk with several SDRAM requests for almost every ==
== memory access (though 64 MiB is not nearly large enough to ==
== experience this effect to its fullest). ==
== ==
== Note 1: All the numbers represent extra time, which needs to be ==
== added to the L1 cache latency. The cycle timings for the ==
== L1 cache latency can usually be found in the processor ==
== documentation. ==
== Note 2: Dual random read means that we simultaneously perform ==
== two independent memory accesses at a time. If the memory ==
== subsystem can't handle multiple outstanding requests, ==
== a dual random read takes as long as two single reads ==
== performed one after another. ==
==========================================================================
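The core of such a measurement can be sketched as a pointer chase. Again, this is only an illustration under assumptions, not tinymembench's code: each load in a chain depends on the previous one, so the average time per step is the access latency, and the "dual" case walks two independent chains so the hardware can overlap the misses.

```c
#include <stdlib.h>
#include <time.h>

/* Build a random single-cycle permutation (Sattolo's algorithm) so
 * that following chain[i] visits every slot in a random order. */
static void build_chain(size_t *chain, size_t n)
{
    for (size_t i = 0; i < n; i++)
        chain[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;  /* j < i keeps it one cycle */
        size_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
    }
}

/* Average ns per iteration: one serially dependent load from c1,
 * plus (for the dual case) an independent load from c2. */
static double chase_ns(const size_t *c1, const size_t *c2, long iters)
{
    size_t i1 = 0, i2 = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long k = 0; k < iters; k++) {
        i1 = c1[i1];            /* must wait for the previous load */
        if (c2)
            i2 = c2[i2];        /* independent chain: can overlap  */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    volatile size_t sink = i1 + i2;     /* keep the loads alive */
    (void)sink;
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / (double)iters;
}
```

Pass c2 = NULL for the single-read column. With no overlap the dual figure would be twice the single one (Note 2); subtracting the time measured on a small, cache-resident buffer gives the "extra time over L1" convention of Note 1.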
block size (bytes) : single random read / dual random read, [MADV_NOHUGEPAGE]
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.0 ns
65536 : 5.7 ns / 8.9 ns
131072 : 8.6 ns / 11.9 ns
262144 : 12.3 ns / 15.8 ns
524288 : 14.2 ns / 18.1 ns
1048576 : 27.5 ns / 45.9 ns
2097152 : 88.1 ns / 127.9 ns
4194304 : 118.4 ns / 154.3 ns
8388608 : 141.5 ns / 175.7 ns
16777216 : 152.9 ns / 185.7 ns
33554432 : 158.7 ns / 191.3 ns
67108864 : 170.2 ns / 209.3 ns
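The table above uses 4 KiB base pages (MADV_NOHUGEPAGE); the one below repeats the run after hinting the kernel to back the buffer with 2 MiB transparent huge pages, which cuts TLB pressure and shaves a few ns off the mid-size buffers (e.g. 14.2 ns -> 10.7 ns at 512 KiB). A minimal sketch of the hint, with assumed size and alignment choices:

```c
#include <stdlib.h>
#include <sys/mman.h>

/* Ask the kernel to back the buffer with transparent huge pages.
 * MADV_HUGEPAGE is a best-effort hint; MADV_NOHUGEPAGE forces the
 * base 4 KiB pages used for the first table. For strict C11,
 * 'size' should be a multiple of the 2 MiB alignment. */
static void *alloc_thp(size_t size)
{
    void *buf = aligned_alloc(2 * 1024 * 1024, size);
    if (buf)
        madvise(buf, size, MADV_HUGEPAGE);
    return buf;
}
```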
block size (bytes) : single random read / dual random read, [MADV_HUGEPAGE]
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.0 ns
65536 : 5.7 ns / 8.9 ns
131072 : 8.5 ns / 11.8 ns
262144 : 10.0 ns / 12.8 ns
524288 : 10.7 ns / 13.3 ns
1048576 : 21.9 ns / 33.7 ns
2097152 : 83.1 ns / 123.4 ns
4194304 : 111.8 ns / 148.0 ns
8388608 : 125.5 ns / 155.8 ns
16777216 : 132.3 ns / 159.6 ns
33554432 : 136.1 ns / 160.6 ns
67108864 : 141.2 ns / 163.4 ns
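As a worked example of Note 1: ARM's documentation gives the Cortex-A72 a 4-cycle L1 load-to-use latency, which at the Pi 4's stock 1.5 GHz clock (an assumption; later boards ship at 1.8 GHz) is 4 / 1.5 GHz ≈ 2.7 ns. The absolute single random read latency at 64 MiB is therefore roughly 170.2 + 2.7 ≈ 173 ns with 4 KiB pages, or 141.2 + 2.7 ≈ 144 ns with huge pages.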