stress ng_info - charlesfg/TPCx-V_setup GitHub Wiki
- stress-ng for background load
Create User:

```shell
sudo useradd wl_user
cp -v run_background_work.sh ~wl_user
cp -rv stress-ng-0.07.28 ~wl_user
cp kill_background_work.sh ~wl_user
chown wl_user:wl_user -Rv ~wl_user
# Change the script to cd into ~wl_user before running
```
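A minimal sketch of how the workload could then be started as `wl_user` (the user and script names come from the steps above; the `DRY_RUN` guard is an assumption added here so the command can be previewed before running):

```shell
# Sketch: command to start the background workload as wl_user; user and
# script names come from the setup steps above. DRY_RUN=1 (the default
# here) only prints the command instead of executing it.
CMD="sudo -u wl_user bash -c 'cd ~wl_user && ./run_background_work.sh &'"
if [ "${DRY_RUN:-1}" = "1" ]; then
  echo "$CMD"
else
  eval "$CMD"
fi
```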
Run all the stressors in the os class one by one for 1 minute each, with 8 instances of each stressor running concurrently:

```shell
./stress-ng --sequential 8 --class os -t 1m
```
```shell
./stress-ng --cpu 1 --cpu-load 10 --io 2 --vm 1 --vm-bytes 1G --timeout 60s &
```

Runs in the background for 60 seconds with 1 CPU stressor at 10% load, 2 I/O stressors, and 1 VM stressor using 1 GB of virtual memory.
```shell
./stress-ng --udp 2 -t 2 --times
./stress-ng --sock 2 -t 2 --times
```

Each runs 2 UDP or socket stressors for 2 seconds and prints overall time utilisation statistics at the end (`--times`).
```shell
./stress-ng --cpu 2 --cpu-load 90 --io 2 --vm 1 --timeout 30s
./stress-ng --io 2 --hdd 4 --vm 1 --hdd-opts dsync --timeout 30s
```
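The example invocations above can be collected into a small driver script (a sketch; the stress-ng flags are copied from the commands above, while the log path and `--metrics-brief` summary flag are assumptions):

```shell
#!/bin/sh
# Sketch: run each example load in turn and append its summary to a log.
# Flags come from the examples above; the log location is an assumption.
LOG=${LOG:-/tmp/stress_bg.log}
run() {
  echo "running: $*" | tee -a "$LOG"
  # Execute only when a stress-ng binary sits next to this script.
  [ -x ./stress-ng ] && ./stress-ng "$@" --metrics-brief >>"$LOG" 2>&1
  return 0
}
run --cpu 2 --cpu-load 90 --io 2 --vm 1 --timeout 30s
run --io 2 --hdd 4 --vm 1 --hdd-opts dsync --timeout 30s
```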
Available stressors:

af-alg
aio
aiol
bigheap
brk
cap
chdir
chmod
chown
chroot
clock
clone
copy-file
cpu-online
daemon
dccp
dentry
dir
dirdeep
dnotify
dup
epoll
eventfd
exec
fallocate
fanotify
fault
fcntl
fiemap
fifo
filename
flock
fork
fstat
full
futex
get
getdent
getrandom
handle
hdd
icmp-flood
inotify
io
iomix
ioprio
itimer
kcmp
key
kill
klog
lease
link
locka
lockf
lockofd
madvise
malloc
memfd
mincore
mknod
mlock
mmap
mmapfork
mmapmany
mq
mremap
msg
msync
netlink-proc
nice
null
numa
oom-pipe
opcode
open
personality
pipe
poll
procfs
pthread
ptrace
pty
quota
readahead
remap
rename
resources
rlimit
rmap
rtc
schedpolicy
seal
seccomp
seek
sem
sem-sysv
sendfile
shm
shm-sysv
sigfd
sigfpe
sigpending
sigq
sigsegv
sigsuspend
sleep
sock
sockfd
sockpair
spawn
splice
switch
symlink
sync-file
sysfs
sysinfo
tee
timer
timerfd
tlb-shootdown
tmpfs
udp
udp-flood
unshare
urandom
userfaultfd
utime
vfork
vforkmany
vm
vm-rw
vm-splice
wait
xattr
yield
zero
zombie
--aio N
start N workers that issue multiple small asynchronous I/O writes and reads on a relatively small temporary file using the POSIX aio interface. This will just hit the file system cache and soak up a lot of user and kernel time in issuing and handling I/O requests. By default, each worker process will handle 16 concurrent I/O requests.
--aiol N
start N workers that issue multiple 4K random asynchronous I/O writes using the Linux aio system calls io_setup(2), io_submit(2), io_getevents(2) and io_destroy(2). By default, each worker process will handle 16 concurrent I/O requests.
-d N, --hdd N
start N workers continually writing, reading and removing temporary files. The default mode is to stress test sequential writes and reads. With the --aggressive option enabled and without any --hdd-opts options, the hdd stressor will work through all the --hdd-opts options one by one to cover a range of I/O options.
- readahead
- seek
- sync_file
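For example, the dsync behaviour used in the earlier example can be combined with other --hdd-opts values such as direct (a sketch; both option names follow the stress-ng manual, while the worker count and per-worker size are illustrative):

```shell
# Build an hdd invocation with combined I/O options; dsync appears in the
# example above, direct and the 1G per-worker size are illustrative.
HDD_CMD="./stress-ng --hdd 4 --hdd-bytes 1G --hdd-opts dsync,direct --timeout 30s"
echo "$HDD_CMD"
# Execute only if the stress-ng binary is present in this directory.
[ -x ./stress-ng ] && $HDD_CMD || true
```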