
General Unix Tools

Introduction

Purpose

Common for both Debian and RedHat

References

Network tools

Remote commands

See:

Remote commands on multiple servers, via clusterssh

See:

  1. sudo apt install clusterssh
  2. clusterssh admin@server1 admin@server2 admin@server3

(To enable passwordless access to a machine, first run: ssh-copy-id USER@SERVER)

ssh with forced password

  • ssh -o PreferredAuthentications=password USER@SERVER

tcpdump

See:

  • tcpdump port 3478
  • tcpdump host 10.10.10.10
  • show vlan tags: -e
  • Do not convert IP to names: -n
  • Do not convert addresses or port numbers to names: -nn
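
Putting the options together, an example invocation (the interface name eth0, the host, and the port are placeholders) might look like:

sudo tcpdump -nn -e -i eth0 host 10.10.10.10 and port 3478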

tshark - CLI version of Wireshark

  • dumpcap -D -M
  • tshark -ni 1 -f "icmp"
    • Please replace '1' with the ID, from dumpcap, of your interface.

tcpdump inside a container

  1. docker inspect CONTAINER_NAME | grep -i pid
  2. sudo nsenter -t CONTAINER_PID -n tcpdump -nn -i any port 1234
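
A one-line sketch combining the two steps (CONTAINER_NAME and the port are placeholders):

PID=$(docker inspect -f '{{.State.Pid}}' CONTAINER_NAME)
sudo nsenter -t "$PID" -n tcpdump -nn -i any port 1234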

Check if an application has set NODELAY

See:

  1. build knetstat.ko (as described in the git repo)
  2. sudo insmod knetstat.ko
  3. cat /proc/net/tcpstat
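
Assuming the module loads and exposes socket options as described in its README, something like the following should show whether TCP_NODELAY is set (the exact column format may differ):

grep -i nodelay /proc/net/tcpstat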

Terminal

Fix terminal

Fix terminal with garbled font/text

  1. reset

See:

volume tools

fdisk - cli interactive disk partition

sfdisk - cli scriptable disk partitioning tool

Auto partition via script

#!/bin/bash
# Non-interactively partition ${DISK_DEV} with sfdisk:
# one Linux (type 83) partition of ${DATA_SIZE} MiB.

DISK_DEV="/dev/vdb"
DATA_SIZE=40960   # partition size in MiB (40 GiB)

echo "label: dos
device: ${DISK_DEV}
unit: sectors
sector-size: 512

${DISK_DEV}1 : size=${DATA_SIZE}MiB, type=83" | sfdisk ${DISK_DEV}

parted

  • sudo apt install parted
  • sudo parted --align optimal /dev/sda mkpart primary ext4 0% 100%
  • sudo mkfs -t ext4 /dev/sda1
  • sudo parted /dev/sda
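
For unattended use, parted can also be driven non-interactively with --script. A sketch, assuming /dev/sda should get a fresh GPT label (destructive, wipes the existing partition table):

sudo parted --script /dev/sda mklabel gpt mkpart primary ext4 0% 100%
sudo mkfs -t ext4 /dev/sda1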

mkfs - create file system

sudo mkfs -t ext4 /dev/vdb1

Systemd

Systemd introduction

systemd references

  • Systemd
  • Systemctl
    • system administration book.
  • Preset

Systemd overview

  • Directories
    • /lib/systemd/system - where unit files shipped by packages are stored
    • /etc/systemd/system - where 'systemctl enable' creates symlinks to unit files (and where local unit files go)

Systemd commands

  • systemctl daemon-reload
    • Always run the systemctl daemon-reload command after creating new unit files or modifying existing unit files
  • systemctl list-dependencies --after gdm.service
    • what services are ordered to start before the specified service
  • systemctl list-dependencies --before gdm.service
    • what services are ordered to start after the specified service
  • systemctl enable name.service
    • reads the [Install] section of the selected service unit and creates the appropriate symbolic links to the /usr/lib/systemd/system/name.service file in the /etc/systemd/system/ directory and its subdirectories. (Does not re-create an existing link.)
  • systemctl reenable name.service
  • systemctl disable name.service
  • systemctl mask name.service
    • to prevent it from being started manually or by another service.
  • systemctl unmask name.service
  • systemctl list-units --type target
  • systemctl get-default
  • systemctl list-units --type target [--all]
  • systemctl show name.service [-p]
  • systemd-delta
    • show unit files that are overridden or modified
  • systemctl restart name.service - Service restart requests are implemented as stop operations followed by start operations. This means that ExecStop= and ExecStopPost= are executed during a service restart operation.
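
As a reference workflow after creating or editing a unit file (myapp.service is a hypothetical name):

sudo systemctl daemon-reload
sudo systemctl enable myapp.service
sudo systemctl start myapp.service
systemctl status myapp.service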

Systemd Target units

Content of the unit.service file

[Unit]

  • After=: start this unit (the one defined by this file) after the units given in the list.
    • After= only affects ordering; it does not by itself pull in or require the listed units (use Requires= or Wants= for that).
    • Separate the entries in the list with spaces.
  • Before=: start this unit before the listed units.
  • Requires=: if a required unit fails to start, this unit fails too. If this unit is also ordered After= the required unit, this unit isn't started until the required unit has started successfully.
    • If one of the required units gets deactivated or its activation fails, this unit will be deactivated as well.
    • If the required unit is restarted, then this unit is restarted as well (I think).
    • Note that requirement dependencies do not influence the order in which services are started or stopped. This has to be configured independently with the After= or Before= options.
  • Wants=: if this unit is started, it will start (or try to start) the units listed in Wants=, but this unit won't fail if a wanted unit fails.
    • If a wanted unit doesn't start successfully, this unit is still started.

[Install] - This section is optional and is used to define the behavior of a unit when it is enabled or disabled

See:

[Service] - is used to provide configuration that is only applicable for services

See:

  • Type= -
    • simple - the main process of the service is the one given in ExecStart= and keeps running in the foreground.
    • forking - used when the service forks a child process and the parent process exits almost immediately.
    • oneshot - used for one-off tasks; the process is expected to exit before follow-up units are started.
  • User=

xorp example

  • /usr/bin/systemctl preset xorp.service
  • /usr/bin/systemctl enable xorp.service
  • /usr/bin/systemctl start xorp
    • See also:
[Unit]
Description=XORP softrouter.
After=network.target

[Service]
Type=forking
PIDFile=/var/run/xorp.pid
ExecStart=/root/xorp_bg
#ExecReload=/opt/sonar/bin/linux-x86-64/sonar.sh restart
ExecStop=/root/stop_xorp

Restart=always

[Install]
WantedBy=multi-user.target

Using a service to start a bunch of other services (though not stop them)

See also:

[Unit]
Description=My owner App
After=network.target app1.service app2.service
Requires=app1.service
# Do not restart app solution just because these services restart.
Wants=app2.service

[Service]
Type=oneshot
ExecStart=/bin/true
#ExecStop=/root/stop_xorp

Restart=no

[Install]
WantedBy=multi-user.target

Kernel concepts

Shared memory

  • ./Documentation/sysctl/kernel.txt

  • ./include/uapi/linux/shm.h

  • shmmax Parameter: the maximum size in bytes of a single shared memory segment that a Linux process can allocate in its virtual address space.

    • Default: (ULONG_MAX - (1UL << 24))
      • include/uapi/linux/shm.h
    • It also seems to be the maximum total amount of shared memory a process can attach to.
      • So if SHMMAX is 3 GB and three processes have allocated 2 GB each, then a process can only attach to one of those segments at a time. (from RHEL3?)
    • Access to it:
      • cat /proc/sys/kernel/shmmax
      • sysctl -w kernel.shmmax=2147483648
      • echo "kernel.shmmax=2147483648" >> /etc/sysctl.conf
  • shmall: The total amount of shared memory pages that can be used system wide.

    • Please note this is in pages, not in bytes (a pages-to-bytes sketch follows this list).
  • getconf PAGE_SIZE

  • ipcs -m

  • pmap -p PID

  • /proc/PID/maps

    • containing the currently mapped memory regions and their access permissions.
    • The format is:
      • address perms offset dev inode pathname
    • From: Documentation/filesystems/proc.txt in the linux source tree.
    • It looks like the "inode" for shm is actually the shmid (not the key)
      • all the /SYSV entries appear to be System V shared memory segments.
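
As noted above, shmall is expressed in pages. A small shell sketch for converting it to bytes and inspecting the current segments:

# shmall is in pages; multiply by the page size to get the system-wide limit in bytes
echo $(( $(cat /proc/sys/kernel/shmall) * $(getconf PAGE_SIZE) ))

# list the currently allocated System V shared memory segments
ipcs -m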

Logging

logrotation

prevent syslog et al. from filling up the /var partition

  • Add 'maxsize 1G' to '/var/log/syslog' section of /etc/logrotate.d/rsyslog

  • sudo mv /etc/cron.daily/logrotate /etc/cron.hourly/

    • Move the logrotate to be done hourly, to ensure file sizes get checked once per hour.
  • rotate 12: keep twelve old log files.

  • monthly: rotate once a month.
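
A sketch of what the '/var/log/syslog' stanza in /etc/logrotate.d/rsyslog could look like with these options (keep whatever postrotate block your distribution ships):

/var/log/syslog
{
        rotate 12
        monthly
        maxsize 1G
        missingok
        notifempty
        compress
        delaycompress
}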

Update using augtool:

This assumes that the first entry is for '/var/log/syslog'

sudo augtool set /files/etc/logrotate.d/rsyslog/rule[1]/rotate 3
sudo augtool set /files/etc/logrotate.d/rsyslog/rule[1]/schedule "daily"
sudo augtool set /files/etc/logrotate.d/rsyslog/rule[1]/maxsize "1G"

See:

Augeas

Installation: sudo apt install augeas-tools

  1. augtool print /files/etc/logrotate.d/rsyslog

Examples:

Using augtool as an interpreter

See:

#!/usr/bin/augtool -f
print /files/etc/logrotate.d/rsyslog
set /files/etc/logrotate.d/rsyslog/rule[1]/rotate 3
set /files/etc/logrotate.d/rsyslog/rule[1]/schedule "daily"
set /files/etc/logrotate.d/rsyslog/rule[1]/maxsize "1G"
save
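
To run the script above, make it executable and invoke it as root (update_logrotate.augtool is a hypothetical file name):

chmod +x update_logrotate.augtool
sudo ./update_logrotate.augtool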

haproxy

HAProxy overview

The machine HAProxy runs on must never ever swap, and its CPU must not be artificially throttled (sub-CPU allocation in hypervisors) nor be shared with compute-intensive processes which would induce a very high context-switch latency. (From "How HAProxy works".)

Installing HA-Proxy for KVM

HAProxy config file

Default: /etc/haproxy/haproxy.cfg

installing haproxy

  • sudo apt install haproxy haproxyctl hatop

Set-up haproxy and k8s with passthrough

Set-up haproxy and minikube with passthrough

  • Minikube parameter
    • --apiserver-ips ipSlice
      • A set of apiserver IP Addresses which are used in the generated certificate for kubernetes. This can be used if you want to make the apiserver available from outside the machine (default [])
    • minikube start --apiserver-ips=192.168.1.144

/etc/haproxy/haproxy.cfg

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

listen stats
   # my server has 2 IP addresses, but you can use *:<port> to listen on all interfaces and on the specific port
   bind 192.168.1.144:8086
   mode http
   stats enable
   stats uri /

# Kubernetes

frontend k8s_frontend
   description "Kubernetes Master API server"
   # my server has 2 IP addresses, but you can use *:6443 to listen on all interfaces and on that specific port
   bind *:6443
   # TODO do I need 'tcplog' ?
   option tcplog
   mode tcp
   default_backend k8s_backend

backend k8s_backend
   description "Kubernetes Master API server"
   mode tcp
   balance roundrobin
   option ssl-hello-chk
   acl internal_networks src 192.168.49.0/24 172.16.0.0/12 10.0.0.0/8 127.0.0.1
   # k8s apiServer requires a host
   # TODO how do I configure this generically???
   #option httpchk GET /healthz HTTP/1.1\r\nHost:\ 192.168.1.144
   server 192.168.49.2 192.168.49.2:8443 check
   #server minikube_1 192.168.49.2:8443 check check-ssl verify none
   # server minikube_1 192.168.49.2:8443 check
   #server k8s_master_2 192.168.1.16:6443 check check-ssl verify none
   
frontend airsonic_frontend
  description "AirSonic http pass through"
  mode http
  bind *:4041
  default_backend airsonic_backend

backend airsonic_backend
  description "AirSonic k8s cluster"
  balance roundrobin
  server airsonic 192.168.49.2:30033

#backend airsonic_frontend
#  description "AirSonic k8s cluster"
#  balance roundrobin
#  server airsonic 192.168.49.2:80

frontend jellyfin_frontend
  description "jellyfin http pass through"
  mode http
  bind *:8096
  default_backend jellyfin_backend

backend jellyfin_backend
  description "jellyfin k8s cluster"
  balance roundrobin
  server jellyfin 192.168.49.2:30096

frontend dockerd_frontend
   bind *:2376 ssl crt /etc/ssl/certs/haproxy_cert.pem
   option tcplog
   mode tcp
   default_backend dockerd_backend

backend dockerd_backend
   description "dockerd"
   mode tcp
   balance roundrobin
   option ssl-hello-chk
   # acl : Declare or complete an access list.
   acl internal_networks src 192.168.49.0/24 172.16.0.0/12 10.0.0.0/8 127.0.0.1
   server minikube_1 192.168.49.2:2376 check ssl verify required ca-file /etc/ssl/certs/minikube_ca.pem crt /etc/ssl/certs/minikube_client.pem
   #server minikube_1 192.168.49.2:2376 check verify required ca-file /etc/ssl/certs/minikube_ca.pem


   #  http://docs.haproxy.org/2.4/configuration.html#5.2-check
   # check - the server is considered available when a connection can be established at the highest configured transport layer.
   # ssl
   #

/etc/haproxy/haproxy.cfg for simple ssh pass-through

global
 log /dev/log local0
 log /dev/log local1 notice
 chroot /var/lib/haproxy
 stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
 stats timeout 30s
 user haproxy
 group haproxy
 daemon

 # Default SSL material locations
 ca-base /etc/ssl/certs
 crt-base /etc/ssl/private

 # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
 ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
 ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
 ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
 log global
 mode http
 option httplog
 option dontlognull
 timeout connect 5000
 timeout client  50000
 timeout server  50000
 errorfile 400 /etc/haproxy/errors/400.http
 errorfile 403 /etc/haproxy/errors/403.http
 errorfile 408 /etc/haproxy/errors/408.http
 errorfile 500 /etc/haproxy/errors/500.http
 errorfile 502 /etc/haproxy/errors/502.http
 errorfile 503 /etc/haproxy/errors/503.http
 errorfile 504 /etc/haproxy/errors/504.http

listen stats
   # my server has 2 IP addresses, but you can use *:<port> to listen on all interfaces and on the specific port
   bind 192.168.1.102:8086
   mode http
   stats enable
   stats uri /

frontend lfs_labclient_front
   mode tcp
   bind *:2222
   log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq dstName:%[var(sess.dstName)] dstIP:%[var(sess.dstIP)] "
   default_backend lfs_labclient_back
   timeout client 1h

backend lfs_labclient_back
   mode tcp
   server lfs_labclient 192.168.122.38:22 check

After updating, connect using: ssh -i private_ansible_cloud_init [email protected] -p 2222

Verify and update the haproxy cfg

  • sudo haproxy -c -f haproxy.cfg
  • sudo cp haproxy.cfg /etc/haproxy/haproxy.cfg
  • sudo systemctl restart haproxy
  • systemctl status haproxy

haproxy no certs

This does not work

  • sudo apt update
  • sudo apt upgrade
  • sudo apt install haproxy
  • cp /etc/haproxy/haproxy.cfg k8s_haproxy.cfg
  • find the host ip address with: ip a
    • please note that if your IP isn't statically configured or reserved on the DHCP server, this IP address might change.
  • find the kubernetes control plane IP address(es)
    • for minikube run: minikube ip
      • e.g. 192.168.49.2
  • sudo haproxy -c -f k8s_haproxy.cfg
  • sudo cp k8s_haproxy.cfg /etc/haproxy/haproxy.cfg
  • sudo systemctl restart haproxy
  • systemctl status haproxy
    • if there are problems run: journalctl -u haproxy

haproxy with SSL termination

[req]
default_bits  = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no

[req_distinguished_name]
countryName = VX
stateOrProvinceName = N/A
localityName = N/A
organizationName = Self-signed certificate
commonName = admin

[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
IP.1 = 192.168.14.42
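
Assuming the config above is saved as haproxy_cert.cnf (the name used in the signing command below), the server key and CSR could be generated with something like the following (the key file name haproxy_cert.key is just an example):

openssl req -new -newkey rsa:2048 -nodes -config haproxy_cert.cnf -keyout haproxy_cert.key -out haproxy_cert.csr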

Fix the server ca set-up for haproxy

  • sudo cp .minikube/certs/ca.pem /etc/ssl/certs/minikube_ca.pem

  • openssl x509 -req -extensions v3_req -extfile haproxy_cert.cnf -in haproxy_cert.csr -CA ~/.minikube/ca.crt -CAkey ~/.minikube/ca.key -CAcreateserial -out haproxy_cert.crt -days 100
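
HAProxy's 'bind ... ssl crt' expects the certificate and private key concatenated into one PEM file (the dockerd frontend above points at /etc/ssl/certs/haproxy_cert.pem). A sketch, assuming the haproxy_cert.key from the previous step:

cat haproxy_cert.crt haproxy_cert.key > haproxy_cert.pem
sudo cp haproxy_cert.pem /etc/ssl/certs/haproxy_cert.pem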

haproxy Cert troubleshooting

  • couldn't get current server API group list: Get "https://192.168.122.205:6443/api?timeout=32s": tls: failed to verify certificate: x509: certificate signed by unknown authority
    • When using the ca.crt from minikube, against the KVM k8s setup
  • the server has asked for the client to provide credentials
    • when using the ca.crt from the k8s kvm, but the client.crt from minikube
  • couldn't get current server API group list: Get "https://192.168.44.144:6443/api?timeout=32s": tls: failed to verify certificate: x509: certificate is valid for 192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 192.168.1.144
    • when trying to access the remote minikube, via haproxy on the remote machine
    • I think this is when validating the minikube apiserver certificate: /var/lib/minikube/certs/apiserver.crt
      • X509v3 Subject Alternative Name:
        • DNS:minikubeCA, DNS:control-plane.minikube.internal, DNS:kubernetes.default.svc.cluster.local, DNS:kubernetes.default.svc, DNS:kubernetes.default, DNS:kubernetes, DNS:localhost, IP Address:192.168.49.2, IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1
  • E0424 10:19:44.309655 3161730 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
    • I think this is because a new minikube instance has new client certs and keys.
  • error during connect: Get "https://192.168.1.144:2376/v1.24/containers/json": x509: cannot validate certificate for 192.168.1.144 because it doesn't contain any IP SANs
  • error during connect: Get "https://192.168.1.144:2376/v1.24/containers/json": x509: certificate signed by unknown authority
    • had to use the ca-key.pem and ca.pem for crt signing
  • Error response from daemon: Client sent an HTTP request to an HTTPS server.
  • Server dockerd_backend/minikube_1 is DOWN, reason: Socket error, check duration: 29ms. 0 active and 0 backup servers left. 0 sessions active, 0 re>
    • when setting server to ssl verify none
    • see /var/log/haproxy.log

kex_exchange_identification: Connection closed by remote host

  • it turned out the target was shut down.
  • another time the frontend was selecting the wrong backend, and that wrong backend wasn't running.

There was no check configured on the backend server.

  • sudo journalctl -u haproxy
ssh -i private_ansible_cloud_init -p 2201 [email protected]
kex_exchange_identification: Connection closed by remote host
Connection closed by 192.168.1.104 port 2201

Lightning rounds

running commands

Have root run a command as a specific user

su --login --command "SCRIPT_TO_RUN SCRIPT_PARM_1" USER_NAME
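
For example (the backup.sh path and the backup user are hypothetical), root could run a maintenance script as an unprivileged user like this:

su --login --command "/usr/local/bin/backup.sh /srv/data" backup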