Original Docs pre‐2016 - tum-t38/firefly GitHub Wiki
MegaRAID Commands
----------------------------------------------------
Hardware
We have one RAID controller
LSI MegaRAID SAS 9260-8i:
Virtual Drive: 0 (Target Id: 0)
Size : 233.312 GB
Virtual Drive: 1 (Target Id: 1)
Size : 12.727 TB
mounted on
----------------------------------------------------
MegaCli
----------------------------------------------------
# Show status information:
MegaCli64 -AdpAllInfo -aAll
# Get physical drive info
MegaCli64 -PDList -aAll
# Get virtual disk info
MegaCli64 -LDInfo -Lall -aAll
# Display configuration
MegaCli64 -CfgDsply -aAll
# Dump event log entries to the file 'events' and open it
MegaCli64 -AdpEventLog -GetEvents -f events -aAll && less events
# QUICK status information wrapper
# available in .bashrc via show-raid
MegaCli64 -PDList -aAll | sed -e "s@\[.* Sectors\]@@g" | grep -e "Firmware state\|Slot Number:\|Raw Size:\|Drive has flagged a S.M.A.R.T alert" | sed 'N;N;N;s/\n/\t\t/g'
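The show-raid pipeline above is easier to check when the filter part is split out as a function that reads controller output on stdin. A sketch; the sample fields in the comments mimic typical -PDList output:

```shell
# Filter -PDList output down to one tab-joined summary line per drive.
# Same sed/grep patterns as the show-raid wrapper above; for each drive,
# lines like "Slot Number:", "Raw Size:" (with the [... Sectors] part
# stripped), "Firmware state" and the S.M.A.R.T alert line are kept and
# joined in groups of four.
show_raid_filter() {
  sed -e "s@\[.* Sectors\]@@g" \
    | grep -e "Firmware state\|Slot Number:\|Raw Size:\|Drive has flagged a S.M.A.R.T alert" \
    | sed 'N;N;N;s/\n/\t\t/g'
}
```

Usage: MegaCli64 -PDList -aAll | show_raid_filter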
----------------------------------------------------
Cronjob status script in /etc/cron.daily
writes RAID status to /etc/motd
/opt/MegaRAID/MegaCli/megaclisas-status --nagios
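A minimal sketch of what the cron.daily step amounts to: take the status output and rewrite /etc/motd. Only the megaclisas-status call comes from the note above; the wrapper function and banner format are assumptions:

```shell
# Hypothetical wrapper: write a RAID status banner into the motd.
update_motd() {  # usage: update_motd "status text" [motd-file]
  motd=${2:-/etc/motd}
  printf 'RAID status as of %s:\n%s\n' "$(date +%Y-%m-%d)" "$1" > "$motd"
}
# update_motd "$(/opt/MegaRAID/MegaCli/megaclisas-status --nagios)"
```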
----------------------------------------------------
# INSTALLATION OF MegaRAID Storage Manager
alien --scripts *.rpm
http://www.thomas-krenn.com/de/wiki/MegaRAID_Storage_Manager_Installation_unter_Ubuntu
---------------------------------------------------
StorCli:
https://www.thomas-krenn.com/en/wiki/StorCLI
/cx and /vx stand for the controller and virtual drive number, respectively.
/ex and /sx stand for the enclosure and slot ID, respectively.
Status:
/opt/MegaRAID/storcli/storcli64 /c0 show all
+---------------+
| Start GUI |
+---------------+
/etc/init.d/vivaldiframeworkd restart
/usr/local/MegaRAID\ Storage\ Manager/startupui.sh
TSM Backup Commands
--------------------------
TSM backup client installation:
dpkg -i gskcrypt64_8.0-50.52.linux.x86_64.deb gskssl64_8.0-50.52.linux.x86_64.deb
dpkg -i tivsm-api64.amd64.deb
dpkg -i tivsm-ba.amd64.deb
GUI (optional):
# apt-get install openjdk-7-jre icedtea-7-plugin
--------------------------
TSM backup script:
/etc/tsm-backup
--------------------------
Run the backup once a week, on Saturday night, via crontab -e:
59 23 * * 6 test -x /etc/tsm-backup && /etc/tsm-backup
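The contents of /etc/tsm-backup are not reproduced here; a hypothetical minimal version would just run an incremental backup and log it. dsmc incremental is the standard client command; the log path and everything else are assumptions:

```shell
# Hypothetical /etc/tsm-backup sketch, written as a function so the log
# path can be overridden; the real script may differ.
tsm_backup() {
  log=${1:-/var/log/tsm-backup.log}
  {
    echo "=== TSM backup started: $(date)"
    dsmc incremental
    echo "=== TSM backup finished: $(date), rc=$?"
  } >> "$log" 2>&1
}
```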
--------------------------
Configuration:
/opt/tivoli/tsm/client/ba/bin/dsm.sys
Basic setup:
Servername COW-T38-1
nodename COW-T38-1
TcpServerAddress s60.abs.lrz.de
TcpPort 2100
Credentials:
userid: COW-T38-1
passwd: TSMt38Cow
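Assembled as a dsm.sys server stanza, the settings above look roughly like this. PASSWORDACCESS GENERATE is an assumption: with it, the credentials above are entered once at first connect and then cached locally, so the cron job can run unattended:

```
SErvername        COW-T38-1
   NODename          COW-T38-1
   TCPServeraddress  s60.abs.lrz.de
   TCPPort           2100
   PASSWORDAccess    generate
```

The client's dsm.opt then selects this stanza with a matching SErvername COW-T38-1 line.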
---------------------------
Logfiles:
/opt/tivoli/tsm/client/ba/bin/cow-t38-1/dsmsched.log
---------------------------
Full recovery, Fri 29 Jul 2016:
Restore processing finished.
Total number of objects restored: 4,644,296
Total number of objects failed: 0
Total number of bytes transferred: 10.94 TB
Data transfer time: 163,815.92 sec
Network data transfer rate: 71,737.93 KB/sec
Aggregate data transfer rate: 59,690.24 KB/sec
Elapsed processing time: 54:41:20
XFS Quota commands
----------------------------------------------------
README.quota
# Set quota
xfs_quota -x -c "limit bsoft=3000G bhard=4000G flo"
# Unset quota
xfs_quota -x -c "limit bsoft=0G bhard=0G flo"
# Show quota accounting/enforcement state
xfs_quota -x -c "state"
# Set the grace time to 2 weeks
xfs_quota -x -c "timer -bir 2week"
# A nice tool to inform users about their quota:
warnquota -s -u
# Has its config files in /etc/warnquota.conf
# The disks are defined in /etc/quotatab
# Put this in a cron job
# Show all
alias quota='xfs_quota -xc "report -h" /dev/sdb 2>&1 | grep -v XFS_GETQUOTA:'
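The limit lines above are easy to mistype, so a small dry-run helper that only prints the command can help. Everything here is a sketch: the -u flag (user quota, which is also xfs_quota's default) and the explicit mount-point argument are spelled out, and the mount point in the example is hypothetical:

```shell
# Print the xfs_quota invocation instead of running it, so the limits can
# be reviewed first; paste the output into a root shell to apply.
quota_cmd() {  # usage: quota_cmd USER SOFT HARD MOUNTPOINT
  printf 'xfs_quota -x -c "limit -u bsoft=%s bhard=%s %s" %s\n' \
    "$2" "$3" "$1" "$4"
}
# quota_cmd flo 3000G 4000G /srv/data   (mount point is an example)
```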