#342: Investigate Possible Security Breach – Service Outage on Multiple Servers

Incident response

- Trigger/Root Cause(s):

Two malicious scripts, sim_disk_attack.sh and sim_mariadb_attack.sh, planted in /usr/local/bin/tmp (see Resolution Steps below).

- Impact:

Database service (MariaDB) outage on host db-c; ownCloud also went down on the apps host.

- Detection:

The outage was identified via our Slack alert notifications (a sketch of the kind of check behind these alerts follows below).

(Screenshot: Slack alert notification)
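For context, here is a minimal sketch of the kind of host check that could feed alerts like these. The webhook URL, the 90% threshold, and the service list are assumptions for illustration, not our actual monitoring configuration.

```bash
#!/usr/bin/env bash
# Hypothetical health check: post to Slack if the disk is nearly full or a service is down.
# SLACK_WEBHOOK_URL, the threshold, and the service list are placeholders; adjust per host.
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/XXX/YYY/ZZZ"

alert() {
    curl -s -X POST -H 'Content-type: application/json' \
        --data "{\"text\": \"$1\"}" "$SLACK_WEBHOOK_URL" > /dev/null
}

# Disk usage on the root filesystem
usage=$(df --output=pcent / | tail -1 | tr -dc '0-9')
if [ "$usage" -ge 90 ]; then
    alert "$(hostname): root filesystem at ${usage}% capacity"
fi

# Critical services (adjust the list per host)
for svc in mariadb apache2; do
    if ! systemctl is-active --quiet "$svc"; then
        alert "$(hostname): service $svc is not running"
    fi
done
```

Run from cron every few minutes, a check like this would produce the kind of notification shown above.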

- Resolution Steps:

  • I logged into the DB server and checked the database service with `sudo systemctl status mysql`
  • The status output showed a warning: `2025-05-25 20:38:42 0 [Warning] Can't create test file '/var/lib/mysql/db-c.lower-test'`
  • I initially assumed this meant MariaDB could not write to /var/lib/mysql, i.e. a permissions issue
  • That assumption was wrong, as the directory already had the correct permissions
  • I ran `df -h` and found the real issue: /dev/root was 100% full
  • Using `sudo du -sh /* 2>/dev/null` I identified the largest top-level directories: /var was using 27 GB of the 29 GB total disk space
  • I then ran `sudo du -sh /var/* 2>/dev/null` and saw that /var/tmp was using 26 GB
  • Since /var/tmp is intended for temporary files, it should be safe to clean up
  • Running `sudo find /var/tmp -type f -exec du -h {} + | sort -rh | head -n 20` listed the largest files and revealed the culprit (a consolidated disk-triage sketch follows this list)
  • (Screenshot: output of the find command showing the large files under /var/tmp)
  • I deleted the diskbomb directory to free space: `sudo rm -rf /var/tmp/diskbomb`
  • I then ran `df -h` again and /dev/root was down to 14% used
  • Next I made sure the permissions and ownership on the MariaDB data directory were correct, then started the service (see the permission-reset sketch after this list):
  • `sudo find /var/lib/mysql -type d -exec chmod 750 {} \;` then `sudo find /var/lib/mysql -type f -exec chmod 660 {} \;` then `sudo chown -R mysql:mysql /var/lib/mysql`
  • `sudo systemctl start mariadb`
  • Then I restarted and tested: `sudo systemctl restart mariadb` followed by `sudo systemctl status mariadb`
  • The service was still having issues, so I ran `ps aux | grep mariadb` to see whether MariaDB was actually running
  • That is when I discovered the file sim_mariadb_attack.sh. Listing its location with `ls /usr/local/bin/tmp` revealed a second script, sim_disk_attack.sh
  • I removed both scripts using `shred` rather than `rm`, as shred overwrites the file contents before unlinking, preventing recovery of the scripts (see the cleanup sketch after this list)
  • `sudo shred -u /usr/local/bin/tmp/sim_disk_attack.sh`
  • `sudo shred -u /usr/local/bin/tmp/sim_mariadb_attack.sh`
  • I also killed the running attack processes, e.g. `sudo pkill -9 -f sim_mariadb_attack.sh` and `sudo pkill -9 -f sim_disk_attack.sh` (`kill -9` expects a PID rather than a script path). I noticed the apps host was having the same issue and ownCloud was down, so I repeated the majority of these steps there
  • I used `pstree -p` to confirm the processes no longer existed
  • On the apps host I restored the permissions on the ownCloud config file: `sudo chmod 640 /var/www/owncloud/config/config.php`
  • I then ran `sudo systemctl restart apache2` and ownCloud came back online (see the post-remediation verification sketch after this list)
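The disk-triage steps above, collected into one re-runnable sequence. The paths are the ones from this incident; the 10- and 20-line limits are arbitrary.

```bash
# Disk-space triage: overall usage, largest top-level directories, largest files.
df -h /                                              # confirm the root filesystem is full
sudo du -sh /* 2>/dev/null | sort -rh | head -n 10   # largest top-level directories
sudo du -sh /var/* 2>/dev/null | sort -rh            # drill into the biggest one
sudo find /var/tmp -type f -exec du -h {} + | sort -rh | head -n 20   # largest individual files
```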
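The permission reset as a single block, with the `-exec` terminators escaped (`\;`) so the shell passes them through to `find` instead of treating them as command separators.

```bash
# Restore expected ownership and permissions on the MariaDB data directory,
# then restart the service and check it came up cleanly.
sudo chown -R mysql:mysql /var/lib/mysql
sudo find /var/lib/mysql -type d -exec chmod 750 {} \;
sudo find /var/lib/mysql -type f -exec chmod 660 {} \;
sudo systemctl restart mariadb
sudo systemctl status mariadb --no-pager
```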
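The malicious-script cleanup as one sequence: kill the running processes by command line, securely delete the files, then confirm nothing is left. This assumes `pkill`/`pgrep` (procps) are installed, which they are on typical Debian/Ubuntu hosts.

```bash
# Stop the attack script processes by matching their command lines.
sudo pkill -9 -f sim_disk_attack.sh
sudo pkill -9 -f sim_mariadb_attack.sh

# Securely delete the scripts (shred overwrites the contents before unlinking).
sudo shred -u /usr/local/bin/tmp/sim_disk_attack.sh
sudo shred -u /usr/local/bin/tmp/sim_mariadb_attack.sh

# Verify nothing matching the attack scripts is still running.
pgrep -af 'sim_.*_attack' || echo "no attack processes found"
pstree -p | grep -i attack || echo "nothing suspicious in the process tree"
```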
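Finally, a quick post-remediation check that the affected services are genuinely back; run the relevant lines on each host. The ownCloud URL is a placeholder, not our real hostname, and the MariaDB check assumes socket authentication for root.

```bash
# Confirm the services report active (mariadb on db-c, apache2 on apps).
systemctl is-active mariadb apache2

# Confirm MariaDB actually answers (assumes socket authentication for root).
sudo mysqladmin ping

# Confirm ownCloud responds over HTTP; 200 means the status endpoint is reachable.
curl -s -o /dev/null -w "%{http_code}\n" https://owncloud.example.com/status.php
```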

- Correspondence with Affected Parties:

Communication with the reporting Operations team member and the manager is included below.

  • Before and After Evidence: Screenshots, logs, or command outputs showing system state before and after remediation.

This is my communication with my team member about the issue, including the timestamps:

(Screenshots: Slack conversation with the team member)

Here is my communication with the manager:

(Screenshot: message to the manager)

- Sub Tickets (if any):

https://rt.dataraster.com/Ticket/Display.html?id=342&results=711977685d3f11b73a32fdefd855742d

- Time Taken:

3 hours

- Lessons Learned:

I have identified that it is a good idea to search the filesystem for suspiciously named scripts, e.g. names containing "attack", rather than only chasing symptoms (see the sketch below).
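A minimal sketch of such a scan, assuming nothing beyond standard GNU find. The directories, name patterns, and 7-day window are illustrative, not an exhaustive rule set.

```bash
# Look for suspiciously named files or directories in common drop locations,
# limited to things modified recently. Extend the patterns and paths per host.
sudo find /usr/local/bin /tmp /var/tmp /dev/shm \
    \( -iname '*attack*' -o -iname '*bomb*' \) \
    -newermt '7 days ago' -exec ls -ldh {} +
```

Run from cron (or wired into the same Slack webhook as our other alerts), a scan like this would likely have flagged both sim_*_attack.sh scripts and the diskbomb directory much earlier.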

- Tools/Action Items:

The full process is documented in the Resolution Steps section above.