#341: Investigate Possible Security Breach – Service Outage on Multiple Servers

Apps-b

ERROR MSG THROUGH SLACK:

apps-b/Root File System is CRITICAL: DISK CRITICAL - free space: /var/tmp 0 MB (0% inode=96%):

Impact

There appears to be no free space on the apps-b machine. When navigating to the ownCloud service it is not possible to connect, even though the apache2 service is up.

[screenshot]

Detection

Initially I ran `df -h` to see what space had been used up:

[screenshot]

I then looked for processes that were out of the ordinary.

[screenshot]

Instantly I saw two maliciously named scripts running on apps-b.
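For reference, the checks behind the screenshots above were roughly the following (a sketch; the exact process-listing command is an assumption, although `pstree -p` is what was used later on db-b):

```bash
# Check free space on all mounted filesystems (confirms /var/tmp is full)
df -h

# Scan running processes for anything out of the ordinary
ps aux --sort=-%cpu | head -n 20
pstree -p
```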

Resolution Steps

Script Investigation:

sim_disk_attack.sh

```bash
mkdir -p /var/tmp/diskbomb
while true; do
    dd if=/dev/urandom of=/var/tmp/diskbomb/file_$(date +%s).bin bs=10M count=10 oflag=direct status=none
    sleep 1
done
```

The script writes random data to /var/tmp/diskbomb/ in an infinite loop, filling the disk. If I delete the script and then the directory containing the malicious data, the disk space issue will be resolved.

Delete the script, then delete the malicious directory:

[screenshot]

Lastly, kill the process for sim_disk_attack.sh:

[screenshot]
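Roughly, the cleanup shown in the two screenshots above amounts to the following (a sketch; the script path is taken from the Lessons Learned section below, and the use of `pgrep`/`pkill` is an assumption about how the process was found and killed):

```bash
# Remove the attack script and the directory of generated files
sudo rm /usr/local/bin/tmp/sim_disk_attack.sh   # script location per Lessons Learned
sudo rm -rf /var/tmp/diskbomb

# Find the still-running loop and kill it
pgrep -af sim_disk_attack.sh
sudo pkill -f sim_disk_attack.sh
```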

sim_mariadb_attack.sh

Initially I had trouble locating the malicious script:

[screenshot]

It is possible the attacker ran the script and then deleted it, so the associated processes are still running even though the script no longer exists on the file system; it lives only in memory.

[screenshot]

This confirms that the script has been deleted, so we kill the process immediately.
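One way to confirm this and stop the process is to look for open files that have already been deleted and then kill by command line (a sketch; the exact commands used are in the screenshot above):

```bash
# bash keeps the script file open while running it, so a deleted script
# shows up as an open file with a link count of 0
sudo lsof +L1 | grep sim_mariadb_attack.sh

# The command line of the orphaned process still references the deleted script
pgrep -af sim_mariadb_attack.sh

# Kill the orphaned process
sudo pkill -f sim_mariadb_attack.sh
```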

Upon checking file system space:

[screenshot]

Much Better!

Db-b

ERROR MSG THROUGH SLACK:

db-b/Root File System is CRITICAL: DISK CRITICAL - free space: /var/tmp 0 MB (0% inode=96%):

db-b/DB Servers is CRITICAL: Cant connect to MySQL server on db-b:3306 (111)

Impact

MySQL Service is down:

[screenshot]

No Free Disk Space:

[screenshot]

pstree -p shows:

[screenshot]

It looks like the same scripts seen on apps-b are running on db-b:

[screenshot]

[screenshot]

We can actually view the sim_mariadb_attack.sh script on this machine:

```bash
#!/bin/bash
MARIADB_DIR="/var/lib/mysql"

while true; do
  if systemctl is-active --quiet mariadb; then
    echo "[*] Locking MariaDB directory..."
    chmod -R 000 "$MARIADB_DIR" 2>/dev/null
    systemctl restart mariadb
  fi
  sleep 60  # Pause to avoid excessive load/logs
done
```

Delete the scripts and the diskbomb directory, and kill the processes:

[screenshot]

Free space is now back to normal:

[screenshot]

Now to fix MySQL, as the database is still not running. The sim_mariadb_attack.sh script changes the permissions of the MySQL data directory so the service cannot be started. Now that the attack is no longer running, I can fix the permissions and restart the service.

[screenshot]
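The fix amounts to something like the following (a sketch; the exact modes are assumptions based on typical Debian/Ubuntu MariaDB defaults, and only permissions need restoring since the attack ran `chmod -R 000` without changing ownership):

```bash
# Restore sane permissions on the MariaDB data directory
sudo find /var/lib/mysql -type d -exec chmod 700 {} \;
sudo find /var/lib/mysql -type f -exec chmod 660 {} \;

# Restart the service and confirm it comes back up
sudo systemctl restart mariadb
systemctl status mariadb --no-pager
```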

ownCloud running confirmation, and Nagios errors no longer present:

[screenshot]

[screenshot]

However, when I try to log in to ownCloud as admin, I get this issue:

[screenshot]

I had configured the wrong owner and permissions when fixing the attack. Fix:

```bash
sudo chown -R www-data:www-data /var/www/owncloud/
sudo find /var/www/owncloud/ -type d -exec chmod 755 {} \;
sudo find /var/www/owncloud/ -type f -exec chmod 640 {} \;
```
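As a quick sanity check after fixing ownership, ownCloud's `occ` tool can be run as the web server user (assuming `occ` sits in the web root as above):

```bash
sudo -u www-data php /var/www/owncloud/occ status
```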

Can now log in as admin.

[screenshot]

Tidy Up

Time

Started at approx 8:15, finished at approx 10:30. Time taken until all bug fixes and permission issues were resolved: 2.25 hours. The response time is acceptable. Note that the security breaches themselves were cleaned up promptly; the rest of the working time was spent on documentation and minor, non-crucial fixes.

Lessons Learned

Nagios monitoring and Slack notifications work as expected; however, they do not provide a comprehensive explanation of the error specifics. We need to develop monitoring that covers deeper issues.

- Monitor long-running processes: the attack scripts ran in the background unnoticed. Implement process auditing and alerts for unusual or long-running processes (see the sketch after this list).
- Deleted files can still run: the scripts were deleted from disk but remained active in memory.
- Permission abuse is disruptive: the attack used `chmod 000` to break services.
- Suspicious scripts were placed in /usr/local/bin/tmp/ unnoticed.
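As a starting point for that process monitoring, a hypothetical Nagios-style check (the name check_deleted_running.sh and the whole script are illustrative, not part of the current monitoring config) could flag processes that are holding deleted files open:

```bash
#!/bin/bash
# check_deleted_running.sh - hypothetical check: warn if any process is holding
# a deleted file open, as the attack scripts did after being removed from disk.
# Note: rotated logs and upgraded libraries also show up here, so the output
# would need filtering or a whitelist in practice.
deleted=$(lsof +L1 2>/dev/null | awk 'NR > 1 {print $1, $2, $NF}' | sort -u)

if [ -n "$deleted" ]; then
  echo "WARNING: processes holding deleted files open:"
  echo "$deleted"
  exit 1   # Nagios WARNING state
fi

echo "OK: no processes running from deleted files"
exit 0
```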

Correspondence with Affected Parties

All correspondence was done via ticket handling.