# Cloudplow
Cloudplow (CP) is a script by l3uddz that has three main components:

- Uploader to Rclone remote: Files are moved off local storage. With support for multiple uploaders (i.e. remote/folder pairings).

- UnionFS Cleaner functionality: Deletion of UnionFS-Fuse whiteout files (`*_HIDDEN~`) and their corresponding "whited-out" files on Rclone remotes. With support for multiple remotes (useful if you have multiple Rclone remotes mounted).

- Automatic remote syncer: Sync between two different Rclone remotes using Scaleway Instances (with support for other VM/server providers in the future). With support for multiple syncers (i.e. remote/remote pairings).
As set up for Cloudbox, Cloudplow uploads all the content in `/mnt/local/Media/` (see Paths) to your cloud storage provider (e.g. Google Drive) once the folder reaches a 200 GB size threshold, checked every 30 minutes.
Note: The size threshold and the check interval can be changed via steps mentioned on this page.
## Google Drive Ban Sleep
Google Drive enforces a maximum upload of ~750 GB per day. When this limit is reached, Google Drive will put you in a 24-hour soft ban. When Cloudplow encounters this (seen as "Failed to copy: googleapi: Error 403: User rate limit exceeded"), it will go into a 25-hour ban sleep and, upon waking up, will resume its checking and uploading tasks. This is much better than having an Rclone task running all day long with the bwlimit set at 8M.
Note: The keywords or phrases that are monitored during Rclone tasks (which will cause that remote's upload task to abort), and the sleep time those Rclone tasks are suspended for, can be changed at any time by editing the `config.json` file.
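For reference, those trigger phrases and the sleep duration live under each remote's `rclone_sleeps` section of `config.json`. Below is a sketch of what that entry can look like, based on the upstream Cloudplow defaults; verify the exact keys against your own file:

```json
"remotes": {
    "google": {
        "rclone_sleeps": {
            "Failed to copy: googleapi: Error 403: User rate limit exceeded": {
                "count": 5,
                "sleep": 25,
                "timeout": 3600
            }
        }
    }
}
```

Here `sleep` is the ban sleep in hours (25 in this example), while `count` and `timeout` (in seconds) control how many times the phrase must be seen within the timeout window before the sleep is triggered.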
On top of uploading your media files to the cloud storage, Cloudplow also functions as a UnionFS whiteout cleaner (i.e. it cleans up `*_HIDDEN~` files).
## What are hidden files?
When Sonarr & Radarr upgrade your media files, they attempt to delete the previous ones. When that data is still on the local server, it is deleted immediately, but when it has been moved to the cloud storage provider (e.g. Google Drive), deletion fails because the Google Drive mount is set as read-only (via Plexdrive).
Instead, UnionFS will create a whiteout file (a blank file in the format `filename.ext_HIDDEN~`) that makes the file invisible to whatever accesses it via the UnionFS mount (e.g. `/mnt/unionfs/`). Sonarr & Radarr will therefore consider the file deleted, even though the media file still exists on the cloud.
To resolve this, on the next upload task (i.e. when the size threshold is reached at an interval check), Cloudplow will scan for the whiteout file(s), remove the corresponding media file(s) from the cloud storage, and then remove the whiteout file(s) (since they are no longer needed), keeping your content free of duplicates.
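The folder that Cloudplow scans for whiteout files, and the remote(s) it deletes the corresponding cloud files from, are set in the `unionfs` section of `config.json`. A minimal sketch, assuming the default Cloudbox-style hidden folder and a remote named `google`:

```json
"unionfs": {
    "/mnt/local/.unionfs-fuse": {
        "hidden_remotes": [
            "google"
        ]
    }
}
```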
## `/opt/cloudplow/config.json`
Note: Config changes require a restart: `sudo systemctl restart cloudplow`.
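For example, assuming Cloudplow runs as the `cloudplow` systemd service shown above:

```shell
# Edit the config, then apply the changes by restarting the service
sudo systemctl restart cloudplow

# Optionally verify the service restarted cleanly
sudo systemctl status cloudplow
```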
"uploader": {
"google": {
"check_interval": 30,
"exclude_open_files": true,
"max_size_gb": 200,
"opened_excludes": [
"/downloads/"
],
"size_excludes": [
"downloads/*"
]
}
"check_interval":
How often (in minutes) Cloudplow checks the size of /mnt/local/Media
.
"max_size_gb":
Max size (in GB) Cloudplow allows /mnt/local/Media
to get before starting an upload task.
Note: Must not be less than 2 GB.
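For example, to check less often and upload in larger batches, you could raise both values in the block above, leaving the other keys unchanged (the numbers below are arbitrary illustrations, not recommendations):

```json
"check_interval": 60,
"max_size_gb": 400,
```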
## Plex Throttling

Cloudplow can throttle Rclone uploads during active, playing Plex streams (paused streams are ignored).
"plex": {
"enabled": false,
"url": "https://plex.domain.com",
"token": "",
"poll_interval": 60,
"max_streams_before_throttle": 1,
"rclone": {
"throttle_speeds": {
"1": "50M",
"2": "40M",
"3": "30M",
"4": "20M",
"5": "10M"
},
"url": "http://localhost:7949"
}
}
- `enabled`: Set to `true` to enable.

- `url`: Your Plex URL.

- `token`: Your Plex Access Token.

- `poll_interval`: How often (in seconds) Plex is checked for active streams.

- `max_streams_before_throttle`: How many playing streams are allowed before throttling is enabled.

- `rclone`:

  - `url`: Leave as default.

  - `throttle_speeds`: Option to configure upload speeds for various stream counts (where `"5"` represents 5 streams or more). `M` is MB/s.

    Format: `"STREAM COUNT": "THROTTLED UPLOAD SPEED",`
## Manual Runs

You can run a manual Cloudplow task from anywhere by using the `cloudplow` command.
To clean the hidden files and remove deleted files from the cloud:

```shell
cloudplow clean
```

To start uploading right away, regardless of what the folder size is:

```shell
cloudplow upload
```