# server config
using arguments or config files, or a mix of both:
- config files (`-c some.conf`) can set additional commandline arguments; see ./docs/example.conf and ./docs/example2.conf
- `kill -s USR1` (same as `systemctl reload copyparty`) to reload accounts and volumes from config files without restarting
  - or click the `[reload cfg]` button in the control-panel if the user has `a`/admin in any volume
- changes to the `[global]` config section require a restart to take effect
NB: as humongous as this readme is, there are also a lot of undocumented features. Run copyparty with `--help` to see all available global options; all of those can be used in the `[global]` section of config files, and everything listed in `--help-flags` can be used in volumes as volflags.
- if running in docker/podman, try this: `docker run --rm -it copyparty/ac --help`
- or see this (probably outdated): https://ocv.me/copyparty/helptext.html
- or if you prefer plaintext, https://ocv.me/copyparty/helptext.txt
announce enabled services on the LAN (pic) -- `-z` enables both mdns and ssdp
- `--z-on` / `--z-off` limits the feature to certain networks
config file example:
[global]
z # enable all zeroconf features (mdns, ssdp)
zm # only enables mdns (does nothing since we already have z)
z-on: 192.168.0.0/16, 10.1.2.0/24 # restrict to certain subnets
LAN domain-name and feature announcer
uses multicast dns to give copyparty a domain which any machine on the LAN can use to access it
all enabled services (webdav, ftp, smb) will appear in mDNS-aware file managers (KDE, gnome, macOS, ...)
the domain will be `partybox.local` if the machine's hostname is `partybox`, unless `--name` specifies something else, and the web-UI will be available at http://partybox.local:3923/
- if you want to get rid of the `:3923` so you can use http://partybox.local/ instead, see listen on port 80 and 443
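config file example; a minimal sketch of the options above (`partybox` is just the example hostname from this section):

[global]
  z                # enable zeroconf (mdns + ssdp)
  name: partybox   # announce as partybox.local regardless of the machine hostname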
windows-explorer announcer
uses ssdp to make copyparty appear in the windows file explorer on all machines on the LAN
doubleclicking the icon opens the "connect" page which explains how to mount copyparty as a local filesystem
if copyparty does not appear in windows explorer, use `--zsv` to see why:
- maybe the discovery multicast was sent from an IP which does not intersect with the server subnets
print a qr-code (screenshot) for quick access, great between phones on android hotspots which keep changing the subnet
- `--qr` enables it
- `--qrs` does https instead of http
- `--qrl lootbox/?pw=hunter2` appends to the url, linking to the `lootbox` folder with password `hunter2`
- `--qrz 1` forces 1x zoom instead of autoscaling to fit the terminal size
  - 1x may render incorrectly on some terminals/fonts, but 2x should always work

it uses the server hostname if mdns is enabled, otherwise it'll use your external ip (default route), unless `--qri` specifies a specific ip-prefix or domain
an FTP server can be started using `--ftp 3921`, and/or `--ftps` for explicit TLS (ftpes)
- based on pyftpdlib
- needs a dedicated port (cannot share with the HTTP/HTTPS API)
- uploads are not resumable -- delete and restart if necessary
- runs in active mode by default, you probably want `--ftp-pr 12000-13000`
  - if you enable both `ftp` and `ftps`, the port-range will be divided in half
  - some older software (filezilla on debian-stable) cannot do passive-mode with TLS
- login with any username + your password, or put your password in the username field
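config file example; a rough sketch of the flags above in config-file form (port numbers are just the examples used in this section):

[global]
  ftp: 3921               # enable plaintext FTP on port 3921
  ftps: 3990              # enable explicit-TLS FTP (ftpes) on port 3990
  ftp-pr: 12000-13000     # passive-mode port range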
some recommended FTP / FTPS clients; `wark` = example password:
- https://winscp.net/eng/download.php
- https://filezilla-project.org/ struggles a bit with ftps in active-mode, but is fine otherwise
- https://rclone.org/ does FTPS with `tls=false explicit_tls=true`
- `lftp -u k,wark -p 3921 127.0.0.1 -e ls`
- `lftp -u k,wark -p 3990 127.0.0.1 -e 'set ssl:verify-certificate no; ls'`
a WebDAV server with read-write support; works with winXP and later, macos, nautilus/gvfs ... a great way to access copyparty straight from the file explorer in your OS
click the connect button in the control-panel to see connection instructions for windows, linux, macos
general usage:
- login with any username + your password, or put your password in the username field (password field can be empty/whatever)
on macos, connect from finder:
- [Go] -> [Connect to Server...] -> http://192.168.123.1:3923/
in order to grant full write-access to webdav clients, the volflag `daw` must be set and the account must also have delete-access (otherwise the client won't be allowed to replace the contents of existing files, which is how webdav works)
note: if you have enabled IdP authentication then that may cause issues for some/most webdav clients; see the webdav section in the IdP docs
using the GUI (winXP or later):
- rightclick [my computer] -> [map network drive] -> Folder: `http://192.168.123.1:3923/`
  - on winXP only, click the `Sign up for online storage` hyperlink instead and put the URL there
- providing your password as the username is recommended; the password field can be anything or empty
the webdav client that's built into windows has the following list of bugs; you can avoid all of these by connecting with rclone instead:
- win7+ doesn't actually send the password to the server when reauthenticating after a reboot unless you first try to login with an incorrect password and then switch to the correct password
  - or just type your password into the username field instead to get around it entirely
- connecting to a folder which allows anonymous read will make writing impossible, as windows has decided it doesn't need to login
  - workaround: connect twice; first to a folder which requires auth, then to the folder you actually want, and leave both of those mounted
  - or set the server-option `--dav-auth` to force password-auth for all webdav clients
- win7+ may open a new tcp connection for every file and sometimes forgets to close them, eventually needing a reboot
  - maybe NIC-related (??), happens with win10-ltsc on e1000e but not virtio
- windows cannot access folders which contain filenames with invalid unicode or forbidden characters (`<>:"/\|?*`), or names ending with `.`
- winxp cannot show unicode characters outside of some range
  - latin-1 is fine, hiragana is not (not even as shift-jis on japanese xp)
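as mentioned above, rclone avoids all of these; a rough sketch of one way to set it up (the remote name `party` and the `k`/`wark` credentials are placeholders, and depending on your rclone version the password may need to be run through `rclone obscure` first):

rclone config create party webdav url=http://192.168.123.1:3923/ vendor=other user=k pass=wark
rclone mount party: X:    # windows drive letter; on linux/macos use an empty folder instead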
a TFTP server (read/write) can be started using `--tftp 3969` (you probably want ftp instead unless you are actually communicating with hardware from the 90s (in which case we should definitely hang some time))
that makes this the first RTX DECT Base that has been updated using copyparty 🎉
- based on partftpy
- no accounts; read from world-readable folders, write to world-writable, overwrite in world-deletable
- needs a dedicated port (cannot share with the HTTP/HTTPS API)
- run as root (or see below) to use the spec-recommended port `69` (nice)
- can reply from a predefined portrange (good for firewalls)
- only supports the binary/octet/image transfer mode (no netascii)
- RFC 7440 is not supported, so will be extremely slow over WAN
  - assuming default blksize (512), expect 1100 KiB/s over 100BASE-T, 400-500 KiB/s over wifi, 200 on bad wifi
most clients expect to find TFTP on port 69, but on linux and macos you need to be root to listen on that. Alternatively, listen on 3969 and use NAT on the server to forward 69 to that port;
- on linux: `iptables -t nat -A PREROUTING -i eth0 -p udp --dport 69 -j REDIRECT --to-port 3969`
some recommended TFTP clients:
- curl (cross-platform, read/write)
  - get: `curl --tftp-blksize 1428 tftp://127.0.0.1:3969/firmware.bin`
  - put: `curl --tftp-blksize 1428 -T firmware.bin tftp://127.0.0.1:3969/`
- windows: `tftp.exe` (you probably already have it)
  - `tftp -i 127.0.0.1 put firmware.bin`
- linux: `tftp-hpa`, `atftp`
  - `atftp --option "blksize 1428" 127.0.0.1 3969 -p -l firmware.bin -r firmware.bin`
  - `tftp -v -m binary 127.0.0.1 3969 -c put firmware.bin`
unsafe, slow, not recommended for wan; enable with `--smb` for read-only or `--smbw` for read-write
click the connect button in the control-panel to see connection instructions for windows, linux, macos
dependencies: `python3 -m pip install --user -U impacket==0.11.0`
- newer versions of impacket will hopefully work just fine but there is monkeypatching so maybe not
some BIG WARNINGS specific to SMB/CIFS, in decreasing importance:
- not entirely confident that read-only is read-only
- the smb backend is not fully integrated with vfs, meaning there could be security issues (path traversal). Please use `--smb-port` (see below) and prisonparty or bubbleparty
- account passwords work per-volume as expected, and so do account permissions (read/write/move/delete), but `--smbw` must be given to allow write-access from smb
- shadowing probably works as expected but no guarantees

and some minor issues,
- clients only see the first ~400 files in big folders
  - this was originally due to impacket#1433 which was fixed in impacket-0.12, so you can disable the workaround with `--smb-nwa-1` but then you get unacceptably poor performance instead
- hot-reload of server config (`/?reload=cfg`) does not include the `[global]` section (commandline args)
- listens on the first IPv4 `-i` interface only (default = :: = 0.0.0.0 = all)
- login doesn't work on winxp, but anonymous access is ok -- remove all accounts from copyparty config for that to work
  - win10 onwards does not allow connecting anonymously / without accounts
- python3 only
- slow (the builtin webdav support in windows is 5x faster, and rclone-webdav is 30x faster)
known client bugs:
- on win7 only, `--smb1` is much faster than smb2 (default) because it keeps rescanning folders on smb2
  - however smb1 is buggy and is not enabled by default on win10 onwards
- windows cannot access folders which contain filenames with invalid unicode or forbidden characters (`<>:"/\|?*`), or names ending with `.`
the smb protocol listens on TCP port 445, which is a privileged port on linux and macos and would require running copyparty as root. However, this can be avoided by listening on another port using `--smb-port 3945` and then using NAT on the server to forward the traffic from 445 to there;
- on linux: `iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 445 -j REDIRECT --to-port 3945`
authenticate with one of the following:
- username `$username`, password `$password`
- username `$password`, password `k`
tweaking the ui
- set default sort order globally with `--sort` or per-volume with the `sort` volflag; specify one or more comma-separated columns to sort by, and prefix a column name with `-` for reverse sort
  - the column names you can use are visible as tooltips when hovering over the column headers in the directory listing, for example `href ext sz ts tags/.up_at tags/Circle tags/.tn tags/Artist tags/Title`
  - to sort in music order (album, track, artist, title) with filename as fallback, you could `--sort tags/Circle,tags/.tn,tags/Artist,tags/Title,href`
  - to sort by upload date, first enable showing the upload date in the listing with `-e2d -mte +.up_at` and then `--sort tags/.up_at`
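config file example; the music-order sort from above, as a global default:

[global]
  sort: tags/Circle,tags/.tn,tags/Artist,tags/Title,href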
see ./docs/rice for more, including how to add stuff (css/`<meta>`/...) to the html `<head>` tag, or to add your own translation
discord and social-media embeds can be enabled globally with `--og` or per-volume with volflag `og`

note that this disables hotlinking because the opengraph spec demands it; to sneak past this intentional limitation, you can enable opengraph selectively by user-agent, for example `--og-ua '(Discord|Twitter|Slack)bot'` (or volflag `og_ua`)

you can also hotlink files regardless by appending `?raw` to the url

if you want to entirely replace the copyparty response with your own jinja2 template, give the template filepath to `--og-tpl` or volflag `og_tpl` (all members of `HttpCli` are available through the `this` object)
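config file example; a sketch combining the options above:

[global]
  og                                  # enable discord/social-media embeds
  og-ua: (Discord|Twitter|Slack)bot   # ...but only for these user-agents, so regular hotlinking keeps working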
enable symlink-based upload deduplication globally with `--dedup` or per-volume with volflag `dedup`

by default, when someone tries to upload a file that already exists on the server, the upload will be politely declined, and the server will copy the existing file over to where the upload would have gone

if you enable deduplication with `--dedup` then it'll create a symlink instead of a full copy, thus reducing disk space usage
- on the contrary, if your server is hooked up to s3-glacier or similar storage where reading is expensive, and you cannot use `--safe-dedup=1` because you have other software tampering with your files, and you want to entirely disable detection of duplicate data instead, then you can specify `--no-clone` globally or `noclone` as a volflag

warning: when enabling dedup, you should also:
- enable indexing with `-e2dsa` or volflag `e2dsa` (see file indexing section below); strongly recommended
- ...and/or `--hardlink-only` to use hardlink-based deduplication instead of symlinks; see explanation below
it will not be safe to rename/delete files if you only enable dedup and none of the above; if you enable indexing then it is not necessary to also do hardlinks (but you may still want to)
by default, deduplication is done based on symlinks (symbolic links); these are tiny files which are pointers to the nearest full copy of the file
you can choose to use hardlinks instead of softlinks, globally with `--hardlink-only` or volflag `hardlinkonly`;
advantages of using hardlinks:
- hardlinks are more compatible with other software; they behave entirely like regular files
- you can safely move and rename files using other file managers
- symlinks need to be managed by copyparty to ensure the destinations remain correct
advantages of using symlinks (default):
- each symlink can have its own last-modified timestamp, but a single timestamp is shared by all hardlinks
- symlinks make it more obvious to other software that the file is not a regular file, so this can be less dangerous
- hardlinks look like regular files, so other software may assume they are safe to edit without affecting the other copies
warning: if you edit the contents of a deduplicated file, then you will also edit all other copies of that file! This is especially surprising with hardlinks, because they look like regular files, but that same file exists in multiple locations
global-option `--xlink` / volflag `xlink` additionally enables deduplication across volumes, but this is probably buggy and not recommended
config file example:
[global]
e2dsa # scan and index filesystem on startup
dedup # symlink-based deduplication for all volumes
[/media]
/mnt/nas/media
flags:
hardlinkonly # this vol does hardlinks instead of symlinks
enable music search, upload-undo, and better dedup
file indexing relies on two database tables, the up2k filetree (`-e2d`) and the metadata tags (`-e2t`), stored in `.hist/up2k.db`. Configuration can be done through arguments, volflags, or a mix of both.
through arguments:
- `-e2d` enables file indexing on upload
- `-e2ds` also scans writable folders for new files on startup
- `-e2dsa` also scans all mounted volumes (including readonly ones)
- `-e2t` enables metadata indexing on upload
- `-e2ts` also scans for tags in all files that don't have tags yet
- `-e2tsr` also deletes all existing tags, doing a full reindex
- `-e2v` verifies file integrity at startup, comparing hashes from the db
- `-e2vu` patches the database with the new hashes from the filesystem
- `-e2vp` panics and kills copyparty instead
the same arguments can be set as volflags, in addition to `d2d`, `d2ds`, `d2t`, `d2ts`, `d2v` for disabling:
- `-v ~/music::r:c,e2ds,e2tsr` does a full reindex of everything on startup
- `-v ~/music::r:c,d2d` disables all indexing, even if any `-e2*` are on
- `-v ~/music::r:c,d2t` disables all `-e2t*` (tags), does not affect `-e2d*`
- `-v ~/music::r:c,d2ds` disables on-boot scans; only index new uploads
- `-v ~/music::r:c,d2ts` same except only affecting tags
note:
- upload-times can be displayed in the file listing by enabling the `.up_at` metadata key, either globally with `-e2d -mte +.up_at` or per-volume with volflags `e2d,mte=+.up_at` (will have a ~17% performance impact on directory listings)
- `e2tsr` is probably always overkill, since `e2ds`/`e2dsa` would pick up any file modifications and `e2ts` would then reindex those, unless there is a new copyparty version with new parsers and the release note says otherwise
config file example (these options are recommended btw):
[global]
e2dsa # scan and index all files in all volumes on startup
e2ts # check newly-discovered or uploaded files for media tags
to save some time, you can provide a regex pattern for filepaths to only index by filename/path/size/last-modified (and not the hash of the file contents) by setting `--no-hash '\.iso$'` or the volflag `:c,nohash=\.iso$`; this has the following consequences:
- initial indexing is way faster, especially when the volume is on a network disk
- makes it impossible to file-search
- if someone uploads the same file contents, the upload will not be detected as a dupe, so it will not get symlinked or rejected
similarly, you can fully ignore files/folders using `--no-idx [...]` and `:c,noidx=\.iso$`

NOTE: `no-idx` and/or `no-hash` prevents deduplication of those files
- when running on macos, all the usual apple metadata files are excluded by default

if you set `--no-hash [...]` globally, you can enable hashing for specific volumes using flag `:c,nohash=`
to exclude certain filepaths from search-results, use `--srch-excl` or volflag `srch_excl` instead of `--no-idx`, for example `--srch-excl 'password|logs/[0-9]'`
config file example:
[/games]
/mnt/nas/games
flags:
noidx: \.iso$ # skip indexing iso-files
srch_excl: password|logs/[0-9] # filter search results
avoid traversing into other filesystems using `--xdev` / volflag `:c,xdev`, skipping any symlinks or bind-mounts to another HDD for example

and/or you can use `--xvol` / `:c,xvol` to ignore all symlinks leaving the volume's top directory, but still allow bind-mounts pointing elsewhere
- symlinks are permitted with `xvol` if they point into another volume where the user has the same level of access
these options will reduce performance; unlikely worst-case estimates are 14% reduction for directory listings, 35% for download-as-tar
as of copyparty v1.7.0 these options also prevent file access at runtime -- in previous versions they were just hints for the indexer
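config file example (a sketch; the volume path is hypothetical):

[/media]
  /mnt/nas/media
  flags:
    xdev   # don't traverse into other filesystems
    xvol   # don't follow symlinks leaving the volume's top directory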
filesystem monitoring; if copyparty is not the only software doing stuff on your filesystem, you may want to enable periodic rescans to keep the index up to date
argument `--re-maxage 60` will rescan all volumes every 60 sec, same as volflag `:c,scan=60` to specify it per-volume
uploads are disabled while a rescan is happening, so rescans will be delayed by `--db-act` (default 10 sec) when there is write-activity going on (uploads, renames, ...)
note: folder-thumbnails are selected during filesystem indexing, so periodic rescans can be used to keep them accurate as images are uploaded/deleted (or manually do a rescan with the `reload` button in the control-panel)
config file example:
[global]
re-maxage: 3600
[/pics]
/mnt/nas/pics
flags:
scan: 900
set upload rules using volflags, some examples:
- `:c,sz=1k-3m` sets allowed filesize between 1 KiB and 3 MiB inclusive (suffixes: `b`, `k`, `m`, `g`)
- `:c,df=4g` blocks uploads if there would be less than 4 GiB free disk space afterwards
- `:c,vmaxb=1g` blocks uploads if total volume size would exceed 1 GiB afterwards
- `:c,vmaxn=4k` blocks uploads if volume would contain more than 4096 files afterwards
- `:c,nosub` disallows uploading into subdirectories; goes well with `rotn` and `rotf`:
- `:c,rotn=1000,2` moves uploads into subfolders, up to 1000 files in each folder before making a new one, two levels deep (must be at least 1)
- `:c,rotf=%Y/%m/%d/%H` enforces files to be uploaded into a structure of subfolders according to that date format
  - if someone uploads to `/foo/bar` the path would be rewritten to `/foo/bar/2021/08/06/23` for example
  - but the actual value is not verified, just the structure, so the uploader can choose any values which conform to the format string
    - just to avoid additional complexity in up2k which is enough of a mess already
- `:c,lifetime=300` deletes uploaded files when they become 5 minutes old
you can also set transaction limits which apply per-IP and per-volume, but these assume `-j 1` (default); otherwise the limits will be off, for example `-j 4` would allow anywhere between 1x and 4x the limits you set depending on which processing node the client gets routed to
- `:c,maxn=250,3600` allows 250 files over 1 hour from each IP (tracked per-volume)
- `:c,maxb=1g,300` allows 1 GiB total over 5 minutes from each IP (tracked per-volume)
notes:
- `vmaxb` and `vmaxn` require either the `e2ds` volflag or the `-e2dsa` global-option
config file example:
[/inc]
/mnt/nas/uploads
accs:
w: * # anyone can upload here
rw: ed # only user "ed" can read-write
flags:
e2ds # filesystem indexing is required for many of these:
sz: 1k-3m # accept upload only if filesize in this range
df: 4g # free disk space cannot go lower than this
vmaxb: 1g # volume can never exceed 1 GiB
vmaxn: 4k # ...or 4096 files, whichever comes first
nosub # must upload to toplevel folder
lifetime: 300 # uploads are deleted after 5min
maxn: 250,3600 # each IP can upload 250 files in 1 hour
maxb: 1g,300 # each IP can upload 1 GiB over 5 minutes
files can be autocompressed on upload, either on user-request (if config allows) or forced by server-config
- volflag `gz` allows gz compression
- volflag `xz` allows lzma compression
- volflag `pk` forces compression on all files
- url parameter `pk` requests compression with server-default algorithm
- url parameter `gz` or `xz` requests compression with a specific algorithm
- url parameter `xz` requests xz compression
things to note,
- the `gz` and `xz` arguments take a single optional argument, the compression level (range 0 to 9)
- the `pk` volflag takes the optional argument `ALGORITHM,LEVEL` which will then be forced for all uploads, for example `gz,9` or `xz,0`
- default compression is gzip level 9
- all upload methods except up2k are supported
- the files will be indexed after compression, so dupe-detection and file-search will not work as expected
some examples,
- `-v inc:inc:w:c,pk=xz,0` folder named inc, shared at /inc, write-only for everyone, forces xz compression at level 0
- `-v inc:inc:w:c,pk` same write-only inc, but forces gz compression (default) instead of xz
- `-v inc:inc:w:c,gz` allows (but does not force) gz compression if client uploads to `/inc?pk` or `/inc?gz` or `/inc?gz=4`
- `:c,magic` enables filetype detection for nameless uploads, same as `--magic`
  - needs https://pypi.org/project/python-magic/ -- `python3 -m pip install --user -U python-magic`
  - on windows grab this instead: `python3 -m pip install --user -U python-magic-bin`
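config file example; a sketch of the first commandline example above in config-file form (the filesystem path is hypothetical):

[/inc]
  /mnt/nas/inc
  accs:
    w: *        # write-only for everyone
  flags:
    pk: xz,0    # force xz compression at level 0 on all uploads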
in-volume (`.hist/up2k.db`, default) or somewhere else

copyparty creates a subfolder named `.hist` inside each volume where it stores the database, thumbnails, and some other stuff

this can instead be kept in a single place using the `--hist` argument, or the `hist=` volflag, or a mix of both:
- `--hist ~/.cache/copyparty -v ~/music::r:c,hist=-` sets `~/.cache/copyparty` as the default place to put volume info, but `~/music` gets the regular `.hist` subfolder (`-` restores default behavior)
by default, the per-volume `up2k.db` sqlite3-database for `-e2d` and `-e2t` is stored next to the thumbnails according to the `--hist` option, but the global-option `--dbpath` and/or volflag `dbpath` can be used to put the database somewhere else
if your storage backend is unreliable (NFS or bad HDDs), you can specify one or more "landmarks" to look for before doing anything database-related. A landmark is a file which is always expected to exist inside the volume. This avoids spurious filesystem rescans in the event of an outage. One line per landmark (see example below)
note:
- putting the hist-folders on an SSD is strongly recommended for performance
- markdown edits are always stored in a local `.hist` subdirectory
- on windows the volflag path is cyglike, so `/c/temp` means `C:\temp` but use regular paths for `--hist`
  - you can use cygpaths for volumes too, `-v C:\Users::r` and `-v /c/users::r` both work
config file example:
[global]
hist: ~/.cache/copyparty # put db/thumbs/etc. here by default
[/pics]
/mnt/nas/pics
flags:
hist: - # restore the default (/mnt/nas/pics/.hist/)
hist: /mnt/nas/cache/pics/ # can be absolute path
landmark: me.jpg # /mnt/nas/pics/me.jpg must be readable to enable db
landmark: info/a.txt^=ok # and this textfile must start with "ok"
set `-e2t` to index tags on upload

`-mte` decides which tags to index and display in the browser (and also the display order); this can be changed per-volume:
- `-v ~/music::r:c,mte=title,artist` indexes and displays title followed by artist

if you add/remove a tag from `mte` you will need to run with `-e2tsr` once to rebuild the database, otherwise only new files will be affected

but instead of using `-mte`, `-mth` is a better way to hide tags in the browser: these tags will not be displayed by default, but they still get indexed and become searchable, and users can choose to unhide them in the [⚙️] config pane

`-mtm` can be used to add or redefine a metadata mapping; say you have media files with `foo` and `bar` tags and you want them to display as `qux` in the browser (preferring `foo` if both are present), then do `-mtm qux=foo,bar` and now you can `-mte artist,title,qux`
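config file example; the `qux` remap from above, as global options:

[global]
  e2t                      # index tags on upload
  mtm: qux=foo,bar         # display foo/bar as "qux", preferring foo
  mte: artist,title,qux    # which tags to index and display, in this order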
tags that start with a `.` such as `.bpm` and `.dur`(ation) indicate numeric value

see the beautiful mess of a dictionary in mtag.py for the default mappings (should cover mp3, opus, flac, m4a, wav, aif, ...)
`--no-mutagen` disables Mutagen and uses FFprobe instead, which...
- is about 20x slower than Mutagen
- catches a few tags that Mutagen doesn't
  - melodic key, video resolution, framerate, pixfmt
- avoids pulling any GPL code into copyparty
- more importantly runs FFprobe on incoming files, which is bad if your FFmpeg has a cve
`--mtag-to` sets the tag-scan timeout; the default is very high (60 sec) to cater for zfs and other randomly-freezing filesystems. Lower values like 10 are usually safe, allowing for faster processing of tricky files
provide custom parsers to index additional tags, also see ./bin/mtag/README.md
copyparty can invoke external programs to collect additional metadata for files using `mtp` (either as argument or volflag); there is a default timeout of 60sec, and only files which contain audio get analyzed by default (see ay/an/ad below)
- `-mtp .bpm=~/bin/audio-bpm.py` will execute `~/bin/audio-bpm.py` with the audio file as argument 1 to provide the `.bpm` tag, if that does not exist in the audio metadata
- `-mtp key=f,t5,~/bin/audio-key.py` uses `~/bin/audio-key.py` to get the `key` tag, replacing any existing metadata tag (`f,`), aborting if it takes longer than 5sec (`t5,`)
- `-v ~/music::r:c,mtp=.bpm=~/bin/audio-bpm.py:c,mtp=key=f,t5,~/bin/audio-key.py` both as a per-volume config, wow this is getting ugly
but wait, there's more! `-mtp` can be used for non-audio files as well using the `a` flag: `ay` only do audio files (default), `an` only do non-audio files, or `ad` do all files (d as in dontcare)
- "audio file" also means videos btw, as long as there is an audio stream
- `-mtp ext=an,~/bin/file-ext.py` runs `~/bin/file-ext.py` to get the `ext` tag only if file is not audio (`an`)
- `-mtp arch,built,ver,orig=an,eexe,edll,~/bin/exe.py` runs `~/bin/exe.py` to get properties about windows-binaries only if file is not audio (`an`) and file extension is exe or dll
- if you want to daisychain parsers, use the `p` flag to set processing order
  - `-mtp foo=p1,~/a.py` runs before `-mtp foo=p2,~/b.py` and will forward all the tags detected so far as json to the stdin of b.py
- option `c0` disables capturing of stdout/stderr, so copyparty will not receive any tags from the process at all -- instead the invoked program is free to print whatever to the console, just using copyparty as a launcher
  - `c1` captures stdout only, `c2` only stderr, and `c3` (default) captures both
- you can control how the parser is killed if it times out with option `kt` killing the entire process tree (default), `km` just the main process, or `kn` let it continue running until copyparty is terminated
if something doesn't work, try `--mtag-v` for verbose error messages
config file example; note that `mtp` is an additive option, so all of the mtp options will take effect:
[/music]
/mnt/nas/music
flags:
mtp: .bpm=~/bin/audio-bpm.py # assign ".bpm" (numeric) with script
mtp: key=f,t5,~/bin/audio-key.py # force/overwrite, 5sec timeout
mtp: ext=an,~/bin/file-ext.py # will only run on non-audio files
mtp: arch,built,ver,orig=an,eexe,edll,~/bin/exe.py # only exe/dll
trigger a program on uploads, renames etc (examples)
you can set hooks before and/or after an event happens, and currently you can hook uploads, moves/renames, and deletes
there's a bunch of flags and stuff, see `--help-hooks`
if you want to write your own hooks, see devnotes
event-hooks can send zeromq messages instead of running programs
to send a 0mq message every time a file is uploaded,
- `--xau zmq:pub:tcp://*:5556` sends a PUB to any/all connected SUB clients
- `--xau t3,zmq:push:tcp://*:5557` sends a PUSH to exactly one connected PULL client
- `--xau t3,j,zmq:req:tcp://localhost:5555` sends a REQ to the connected REP client

the PUSH and REQ examples have `t3` (timeout after 3 seconds) because they block if there are no clients to talk to
- the REQ example does `t3,j` to send extended upload-info as json instead of just the filesystem-path
see zmq-recv.py if you need something to receive the messages with
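for a quick test without zmq-recv.py (which remains the maintained reference), a minimal SUB receiver could look roughly like this, assuming `pyzmq` is installed and matching the PUB example on port 5556 above:

import zmq  # python3 -m pip install --user -U pyzmq
ctx = zmq.Context()
sock = ctx.socket(zmq.SUB)
sock.connect("tcp://127.0.0.1:5556")        # the PUB endpoint from the --xau example
sock.setsockopt_string(zmq.SUBSCRIBE, "")   # subscribe to all messages
while True:
    print(sock.recv_string())               # without the j flag, the payload is the filesystem-path of the upload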
config file example; note that the hooks are additive options, so all of the xau options will take effect:
[global]
xau: zmq:pub:tcp://*:5556 # send a PUB to any/all connected SUB clients
xau: t3,zmq:push:tcp://*:5557 # send PUSH to exactly one connected PULL cli
xau: t3,j,zmq:req:tcp://localhost:5555 # send REQ to the connected REP cli
the older, more powerful approach (examples):
`-v /mnt/inc:inc:w:c,e2d,e2t,mte=+x1:c,mtp=x1=ad,kn,/usr/bin/notify-send`
that was the commandline example; here's the config file example:
[/inc]
/mnt/inc
accs:
w: *
flags:
e2d, e2t # enable indexing of uploaded files and their tags
mte: +x1
mtp: x1=ad,kn,/usr/bin/notify-send
so filesystem location `/mnt/inc` is shared at `/inc`, write-only for everyone, appending `x1` to the list of tags to index (`mte`), and using `/usr/bin/notify-send` to "provide" tag `x1` for any filetype (`ad`) with kill-on-timeout disabled (`kn`)
that'll run the command `notify-send` with the path to the uploaded file as the first and only argument (so on linux it'll show a notification on-screen)
note that this is way more complicated than the new event hooks but this approach has the following advantages:
- non-blocking and multithreaded; doesn't hold other uploads back
- you get access to tags from FFmpeg and other mtp parsers
- only trigger on new unique files, not dupes
note that it will occupy the parsing threads, so fork anything expensive (or set `kn` to have copyparty fork it for you) -- otoh if you want to intentionally queue/singlethread you can combine it with `--mtag-mt 1`

for reference, if you were to do this using event hooks instead, it would be like this: `-e2d --xau notify-send,hello,--`
redefine behavior with plugins (examples)
replace 404 and 403 errors with something completely different (that's it for now)
as for client-side stuff, there is plugins for modifying UI/UX
autologin based on IP range (CIDR), using the global-option `--ipu`

for example, if everyone with an IP that starts with `192.168.123` should automatically log in as the user `spartacus`, then you can either specify `--ipu=192.168.123.0/24=spartacus` as a commandline option, or put this in a config file:
[global]
ipu: 192.168.123.0/24=spartacus
repeat the option to map additional subnets
be careful with this one! if you have a reverseproxy, then you definitely want to make sure you have real-ip configured correctly, and it's probably a good idea to nullmap the reverseproxy's IP just in case; so if your reverseproxy is sending requests from `172.24.27.9` then that would be `--ipu=172.24.27.9/32=`
replace copyparty passwords with oauth and such
you can disable the built-in password-based login system, and instead replace it with a separate piece of software (an identity provider) which will then handle authenticating / authorizing of users; this makes it possible to login with passkeys / fido2 / webauthn / yubikey / ldap / active directory / oauth / many other single-sign-on contraptions
- the regular config-defined users will be used as a fallback for requests which don't include a valid (trusted) IdP username header
some popular identity providers are Authelia (config-file based) and authentik (GUI-based, more complex)
there is a docker-compose example which is hopefully a good starting point (alternatively see ./docs/idp.md if you're the DIY type)
a more complete example of the copyparty configuration options looks like this
but if you just want to let users change their own passwords, then you probably want user-changeable passwords instead
if permitted, users can change their own passwords in the control-panel
- not compatible with identity providers
- must be enabled with `--chpw` because account-sharing is a popular usecase
  - if you want to enable the feature but deny password-changing for a specific list of accounts, you can do that with `--chpw-no name1,name2,name3,...`
- to perform a password reset, edit the server config and give the user another password there, then do a config reload or server restart
- the custom passwords are kept in a textfile at filesystem-path `--chpw-db`, by default `chpw.json` in the copyparty config folder
  - if you run multiple copyparty instances with different users you almost definitely want to specify separate DBs for each instance
- if password hashing is enabled, the passwords in the db are also hashed
  - ...which means that all user-defined passwords will be forgotten if you change password-hashing settings
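config file example (a sketch; the account names are placeholders):

[global]
  chpw                   # let users change their own passwords
  chpw-no: name1,name2   # ...except for these shared accounts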
connecting to an aws s3 bucket and similar
there is no built-in support for this, but you can use FUSE-software such as rclone / geesefs / JuiceFS to first mount your cloud storage as a local disk, and then let copyparty use (a folder in) that disk as a volume
if copyparty is unable to access the local folder that rclone/geesefs/JuiceFS provides (for example if it looks invisible) then you may need to run rclone with `--allow-other` and/or enable `user_allow_other` in `/etc/fuse.conf`
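for example, a rough sketch of serving (a folder in) an s3 bucket through rclone (the remote name `mys3` and all paths are hypothetical):

rclone mount mys3:my-bucket /mnt/s3 --allow-other --vfs-cache-mode writes &
python3 copyparty-sfx.py -v /mnt/s3/share:/share:r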
you will probably get decent speeds with the default config, however most likely restricted to using one TCP connection per file, so the upload-client won't be able to send multiple chunks in parallel
before v1.13.5 it was recommended to use the volflag `sparse` to force-allow multiple chunks in parallel; this would improve the upload-speed from `1.5 MiB/s` to over `80 MiB/s` at the risk of provoking latent bugs in S3 or JuiceFS. But v1.13.5 added chunk-stitching, so this is now probably much less important. On the contrary, `nosparse` may now increase performance in some cases. Please try all three options (default, `sparse`, `nosparse`) as the optimal choice depends on your network conditions and software stack (both the FUSE-driver and cloud-server)
someone has also tested geesefs in combination with gocryptfs with surprisingly good results, getting 60 MiB/s upload speeds on a gbit line, but JuiceFS won with 80 MiB/s using its built-in encryption
you may improve performance by specifying larger values for `--iobuf` / `--s-rd-sz` / `--s-wr-sz`
if you've experimented with this and made interesting observations, please share your findings so we can add a section with specific recommendations :-)
tell search engines you don't wanna be indexed, either using the good old robots.txt or through copyparty settings:
- `--no-robots` adds HTTP (`X-Robots-Tag`) and HTML (`<meta>`) headers with `noindex, nofollow` globally
- volflag `[...]:c,norobots` does the same thing for that single volume
- volflag `[...]:c,robots` ALLOWS search-engine crawling for that volume, even if `--no-robots` is set globally

also, `--force-js` disables the plain HTML folder listing, making things harder to parse for some search engines -- note that crawlers which understand javascript (such as google) will not be affected
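config file example (a sketch; the `/pub` volume is hypothetical):

[global]
  no-robots       # ask crawlers to stay away by default
[/pub]
  /mnt/nas/pub
  flags:
    robots        # ...but allow crawling of this one volume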
you can change the default theme with `--theme 2`, and add your own themes by modifying `browser.css` or providing your own css to `--css-browser`, then telling copyparty they exist by increasing `--themes`
the classname of the HTML tag is set according to the selected theme, which is used to set colors as css variables
- each theme generally has a dark theme (even numbers) and a light theme (odd numbers), showing in pairs
- the first theme (theme 0 and 1) is `html.a`, second theme (2 and 3) is `html.b`
- if a light theme is selected, `html.y` is set, otherwise `html.z` is
  - so if the dark edition of the 2nd theme is selected, you can use any of `html.b`, `html.z`, `html.bz` to specify rules
see the top of ./copyparty/web/browser.css where the color variables are set, and there's layout-specific stuff near the bottom
if you want to change the fonts, see ./docs/rice/
- see running on windows for a fancy windows setup
  - or use any of the examples below, just replace `python copyparty-sfx.py` with `copyparty.exe` if you're using the exe edition

- allow anyone to download or upload files into the current folder: `python copyparty-sfx.py`
  - enable searching and music indexing with `-e2dsa -e2ts`
  - start an FTP server on port 3921 with `--ftp 3921`
  - announce it on your LAN with `-z` so it appears in windows/Linux file managers

- anyone can upload, but nobody can see any files (even the uploader): `python copyparty-sfx.py -e2dsa -v .::w`
  - block uploads if there's less than 4 GiB free disk space with `--df 4`
  - show a popup on new uploads with `--xau bin/hooks/notify.py`

- anyone can upload, and receive "secret" links for each upload they do: `python copyparty-sfx.py -e2dsa -v .::wG:c,fk=8`

- anyone can browse (`r`), only `kevin` (password `okgo`) can upload/move/delete (`A`) files: `python copyparty-sfx.py -e2dsa -a kevin:okgo -v .::r:A,kevin`

- read-only music server: `python copyparty-sfx.py -v /mnt/nas/music:/music:r -e2dsa -e2ts --no-robots --force-js --theme 2`
  - ...with bpm and key scanning: `-mtp .bpm=f,audio-bpm.py -mtp key=f,audio-key.py`
  - ...with a read-write folder for `kevin` whose password is `okgo`: `-a kevin:okgo -v /mnt/nas/inc:/inc:rw,kevin`
  - ...with logging to disk: `-lo log/cpp-%Y-%m%d-%H%M%S.txt.xz`
become a real webserver which people can access by just going to your IP or domain without specifying a port
if you're on windows, then you just need to add the commandline argument `-p 80,443` and you're done! nice
if you're on macos, sorry, I don't know
if you're on Linux, you have the following 4 options:
- option 1: set up a reverse-proxy -- this one makes a lot of sense if you're running on a proper headless server, because that way you get real HTTPS too
- option 2: NAT to port 3923 -- this is cumbersome since you'll need to do it every time you reboot, and the exact command may depend on your linux distribution:
  - `iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3923`
  - `iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3923`
- option 3: disable the security policy which prevents the use of 80 and 443; this is probably fine:
  - `setcap CAP_NET_BIND_SERVICE=+eip $(realpath $(which python))`
  - `python copyparty-sfx.py -p 80,443`
- option 4: run copyparty as root (please don't)
running copyparty next to other websites hosted on an existing webserver such as nginx, caddy, or apache
you can either:
- give copyparty its own domain or subdomain (recommended)
- or do location-based proxying, using `--rp-loc=/stuff` to tell copyparty where it is mounted -- has a slight performance cost and higher chance of bugs
  - if copyparty says `incorrect --rp-loc or webserver config; expected vpath starting with [...]` it's likely because the webserver is stripping away the proxy location from the request URLs -- see the `ProxyPass` in the apache example below
when running behind a reverse-proxy (this includes services like cloudflare), it is important to configure real-ip correctly, as many features rely on knowing the client's IP. Look out for red and yellow log messages which explain how to do this. But basically, set `--xff-hdr` to the name of the http header to read the IP from (usually `x-forwarded-for`, but cloudflare uses `cf-connecting-ip`), and then `--xff-src` to the IP of the reverse-proxy so copyparty will trust the xff-hdr. Note that `--rp-loc` in particular will not work at all unless you do this
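config file example; a sketch assuming a reverse-proxy on the same machine (adjust `xff-src` to wherever your proxy actually connects from):

[global]
  xff-hdr: x-forwarded-for   # header to read the client IP from
  xff-src: 127.0.0.1/32      # only trust that header from this address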
some reverse proxies (such as Caddy) can automatically obtain a valid https/tls certificate for you, and some support HTTP/2 and QUIC which could be a nice speed boost, depending on a lot of factors
- warning: nginx-QUIC (HTTP/3) is still experimental and can make uploads much slower, so HTTP/1.1 is recommended for now
- depending on server/client, HTTP/1.1 can also be 5x faster than HTTP/2
for improved security (and a 10% performance boost) consider listening on a unix-socket with `-i unix:770:www:/tmp/party.sock` (permission `770` means only members of group `www` can access it)
example webserver / reverse-proxy configs:
- apache config
- caddy uds: `caddy reverse-proxy --from :8080 --to unix///dev/shm/party.sock`
- caddy tcp: `caddy reverse-proxy --from :8081 --to http://127.0.0.1:3923`
- haproxy config
- lighttpd subdomain -- entire domain/subdomain
- lighttpd subpath -- location-based (not optimal, but in case you need it)
- nginx config -- recommended
- traefik config
teaching copyparty how to see client IPs when running behind a reverse-proxy, or a WAF, or another protection service such as cloudflare
if you (and maybe everybody else) keep getting a message that says `thank you for playing`, then you've gotten banned for malicious traffic. This ban applies to the IP address that copyparty thinks identifies the shady client -- so, depending on your setup, you might have to tell copyparty where to find the correct IP
for most common setups, there should be a helpful message in the server-log explaining what to do, but see docs/xff.md if you want to learn more, including a quick hack to just make it work (which is not recommended, but hey...)
most reverse-proxies support connecting to copyparty either using uds/unix-sockets (`/dev/shm/party.sock`, faster/recommended) or using tcp (`127.0.0.1`)
with copyparty listening on a uds / unix-socket / unix-domain-socket and the reverse-proxy connecting to that:
index.html | upload | download | software |
---|---|---|---|
28'900 req/s | 6'900 MiB/s | 7'400 MiB/s | no-proxy |
18'750 req/s | 3'500 MiB/s | 2'370 MiB/s | haproxy |
9'900 req/s | 3'750 MiB/s | 2'200 MiB/s | caddy |
18'700 req/s | 2'200 MiB/s | 1'570 MiB/s | nginx |
9'700 req/s | 1'750 MiB/s | 1'830 MiB/s | apache |
9'900 req/s | 1'300 MiB/s | 1'470 MiB/s | lighttpd |
when connecting the reverse-proxy to `127.0.0.1` instead (the basic and/or old-fashioned way), speeds are a bit worse:
index.html | upload | download | software |
---|---|---|---|
21'200 req/s | 5'700 MiB/s | 6'700 MiB/s | no-proxy |
14'500 req/s | 1'700 MiB/s | 2'170 MiB/s | haproxy |
11'100 req/s | 2'750 MiB/s | 2'000 MiB/s | traefik |
8'400 req/s | 2'300 MiB/s | 1'950 MiB/s | caddy |
13'400 req/s | 1'100 MiB/s | 1'480 MiB/s | nginx |
8'400 req/s | 1'000 MiB/s | 1'000 MiB/s | apache |
6'500 req/s | 1'270 MiB/s | 1'500 MiB/s | lighttpd |
in summary, `haproxy > caddy > traefik > nginx > apache > lighttpd`, and use uds when possible (traefik does not support it yet)
- if these results are bullshit because my config examples are bad, please submit corrections!
if you have a domain and want to get your copyparty online real quick, either from your home-PC behind a CGNAT or from a server without an existing reverse-proxy setup, one approach is to create a Cloudflare Tunnel (formerly "Argo Tunnel")
I'd recommend making a `Locally-managed tunnel` for more control, but if you prefer to make a `Remotely-managed tunnel` then this is currently how:
- cloudflare dashboard » `zero trust` » `networks` » `tunnels` » `create a tunnel` » `cloudflared` » choose a cool `subdomain` and leave the `path` blank, and use `service type` = `http` and `URL` = `127.0.0.1:3923`
- and if you want to just run the tunnel without installing it, skip the `cloudflared service install BASE64` step and instead do `cloudflared --no-autoupdate tunnel run --token BASE64`
NOTE: since people will be connecting through cloudflare, as mentioned in real-ip you should run copyparty with `--xff-hdr cf-connecting-ip` to detect client IPs correctly
config file example:
[global]
xff-hdr: cf-connecting-ip
metrics/stats can be enabled at URL `/.cpr/metrics` for grafana / prometheus / etc (openmetrics 1.0.0)

must be enabled with `--stats` since it reduces startup time a tiny bit, and you probably want `-e2dsa` too
the endpoint is only accessible by `admin` accounts, meaning the `a` in `rwmda` in the following example commandline: `python3 -m copyparty -a ed:wark -v /mnt/nas::rwmda,ed --stats -e2dsa`
follow a guide for setting up `node_exporter` except have it read from copyparty instead; example `/etc/prometheus/prometheus.yml` below
scrape_configs:
  - job_name: copyparty
    metrics_path: /.cpr/metrics
    basic_auth:
      password: wark
    static_configs:
      - targets: ['192.168.123.1:3923']
currently the following metrics are available,
- `cpp_uptime_seconds` time since last copyparty restart
- `cpp_boot_unixtime_seconds` same but as an absolute timestamp
- `cpp_active_dl` number of active downloads
- `cpp_http_conns` number of open http(s) connections
- `cpp_http_reqs` number of http(s) requests handled
- `cpp_sus_reqs` number of 403/422/malicious requests
- `cpp_active_bans` number of currently banned IPs
- `cpp_total_bans` number of IPs banned since last restart
these are available unless `--nos-vst` is specified:
- `cpp_db_idle_seconds` time since last database activity (upload/rename/delete)
- `cpp_db_act_seconds` same but as an absolute timestamp
- `cpp_idle_vols` number of volumes which are idle / ready
- `cpp_busy_vols` number of volumes which are busy / indexing
- `cpp_offline_vols` number of volumes which are offline / unavailable
- `cpp_hashing_files` number of files queued for hashing / indexing
- `cpp_tagq_files` number of files queued for metadata scanning
- `cpp_mtpq_files` number of files queued for plugin-based analysis
and these are available per-volume only:
- `cpp_disk_size_bytes` total HDD size
- `cpp_disk_free_bytes` free HDD space

and these are per-volume and total:
- `cpp_vol_bytes` size of all files in volume
- `cpp_vol_files` number of files
- `cpp_dupe_bytes` disk space presumably saved by deduplication
- `cpp_dupe_files` number of dupe files
- `cpp_unf_bytes` currently unfinished / incoming uploads
some of the metrics have additional requirements to function correctly,
- `cpp_vol_*` requires either the `e2ds` volflag or the `-e2dsa` global-option
the following options are available to disable some of the metrics:
- `--nos-hdd` disables `cpp_disk_*` which can prevent spinning up HDDs
- `--nos-vol` disables `cpp_vol_*` which reduces server startup time
- `--nos-vst` disables volume state, reducing the worst-case prometheus query time by 0.5 sec
- `--nos-dup` disables `cpp_dupe_*` which reduces the server load caused by prometheus queries
- `--nos-unf` disables `cpp_unf_*` for no particular purpose
note: the following metrics are counted incorrectly if multiprocessing is enabled with `-j`: `cpp_http_conns`, `cpp_http_reqs`, `cpp_sus_reqs`, `cpp_active_bans`, `cpp_total_bans`
you'll never find a use for these:
change the association of a file extension
using commandline args, you can do something like `--mime gif=image/jif` and `--mime ts=text/x.typescript` (can be specified multiple times)
in a config file, this is the same as:
[global]
mime: gif=image/jif
mime: ts=text/x.typescript
run copyparty with `--mimes` to list all the default mappings
imagine using copyparty professionally... TINLA/IANAL; EU laws are hella confusing
- remember to disable logging, or configure logrotation to an acceptable timeframe with `-lo cpp-%Y-%m%d.txt.xz` or similar
- if running with the database enabled (recommended), then have it forget uploader-IPs after some time using `--forget-ip 43200`
  - don't set it too low; unposting a file is no longer possible after this takes effect
- if you actually are a lawyer then I'm open for feedback, would be fun
buggy feature? rip it out by setting any of the following environment variables to disable its associated bell or whistle,
env-var | what it does |
---|---|
PRTY_NO_DB_LOCK |
do not lock session/shares-databases for exclusive access |
PRTY_NO_IFADDR |
disable ip/nic discovery by poking into your OS with ctypes |
PRTY_NO_IMPRESO | do not try to load js/css files using importlib.resources |
PRTY_NO_IPV6 |
disable some ipv6 support (should not be necessary since windows 2000) |
PRTY_NO_LZMA |
disable streaming xz compression of incoming uploads |
PRTY_NO_MP |
disable all use of the python multiprocessing module (actual multithreading, cpu-count for parsers/thumbnailers) |
PRTY_NO_SQLITE |
disable all database-related functionality (file indexing, metadata indexing, most file deduplication logic) |
PRTY_NO_TLS |
disable native HTTPS support; if you still want to accept HTTPS connections then TLS must now be terminated by a reverse-proxy |
PRTY_NO_TPOKE |
disable systemd-tmpfilesd avoider |
example: PRTY_NO_IFADDR=1 python3 copyparty-sfx.py
force-enable features with known issues on your OS/env by setting any of the following environment variables, also affectionately known as `fuckitbits` or `hail-mary-bits`
env-var | what it does |
---|---|
PRTY_FORCE_MP |
force-enable multiprocessing (real multithreading) on MacOS and other broken platforms |