TLS Configuration - SKS-Keyserver/sks-keyserver GitHub Wiki
While the Peering page contains basic information on HTTP server configuration for the front-end of SKS, it doesn't go into detail on HTTPS/TLS configuration. This page provides that guidance.
(Initial seed server configs and text from Phil Pennock, copy/pasted own writings from elsewhere as appropriate).
We'll cover the issues, what you should do, then provide server configuration examples for a variety of web-servers (initially nginx-only).
## Terminology
- TLS: Transport Layer Security; ancient versions were called SSL but those should not still be in use
- Key: in this page, a private key used in combination with an X.509 Certificate, not an OpenPGP key (unless stated otherwise)
- Certificate: an X.509 certificate used for TLS end-point identity, predominantly of the server side
- PKIX: Public Key Infrastructure (X.509); in effect, the entire system around issuing "certificates" which might be verifiable
- Pool: some DNS pseudo-hostname which resolves to multiple real hosts, run by different people
- OpenPGP: the specification of the packet formats, etc, which combine to create a "PGP message"
- GnuPG: one widespread implementation of the OpenPGP standard, including the `gpg` tool as part of the suite
## The Issues
There is "access to your own keyserver under its own hostname", and "access to your keyserver under a pool hostname". The former is classic HTTPS server configuration, with nothing particularly special to it. The latter is unfortunately non-trivial.
The keyservers are run by various people with no formal affiliation, as a public good, by each person choosing to cooperate. There is no shared organization and no formal responsibility. There are "pool" hostnames, maintained by spidering the peering mesh via each server's "list of peers" info page and grouping working servers under a common hostname. https://sks-keyservers.net/overview-of-pools.php has more information.
Thus for hkps, we need "several" different, unaffiliated, people to all have certificates for the same hostname.
This is all directly opposite to the security model of Trusted Third Party PKIX. If keyserver operators could get certificates for a pool hostname from a browser-store CA, I'd tell you to stop trusting that CA, because their processes would clearly be broken.
Anyone can run a pool definition, for DNS they control. Anyone can point a CNAME to that pool. To have TLS available, they also need to run an Application-Specific Certificate Authority (or prove a different system can work) and issue certificates. Keyserver operators can install as many different TLS certificates as they want, used for different hostnames.
All TLS-speaking HKP clients (thus HKPS clients) supply Server Name Indication (SNI), or can be considered broken. Pool operators can verify use of SNI and exclude servers which are misconfigured.
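One way to see SNI in action is to ask the same IP for two different server names and compare the certificates presented; a rough sketch using `openssl s_client` (the hostnames are the illustrative ones used later on this page):

```shell
# Show the certificate subject served for two different SNI names on the
# same host; a server honouring SNI may present different certificates.
for name in sks.example.org pool.sks-keyservers.net; do
  printf '%s: ' "$name"
  openssl s_client -connect sks.example.org:443 -servername "$name" </dev/null 2>/dev/null \
    | openssl x509 -noout -subject 2>/dev/null \
    || echo "(no answer)"
done
```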
In practice, there's one well-run HKPS pool, which has pretty much defined the semantics of HKP/TLS operation. This is run by Kristian Fiskerstrand in Norway, and details of that pool's root CA are available at https://sks-keyservers.net/verify_tls.php. To have your server join this pool, read https://sks-keyservers.net/overview-of-pools.php#pool_hkps and follow the instructions there.
The GnuPG folks maintain a hostname, `keys.gnupg.net`, which used to be the default hostname used by their software; they ship a copy of Kristian's root, `share/gnupg/sks-keyservers.netCA.pem`, and default to `hkps://hkps.pool.sks-keyservers.net` as the keyserver in current (at time of writing) releases (this is a `./configure` option).
You don't know who is running a keyserver in any particular pool: they're discovered via automatic spidering. Someone could choose to create a vetted pool with manual admission criteria. To the best of my knowledge, none such is currently in wide use. Thus you don't know who you're talking to. For privacy, talk to a specific hostname under the control of someone you trust, not a pool. In practice, HKPS buys you protection against tampering on the wire, or sniffing on the wire, but the people you want to protect against can (1) be running a keyserver within the pool, and (2) be tampering with DNS to ensure that you visit their server.
## What To Do
- Configure port 11371 to provide unlogged HKP access, for any hostname
- Configure port 80 path `/pks` to provide unlogged HKP access, for any hostname
- Configure port 443 to provide unlogged HKP access (i.e. HKPS), for any hostname:
  - With one or more PKIX certificates
  - With some certificate used by default
  - Disabling session resumption on pool hostnames, because you don't know whose server a client will hit on refresh
- Do not enable `Strict-Transport-Security`: it breaks browser access to the `:11371` port
- Configure non-HKP access to always redirect to HTTPS on your specific server hostname
- Don't serve real content on a pool hostname, as you don't know where image/JS/CSS assets will be loaded from
- Consider joining the main `sks-keyservers.net` HKPS pool; read https://sks-keyservers.net/overview-of-pools.php#pool_hkps for instructions.
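The checklist above can be smoke-tested from the outside with `curl`; a sketch, assuming the example hostname `sks.example.org` used later on this page and the cheap HKP `op=stats` request:

```shell
# Each HKP entry point should answer a stats lookup:
for url in \
  'http://sks.example.org:11371/pks/lookup?op=stats' \
  'http://sks.example.org/pks/lookup?op=stats' \
  'https://sks.example.org/pks/lookup?op=stats'; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "$url" || echo unreachable)
  echo "$url -> $code"
done
# Port 80 outside /pks should 301 to the canonical HTTPS hostname:
curl -s -o /dev/null -w '%{http_code} %{redirect_url}\n' 'http://sks.example.org/' || true
```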
## Server Examples

### nginx
Assumption: you've read Peering and have SKS listening to localhost for HKP but the public IPs for recon; thus `hkp_address: 127.0.0.1`.
For filesystem paths, we're going to assume that nginx configs live in `/www/conf/nginx` and TLS keys and certs in `/www/conf/tls`. This is unlikely enough that everyone will have to fix the paths for their OS. Adjust as appropriate for file-per-server setups, etc.

Your IPs here are `192.0.2.42` and `2001:DB8::1:42`, with a hostname of `sks.example.org`.
In the nginx config directory, create a small file which can be included repeatedly, defining how to talk to the backend, `/www/conf/nginx/fragment-pks`:
```nginx
# Pass /pks onto the SKS keyserver
# Note: changes here should be replicated for the :11371 vhost as well
location /pks {
    access_log off;
    proxy_pass http://127.0.0.1:11371;
    proxy_set_header Host $host:$server_port;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass_header Server;
    add_header Via "1.1 $server_name:$server_port (nginx)" always;
    add_header X-Robots-Tag 'nofollow, notranslate' always;
    proxy_ignore_client_abort on;
    client_max_body_size 8m;
}
```
Breaking this down: no logging of requests for keys; pass data on to the SKS server, giving it some basic information for when you do have to turn up logging for debugging; return to the client the `Server:` header from SKS; add a `Via` header, including on errors (the `always`); add some `robots.txt`-style controls via HTTP headers to reduce the damage done by bad crawlers; support older HKP/HKPS clients (see Peering for details); and support larger keys being uploaded.
If you want tighter HTTP headers for HTTPS, then create `/www/conf/nginx/fragment-site.secure-headers` and put the relevant directives in there.
Then in `/www/conf/nginx/nginx.conf`, first we define the global http settings:
```nginx
http {
    #...
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # ssl_prefer_server_ciphers on;
    ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-128-GCM-SHA256 TLS13-AES-256-GCM-SHA384 ECDHE+CHACHA20 EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH EDH+aRSA !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4 !SEED !CAMELLIA";
    ssl_dhparam /www/conf/tls/dhparams.pem;
    ssl_session_cache shared:nginx_tls:1m;
    ssl_verify_client off;

    # Turn on stapling on a per-server basis with ssl_stapling.
    ssl_stapling_verify on;
    # This path should point to a file of CAs, used to verify OCSP responses:
    ssl_trusted_certificate /etc/ssl/certs/ca-certificates.pem;
    # resolver needed to resolve hostnames for OCSP verification; note that
    # resolvers are tried round-robin, instead of fallback
    resolver 127.0.0.1;

    # server blocks go here
}
```
When it comes to whether the client's or the server's preference should be honored when sorting ciphers, the answer is "whichever cipher preference list is better maintained". Modern browsers maintain theirs pretty well, but you can't assume all clients are modern browsers; most servers' lists are not well maintained at all. If and only if you are willing to assume responsibility for staying informed and keeping your cipher list up to date should you consider setting `ssl_prefer_server_ciphers on;`. Do not turn this on by default: it is effectively a commitment which you are making to your users.
Here we don't turn on OCSP stapling globally, because the pool definitions won't use it, but we do enable verification of responses and set up the DNS resolver that verification needs. So when you enable OCSP stapling for a vhost, it will all be configured correctly.
Make sure that `ssl_trusted_certificate` points to the file containing all the CA certs, as shipped with your OS, with enough in it to verify signatures from whatever CA you use for obtaining publicly-verifiable certificates.
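The `ssl_dhparam` file referenced in the `http` block has to be generated once by the operator; a sketch using `openssl` (2048 bits is an assumption, pick a size you're comfortable with, and move the result to `/www/conf/tls/dhparams.pem`):

```shell
# Generate Diffie-Hellman parameters for the ssl_dhparam directive,
# then install the file as /www/conf/tls/dhparams.pem.
# Generation can take a while on slow hardware.
openssl dhparam -out dhparams.pem 2048
# Sanity-check the result before installing it:
openssl dhparam -in dhparams.pem -check -noout
```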
### The vhosts

These go inside the `http` block.
You can make the port 80 server the `default_server` as long as there's nothing requiring authentication or cookies on any of the hostnames. Otherwise, responding to arbitrary hostnames with valuable content is a bad idea, exposing you to attacks; so in the example here, we don't.
```nginx
# Port 80: pass through HKP, and otherwise redirect to https on our specific hostname
# Consider "default_server" on the listen lines.
server {
    listen 192.0.2.42:80;
    listen [2001:DB8::1:42]:80;
    server_name sks.example.org sks-peer.example.org sks sks-peer whatever
        # Pools we are part of (if we're up):
        pool.sks-keyservers.net *.pool.sks-keyservers.net
        *.sks.pool.globnix.net
        # Alias pools
        keys.gnupg.net http-keys.gnupg.net
        ;

    location / {
        return 301 https://sks.example.org$request_uri;
    }

    include fragment-pks;
}
```
```nginx
# HTTPS/HKPS:
# Start with your own servername, set up Let's Encrypt or something for this,
# so that you have a publicly verifiable certificate for regular users, and
# OCSP stapling, etc.
server {
    # "ssl" on the listen lines replaces the deprecated "ssl on;" directive.
    listen 192.0.2.42:443 ssl http2;
    listen [2001:DB8::1:42]:443 ssl http2;
    server_name sks.example.org;

    # Use Let's Encrypt to get these:
    ssl_certificate /www/conf/tls/sks.example.org.crt;
    ssl_certificate_key /www/conf/tls/sks.example.org.key;
    # Let's Encrypt supports OCSP, so enable stapling:
    ssl_stapling on;

    include fragment-pks;
    # include fragment-site.secure-headers;
}
```
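Before reloading nginx it's worth confirming that a certificate and its key actually pair up; a sketch using a throwaway self-signed pair (for a real check, substitute the `/www/conf/tls/sks.example.org.crt` and `.key` paths from the vhost above):

```shell
# Generate a throwaway self-signed pair purely for demonstration:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/CN=sks.example.org' -keyout demo.key -out demo.crt 2>/dev/null
# The public key inside the certificate and the public half of the
# private key must hash identically, or nginx is serving a mismatch:
cert_pub=$(openssl x509 -in demo.crt -pubkey -noout | openssl sha256)
key_pub=$(openssl pkey -in demo.key -pubout 2>/dev/null | openssl sha256)
[ "$cert_pub" = "$key_pub" ] && echo "certificate and key match"
```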
```nginx
# SNI-based hkps: pool membership
# You will need to apply to join this pool to get a certificate issued to you.
server {
    listen 192.0.2.42:443 ssl http2;
    listen [2001:DB8::1:42]:443 ssl http2;
    server_name pool.sks-keyservers.net *.pool.sks-keyservers.net;

    ssl_certificate /www/conf/tls/hkps-sks-fiskerstrand.chained.crt;
    ssl_certificate_key /www/conf/tls/hkps-sks-fiskerstrand.key;
    # Clients may hit a different pool member on the next connection,
    # so don't offer session resumption on pool hostnames:
    ssl_session_cache off;
    ssl_session_tickets off;

    location / {
        return 301 https://sks.example.org$request_uri;
    }

    include fragment-pks;
}
```
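Whether resumption really is disabled on the pool name can be checked with `openssl s_client -reconnect`, which reattempts the session five times; with the settings above every attempt should report "New" rather than "Reused" (hostnames as in the example):

```shell
# With ssl_session_cache off and ssl_session_tickets off, each of
# s_client's five reconnects should log a "New" full handshake.
openssl s_client -connect sks.example.org:443 \
  -servername pool.sks-keyservers.net -reconnect </dev/null 2>/dev/null \
  | grep -E '^(New|Reused),' || echo "(no answer)"
```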
```nginx
# Include other pools here.
# Include other hostnames from private CAs here too.

# HKP fronting for port 11371; we pass _everything_ to SKS,
# not just /pks, but you can simplify if you want to have
# nginx handle the stuff outside /pks
server {
    listen 192.0.2.42:11371;
    listen [2001:DB8::1:42]:11371;
    access_log off;

    # Like `include fragment-pks;` but for `/`:
    location / {
        proxy_pass http://127.0.0.1:11371;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass_header Server;
        add_header Via "1.1 sks.example.org:11371 (nginx)" always;
        add_header X-Robots-Tag 'nofollow, notranslate' always;
        proxy_ignore_client_abort on;
        client_max_body_size 8m;
    }
}
```
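Once everything is in place and nginx has been reloaded, a final check that OCSP stapling is actually being served on your own hostname; a sketch with `openssl s_client -status` (hostname as in the example; nginx may need one warm-up request before it has a response cached to staple):

```shell
# Request a stapled OCSP response during the handshake; with
# ssl_stapling working you should see "OCSP Response Status: successful".
openssl s_client -connect sks.example.org:443 \
  -servername sks.example.org -status </dev/null 2>/dev/null \
  | grep -i 'OCSP response' || echo "(no answer)"
```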