Vault - kamialie/knowledge_corner GitHub Wiki
Vault is a secrets management solution. It provides a central location where all secrets are stored, encryption at rest and in transit, ACLs (Access Control Lists), and auditing. It exposes a RESTful JSON API. Vault is distributed as a single binary that contains both the client and server applications.
Vault protects data in transit with TLS (the handshake uses asymmetric cryptography: one key is used to encrypt data, while another key is used to decrypt it).
The default kv secrets engine outputs metadata as well (on `put`, `get`, etc).
In dev mode the KV secrets engine is available at the `secret/` path. This is where arbitrary secrets can be written to.
Multiple secrets can be put under one path. It is more secure to pass secrets via files rather than on the CLI (which may leak them into shell history).
$ vault kv put <path> <secret>
$ vault kv put secret/hello foo=world
$ vault kv put secret/hello foo=world one=two
The `-format=json` parameter can be provided for easy integration with other tools, like jq.
$ vault kv get <path>
$ vault kv get secret/hello
$ vault kv delete <path>
$ vault kv delete secret/hello
Vault operates as a client/server application.
All of Vault's capabilities are accessible via HTTP API in addition to CLI (most CLI commands invoke HTTP API). Some features are only accessible via API.
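As an illustration of the CLI-to-API mapping, the sketch below builds the HTTP request that corresponds to `vault kv get secret/hello` (the address and token are placeholder values; KV v2 inserts `/data/` into the API path, and authentication uses the `X-Vault-Token` header):

```python
import urllib.request

def kv_read_request(addr, token, path):
    """Build the HTTP request behind `vault kv get secret/<path>` (KV v2)."""
    url = f"{addr}/v1/secret/data/{path}"   # /data/ is inserted by KV v2
    return urllib.request.Request(url, headers={"X-Vault-Token": token})

req = kv_read_request("http://127.0.0.1:8200", "s.example", "hello")
assert req.full_url == "http://127.0.0.1:8200/v1/secret/data/hello"
# urllib normalizes header names to capitalized form
assert req.get_header("X-vault-token") == "s.example"
```

Sending the request (not done here) would return the secret as JSON, like the CLI's `-format=json` output.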
Vault has a built-in, pre-configured (requires no further setup, e.g. the CLI is automatically authenticated) dev server for local testing and development. The root token is shown during startup (it can also be set explicitly).
Properties:
- initialized and unsealed
- in-memory storage (all data is lost on shutdown)
- bound to local address (127.0.0.1) without TLS
- automatically authenticated (Vault stores token and prints it out)
- single unseal key (for seal/unseal experimentation)
- KV v2 is mounted at `secret/`
- UI enabled
# start a dev server
$ vault server -dev
# to configure Vault client set VAULT_ADDR variable
$ export VAULT_ADDR='http://127.0.0.1:8200'
# provide the token, VAULT_TOKEN, to interact with Vault
$ export VAULT_TOKEN=value
All data stored by Vault is encrypted by a key in the keyring, which is encrypted by the master key, which in turn is encrypted by the seal key. Every initialized Vault server starts in a sealed state. Prior to unsealing, only the unseal and seal-status operations are available (via API or CLI).
The default configuration uses a Shamir seal, which splits the unseal key into multiple shards. A threshold number of shards (entered in any order) is required to reconstruct the unseal key. Shards are not verified to be correct until the threshold number of keys has been passed in (a property of the algorithm). The unseal process is stateful and is intended to be performed by multiple operators, so keys can be entered from multiple locations. The threshold cannot exceed the number of key shares.
A single operator (with sudo permission on the `sys/seal` path) can seal Vault again. All state is cleared from memory, including the encryption key. The Vault server also seals itself if it is restarted or if the storage layer encounters an unrecoverable error.
During UI initialization use `keys` (not `keys_base64`) to enter the unseal key portion.
# Initialize the server and set a custom number of shares and threshold
$ vault operator init -key-shares=6 -key-threshold=4
# Unseal (enter threshold number of keys, one by one)
$ vault operator unseal
# Seal
$ vault operator seal
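The Shamir scheme described above can be illustrated with a toy implementation: a random polynomial whose constant term is the secret is evaluated at distinct points, and any threshold-sized subset recovers the secret via Lagrange interpolation. This is a sketch over a large prime field for readability, not Vault's actual implementation:

```python
import random

P = 2**127 - 1  # a Mersenne prime; the field all arithmetic is done in

def split(secret, shares, threshold):
    """Split a secret into `shares` shards; any `threshold` recover it."""
    # random polynomial of degree threshold-1 with constant term = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, shares + 1)]

def combine(points):
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

parts = split(123456789, shares=6, threshold=4)
assert combine(parts[:4]) == 123456789   # any 4 of 6 shards recover the key
assert combine(parts[1:5]) == 123456789
```

Note that `combine` with fewer than threshold shards returns a wrong value rather than an error, which mirrors the behavior described above: shards are not verified until the threshold is reached.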
Auto Unseal delegates the responsibility of securing the unseal key from operators to a trusted device or service. At startup Vault connects to the device or service implementing the seal and asks it to decrypt the master key that Vault read from storage.
Certain operations in Vault require a quorum of users to perform. When using a Shamir seal the unseal keys must be provided to authorize these operations. When using Auto Unseal these operations require recovery keys instead. Initializing with an Auto Unseal yields recovery keys.
# Initialize the server with auto-unseal and set custom recovery shares
$ vault operator init -recovery-shares=6 -recovery-threshold=4
Once Vault is sealed via the API, unsealing with Auto Unseal requires the recovery key fragments.
It is possible to migrate between Auto Unseal services, or from Auto Unseal to Shamir secret sharing and vice versa, through seal migration.
All dynamic secrets and `service`-type authentication tokens have associated leases - metadata such as time duration, renewability, etc.
A lease can be revoked - the secret is immediately invalidated and renewals are prevented. Revocation can be done manually via API or CLI, or automatically by Vault (when the lease expires). When a token is revoked, all associated leases are revoked as well.
`lease_id` is a unique identifier for a lease and can be obtained with the `vault read` command. It is used in lease management commands, such as `vault lease renew`. It contains a prefix, which is the path where the secret was requested from. The prefix can be used to revoke a tree of secrets, e.g. `vault lease revoke -prefix aws/`, but requires `sudo` permissions.
Lease duration is a TTL in seconds for which the lease is valid. The consumer must renew the lease within that time. The consumer can optionally request a specific amount of time remaining on the lease (`increment`), counted from the current time. A secrets engine may ignore the `increment` argument, so the returned value should always be inspected. The new lease duration is calculated as the request time plus the increment. `increment` can be shorter than the initial TTL, but if the new duration exceeds the maximum TTL, Vault sets the lease duration to the maximum remaining TTL.
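Since an engine may grant less time than requested, renewal clients typically compare the granted duration against the requested increment. A minimal sketch of such a check (the function name and slack factor are illustrative, not part of Vault):

```python
def needs_reissue(requested_increment, returned_lease_duration, slack=0.1):
    """Decide whether to fetch a new secret instead of relying on a renewal.

    Secrets engines may ignore the requested increment (e.g. when the
    maximum TTL caps the renewal), so the caller must compare what it
    asked for with what Vault actually granted.
    """
    return returned_lease_duration < requested_increment * (1 - slack)

# Vault granted the full increment: keep using the lease
assert needs_reissue(3600, 3600) is False
# Renewal was capped at the remaining max TTL: plan to fetch a new secret
assert needs_reissue(3600, 300) is True
```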
When a secret revocation request is made, Vault first revokes the lease, and then queues the secret deletion request with the secrets engine.
# List leases
$ vault list sys/leases/lookup/<path>
$ vault list sys/leases/lookup/consul/creds/<name>
# View properties
$ vault write sys/leases/lookup lease_id=<lease_id>
# Renew
$ vault lease renew <lease_id>
# Revoke
$ vault lease revoke <lease_id>
# Revoke all leases associated with a path
$ vault lease revoke -prefix <prefix_path>
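Because the lease ID starts with the path the secret was requested from, prefix revocation is just a string-prefix selection over lease IDs. A toy sketch of that selection (the lease IDs are made up for illustration):

```python
def leases_under(leases, prefix):
    """Mimic the selection behind `vault lease revoke -prefix`:
    pick every lease whose lease_id begins with the given path prefix."""
    return [l for l in leases if l.startswith(prefix.rstrip("/") + "/")]

leases = ["aws/creds/readonly/abc", "aws/creds/admin/def", "consul/creds/web/xyz"]
assert leases_under(leases, "aws/") == ["aws/creds/readonly/abc", "aws/creds/admin/def"]
assert leases_under(leases, "consul/") == ["consul/creds/web/xyz"]
```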
Tokens are the core authentication method. Token auth (the token store) is automatically enabled and cannot be disabled. It is responsible for creating and storing tokens, and has no login capability other than using an existing token. Tokens can be created via auth methods (e.g. GitHub) or by parent tokens. A root token can also be created through a special process. Authenticating a second time returns a new token, but does not invalidate the original token.
Policies are attached to a token and control what the token owner is allowed to do. Tokens also include creation time, renewal time, and more, which is included in audit logs. By default, a child token inherits the policies of its parent.
Tokens created by the current token become its children. When a parent token is revoked, all of its children and their hierarchy are revoked alongside all leases.
Tokens can be limited to a number of uses. A token with a use limit expires at the end of the last use, regardless of TTL.
# Create a token with a use limit
$ vault token create -use-limit=2
Batch tokens are encrypted blobs that carry enough information to be used for Vault actions, but lack features of service tokens. Create a batch token with the `-type=batch` parameter. A batch token cannot be root, so specify a policy when creating a batch token as root. Batch tokens cannot be revoked.
Characteristic | service | batch |
---|---|---|
can be root | yes | no |
can have child tokens | yes | no |
renewable, can be periodic, can have explicit max TTL | yes | no |
has accessor | yes | no |
has cubbyhole | yes | no |
revoked with parent (if not orphan) | yes | stops working |
storage | persistent | not stored |
ID prefix | `s.` | `b.` |
can be used across Performance Replication clusters | no | yes |
creation scales with performance standby node count | no | yes |
A token accessor is created alongside each token. It acts as a reference to the token and is intended to be used by administrators and automation systems to perform actions on tokens without holding the actual tokens. Only the following actions can be performed via a token accessor (the token making the calls, not the token associated with the accessor, must have appropriate permissions):
- lookup token's properties (excluding the actual token ID)
- look up token's capabilities on a path
- renew the token
- revoke the token
To run commands using a token accessor, pass the `-accessor <id>` parameter to the command.
# "List" tokens
$ vault list auth/token/accessors
# Lookup token info via token accessor
$ vault token lookup -accessor <id>
The root token is the one that has the `root` policy attached to it. It can perform any action in Vault and is the only token type that can be set to never expire. There are 3 ways to create a root token:
- the initial root token generated by `vault operator init` (never expires)
- using another root token (a root token with an expiration cannot create a root token that never expires)
- `vault operator generate-root` with the permission of a quorum of unseal key holders
Best practice is to use the root token only for initial setup and revoke it afterwards. Create one on the fly with the 3rd option and revoke it when no longer needed.
Every non-root token has a time-to-live (TTL) associated with it, which is a current period of validity since either the token's creation time or last renewal time, whichever is more recent. After the current TTL is up, the token is revoked along with its associated leases and children tokens.
If the token is renewable, it can be renewed to extend its TTL before it expires. Token can be repeatedly renewed until it reaches its maximum TTL.
A token's lifetime since creation is compared to its maximum TTL, which is dynamically determined from the following settings:
- system max TTL set in Vault's config file (default is 32 days, i.e. 768 hours)
- max TTL set on an auth method mount (can be changed by mount tuning, `vault auth tune`) - allowed to override the system value and can be shorter or longer
- value suggested by the auth method that issued the token - overrides the system or mount value, but only if it is shorter; might be configured on a per-role, per-group, or per-user basis (set by `vault write`)
Since the last 2 settings can be changed at any time, the TTL returned from a renewal operation must be checked - if it hasn't been extended or has hit its max value, the client should reauthenticate.
# Set mount TTL
$ vault auth enable -max-lease-ttl=700h <auth_method>
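The max-TTL precedence described above can be sketched as a small resolution function (the function name and numbers are illustrative; the rules are the ones listed: mount overrides system either way, the auth method's suggestion only wins if shorter):

```python
def effective_max_ttl(system_max, mount_max=None, method_suggested=None):
    """Resolve a token's max TTL (hours) per the documented precedence."""
    # mount value overrides the system value, shorter or longer
    ttl = mount_max if mount_max is not None else system_max
    # the auth method's suggestion applies only if it is shorter
    if method_suggested is not None and method_suggested < ttl:
        ttl = method_suggested
    return ttl

assert effective_max_ttl(768) == 768                                   # system default
assert effective_max_ttl(768, mount_max=1000) == 1000                  # mount may exceed system
assert effective_max_ttl(768, mount_max=1000, method_suggested=24) == 24
assert effective_max_ttl(768, mount_max=100, method_suggested=500) == 100  # longer suggestion ignored
```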
An explicit max TTL can be set on the token itself, which becomes its hard limit (overriding the general settings). It can be set explicitly in the command or implicitly through configuration.
Periodic tokens have a TTL (validity period), but no max TTL. At renewal time the TTL is reset to the value configured at issue time (the `-increment` parameter is ignored). As long as renewal happens within the TTL, the token will never expire (the only way to get unlimited lifetime, apart from root tokens).
Can be created:
- by having sudo capability or a root token on the `auth/token/create` endpoint
- by using token store roles
- via supported auth methods, such as AppRole
When both a period and an explicit max TTL are set on a token, it behaves as a periodic token. However, once the max TTL is reached, the token will be revoked.
When a periodic token is created via a token store role, the current value of the role's period setting will be used at renewal time.
Some tokens can be bound to CIDR ranges (except non-expiring root tokens).
# Create periodic token
$ vault token create -period=24h
Users with appropriate permissions can create orphan tokens (an orphan has no parent and is the root of its own hierarchy). Users with appropriate permissions can revoke orphan tokens via `auth/token/revoke-orphan`, which revokes just the given token and sets any of its children to be orphans. An orphan token can be created:
- via `write` access to `auth/token/create-orphan`
- by having `sudo` or `root` access to `auth/token/create` and setting `no_parent` to `true`
- via token store roles
- by logging in with any (non-token) auth method
- by revoking the parent token with `-mode=orphan` - all children continue to exist and become orphans
An orphan token can also be created with the `-orphan` option of the `vault token create` command.
The output of commands that acquire tokens, e.g. `vault token create` or `vault write auth/approle/login role_id="$ROLE_ID" secret_id="$SECRET_ID"`, contains 3 policy-related fields:
- `token_policies` - all policies attached by auth methods
- `identity_policies` - all policies attached by the identity secrets engine
- `policies` - the union of the fields above, showing all available policies
Some commands default to the token issuing the command, if no token was provided as an argument.
# Create new token
$ vault token create
# Create new token, attach a policy and set ttl
$ vault token create -policy=<policy_name> -ttl=60s
# Login with token
$ vault login <token>
# View info on token
$ vault token lookup <token_id>
# View info using accessor
$ vault token lookup -accessor <accessor>
# Check permissions on a path
$ vault token capabilities <token> <path>
# Renew token
$ vault token renew <token>
# Optionally change the TTL that was set at creation time
$ vault token renew -increment=60 <token>
# Revoke token
$ vault token revoke <token>
# Revoke current token
$ vault token revoke -self
# Show currently used token
$ vault print token
# Create and store token
$ TOKEN=$(vault token create -format=json -policy=<policy> | jq -r ".auth.client_token")
Token role can be used to set multiple parameters at once.
$ vault write auth/token/roles/zabbix \
allowed_policies="policy1, policy2, policy3" \
orphan=true \
period=8h
$ vault token create -role=zabbix
The original token is stored in the cubbyhole secrets engine. The wrapped secret can be unwrapped using the single-use wrapping token (`lookup` does not spend the use). No one except the wrapping token holder can access the original value, including the user or system that created the initial token, and root. The wrapping token is short-lived (it has a TTL separate from the wrapped secret) and can be revoked. Any Vault response can be distributed using response wrapping. A wrapping token can only be a service token.
Environment:
- Vault
- DevOps automation (f.e. Jenkins)
- Application
- Database
In a naive setup with AppRole, Jenkins requests a token from Vault and embeds it into the application, which then uses this token to request database credentials. However, that token is saved on disk or in an environment variable and can be accessed if the application is compromised.
With the response wrapping technique, Vault generates a wrapping token using the cubbyhole secrets engine. Cubbyhole stores the permanent token and returns a temporary (wrapping) token to the requester, which can be used only once for writing and once for reading. The permanent token is revoked once the wrapping token expires. Jenkins receives the wrapped token from Vault and passes it to the application. On startup the application unwraps the token (makes a request to Vault with the wrapping token to receive the permanent token), so the permanent token is stored only in memory. Once the wrapping token is used it becomes invalid and can't be used again.
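The single-use semantics can be modeled with a toy class (names and values are illustrative; lookup does not consume the token, unwrap does):

```python
class WrappingToken:
    """Toy model of a single-use wrapping token: lookup is free, unwrap
    consumes the token, and a second unwrap fails."""
    def __init__(self, secret, ttl):
        self._secret, self.ttl, self._used = secret, ttl, False

    def lookup(self):
        return {"ttl": self.ttl, "used": self._used}   # does not spend the use

    def unwrap(self):
        if self._used:
            raise PermissionError("wrapping token already used or revoked")
        self._used = True
        return self._secret

wt = WrappingToken("s.permanent-token", ttl=300)
wt.lookup()                                  # safe: does not consume the token
assert wt.unwrap() == "s.permanent-token"    # first unwrap returns the secret
try:
    wt.unwrap()                              # second unwrap is rejected
    assert False, "expected PermissionError"
except PermissionError:
    pass
```

A lookup that reports `used` as true before the application has unwrapped the token is exactly the interception signal the validation checklist below alerts on.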
Wrapping token validation:
- if wrapping token did not arrive - trigger alert
- perform lookup - immediately tells if token was unwrapped or is expired - should trigger alert for cause investigation
- validate creation path (ideally full path) - matches expectation
- failed unwrap - trigger alert
Response wrapping is per-request and is triggered by providing the desired TTL for the wrapping token: `X-Vault-Wrap-TTL` for the API, `-wrap-ttl` for the CLI.
# Request wrapping token (actual token is saved in cubbyhole)
$ vault <command> create -wrap-ttl=<duration> <path>
$ vault token create -wrap-ttl=5m
$ vault kv get -wrap-ttl=2h secrets/<path>
# Unwrap actual token
$ vault unwrap <wrapping_token>
# Or authenticate as wrapping token and run unwrap command
$ VAULT_TOKEN=<WRAPPING_TOKEN> vault unwrap
# Or authenticate via login command
$ vault login <WRAPPING_TOKEN>
$ vault unwrap
The `sys/wrapping` endpoint provides several actions to the wrapping token holder:
- `sys/wrapping/lookup` - fetch the token's creation time, creation path, and TTL
- `sys/wrapping/unwrap` - get the original secret
- `sys/wrapping/rewrap` - migrate the data to a new response-wrapping token
- `sys/wrapping/wrap` - a helper endpoint that echoes back the data sent to it in a response-wrapping token
Policies grant or deny access to certain paths and operations in Vault (authorization). By default, access is denied. Prefix matching is used to determine which policy applies - either an exact match or the longest-prefix glob match. A policy can be associated with a single user or a group of users (if the auth method supports it).
Policies can be assigned directly to a token, to an entity through identity secrets engine or applied through auth method.
Built-in policies cannot be removed.
The `root` policy grants sudo access to all paths (including `sys/`), but cannot access cubbyhole data. It cannot be modified.
The `default` policy is attached to all tokens by default (unless explicitly excluded) and can be modified. It defines a common set of capabilities that enable a token to reflect on and manage itself.
Policies define capabilities on a path (API verb in parentheses):
- create (POST/PUT) - create, if it didn't exist before, or update an existing one
- read (GET)
- update (POST/PUT) - change data; in most places also implicitly includes the ability to create the initial value
- delete (DELETE)
- list (LIST) - listing (not all backends support it)
- sudo - grants access to root-protected paths, typically administrative access to a Vault component or secrets engine; tokens cannot interact with these paths unless they have the `sudo` capability in addition to the other necessary ones, like `create`, `read`, etc
- deny - explicitly deny a capability; always takes precedence, including over `sudo`
The KV v2 secrets engine automatically appends `/data` to the secret path, so it should be omitted in kv commands, such as `put`, `get`, etc.
path "secret/data/*" {
capabilities = ["create", "update"]
}
path "secret/data/foo" {
capabilities = ["read"]
}
path "secret/+/teamb" {
capabilities = ["read"]
}
Required settings:
- `path` can be an exact match or a glob pattern, `*` (Vault performs a prefix match, so it is only supported as the last character of the path); the path segment match, `+`, can be used to denote any number of characters bounded within a single path segment
- `capabilities` are specified as a list of strings
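The `*` prefix and `+` segment matching can be sketched with a simplified matcher (illustrative only; Vault's real matcher also applies the priority rules for overlapping policies):

```python
def path_matches(pattern, path):
    """Simplified policy path matching: a trailing '*' is a prefix match,
    '+' matches exactly one path segment."""
    pat_parts = pattern.split("/")
    parts = path.split("/")
    for i, pp in enumerate(pat_parts):
        if pp.endswith("*"):                 # '*' only valid as the final character
            return i < len(parts) and parts[i].startswith(pp[:-1])
        if i >= len(parts):
            return False                     # path is shorter than the pattern
        if pp == "+":                        # one whole segment, any content
            continue
        if pp != parts[i]:
            return False
    return len(parts) == len(pat_parts)      # exact patterns must match fully

assert path_matches("secret/data/*", "secret/data/team/app")
assert path_matches("secret/+/teamb", "secret/dev/teamb")
assert not path_matches("secret/+/teamb", "secret/dev/qa/teamb")  # '+' is one segment
assert not path_matches("secret/data/foo", "secret/data/foobar")  # exact, no glob
```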
path "secret/foo" {
capabilities = ["create"]
required_parameters = ["bar", "baz"]
}
path "secret/foo" {
capabilities = ["create"]
allowed_parameters = {
"bar" = []
"foo" = ["zip-*", "zap"]
"*" = []
}
denied_parameters = {
"dev" = []
}
}
Optional settings (for fine-grained control); `*` can only be set to an empty list; parameters and values can make use of globbing (though it might have unexpected results with parameter globbing due to the engine implementation):
- `required_parameters` - list of parameters that must be specified
- `allowed_parameters` - list of keys and values that are permitted on the given path
  - an empty list value allows any value for the given parameter
  - a populated list allows only the specified values
  - a `*` parameter set to an empty list allows all parameters to be modified; can be combined with a populated list on a parameter to restrict that particular parameter, while allowing modification of any other parameter
- `denied_parameters` - blacklist of parameters and values; takes precedence over `allowed_parameters`; same options as in `allowed_parameters`; non-specified parameters are therefore allowed, unless the `allowed_parameters` setting is also specified
If the same pattern appears in multiple policies, the union of capabilities is taken. If different patterns appear in applicable policies, only the highest-priority match is taken (priority rules are described in the syntax section).
Since listing always operates on a prefix, policies must operate on a prefix because Vault will sanitize request paths to be prefixes.
Policies support templating (injected variables); a list of available parameters is in the documentation.
Without access to the `sys/` path, commands like `vault policy list` and `vault secrets list` will not work.
# List policies
$ vault policy list
# View policy
$ vault policy read <policy_name>
# Upload (or update) policy from local disk
$ vault policy write <policy_name> <file_path>
# Upload (or update) policy from stdin
$ cat <file_path> | vault policy write <policy_name> -
# Remove policy
$ vault policy delete <policy_name>
# Format policy according to HCL guidelines
$ vault policy fmt <file_path>
Assigning a policy
$ vault token create -policy=<policy_name>
$ vault write auth/userpass/users/<username> token_policies=<policy_name>
$ vault write identity/entity/name/<name> policies=<policy_name>
Password policy is a set of instructions on how to generate a password. Policies are used by some of available secrets engines.
At least one charset must be specified.
length = 20
rule "charset" {
charset = "abcdefghijklmnopqrstuvwxyz"
min-chars = 1
}
rule "charset" {
charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
min-chars = 1
}
rule "charset" {
charset = "0123456789"
min-chars = 1
}
rule "charset" {
charset = "!@#$%^&*"
min-chars = 1
}
- `length` - integer, must be >= 4
- `charset` rule - specifies a minimum number of characters from a charset; multiple charsets can be specified, in which case they will be combined and de-duplicated (and cannot be longer than 256 characters after these operations)
  - `charset` - UTF-8 compatible string of printable characters
  - `min-chars` - integer, defaults to 0
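A toy generator for such a policy (not Vault's implementation): satisfy each `min-chars` requirement first, then fill to `length` from the union of charsets.

```python
import random

def generate(length, rules):
    """Generate a password honoring (charset, min_chars) rules, then
    fill the remainder from the combined charset."""
    chars = [random.choice(cs) for cs, n in rules for _ in range(n)]
    union = "".join(cs for cs, _ in rules)
    chars += [random.choice(union) for _ in range(length - len(chars))]
    random.shuffle(chars)                    # avoid predictable positions
    return "".join(chars)

rules = [
    ("abcdefghijklmnopqrstuvwxyz", 1),
    ("0123456789", 1),
    ("!@#$%^&*", 1),
]
pw = generate(20, rules)
assert len(pw) == 20
assert any(c.isdigit() for c in pw)          # min-chars for digits satisfied
```

(A real implementation would use a cryptographically secure source such as `secrets`, not `random`.)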
# Upload password policy
$ vault write sys/policies/password/<policy_name> [email protected]
# Generate password
$ vault read sys/policies/password/<policy_name>/generate
Vault can be started with the `-recovery` flag to bring it up in Recovery Mode.
- Vault is automatically unsealed once a recovery token is issued
- recovery token operations and the `sys/raw` endpoint are available
- `raw` requests must be authenticated with the recovery token
- clusters will not be formed (requests from standbys won't be handled either)
Recovery token is not persistent and is regenerated on restart.
Operators can create rate limit quotas to avoid DoS issues. Limits can be set globally (no `path` specified), on a namespace, or on a mount at the specified path. The most specific rule is applied.
An optional `block_interval` can be set to block a user from making subsequent requests for a set duration if the threshold is hit.
The following paths are exempt from rate limiting, but this list can be updated by setting the `rate_limit_exempt_paths` field:
/v1/sys/generate-recovery-token/attempt
/v1/sys/generate-recovery-token/update
/v1/sys/generate-root/attempt
/v1/sys/generate-root/update
/v1/sys/health
/v1/sys/seal-status
/v1/sys/unseal
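The quota behavior can be sketched with a toy fixed-window limiter (class and parameter names are illustrative; Vault's implementation differs):

```python
class RateLimitQuota:
    """Toy rate limit quota: allow `rate` requests per one-second window;
    if block_interval is set, a client that exceeds the limit is blocked
    for that many seconds."""
    def __init__(self, rate, block_interval=0):
        self.rate, self.block_interval = rate, block_interval
        self.window_start, self.count, self.blocked_until = 0.0, 0, 0.0

    def allow(self, now):
        if now < self.blocked_until:
            return False                      # still serving the block penalty
        if now - self.window_start >= 1.0:
            self.window_start, self.count = now, 0   # new window
        self.count += 1
        if self.count > self.rate:
            if self.block_interval:
                self.blocked_until = now + self.block_interval
            return False
        return True

q = RateLimitQuota(rate=2, block_interval=10)
assert q.allow(0.0) and q.allow(0.1)
assert not q.allow(0.2)          # third request in the window is rejected
assert not q.allow(1.5)          # still blocked by block_interval
```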
High availability mode is automatically enabled when using a data store that supports it.
One node becomes the active node, while the rest are standbys. Standby nodes are not able to process client requests - requests are either forwarded to the active node (the default) or the client is redirected. An Enterprise license provides an option for read-only access on standby nodes.
The active node places a lock in the data store, with meta information about itself. If the storage backend does not support locking, a different storage can be configured separately for HA.
Server settings:
- `cluster_address` option in the listener block
- `cluster_addr` in general settings
- `api_addr` in general settings
Vault Agent is a client daemon that assists with application integration and secrets caching; its main features (auto-auth, caching, and templating) are described below.
# Start Vault agent
$ vault agent -config=<path>
Example configuration:
pid_file = "./pidfile"
vault {
address = "http://10.0.101.62:8200"
}
auto_auth {
method "aws" {
mount_path = "auth/aws"
config = {
type = "iam"
role = "app-role"
}
}
sink "file" {
config = {
path = "/home/ubuntu/vault-token-via-agent"
}
}
}
template {
source = "/home/ubuntu/customer.tmpl"
destination = "/home/ubuntu/customer.txt"
}
{{ with secret "secret/data/customers/acme" }}
Organization: {{ .Data.data.organization }}
ID: {{ .Data.data.customer_id }}
Contact: {{ .Data.data.contact_email }}
{{ end }}
- `vault` stanza specifies the connection settings to the Vault server
- `pid_file` - path to the file in which the agent's process ID (PID) should be stored
- `exit_after_auth` (default is `false`) - if set to true, the agent will exit with code 0 after a single successful auth
Auto-Auth automatically authenticates the client to Vault and manages the token renewal process for locally-retrieved dynamic secrets. Tokens written to the sink can be response-wrapped or encrypted, in which case the application has to unwrap or decrypt them itself.
auto_auth {
method "name" {
mount_path = "path/to/auth_method/on/vault/server"
config = {}
}
sink "file" {
config = {
path = "/path/to/local/file"
}
}
}
The `auto_auth` stanza defines the auth method to be used and its configuration. The `sink` stanza defines where to store the tokens and dynamic secrets obtained from the Vault server. The only available sink type right now is `file`.
Caching provides client-side caching of responses containing newly created tokens, or of leased secrets generated off of those tokens. It works with Auto-Auth to skip authentication. The agent also renews and revokes the cached secrets and tokens.
The agent automatically evicts cache entries based on request types and response codes, but eviction can also be triggered manually via the `/agent/v1/cache-clear` endpoint.
# Completely reset cache
$ curl --request POST --data '{ "type": "all" }' \
$VAULT_AGENT_ADDR/agent/v1/cache-clear
cache {
use_auto_auth_token = true
}
listener "tcp" {
address = "127.0.0.1:8100"
tls_disable = true
}
The listener block has a `require_request_header` setting which, if set to `true`, requires a proper header (`X-Vault-Request` set to `true`) on all requests.
Templating renders user-supplied templates, using the token generated by Auto-Auth. It uses Consul Template markup. Permissions can be set on the resulting file (Unix style); by default it inherits permissions from the parent directory.
It can also run an arbitrary command after rendering (generally under 30 seconds of execution time); some options are:
- set env variable
- update application
template {
source = "/path/to/source"
destination = "/path/to/rendered/file"
perm = "640"
}
Vault architecture consists of core, which is responsible for lifecycle management, processing requests, etc, and plugins with different functionalities:
- Authentication methods - use a trusted system (AWS IAM, LDAP/Active Directory, Kubernetes RBAC) to authenticate clients to Vault.
- Audit - stream out response/request to external system
- Storage backend - where Vault stores data at rest (AWS RDS, Consul); storage never sees the original secret, as it is encrypted by Vault
- Secrets engines - implementation of secrets, e.g. key/value pairs, the database plugin (for MySQL, PostgreSQL, Oracle, etc), which has a dynamic secret feature, RabbitMQ, AWS IAM, PKI (certificates), SSH, and so on.
Vault abstracts away the handling of secrets, making it possible to work with physical systems, databases, HSMs, AWS IAM, etc. Read/write/delete/list operations are forwarded to the corresponding secrets engine.
Secrets engine can work in one of the following ways:
- store sensitive data in Vault (storage backend)
- generate sensitive data (f.e. certificate)
- encrypt existing data
The path prefix is used to determine (longest prefix match) which secrets engine a request should be forwarded to. Paths (and thus engines) are completely isolated and cannot communicate with each other. A secrets engine can be moved (unlike an auth method), but all existing leases are revoked and policy paths need to be updated.
All engines are enabled at `/sys/mounts`. Disabling an engine also revokes all its secrets and removes the corresponding Vault data and configuration.
# List available engines
$ vault secrets list
# Enable
$ vault secrets enable <secret_engine>
$ vault secrets enable -path=<path> <secret_engine>
# Tune
$ vault secrets tune <path>
$ vault secrets tune -description="Custom description" <path>
# Check available configurations
$ vault path-help /sys/mounts/<path>/
# Change configurations (most likely path)
$ vault write <path>/config <parameter>=<value>
# Disable
$ vault secrets disable <path>
# Move
$ vault secrets move <source_path> <destination_path>
After a secrets engine is enabled, the `path-help` command can be used to get descriptions and available paths. It takes a path as an argument. Available paths can also be specified to dive deeper and get information about a specific path inside the secrets engine. A non-existent path can also be passed.
$ vault path-help aws
$ vault path-help aws/config/lease
$ vault path-help aws/creds/my-non-existent-role
Secrets engine mount parameters
Write data to secrets engine
$ vault write <secrets_engine_path>/<engine_specific_endpoint>
Dynamic secrets are a way to provide temporary credentials to clients (e.g. an application) instead of long-lived credentials (since an application could reveal secrets in logs, error reports, monitoring, and so on). Each client also gets a unique set of credentials, which makes it easy to revoke a compromised secret affecting only one client. Dynamic secrets are generated when they are accessed (they do not exist until they are read).
Simple key/value store. The KV secrets engine has 2 versions: `kv` and `kv-v2`, which is a versioned `kv` engine. By default the KV secrets engine (`kv-v2`) is enabled at `secret/` when running in dev mode. Version 1 can be upgraded to version 2, but not vice versa.
# Enable v1
$ vault secrets enable -version=1 kv
# Enable v2
$ vault secrets enable -version=2 kv
$ vault secrets enable kv-v2
Key/value pairs can be passed in 2 ways:
- inline
# Enter value inline or put a file's contents
$ vault kv put secret/<path> key=value key2=@file
# Prompt for value, press Ctrl+d to end
$ vault kv put secret/<path> key=-
- via JSON file
{
  "key": "value",
  "key2": "value"
}
$ vault kv put secret/customers/acme @data.json
Omitting the `version` parameter when reading secrets returns the latest version of the secret at the path. By default up to 10 versions are kept.
KV v2 includes key metadata, so the path is split between `data`, where the key and its value are located, and `metadata`, where the key metadata is located. E.g. a kv secrets engine enabled at the `secret` path with a secret whose key is `new` will have the following paths generated:
secret/data/new
secret/metadata/new
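The path rewriting can be sketched as a small helper (illustrative; this is the translation the `kv` subcommands do for you):

```python
def kv2_api_path(mount, key, metadata=False):
    """KV v2 rewrites `vault kv get <mount>/<key>` to an API path with
    /data/ (or /metadata/) inserted after the mount point."""
    return f"{mount}/{'metadata' if metadata else 'data'}/{key}"

assert kv2_api_path("secret", "new") == "secret/data/new"
assert kv2_api_path("secret", "new", metadata=True) == "secret/metadata/new"
```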
Creating secrets at the same path replaces the existing data (fields are not merged) and results in the next version. However, the `patch` command can be used to create a new version with merged fields:
$ vault kv patch secret/<path> key=value
# View specific version
$ vault kv get -version=1 secret/<path>
# Delete versions
$ vault kv delete -versions="4,5" secret/<path>
# Undelete version
$ vault kv undelete -versions=5 secret/<path>
# Permanently delete
$ vault kv destroy -versions=4 secret/<path>
# Delete all versions
$ vault kv metadata delete secret/<path>
# View metadata of secret
$ vault kv metadata get secret/<path>
# View configurations
$ vault read secret/config
# Set maximum number of versions to keep
$ vault write secret/config max_versions=4
# Configure specific path
$ vault kv metadata put -max-versions=4 secret/<path>
# Set automatic deletion on a path (not destroyed)
$ vault kv metadata put -delete-version-after=40s secret/<path>
Supports a Check-And-Set operation to prevent unintentional overwrites. It can be enabled on the secrets engine or on a specific path (disabled by default). Once enabled, `-cas=<number>` is required to write, and the number must match the current version; `-cas=0` asserts that no secret exists at the path yet.
# Enable
$ vault write secret/config cas-required=true
$ vault kv metadata put -cas-required=true secret/<path>
# Write data
$ vault kv put -cas=1 secret/<path> key=value
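The check-and-set semantics can be modeled with a toy versioned store (illustrative only): a write must name the current version, or 0 if the key must not exist yet, and is otherwise rejected.

```python
class KvV2:
    """Toy model of KV v2 check-and-set writes."""
    def __init__(self):
        self.versions = {}            # key -> list of stored values

    def put(self, key, value, cas):
        current = len(self.versions.get(key, []))
        if cas != current:            # stale or wrong version: reject the write
            raise ValueError(f"check-and-set failed: current version is {current}")
        self.versions.setdefault(key, []).append(value)
        return current + 1            # the new version number

store = KvV2()
assert store.put("app", {"pw": "a"}, cas=0) == 1   # -cas=0: key must not exist
assert store.put("app", {"pw": "b"}, cas=1) == 2   # write against version 1
try:
    store.put("app", {"pw": "c"}, cas=1)           # stale version: rejected
    assert False, "expected ValueError"
except ValueError:
    pass
```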
Acts as a special "locker" for secrets. Used for response wrapping. Namespaced under particular token.
Enabled by default at the `cubbyhole` path. It cannot be disabled or removed. All tokens can read and write to cubbyhole via the `default` policy.
Can be used as private storage - only the token that created a secret can access it (not even the root token can). If that token expires or is revoked, all of its secrets are revoked as well.
$ vault write cubbyhole/<PATH> <KEY>=<VALUE>
$ vault read cubbyhole/<PATH>
Identity (client) management solution for Vault. Enabled by default, can not be disabled, moved or duplicated. Each client is mapped as an entity, while each account (or auth method object) is mapped as an alias. An entity may have multiple aliases associated with it. Policies can be attached to alias and/or to entity. Capabilities granted to token via entities are an addition to existing capabilities.
The alias name, in combination with the authentication backend mount's accessor, serves as the unique identifier of an alias (each auth method documents how it forms aliases).
Policies attached directly to token generally act as static policies, since association is updated only at creation or renewal time. With identities each request is assessed to check which policies are assigned, thus, resulting in more dynamic policy association.
Setup example:
- create 3 policies: base, first account and second account
- get mount accessor of auth method used for each account (example below
gets value for userpass auth method)
$ vault auth list -format=json | jq -r '.["userpass/"].accessor' > accessor.txt
- create an entity (attach base policy, add custom metadata)
$ vault write identity/entity name=<entity_name> \
    policies=<base_policy_name> \
    metadata=organization="ACME Inc." \
    metadata=team="QA"
- attach account to entity (previous command outputs entity_id)
$ entity_id=$(vault read identity/entity/name/<entity_name> -format=json | jq .data.id -r)
$ vault write identity/alias name=<account_username> \
    canonical_id=<entity_id> \
    mount_accessor=<accessor>
- review entity details
$ vault read identity/entity/id/<entity_id>
Auth method | Property used for name parameter (when creating alias)
---|---
Userpass | name
AppRole | role-id
Groups are used to provide policies to multiple entities. A group can also
contain subgroups. A group created in the identity store is an internal group
(membership managed by Vault). Specify member_entity_ids and/or
member_group_ids to add members.
Setup example:
- create and upload group policy
- get entity id
$ vault read identity/entity/name/<entity_name> -format=json | jq .data.id -r
- create group, attach policy and member IDs (entity ID)
$ vault write identity/group name=<group_name> \
    policies=<group_policy> \
    member_entity_ids=<entity_id> \
    metadata=team="Engineering" \
    metadata=region="North America"
An external group can be created to serve as a mapping for a group outside of the identity store (f.e. LDAP, GitHub, etc). An external group can have only one alias. Changes made in the outside group are reflected in Vault only upon a subsequent login or renewal operation.
Setup example (GitHub):
- get mount accessor for GitHub auth method:
$ vault auth list -format=json | jq -r '.["github/"].accessor' > accessor.txt
- configure to point to your organization
$ vault write auth/github/config organization=<organization_name>
- create external group
$ vault write identity/group name=<group_name> \
    policies=<policy_name> \
    type="external" \
    metadata=organization="Product Education"
- create group alias
$ vault write identity/group-alias name=<group_alias_name> \
    mount_accessor=$(cat accessor.txt) \
    canonical_id="<group_ID>"
Provides encryption as a service (does not store data). List of supported key types.
Supported actions:
- encrypt/decrypt
- sign and verify
- generate hashes
- create random bytes
Create a named cryptographic key. Use the -f (force) flag, since no data
payload is needed - Vault generates the key material itself (the same applies
to the rotate operation).
Each named key is actually a keyring, where each entry is a key version. A new version is created by rotating the key. Future encryption requests use the newest key; old data can still be decrypted by older keys in the keyring.
Already-encrypted data can be upgraded to a new key - Vault decrypts it with the appropriate key version and encrypts the resulting plaintext with the newest key (the rewrap operation). The plaintext is never revealed to the client.
Encryption key configuration can be changed to specify the minimum version allowed to encrypt data, to decrypt data, whether the key is allowed to be deleted, etc. If data was encrypted with a version of the key that is no longer available (because of configuration and subsequent key rotations), the client should rewrap it. Available versions of a key are referred to as the working set (loaded into memory); archived keys are kept in storage. min_encryption_version set to 0 means any key version can be used for encryption.
# Create a key
$ vault write -f transit/keys/<key_name>
# List keys
$ vault list transit/keys
# Read key info
$ vault read transit/keys/<key_name>
# Update configuration
$ vault write transit/keys/<key_name>/config <parameter>=<value>
# Rotate a key
$ vault write -f transit/keys/<key_name>/rotate
# Remove old key versions
$ vault write transit/keys/<key_name>/trim min_available_version=<n>
# Remove all keys before version 5
$ vault write transit/keys/<key_name>/trim min_available_version=5
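As an illustration of the configuration endpoint above, old key versions can be retired and deletion enabled (the key name my-key is hypothetical):

```shell
# Require at least version 3 of the key for decryption;
# ciphertext produced by older versions must be rewrapped first
$ vault write transit/keys/my-key/config min_decryption_version=3
# Allow the key to be deleted (disabled by default)
$ vault write transit/keys/my-key/config deletion_allowed=true
```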
Encrypt plaintext data with a named key. All plaintext data must be
base64-encoded, which also allows Vault to accept binary data, such as a PDF.
Returns ciphertext with a vault:v1 prefix - vault identifies that it
was wrapped by Vault, v1 indicates which key version was used (when a key is
rotated, the version is incremented). The rest is a base64 concatenation of the
initialization vector (IV) and ciphertext.
Vault API imposes a maximum request size of 32MB to prevent a denial of service attack. This can be tuned per listener block in the Vault server configuration.
Decrypt action returns base64-encoded string.
When a large amount of data needs to be encrypted, a data key can be requested instead. Vault returns both the plaintext of this data key and its ciphertext, so the client can encrypt the data locally and save the data key's ciphertext alongside the encrypted data. When the data needs to be decrypted, the client asks Vault to decrypt the data key's ciphertext and recovers the data key.
# Encrypt
$ vault write transit/encrypt/<key_name> plaintext=$(base64 <<< "my secret data")
# Decrypt
$ vault write transit/decrypt/<key_name> ciphertext=<...>
$ base64 --decode <<< "plaintext_value"
# Rewrap data
$ vault write transit/rewrap/<key_name> ciphertext=<...>
# Request datakey
$ vault write -f transit/datakey/plaintext/<key_name>
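End to end, the data-key workflow above might look like the following sketch; the key name, file names, and the use of openssl for the client-side encryption step are all illustrative:

```shell
# Ask Vault for a new data key; returns both plaintext and ciphertext versions
$ vault write -f transit/datakey/plaintext/my-key
# Encrypt a large file locally with the plaintext data key (client-side)
$ openssl enc -aes-256-cbc -in large-file.pdf -out large-file.enc -pass pass:<plaintext_key>
# Store only the data key's ciphertext alongside large-file.enc,
# then discard the plaintext key from memory/disk
# Later: ask Vault to decrypt the data key, then decrypt the file locally
$ vault write transit/decrypt/my-key ciphertext=<datakey_ciphertext>
```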
The database secrets engine generates database credentials dynamically based on configured roles. A static role is a 1-to-1 mapping of a Vault role to a username in a database - only the password is rotated. In contrast, dynamic secrets generate a unique username and password for each credential request. Not all database types support static roles.
Workflow:
- client authenticates with Vault
- Vault creates user with credentials via SQL query
- Vault generates dynamic credentials
- User authenticates with dynamic credentials
Setup:
- enable database secrets engine
$ vault secrets enable database
- configure database secrets engine (choose appropriate plugin based on
database); provided database credentials are used by Vault to create dynamic
credentials - it is recommended to rotate password, so that only Vault can
use these credentials; optional password policy and
username template
settings can be added
$ vault write database/config/<database_name> \
    plugin_name="..." \
    connection_url="..." \
    allowed_roles="..." \
    username="..." \
    password="..." \
    password_policy=<policy_name> \
    username_template="..."
$ vault write -f database/rotate-root/<database_name>
# Example
$ vault write database/config/<config_name> \
    plugin_name=mysql-database-plugin \
    connection_url="{{username}}:{{password}}@tcp(mariadb:3306)/" \
    allowed_roles="<role_one>,<role_two>" \
    username="root" \
    password="mysql"
- configure a role that maps a name in Vault to a set of creation statements to
create database credentials (the {{name}} and {{password}} fields will be
populated by the plugin with dynamically generated values)
$ vault write database/roles/<role_name> \
    db_name=<database_name> \
    creation_statements="..." \
    default_ttl="1h" \
    max_ttl="24h"
# Examples
$ vault write database/roles/<role_one> \
    db_name=<database_name> \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT SELECT ON *.* TO '{{name}}'@'%';" \
    default_ttl="1h" \
    max_ttl="24h"
$ vault write database/roles/<role_two> \
    db_name=<database_name> \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT ALL ON *.* TO '{{name}}'@'%';" \
    default_ttl="1h" \
    max_ttl="24h"
- get credentials (identified within Vault by lease_id)
$ vault read database/creds/<role_one>
Databases can optionally set a password policy for use across all roles for that database.
- create role in database
CREATE ROLE ro NOINHERIT;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "ro";
- write SQL creation statement to
readonly.sql
fileCREATE ROLE "{{name}}" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}' INHERIT; GRANT ro TO "{{name}}";
- create a Vault role
$ vault write database/roles/readonly \
    db_name=postgresql \
    creation_statements=@readonly.sql \
    default_ttl=1h \
    max_ttl=24h
Configure access to Consul server:
$ vault write <path>/config/access address=<url> token=<consul_token>
Create a role that binds to Consul role
$ vault write <path>/roles/<role_name> name=<role_name> policies=<consul_policy>
Get credentials
$ vault read <path>/creds/<role_name>
On Vault server:
- enable ssh secrets engine
$ vault secrets enable ssh
- create a role (cidr_list of 0.0.0.0/0 allows all hosts)
$ vault write ssh/roles/<role_name> key_type=otp default_user=<username> cidr_list=0.0.0.0/0
- get otp
$ vault write ssh/creds/<role_name> ip=<ssh_server>
On ssh server:
- download Vault SSH helper
- create vault helper config file,
config.hcl
(on SSH server)
vault_addr = "http://vaultserver:8200"
ssh_mount_point = "ssh"
tls_skip_verify = true # local experimentation
allowed_roles = "*" # roles can be specified
- verify connection to Vault server:
$ vault-ssh-helper -dev -verify-only -config=path/to/config.hcl
- modify the following lines in /etc/pam.d/sshd to disable common
authentication and enable OTP
#@include common-auth
auth requisite pam_exec.so quiet expose_authtok log=/tmp/vaultssh.log /usr/local/bin/vault-ssh-helper -dev -config=path/to/config.hcl
auth optional pam_unix.so not_set_pass use_first_pass nodelay
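With the helper configured, a login might look like this (the host address is illustrative):

```shell
# Request an OTP for the target host
$ vault write ssh/creds/<role_name> ip=192.168.1.10
# Use the returned key value as the password when connecting
$ ssh <username>@192.168.1.10
# Alternatively, let Vault request the OTP and open the session itself
$ vault ssh -role <role_name> -mode otp <username>@192.168.1.10
```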
Example policy with required IAM permissions for Vault.
Configure secrets engine:
$ vault write aws/config/root \
access_key=$AWS_ACCESS_KEY_ID \
secret_key=$AWS_SECRET_ACCESS_KEY \
region=us-east-1
Roles are used to tell Vault what permissions to attach to the user. An AWS IAM policy is attached to a Vault role, so when credentials for that role are requested, an IAM user is created and the IAM policy is attached to it.
$ vault write aws/roles/my-role \
credential_type=iam_user \
policy_document=-<<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1426528957000",
"Effect": "Allow",
"Action": [
"ec2:*"
],
"Resource": [
"*"
]
}
]
}
EOF
Each time the request is made, Vault creates a new user with new credentials.
The lease_id value is used for renewal, revocation, and inspection.
$ vault read aws/creds/my-role
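Since dynamic credentials are tracked by lease, they can be managed like any other lease; lease_id here is the full path returned by the read above (f.e. aws/creds/my-role/<id>):

```shell
# Inspect a lease (TTL, renewability)
$ vault lease lookup <lease_id>
# Extend a lease to keep using the same credentials
$ vault lease renew <lease_id>
# Revoke the credentials immediately (deletes the generated IAM user)
$ vault lease revoke <lease_id>
```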
Before a client can interact with Vault, it must authenticate against an auth method. Upon authentication, a token is generated. The token may have policy attached, which is mapped at authentication time.
Identities have leases associated with them. The user must reauthenticate after
the given lease period. Tokens can be renewed to avoid reauthentication:
vault token renew <token>.
Auth method is available at auth/<path>
after being enabled, f.e.
auth/github/
. However, they are enabled at /sys/auth
, thus, user that
enables auth method needs access to that path. Default path is auth method
name.
Userpass, AppRole, and Token (enabled by default) auth methods are built-in.
All CLI examples below will use default path (name of method), but it can be set
to anything else with -path
parameter when auth method is being enabled.
Credentials are RoleID
and SecretID
(similar to username and password).
SecretID
can be constrained by CIDR address.
Setup:
- enable AppRole
- create policy, f.e.
# Login with AppRole
path "auth/approle/login" {
  capabilities = ["create", "read"]
}

# Read secrets
path "secrets/dev/*" {
  capabilities = ["read"]
}
- upload policy to Vault
- create role
$ vault write auth/approle/role/<role_name> policies=<policy_name>
- get role's credentials
$ vault read auth/approle/role/<role_name>/role-id
$ vault write -f auth/approle/role/<role_name>/secret-id
- log in
$ vault write auth/approle/login role_id=<role_id> secret_id=<secret_id>
# Create user
$ vault write auth/userpass/users/<username> password=<password>
If an auth method is defined at a custom path, specify -path with vault login
as well. All commands that are only used in the context of auth methods (such
as vault auth tune, vault auth disable, etc) can work with the short version of
the path, f.e. custom instead of auth/custom.
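For example, with userpass enabled at a custom path (the path name my-userpass is hypothetical):

```shell
# Enable the method at a custom path
$ vault auth enable -path=my-userpass userpass
# Log in against the custom path
$ vault login -method=userpass -path=my-userpass username=<username>
# Auth-method-only commands accept the short path
$ vault auth tune -default-lease-ttl=1h my-userpass
```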
# Get help on auth method
$ vault auth help <method>
# List available auth methods
$ vault auth list
# Enable auth method
$ vault auth enable <method>
$ vault auth enable -path=<path> <method>
$ vault write sys/auth/<path> type=<auth_method>
# Login with interactive auth methods
$ vault login -method=<method> [auth specific key/value pairs]
# Userpass
$ vault login -method=userpass username=<username>
# Login programmatically
$ vault write auth/<path>/login[/auth_specific_path] [auth specific key/value pairs]
# Remove all tokens generated by auth method
$ vault token revoke -mode path auth/github
# Disable auth method (also deletes all data stored by auth method)
$ vault auth disable <path>
After being enabled, an auth method can be further configured or tuned. Tuning
sets general settings that all methods share, such as default TTL or
description. Configuration is method specific; run path-help to find out
endpoints and their parameters.
# Set configurations - GitHub example
$ vault write auth/github/config organization=<name>
# Tune auth method
$ vault auth tune -description="<custom text>" <path>
Each auth method implements its own login endpoint. Use the vault path-help
mechanism to find the proper endpoint, f.e. GitHub's is auth/github/login. To
determine available arguments, run path-help on the full path including the
login endpoint.
All access to secrets can be audited. Records are written to enabled audit devices (f.e. file, syslog, or socket). If auditing is enabled, failure to audit will prevent secret access.
Vault is configured using HCL or JSON file or environment variables. Basic block and configurations:
listener "tcp" {}
storage "type" {}
seal "type" {}
service_registration "type" {} # Register Vault service with Consul or K8s
telemetry {}
ui = [true | false]
disable_mlock = [true | false]
cluster_addr = "https://address:port"
api_addr = "https://address:port"
Simple example with raft
storage:
storage "raft" {
path = "./vault/data"
node_id = "node1"
}
listener "tcp" {
address = "127.0.0.1:8200"
tls_disable = "true"
}
api_addr = "http://127.0.0.1:8200"
cluster_addr = "https://127.0.0.1:8201"
ui = true
Initialization is the process of configuring Vault. This only happens once when the server is started against a new backend that has never been used with Vault before. When running in HA mode, this happens once per cluster, not per server. During initialization, the encryption keys are generated, unseal keys are created, and the initial root token is set up.
# Start Vault server with configuration file
$ vault server -config=<path_to_config>
# Check server status
# Does not require authorization
$ vault status
# Initialize Vault
$ vault operator init
operator init
outputs the following info only once:
- unseal keys
- initial root token
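After initialization the server is sealed; a minimal sketch of unsealing and logging in, assuming the default 5 key shares with a threshold of 3:

```shell
# Repeat with different unseal keys until the threshold is reached
$ vault operator unseal <unseal_key_1>
$ vault operator unseal <unseal_key_2>
$ vault operator unseal <unseal_key_3>
# Authenticate with the initial root token
$ vault login <root_token>
```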
api_addr indicates the address (full URL) to advertise for routing client
requests (what clients should use to communicate with the current node).
cluster_addr
indicates the address (full URL) to be used for communication
between the Vault nodes in a cluster (what other nodes in the cluster should
use to communicate with current node).
ui setting activates the UI (automatically enabled in the dev server). Runs on
the same port as the Vault listener and is accessible at <host>:<port>/ui.
disable_mlock - mlock prevents memory from being swapped to disk. The setting
is false by default (mlock enabled). mlock is only available on Unix-like
systems and is disabled on other platforms. Disabling mlock is acceptable if
swap is encrypted, and is recommended for integrated storage (raft).
service_registration
is used to register Vault service with Consul or k8s.
Physical backend that Vault uses for storing all of its data, including the configuration of audit devices, auth methods, and secrets engines.
Storage types:
- object (AWS S3)
- database (MySQL)
- key/value (HashiCorp Consul)
- file (local filesystem)
- memory (used by the dev server by default)
- integrated storage (raft)
Officially HashiCorp supported options are Consul and Raft.
Saves data in the local filesystem.
Uses the raft consensus protocol and local storage. Meant to be run as a cluster, since data is replicated between nodes. The storage directory must pre-exist.
storage "raft" {
path = "/path/to/directory"
node_id = "unique_node_identifier"
retry_join {
leader_api_addr = "https://node1.vault.local:8200"
# optional, specify path to certificate
leader_ca_cert_file = "/path/to/file"
}
}
retry_join
block is used to specify an existing member of a cluster for
current node to contact. Add block for each member of the cluster, including
the node on which it is configured (so that it can recognize itself, when it is
the first node in the cluster).
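As an alternative to retry_join, a node can also be joined and the cluster inspected from the CLI (the leader address is illustrative):

```shell
# Join this node to an existing raft cluster via the leader's API address
$ vault operator raft join https://node1.vault.local:8200
# Inspect cluster membership (node IDs, addresses, voter status)
$ vault operator raft list-peers
```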
One or more listeners determine how Vault listens for API requests. Currently
the only supported listener type is tcp
.
listener "tcp" {
# define address on which Vault is listening
# default port is 8200
address = "127.0.0.1:8200"
# if node is part of a cluster
# address for cluster communication
# default port is listening port plus 1
cluster_address = "127.0.0.1:8201"
# optional, path to TLS public certificate and its key
tls_cert_file = "path/to/public/cert.crt"
tls_key_file = "path/to/private/cert.key"
}
A TLS-enabled connection requires a certificate file and key file on each Vault
host. To disable TLS, set the tls_disable option to 1 or "true".
Optional stanza. Required when setting up Auto-unseal.
seal "type_name" {}
Not enabled by default in production mode. Runs on the same port as API, f.e.
http://127.0.0.1:8200/ui
.
To launch API explorer check learn_ui_api.hcl
policy, launch web CLI and type
api
in search bar.
Interpret /sys/mounts
output
$ jq -r '.data | keys' response_<unix_timestamp>.json
Install auto-complete (adds setup to shell profile file):
$ vault -autocomplete-install
Construct commands in the following way (options precede path and its arguments):
$ vault <command> [options] [path] [args]
Parameters that include time period default to integer seconds. Can also
specify a string duration of minutes, hours or seconds - 1h
, 20m
, 30s
.
Units can be mixed in any order - 35m2h10s.
If the VAULT_*
environment variables are set, the autocompletion will
automatically query the Vault server and return helpful argument suggestions.
Get help:
vault <command> -help
vault path-help PATH
To output just the value of desired field use -field
option with the
corresponding field name, f.e.
# Output just token to save it to file or environment variable
$ vault token create -field=token
The default token helper stores (caches) the token in the ~/.vault-token file
after authentication.
Some commands can read data from stdin. If -
is the entire argument, Vault
expects to read a JSON object, otherwise it expects just a value:
$ echo -n '{"value":"itsasecret"}' | vault kv put secret/password -
$ echo -n "itsasecret" | vault kv put secret/password value=-
Similar to stdin, Vault can also read data from files:
$ vault kv put secret/password @data.json
$ vault kv put secret/password value=@data.txt
- VAULT_TOKEN - Vault authentication token; overrides the value stored by the token helper (saved by vault login)
- VAULT_ADDR - address of a Vault server as URL and port, f.e. http://127.0.0.1:8200/
- VAULT_FORMAT - provide Vault output (read/status/write) in the specified format; valid formats are "table", "json", or "yaml"
- VAULT_SKIP_VERIFY - skip TLS certificate verification, f.e. if TLS is enabled with a self-signed certificate
Rekeying Vault is the process of changing values for unseal and master key (can also change seal settings as part of it). Requires unseal key shares or recovery key shares to perform it.
Rotating encryption keyring is adding a new encryption key, but also saving previous versions. New key is going to be used with future operations with data. Doesn't require unseal or recovery key, but need to be logged in with an identity that has permissions to perform this action.
# Initialize rekey process
# Optionally update seal settings
$ vault operator rekey -init -key-shares=5 -key-threshold=3
# Continue rekey process by supplying unseal or recovery key shares
$ vault operator rekey <key>
# Check the encryption key status
$ vault operator key-status
# Rotate key in keyring
$ vault operator rotate
Useful endpoints:
- /sys/host-info - underlying host system information
- /sys/mounts - listing of enabled secrets engines
Include X-Vault-Token
header with token value when running API requests,
f.e. cURL. General form - VAULT_ADDR/v1/<endpoint>
(as of now the only
available version is v1
).
To get the cURL equivalent of the CLI command add -output-curl-string
flag,
f.e. vault auth enable -output-curl-string approle
.
Example workflow:
- get initial root token (and unseal key):
$ curl \
    --request POST \
    --data '{"secret_shares": 1, "secret_threshold": 1}' \
    http://127.0.0.1:8200/v1/sys/init | jq
- unseal Vault (take the value of keys_base64 from the output of the command above):
$ curl \
    --request POST \
    --data '{"key": "/ye2PeRrd/qruh9Ppu9EyUjk1vLqIflg1qqw6w9OE5E="}' \
    http://127.0.0.1:8200/v1/sys/unseal | jq
- validate initialization
$ curl http://127.0.0.1:8200/v1/sys/init
- enable auth method
$ curl \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    --request POST \
    --data '{"type": "approle"}' \
    http://127.0.0.1:8200/v1/sys/auth/approle
- create policy
$ curl \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    --request PUT \
    --data '{"policy":"# Dev servers have version 2 of KV secrets engine mounted by default, so will\n# need these paths to grant permissions:\npath \"secret/data/*\" {\n  capabilities = [\"create\", \"update\"]\n}\n\npath \"secret/data/foo\" {\n  capabilities = [\"read\"]\n}\n"}' \
    http://127.0.0.1:8200/v1/sys/policies/acl/my-policy
- enable KV v2 (since the policy expects that path)
$ curl \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    --request POST \
    --data '{ "type":"kv-v2" }' \
    http://127.0.0.1:8200/v1/sys/mounts/secret
- associate AppRole tokens with the new policy
$ curl \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    --request POST \
    --data '{"policies": ["my-policy"]}' \
    http://127.0.0.1:8200/v1/auth/approle/role/my-role
- get RoleID and SecretID
$ curl \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    http://127.0.0.1:8200/v1/auth/approle/role/my-role/role-id | jq -r ".data"
$ curl \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    --request POST \
    http://127.0.0.1:8200/v1/auth/approle/role/my-role/secret-id | jq -r ".data"
- fetch Vault token
$ curl \
    --request POST \
    --data '{"role_id": "3c301960-8a02-d776-f025-c3443d513a18", "secret_id": "22d1e0d6-a70b-f91f-f918-a0ee8902666b"}' \
    http://127.0.0.1:8200/v1/auth/approle/login | jq -r ".auth"
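The client token returned by the login call can then be used to exercise the policy, f.e. writing a KV v2 secret (the token value and secret path are illustrative):

```shell
$ curl \
    --header "X-Vault-Token: <client_token>" \
    --request POST \
    --data '{ "data": {"password": "my-long-password"} }' \
    http://127.0.0.1:8200/v1/secret/data/creds | jq -r ".data"
```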
- Replication
- HSM support
- Automated integrated storage snapshots
- Lease count quotas
- Entropy augmentation
- Seal wrap
- Namespaces
- Control groups
- MFA
- Sentinel
Enterprise only feature. Cluster is a unit of replication. One cluster is primary, while secondary clusters receive data (leader/follower model). Replication is asynchronous - primary cluster does not wait for acknowledgement from secondary clusters.
Replication options:
- disaster recovery - all data is replicated including tokens and leases, however, secondary clusters do not serve any requests from clients
- performance - each cluster keeps track of its own tokens and leases, while configurations, policies and supporting secrets (K/V values, encryption keys for Transit engine, etc) are replicated; data modification requests are forwarded to the primary cluster
Using replication requires a storage backend that supports transactional updates, such as Consul.
Namespaces are isolated environments that functionally exist as Vaults within a Vault. They have separate login paths and support creating and managing data isolated to their namespace. Namespaces can be nested. Child namespace can reference parent identities in their own policies; similarly parent namespace can reference child identities.
The API request path and the X-Vault-Namespace header can be used to specify
the namespace. The full namespace can be given in the API request path or in
the header. Alternatively, a partial namespace can be given in the header,
with the remaining part in the request path (the examples below are
identical):
- request: ns1/ns2/secret/foo
- request: secret/foo, header: ns1/ns2/
- request: ns2/secret/foo, header: ns1/
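In an API request, the first two equivalents above might look like this (the namespace names and secret path are taken from the examples):

```shell
# Full namespace in the request path
$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
    $VAULT_ADDR/v1/ns1/ns2/secret/foo
# Full namespace in the header instead
$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
    --header "X-Vault-Namespace: ns1/ns2/" \
    $VAULT_ADDR/v1/secret/foo
```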
API paths that can be called only from root namespace:
- sys/init
- sys/license
- sys/leader
- sys/health
- sys/metrics
- sys/config/state
- sys/host-info
- sys/key-status
- sys/storage
- sys/storage/raft