5. Object Storage Service
- proxy-server installed on the controller node (node01)
- account, container, and object servers installed on node02, node03, node04
Create a swift user that the Object Storage Service can use to authenticate with the Identity Service. Choose a password (replace $SWIFT_PASS
with it) and specify an email address for the swift user. Use the service tenant and give the user the admin role:
$ keystone user-create --name=swift --pass=$SWIFT_PASS [email protected]
$ keystone user-role-add --user=swift --tenant=service --role=admin
Create a service entry for the Object Storage Service:
$ keystone service-create --name=swift --type=object-store --description="OpenStack Object Storage"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Object Storage |
| id | eede9296683e4b5ebfa13f5166375ef6 |
| name | swift |
| type | object-store |
+-------------+----------------------------------+
Specify an API endpoint for the Object Storage Service by using the returned service ID. When you specify an endpoint, you provide URLs for the public API, internal API, and admin API. In this guide, the controller host name is used:
$ keystone endpoint-create --service-id=$(keystone service-list | awk '/ object-store / {print $2}') --publicurl='http://controller:8080/v1/AUTH_%(tenant_id)s' --internalurl='http://controller:8080/v1/AUTH_%(tenant_id)s' --adminurl=http://controller:8080
+-------------+---------------------------------------------------+
| Property | Value |
+-------------+---------------------------------------------------+
| adminurl | http://controller:8080/ |
| id | 9e3ce428f82b40d38922f242c095982e |
| internalurl | http://controller:8080/v1/AUTH_%(tenant_id)s |
| publicurl | http://controller:8080/v1/AUTH_%(tenant_id)s |
| region | regionOne |
| service_id | eede9296683e4b5ebfa13f5166375ef6 |
+-------------+---------------------------------------------------+
Install the swift packages for the controller node:
# yum install openstack-swift-proxy python-swiftclient python-keystone-auth-token python-keystonemiddleware memcached
Obtain the proxy service configuration file from the Object Storage source repository:
curl -o /etc/swift/proxy-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/proxy-server.conf-sample
Edit the /etc/swift/proxy-server.conf file and complete the following sections:
[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift
[pipeline:main]
pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server
[app:proxy-server]
...
allow_account_management = true
account_autocreate = true
[filter:keystoneauth]
use = egg:swift#keystoneauth
...
operator_roles = admin,_member_
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = swift
admin_password = $SWIFT_PASS
delay_auth_decision = true
Replace $SWIFT_PASS with the password you chose for the swift user in the Identity service.
[filter:cache]
...
memcache_servers = 127.0.0.1:11211
NOTE Some of these sections may be commented out in the configuration file, so be careful and uncomment them if necessary.
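As an alternative to editing the file by hand, the same options can be set from the command line; a minimal sketch, assuming the openstack-utils package (which provides the openstack-config helper) is installed:
# openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_password $SWIFT_PASS
# openstack-config --set /etc/swift/proxy-server.conf filter:keystoneauth operator_roles admin,_member_
Each call writes one key into the named section, adding the key if it is not already present.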
Next, set up your storage nodes and proxy node. This example uses the Identity Service for the common authentication piece.
NOTE This section describes how to install and configure storage nodes that operate the account, container, and object services. For simplicity, this configuration references three storage nodes (node02, node03, and node04), each containing two empty local block storage devices. Each of the devices, /dev/sdb and /dev/sdc, must contain a suitable partition table with one partition occupying the entire device. Although the Object Storage service supports any file system with extended attributes (xattr), testing and benchmarking indicate the best performance and reliability on XFS.
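One way to lay down that single full-device partition is with parted; a minimal sketch, assuming empty /dev/sdb and /dev/sdc disks as above (this is destructive to any existing data on the disks):
# parted -s /dev/sdb mklabel msdos mkpart primary xfs 0% 100%
# parted -s /dev/sdc mklabel msdos mkpart primary xfs 0% 100%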
Install the supporting utility packages:
yum install xfsprogs rsync
On each storage node format the /dev/sdb1 and /dev/sdc1 partitions as XFS:
# mkfs.xfs /dev/sdb1
# mkfs.xfs /dev/sdc1
Create the mount point directory structure:
# mkdir -p /srv/node/sdb1
# mkdir -p /srv/node/sdc1
Edit the /etc/fstab file and add the following to it:
/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
Mount the devices:
# mount /srv/node/sdb1
# mount /srv/node/sdc1
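After mounting, make sure the swift user owns the mount point tree so the storage services can write to it (the same ownership step appears again later in this guide):
# chown -R swift:swift /srv/node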
Moving on with the configuration, edit the /etc/rsyncd.conf file and add the following to it:
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = $MANAGEMENT_INTERFACE_IP_ADDRESS
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
Replace $MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.
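For example, on node02 this line might read as follows; the 10.10.10.12 address is an assumption taken from the ring-builder commands later in this guide:
address = 10.10.10.12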
Start the rsyncd service and configure it to start when the system boots:
# systemctl enable rsyncd.service
# systemctl start rsyncd.service
Install the account, container, and object service packages on each storage node:
# yum install openstack-swift-account openstack-swift-container openstack-swift-object
Obtain the account, container, and object service configuration files from the Object Storage source repository:
# curl -o /etc/swift/account-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/account-server.conf-sample
# curl -o /etc/swift/container-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/container-server.conf-sample
# curl -o /etc/swift/object-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/object-server.conf-sample
Edit the /etc/swift/account-server.conf file and complete the following actions:
- In the [DEFAULT] section:
[DEFAULT]
...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node
- In the [pipeline:main] section, enable the appropriate modules:
[pipeline:main]
pipeline = healthcheck recon account-server
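The container and object servers are configured along the same lines; a minimal sketch for the corresponding sections of /etc/swift/container-server.conf and /etc/swift/object-server.conf, assuming the default Juno ports (6001 for the container server and 6000 for the object server, matching the ring-builder commands later in this page):
[DEFAULT]
...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = healthcheck recon container-server
In object-server.conf use bind_port = 6000 and pipeline = healthcheck recon object-server instead.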
For each device on the node that you want to use for storage, set up the XFS volume (/dev/sdc is used as an example). Use a single partition per drive. For example, in a server with 12 disks you may use one or two disks for the operating system, which should not be touched in this step. The other 10 or 11 disks should be partitioned with a single partition, then formatted in XFS.
# fdisk /dev/sdc
# mkfs.xfs /dev/sdc1
# echo "/dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
# mkdir -p /srv/node/sdc1
# mount /srv/node/sdc1
# chown -R swift:swift /srv/node
NOTE To create a partition on the disk(s) that you will use for swift, run fdisk /dev/$disk, where $disk is the identifier of the disk (for example /dev/sdb), then use n to create a new primary partition and w to write it and exit.
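The same partitioning can be scripted; a non-interactive sketch that feeds the n/p/w answers to fdisk, assuming /dev/sdb and accepting the default first and last sectors:
# printf "n\np\n1\n\n\nw\n" | fdisk /dev/sdb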
After defining the disks that are going to be used for swift, install rsync and xinetd:
# sudo yum install -y rsync
# sudo yum install -y xinetd
To enable rsync, edit /etc/xinetd.d/rsync (instead of /etc/default/rsync, which is not used on CentOS) as follows:
# default: off
# description: The rsync server is a good addition to an ftp server, as it \
# allows crc checksumming etc.
service rsync
{
disable         = no    # change yes to no
flags           = IPv6
socket_type     = stream
wait            = no
user            = root
server          = /usr/bin/rsync
server_args     = --daemon
log_on_failure  += USERID
}
Then create the /etc/rsyncd.conf file (replace $STORAGE_LOCAL_NET_IP with the IP address of the node):
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = $STORAGE_LOCAL_NET_IP
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
(Optional) If you want to move rsync replication traffic onto a dedicated replication network, set $STORAGE_REPLICATION_NET_IP instead of $STORAGE_LOCAL_NET_IP:
address = $STORAGE_REPLICATION_NET_IP
Since rsync runs under xinetd here, there is no /etc/default/rsync to edit; instead, enable xinetd at boot and start it so the rsync configuration above takes effect:
# systemctl enable xinetd.service
# systemctl start xinetd.service
[Note] The rsync service requires no authentication, so run it on a local, private network.
Create the swift recon cache directory and set its permissions:
# mkdir -p /var/swift/recon
# chown -R swift:swift /var/swift/recon
The proxy server takes each request, looks up locations for the account, container, or object, and routes the requests correctly. The proxy server also handles API requests. You enable account management by configuring it in the /etc/swift/proxy-server.conf file.
[Note] The Object Storage processes run under a separate user and group, set by configuration options, and referred to as swift:swift. The default user is swift.
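Once the swift packages are installed, you can confirm that this user and group exist with:
# id swift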
On the controller node (which acts as the proxy node), install the swift packages:
sudo yum install -y swift
sudo yum install -y openstack-swift-proxy.noarch
sudo yum install -y python-swiftclient.noarch
Modify memcached to listen on the default interface on a local, non-public network. On CentOS 7 the memcached settings are in the /etc/sysconfig/memcached file (there is no /etc/memcached.conf); set the listen address on the OPTIONS line:
OPTIONS="-l $PROXY_LOCAL_NET_IP"
Restart the memcached service:
# systemctl restart memcached.service
Note: if you have modified the memcached configuration and the dashboard has been installed on the same node, you need to modify the value of CACHES in /etc/openstack-dashboard/local_settings.py accordingly, in order to match the new memcached listen address (see Step 7 in Controller and Network node installation).
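For reference, a minimal sketch of what that CACHES value in /etc/openstack-dashboard/local_settings.py could look like, assuming memcached now listens on $PROXY_LOCAL_NET_IP:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '$PROXY_LOCAL_NET_IP:11211',
    }
}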
Create /etc/swift/proxy-server.conf (replace $SWIFT_PASS with a suitable password):
[DEFAULT]
bind_port = 8080
user = swift
[pipeline:main]
pipeline = healthcheck cache authtoken keystoneauth proxy-server
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = Member,admin,swiftoperator
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# Delaying the auth decision is required to support token-less
# usage for anonymous referrers ('.r:*').
delay_auth_decision = true
# auth_* settings refer to the Keystone server
auth_protocol = http
auth_host = controller
auth_port = 35357
# the service tenant and swift username and password created in Keystone
admin_tenant_name = service
admin_user = swift
admin_password = $SWIFT_PASS
[filter:cache]
use = egg:swift#memcache
memcache_servers = $PROXY_LOCAL_NET_IP:11211
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:healthcheck]
use = egg:swift#healthcheck
[Note] If you run multiple memcache servers, put the multiple IP:port listings in the [filter:cache] section of the /etc/swift/proxy-server.conf file:
memcache_servers = 10.1.2.3:11211,10.1.2.4:11211
Only the proxy server uses memcache.
Create the account, container, and object rings. The builder command creates a builder file with a few parameters. The parameter with the value of 18 represents 2^18, the value that the partition is sized to. Set this "partition power" value based on the total amount of storage you expect your entire ring to use. The value of 3 represents the number of replicas of each object, with the last value being the number of hours to restrict moving a partition more than once.
# cd /etc/swift
# swift-ring-builder account.builder create 18 3 1
# swift-ring-builder container.builder create 18 3 1
# swift-ring-builder object.builder create 18 3 1
For every storage device on each node add entries to each ring:
# swift-ring-builder account.builder add z${ZONE}-$STORAGE_LOCAL_NET_IP:6002[R$STORAGE_REPLICATION_NET_IP:6005]/DEVICE 100
# swift-ring-builder container.builder add z${ZONE}-$STORAGE_LOCAL_NET_IP_1:6001[R$STORAGE_REPLICATION_NET_IP:6004]/DEVICE 100
# swift-ring-builder object.builder add z${ZONE}-$STORAGE_LOCAL_NET_IP_1:6000[R$STORAGE_REPLICATION_NET_IP:6003]/DEVICE 100
[Note] Omit the optional R$STORAGE_REPLICATION_NET_IP:port part if you do not want to use a dedicated network for replication.
In our case, we want to deploy the account, container, and object servers on the three hosts node02, node03, and node04. Therefore we will run the following commands (assuming that the device to be added to the cluster is /dev/sdc1 on all three nodes):
# swift-ring-builder account.builder add z1-10.10.10.12:6002R10.10.20.12:6005/sdc1 100
# swift-ring-builder container.builder add z1-10.10.10.12:6001R10.10.20.12:6004/sdc1 100
# swift-ring-builder object.builder add z1-10.10.10.12:6000R10.10.20.12:6003/sdc1 100
# swift-ring-builder account.builder add z1-10.10.10.13:6002R10.10.20.13:6005/sdc1 100
# swift-ring-builder container.builder add z1-10.10.10.13:6001R10.10.20.13:6004/sdc1 100
# swift-ring-builder object.builder add z1-10.10.10.13:6000R10.10.20.13:6003/sdc1 100
# swift-ring-builder account.builder add z1-10.10.10.14:6002R10.10.20.14:6005/sdc1 100
# swift-ring-builder container.builder add z1-10.10.10.14:6001R10.10.20.14:6004/sdc1 100
# swift-ring-builder object.builder add z1-10.10.10.14:6000R10.10.20.14:6003/sdc1 100
Verify the ring contents for each ring:
# swift-ring-builder account.builder
# swift-ring-builder container.builder
# swift-ring-builder object.builder
Rebalance the rings:
# swift-ring-builder account.builder rebalance
# swift-ring-builder container.builder rebalance
# swift-ring-builder object.builder rebalance
[Note] Rebalancing rings can take some time.
Once the swift proxy node has been configured and the rings created, on all storage nodes edit /etc/swift/account-server.conf, /etc/swift/container-server.conf, and /etc/swift/object-server.conf and change
bind_ip = 127.0.0.1
to
bind_ip = 0.0.0.0
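A quick way to apply this on each storage node is a sketch like the following, assuming the bind_ip line is present and uncommented as shown above:
# sed -i 's/^bind_ip = 127.0.0.1/bind_ip = 0.0.0.0/' /etc/swift/account-server.conf /etc/swift/container-server.conf /etc/swift/object-server.conf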
Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to each of the Proxy and Storage nodes in /etc/swift.
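For example, from the proxy node the copy could be scripted as follows, assuming the node names used in this guide and root SSH access to the storage nodes:
# for node in node02 node03 node04; do scp /etc/swift/*.ring.gz $node:/etc/swift/; done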
Make sure the swift user owns all configuration files:
# chown -R swift:swift /etc/swift
Restart the Proxy service:
# systemctl restart openstack-swift-proxy.service
Now that the ring files are on each storage node, you can start the services. On each storage node, run the following command to start all swift services at once:
# swift-init all start
Load credentials:
# source admin-openrc.sh
Create a container:
# swift post testcontainer
Upload a file (use upload --object-name to set the object name under which it is stored):
# swift upload --object-name testfile testcontainer test.txt
testfile
Display information for the account, container, or object:
# swift stat
Account: AUTH_5d2a076b4cbd463a95408a461772612e
Containers: 1
Objects: 1
Bytes: 166
Accept-Ranges: bytes
X-Timestamp: 1403474802.26481
X-Trans-Id: txde503d323f8d467ca860b-0053a99687
Content-Type: text/plain; charset=utf-8
List the container/objects:
# swift list
testcontainer
# swift list testcontainer
testfile
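To complete the round trip, the uploaded object can be fetched back (this writes testfile into the current directory):
# swift download testcontainer testfile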
The same operations can be performed through the dashboard, as shown in the figure.