5. Object Storage Service - marcosaletta/Juno-CentOS7-Guide GitHub Wiki

Swift Installation and configuration guide

Set-up

  1. proxy-server installed on the controller node node01
  2. account, container, object servers installed on node02, node04


Installation

  1. Create a swift user that the Object Storage Service can use to authenticate with the Identity Service. Choose a password (replace $SWIFT_PASS with it) and specify an email address for the swift user. Use the service tenant and give the user the admin role:

    $ keystone user-create --name=swift --pass=$SWIFT_PASS [email protected]
    $ keystone user-role-add --user=swift --tenant=service --role=admin
    
  2. Create a service entry for the Object Storage Service:

    $ keystone service-create --name=swift --type=object-store --description="OpenStack Object Storage"
    +-------------+----------------------------------+
    |   Property  |              Value               |
    +-------------+----------------------------------+
    | description |     OpenStack Object Storage     |
    |      id     | eede9296683e4b5ebfa13f5166375ef6 |
    |     name    |              swift               |
    |     type    |           object-store           |
    +-------------+----------------------------------+
    

    Specify an API endpoint for the Object Storage Service by using the returned service ID. When you specify an endpoint, you provide URLs for the public API, internal API, and admin API. In this guide, the controller host name is used:

    $ keystone endpoint-create --service-id=$(keystone service-list | awk '/ object-store / {print $2}') --publicurl='http://controller:8080/v1/AUTH_%(tenant_id)s' --internalurl='http://controller:8080/v1/AUTH_%(tenant_id)s' --adminurl=http://controller:8080
    +-------------+---------------------------------------------------+
    |   Property  |                       Value                       |
    +-------------+---------------------------------------------------+
    |   adminurl  |             http://controller:8080               |
    |      id     |          9e3ce428f82b40d38922f242c095982e         |
    | internalurl | http://controller:8080/v1/AUTH_%(tenant_id)s      |
    |  publicurl  | http://controller:8080/v1/AUTH_%(tenant_id)s      |
    |    region   |                     regionOne                     |
    |  service_id |          eede9296683e4b5ebfa13f5166375ef6         |
    +-------------+---------------------------------------------------+
    
  3. Install the swift packages for the controller node:

    yum install openstack-swift-proxy python-swiftclient python-keystone-auth-token python-keystonemiddleware memcached
    

    Obtain the proxy service configuration file from the Object Storage source repository:

    curl -o /etc/swift/proxy-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/proxy-server.conf-sample
    
  4. Edit the /etc/swift/proxy-server.conf file and complete the following sections:

    [DEFAULT]
    ...
    bind_port = 8080
    user = swift
    swift_dir = /etc/swift
    
    [pipeline:main]
    pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server
    
    [app:proxy-server]
    ...
    allow_account_management = true
    account_autocreate = true
    
    [filter:keystoneauth]
    use = egg:swift#keystoneauth
    ...
    operator_roles = admin,_member_
    
    [filter:authtoken]
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory
    ...
    auth_uri = http://controller:5000/v2.0
    identity_uri = http://controller:35357
    admin_tenant_name = service
    admin_user = swift
    admin_password = $SWIFT_PASS
    delay_auth_decision = true
    

    Replace $SWIFT_PASS with the password you chose for the swift user in the Identity service.

    [filter:cache]
    ...
    memcache_servers = 127.0.0.1:11211
    

    NOTE Some of these sections may be commented out in the sample configuration file, so be careful and uncomment them where necessary.

Next, set up your storage nodes and proxy node. This example uses the Identity Service for the common authentication piece.

Install and configure storage nodes

NOTE: This section describes how to install and configure storage nodes that operate the account, container, and object services. For simplicity, this configuration references two storage nodes, each containing two empty local block storage devices. Each of the devices, /dev/sdb and /dev/sdc, must contain a suitable partition table with one partition occupying the entire device. Although the Object Storage service supports any file system with extended attributes (xattr), testing and benchmarking indicate the best performance and reliability on XFS.

  1. Install the supporting utility packages:

    yum install xfsprogs rsync
    
  2. On each storage node, format the /dev/sdb1 and /dev/sdc1 partitions as XFS:

    # mkfs.xfs /dev/sdb1
    # mkfs.xfs /dev/sdc1
    
  3. Create the mount point directory structure:

    # mkdir -p /srv/node/sdb1
    # mkdir -p /srv/node/sdc1
    
  4. Edit the /etc/fstab file and add the following to it:

    /dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
    /dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
    
  5. Mount the devices:

    # mount /srv/node/sdb1
    # mount /srv/node/sdc1
    

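Steps 2 through 5 repeat the same commands for each data device, so on nodes with more disks they can be scripted. A minimal dry-run sketch (it only prints the commands; remove the `echo` prefixes to execute them as root on a real storage node):

```shell
#!/bin/sh
# Dry run: print the format/mount commands for each data device.
# Remove the "echo" prefixes to execute for real (as root, on a storage node).
for dev in sdb1 sdc1; do
    echo "mkfs.xfs /dev/$dev"
    echo "mkdir -p /srv/node/$dev"
    echo "echo '/dev/$dev /srv/node/$dev xfs noatime,nodiratime,nobarrier,logbufs=8 0 2' >> /etc/fstab"
    echo "mount /srv/node/$dev"
done
```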
6. Edit the /etc/rsyncd.conf file and add the following to it:

    uid = swift
    gid = swift
    log file = /var/log/rsyncd.log
    pid file = /var/run/rsyncd.pid
    address = $MANAGEMENT_INTERFACE_IP_ADDRESS

    [account]
    max connections = 2
    path = /srv/node/
    read only = false
    lock file = /var/lock/account.lock

    [container]
    max connections = 2
    path = /srv/node/
    read only = false
    lock file = /var/lock/container.lock

    [object]
    max connections = 2
    path = /srv/node/
    read only = false
    lock file = /var/lock/object.lock

Replace $MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.
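Since the three module sections differ only in name, the file can also be generated rather than edited by hand. A sketch: it writes `rsyncd.conf` in the current directory (the example IP is a stand-in for the node's management address); review it, then copy it to /etc/rsyncd.conf as root.

```shell
#!/bin/sh
# Generate the rsync configuration described above into ./rsyncd.conf.
# MGMT_IP: this storage node's management address (example value).
MGMT_IP=${MGMT_IP:-10.0.0.51}
cat > rsyncd.conf <<EOF
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = $MGMT_IP

EOF
# One module per service, identical apart from the name and lock file.
for section in account container object; do
    cat >> rsyncd.conf <<EOF
[$section]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/$section.lock

EOF
done
# Then, as root:  cp rsyncd.conf /etc/rsyncd.conf
```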

7. Start the rsyncd service and configure it to start when the system boots:

systemctl enable rsyncd.service

systemctl start rsyncd.service


### Install storage node components

1. Install the storage components:

$ yum install openstack-swift-account openstack-swift-container openstack-swift-object


2. Obtain the account, container, and object service configuration files from the Object Storage source repository:

curl -o /etc/swift/account-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/account-server.conf-sample

curl -o /etc/swift/container-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/container-server.conf-sample

curl -o /etc/swift/object-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/object-server.conf-sample


3. Edit the `/etc/swift/account-server.conf` file and complete the following actions:

a. In the `[DEFAULT]` section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:

   ```
   [DEFAULT]
   ...
   bind_ip = $MANAGEMENT_INTERFACE_IP_ADDRESS
   bind_port = 6002
   user = swift
   swift_dir = /etc/swift
   devices = /srv/node
   ```
   Replace $MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

b. In the `[pipeline:main]` section, enable the appropriate modules: 

   ```
   [pipeline:main]
   pipeline = healthcheck recon account-server
   ```

c. In the `[filter:recon]` section, configure the recon (metrics) cache directory:

   ```
   [filter:recon]
   ...
   recon_cache_path = /var/cache/swift
   ```

4. Edit the `/etc/swift/container-server.conf` file and complete the following actions:

a. In the `[DEFAULT]` section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:

   ```
   [DEFAULT]
   ...
   bind_ip = $MANAGEMENT_INTERFACE_IP_ADDRESS
   bind_port = 6001
   user = swift
   swift_dir = /etc/swift
   devices = /srv/node
   ```
   Replace $MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. 

b. In the `[pipeline:main]` section, enable the appropriate modules:

   ```
   [pipeline:main]
   pipeline = healthcheck recon container-server   
   ```

c. In the `[filter:recon]` section, configure the recon (metrics) cache directory:

   ```
   [filter:recon]
   ...
   recon_cache_path = /var/cache/swift
   ```

5. Edit the `/etc/swift/object-server.conf` file and complete the following actions:

a. In the `[DEFAULT]` section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:

   ```
   [DEFAULT]
   ...
   bind_ip = $MANAGEMENT_INTERFACE_IP_ADDRESS
   bind_port = 6000
   user = swift
   swift_dir = /etc/swift
   devices = /srv/node
   ```
   Replace $MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

b. In the `[pipeline:main]` section, enable the appropriate modules:

   ```
   [pipeline:main]
   pipeline = healthcheck recon object-server
   ```

c. In the `[filter:recon]` section, configure the recon (metrics) cache directory:

   ```
   [filter:recon]
   ...
   recon_cache_path = /var/cache/swift
   ```

6. Ensure proper ownership of the mount point directory structure:

chown -R swift:swift /srv/node


7. Create the recon directory and ensure proper ownership of it:

mkdir -p /var/cache/swift

chown -R swift:swift /var/cache/swift


## Create initial rings

Before starting the Object Storage services, you must create the initial account, container, and object rings. The ring builder creates configuration files that each node uses to determine and deploy the storage architecture. For simplicity, this guide uses one region and zone with 2^10 (1024) maximum partitions, 3 replicas of each object, and 1 hour minimum time between moving a partition more than once. For Object Storage, a partition indicates a directory on a storage device rather than a conventional partition table.

**Note** Perform the following steps on the controller node.

### Account ring

The account server uses the account ring to maintain lists of containers.
Perform the following steps:

1. Change to the `/etc/swift` directory.

2. Create the base account.builder file:

swift-ring-builder account.builder create 10 3 1


3. Add each storage node to the ring.

The command to use is

swift-ring-builder account.builder add r1z1-STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS:6002/DEVICE_NAME DEVICE_WEIGHT


In our case:

    swift-ring-builder account.builder add r1z1-$STORAGE_NODE_1_MANAGEMENT_INTERFACE_IP_ADDRESS:6002/sdb1 100
    swift-ring-builder account.builder add r1z1-$STORAGE_NODE_1_MANAGEMENT_INTERFACE_IP_ADDRESS:6002/sdc1 100
    swift-ring-builder account.builder add r1z1-$STORAGE_NODE_2_MANAGEMENT_INTERFACE_IP_ADDRESS:6002/sdb1 100
    swift-ring-builder account.builder add r1z1-$STORAGE_NODE_2_MANAGEMENT_INTERFACE_IP_ADDRESS:6002/sdc1 100


  Replace $STORAGE_NODE_1_MANAGEMENT_INTERFACE_IP_ADDRESS and $STORAGE_NODE_2_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP addresses of the management network on the respective storage nodes.

4. Verify the ring contents:

swift-ring-builder account.builder

    account.builder, build version 4
    1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance
    The minimum number of hours before a partition can be reassigned is 1
    Devices:    id  region  zone   ip address  port  replication ip  replication port  name  weight  partitions  balance  meta
                 0       1     1    10.0.0.51  6002       10.0.0.51              6002  sdb1  100.00         768     0.00
                 1       1     1    10.0.0.51  6002       10.0.0.51              6002  sdc1  100.00         768     0.00
                 2       1     1    10.0.0.52  6002       10.0.0.52              6002  sdb1  100.00         768     0.00
                 3       1     1    10.0.0.52  6002       10.0.0.52              6002  sdc1  100.00         768     0.00


5. Rebalance the ring:

swift-ring-builder account.builder rebalance

**Note:** Rebalancing rings can take some time.


### Container ring


The container server uses the container ring to maintain lists of objects.
Perform the following steps:

1. Change to the `/etc/swift` directory.

2. Create the base container.builder file:

swift-ring-builder container.builder create 10 3 1


3. Add each storage node to the ring.
   

    swift-ring-builder container.builder add r1z1-$STORAGE_NODE_1_MANAGEMENT_INTERFACE_IP_ADDRESS:6001/sdb1 100
    swift-ring-builder container.builder add r1z1-$STORAGE_NODE_1_MANAGEMENT_INTERFACE_IP_ADDRESS:6001/sdc1 100
    swift-ring-builder container.builder add r1z1-$STORAGE_NODE_2_MANAGEMENT_INTERFACE_IP_ADDRESS:6001/sdb1 100
    swift-ring-builder container.builder add r1z1-$STORAGE_NODE_2_MANAGEMENT_INTERFACE_IP_ADDRESS:6001/sdc1 100


  Replace $STORAGE_NODE_1_MANAGEMENT_INTERFACE_IP_ADDRESS and $STORAGE_NODE_2_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP addresses of the management network on the respective storage nodes.

4. Verify the ring contents:

swift-ring-builder container.builder

    container.builder, build version 4
    1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance
    The minimum number of hours before a partition can be reassigned is 1
    Devices:    id  region  zone   ip address  port  replication ip  replication port  name  weight  partitions  balance  meta
                 0       1     1    10.0.0.51  6001       10.0.0.51              6001  sdb1  100.00         768     0.00
                 1       1     1    10.0.0.51  6001       10.0.0.51              6001  sdc1  100.00         768     0.00
                 2       1     1    10.0.0.52  6001       10.0.0.52              6001  sdb1  100.00         768     0.00
                 3       1     1    10.0.0.52  6001       10.0.0.52              6001  sdc1  100.00         768     0.00


5. Rebalance the ring:

swift-ring-builder container.builder rebalance

**Note:** Rebalancing rings can take some time.


### Object ring

The object server uses the object ring to maintain lists of object locations on local devices.
Perform the following steps:

1. Change to the `/etc/swift` directory.

2. Create the base object.builder file:

swift-ring-builder object.builder create 10 3 1


3. Add each storage node to the ring.
   

    swift-ring-builder object.builder add r1z1-$STORAGE_NODE_1_MANAGEMENT_INTERFACE_IP_ADDRESS:6000/sdb1 100
    swift-ring-builder object.builder add r1z1-$STORAGE_NODE_1_MANAGEMENT_INTERFACE_IP_ADDRESS:6000/sdc1 100
    swift-ring-builder object.builder add r1z1-$STORAGE_NODE_2_MANAGEMENT_INTERFACE_IP_ADDRESS:6000/sdb1 100
    swift-ring-builder object.builder add r1z1-$STORAGE_NODE_2_MANAGEMENT_INTERFACE_IP_ADDRESS:6000/sdc1 100


  Replace $STORAGE_NODE_1_MANAGEMENT_INTERFACE_IP_ADDRESS and $STORAGE_NODE_2_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP addresses of the management network on the respective storage nodes.

4. Verify the ring contents:

swift-ring-builder object.builder

    object.builder, build version 4
    1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance
    The minimum number of hours before a partition can be reassigned is 1
    Devices:    id  region  zone   ip address  port  replication ip  replication port  name  weight  partitions  balance  meta
                 0       1     1    10.0.0.51  6000       10.0.0.51              6000  sdb1  100.00         768     0.00
                 1       1     1    10.0.0.51  6000       10.0.0.51              6000  sdc1  100.00         768     0.00
                 2       1     1    10.0.0.52  6000       10.0.0.52              6000  sdb1  100.00         768     0.00
                 3       1     1    10.0.0.52  6000       10.0.0.52              6000  sdc1  100.00         768     0.00


5. Rebalance the ring:

swift-ring-builder object.builder rebalance

**Note:** Rebalancing rings can take some time.
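The three rings differ only in their builder file name and service port, so the twelve `add` commands above can be generated in one loop. A dry-run sketch (it only prints the commands; run them, or pipe the output to `sh`, from /etc/swift on the controller — the 10.0.0.51/10.0.0.52 addresses are example management IPs):

```shell
#!/bin/sh
# Print the "add" commands for all three rings, two nodes, two devices each.
for pair in account:6002 container:6001 object:6000; do
    ring=${pair%%:*}
    port=${pair##*:}
    for ip in 10.0.0.51 10.0.0.52; do    # storage node management IPs (example)
        for dev in sdb1 sdc1; do
            echo "swift-ring-builder $ring.builder add r1z1-$ip:$port/$dev 100"
        done
    done
done
```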


**NOTE** Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on each storage node and any additional nodes running the proxy service.

## Finalize installation

### Configure hashes and default storage policy

Obtain the `/etc/swift/swift.conf` file from the Object Storage source repository:

curl -o /etc/swift/swift.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/swift.conf-sample

   
Edit the `/etc/swift/swift.conf` file and complete the following actions:

1. In the `[swift-hash]` section, configure the hash path prefix and suffix for your environment:

    [swift-hash]
    ...
    swift_hash_path_prefix = HASH_PATH_PREFIX
    swift_hash_path_suffix = HASH_PATH_SUFFIX


Replace HASH_PATH_PREFIX and HASH_PATH_SUFFIX with unique values. Keep these values secret, and do not change or lose them.
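One way to produce suitably random values for these two settings (`openssl` is present on a default CentOS 7 install):

```shell
# Generate two independent random hex strings for the hash path settings.
swift_hash_path_prefix=$(openssl rand -hex 10)
swift_hash_path_suffix=$(openssl rand -hex 10)
echo "swift_hash_path_prefix = $swift_hash_path_prefix"
echo "swift_hash_path_suffix = $swift_hash_path_suffix"
```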

2. In the `[storage-policy:0]` section, configure the default storage policy:

    [storage-policy:0]
    ...
    name = Policy-0
    default = yes


3. Copy the swift.conf file to the /etc/swift directory on each storage node and any additional nodes running the proxy service. On all nodes, ensure proper ownership of the configuration directory:

chown -R swift:swift /etc/swift


4. On the controller node and any other nodes running the proxy service, start the Object Storage proxy service including its dependencies and configure them to start when the system boots:

systemctl enable openstack-swift-proxy.service memcached.service

systemctl start openstack-swift-proxy.service memcached.service


5. On the storage nodes, start the Object Storage services and configure them to start when the system boots:

systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service

systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
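The twelve service names above follow one pattern, so the enable/start commands can be generated with a loop. A dry-run sketch (it only prints the commands; pipe the output to `sh` as root on a storage node to execute them):

```shell
#!/bin/sh
# Print the systemctl enable/start commands for every storage node service.
services="account account-auditor account-reaper account-replicator \
container container-auditor container-replicator container-updater \
object object-auditor object-replicator object-updater"
for svc in $services; do
    echo "systemctl enable openstack-swift-$svc.service"
    echo "systemctl start openstack-swift-$svc.service"
done
```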



### Verify operation

Execute the next steps on the controller node.

1. Load credentials:

source admin-openrc.sh


2. Show the service status:

    $ swift stat
            Account: AUTH_11b9758b7049476d9b48f7a91ea11493
         Containers: 0
            Objects: 0
              Bytes: 0
       Content-Type: text/plain; charset=utf-8
        X-Timestamp: 1381434243.83760
         X-Trans-Id: txdcdd594565214fb4a2d33-0052570383
    X-Put-Timestamp: 1381434243.83760


3. Upload a file: 

swift upload demo-container1 $FILE

Replace $FILE with the name of a local file to upload to the demo-container1 container.

4. List containers:

swift list

demo-container1


5. List files:

swift list demo-container1

$FILE


6. Download a test file:

swift download demo-container1 $FILE
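
To confirm the round trip preserved the file, compare checksums before upload and after download. A sketch with the Swift calls commented out so it can be adapted (the file name is a stand-in for your own $FILE):

```shell
#!/bin/sh
# Round-trip integrity check: checksum before upload vs. after download.
FILE=demo-upload.txt
echo "sample payload" > "$FILE"    # stand-in for a real local file
before=$(md5sum "$FILE" | awk '{print $1}')
# swift upload demo-container1 "$FILE"
# rm "$FILE"
# swift download demo-container1 "$FILE"
after=$(md5sum "$FILE" | awk '{print $1}')
[ "$before" = "$after" ] && echo "checksums match"
```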

The same operations can be performed through the dashboard, as shown in the figure below.

![swift-dashboard](https://github.com/infn-bari-school/Swift/raw/master/screenshots/container-in-dashboard.png?raw=true)