
Enabling OpenStack Swift Object Storage Service

On OpenStack Kilo, when we use RDO/packstack to enable the Swift Object Storage service, it ends up partially misconfigured (or at least the packstack answer file offers little control over it).

The Swift proxy is set up on the storage node, and I could not find a packstack option to control which node it gets installed on. The issue is that the swift service endpoint points to the control node, while the Swift proxy actually runs on the storage node.

Check Swift Endpoint details

Ensure the swift service is indeed created:

openstack service list
+----------------------------------+------------+---------------+
| ID                               | Name       | Type          |
+----------------------------------+------------+---------------+
. . .
| a43e0d3e0d3e0d3e0d3e0d3e0d3e0d3e | swift      | object-store  |
| a5a23a23a23a23a23a23a23a23a23a23 | swift_s3   | s3            |
... 
+----------------------------------+------------+---------------+

If swift does not show up, you may not have installed it during the packstack run. Edit your packstack answer file to install only Swift.
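
For reference, these are the options that enable Swift in a Kilo-era packstack answer file (names as they appear in a typical generated file; verify against your own copy, and treat the password and size below as placeholder values):

CONFIG_SWIFT_INSTALL=y
CONFIG_SWIFT_KS_PW=secret_pass
CONFIG_SWIFT_STORAGE_SIZE=2G

Then re-run packstack with the edited file, e.g. packstack --answer-file=your-answer-file.txt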

openstack endpoint show swift
+--------------+------------------------------------------------+
| Field        | Value                                          |
+--------------+------------------------------------------------+
| adminurl     | http://controller_ip:8080/                     |
| enabled      | True                                           |
| id           | c243243243243243243243243243243                |
| internalurl  | http://controller_ip:8080/v1/AUTH_%(tenant_id)s|
| publicurl    | http://controller_ip:8080/v1/AUTH_%(tenant_id)s|
| region       | RegionOne                                      |
| service_id   | a43e0d3e0d3e0d3e0d3e0d3e0d3e0d3e               |
| service_name | swift                                          |
| service_type | object-store                                   |
+--------------+------------------------------------------------+

The above is wrong: nothing is listening on port 8080 on the controller. Check this with: netstat -plnt | grep 8080
Luckily, everything seems to be set up on the storage node: port 8080 is up, the iptables rule for 8080 is in place, and the Swift configuration files are almost good to go.
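
To confirm that, a few quick checks on the storage node are enough (plain shell, nothing Swift-specific):

# run these on the storage node
netstat -plnt | grep 8080        # the proxy should be listening here
sudo iptables -S | grep 8080     # packstack's firewall rule for port 8080
ls /etc/swift/                   # ring files and proxy-server.conf should be present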

Correcting the Endpoint

Delete the current swift endpoint using its ID. On the control node:

openstack endpoint delete c243243243243243243243243243243

Then create a new one pointing to the right server (remember, packstack set up the proxy on the storage node):

openstack endpoint create \
 --publicurl 'http://storage_ip:8080/v1/AUTH_%(tenant_id)s' \
 --internalurl 'http://storage_ip:8080/v1/AUTH_%(tenant_id)s' \
 --adminurl http://storage_ip:8080 \
 --region RegionOne \
 swift

Adjust the swift_s3 service endpoint as well if you plan to use the S3 API.

openstack endpoint delete swift_s3_id
openstack endpoint create \
 --publicurl 'http://storage_ip:8080/v1/AUTH_%(tenant_id)s' \
 --internalurl 'http://storage_ip:8080/v1/AUTH_%(tenant_id)s' \
 --adminurl http://storage_ip:8080 \
 --region RegionOne \
 swift_s3
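
Verify the new endpoints the same way as before; the URLs should now point to the storage node:

openstack endpoint show swift
openstack endpoint show swift_s3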

Adjusting the proxy-server.conf

The /etc/swift/proxy-server.conf file on the storage node must be edited as below. In particular, identity_uri and auth_uri must point to the Keystone (controller) IP. One other minor thing to check is whether /var/cache/swift, which is used as the signing directory, has the correct SELinux context; you may try sudo restorecon -R /var/cache/

. . . 
[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystoneauth container_sync bulk ratelimit staticweb tempurl slo formpost account_quotas container_quotas proxy-server
. . . 
[filter:authtoken]
log_name = swift
signing_dir = /var/cache/swift
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
 
identity_uri = http://controller_ip:35357/
auth_uri = http://controller_ip:5000

admin_tenant_name = services
admin_user = swift
admin_password = secret_pass
delay_auth_decision = true
cache = swift.cache
include_service_catalog = False
. . .
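
To check (and if needed relabel) the SELinux context on the signing directory mentioned above, run this on the storage node:

ls -dZ /var/cache/swift
# -v prints anything that actually gets relabeled
sudo restorecon -Rv /var/cache/swift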

Restart the proxy server

sudo service openstack-swift-proxy restart
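
On CentOS 7 the same restart can also be done directly through systemd (unit name as packaged by RDO):

sudo systemctl restart openstack-swift-proxy
sudo systemctl status openstack-swift-proxy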

On the controller, check that swift stat works.

 swift stat
 Account: AUTH_idididid
 Containers: 1
 Objects: 0
 Bytes: 0
 X-Put-Timestamp: 1400000000.00015
 Connection: keep-alive
 X-Timestamp: 1400000000.00015
 X-Trans-Id: tx12314141-12312312
 Content-Type: text/plain; charset=utf-8
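
As a further sanity check, push a small object through the native Swift API with the swift client (the container name and test file here are arbitrary examples):

swift upload test_container /etc/hosts
swift list test_container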

Enabling S3 API for Swift Object Storage

This post shows the details of enabling the S3 API for Swift Object Storage on OpenStack Kilo on CentOS 7.
The main documentation is here: http://docs.openstack.org/kilo/config-reference/content/configuring-openstack-object-storage-with-s3_api.html
As of July 2015, the page seems dated: some links are broken, and the steps and config options are unclear.

Install Swift3 Middleware

The Swift3 middleware seems to have shifted to https://github.com/stackforge/swift3

So the correct commands to fetch and install it are:

git clone https://github.com/stackforge/swift3.git
cd swift3
python setup.py install

At the end of the install, you should see:

Copying swift3.egg-info to /usr/lib/python2.7/site-packages/swift3-1.8.0.dev8-py2.7.egg-info
running install_scripts
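
You can also confirm that the module is importable by the Python interpreter the proxy uses:

python -c 'import swift3; print(swift3.__file__)'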

Adjust proxy-server.conf

For Keystone, add "swift3" and "s3token" to the pipeline.

For others, add swauth instead of s3token (untested).

[pipeline:main]
pipeline = catch_errors healthcheck cache swift3 s3token authtoken keystoneauth ...

[filter:swift3]
use = egg:swift3#swift3
 
[filter:s3token]
paste.filter_factory = keystonemiddleware.s3_token:filter_factory
auth_port = 35357 
auth_host = keystone_ip_address 
auth_protocol = http

The important part is the filter_factory: it is keystonemiddleware, not keystone.middleware. Then restart the Swift proxy service.

sudo service openstack-swift-proxy restart
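
Since healthcheck sits in the pipeline, a quick curl against the proxy confirms it came back up; the healthcheck middleware simply answers OK:

curl http://storage_ip:8080/healthcheck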

Testing the Swift S3 API using S3Curl

S3Curl is a tool provided by Amazon; it can be downloaded from https://aws.amazon.com/code/128. Also note the comment on that page: you need to yum install the perl-Digest-HMAC package.
You can use Horizon to create a test container and upload a small text file into it.
In our example, we have created a container called “test_container” and simple text file called “test_obj” inside the container.

Make sure you edit the s3curl.pl file to use OpenStack's Swift proxy endpoint:

my @endpoints = ( '172.0.0.8');

Retrieve the access keys from the Horizon dashboard

Go to Project -> Compute -> Access & Security. Click on the API Access tab.
Note the S3 Service endpoint. In our case: http://172.0.0.8:8080

In Horizon, at the top right, click on View Credentials:
“EC2 Access Key” is your ID for S3 tools such as S3Curl.
“EC2 Secret Key” is your key for S3 tools such as S3Curl.

For instance, let's say:
EC2 Access Key = HorizonEC2AccessKeyA0919319
EC2 Secret Key = HorizonEC2SecretKeyS1121551
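
If you prefer the CLI, the same EC2-style credentials can be listed (or created) with python-openstackclient, assuming the ec2 credentials commands are available in your client version:

openstack ec2 credentials list
# create a pair if none exist yet
openstack ec2 credentials create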

Get the list of containers

The S3Curl command is:
./s3curl.pl --id HorizonEC2AccessKeyA0919319 --key HorizonEC2SecretKeyS1121551 http://172.0.0.8:8080

Note: the --id value above is the actual access key, not a friendly-name entry from a personal .s3curl file. The tool will print a few warnings, but that's OK; we are just testing.
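
As an aside, if you would rather pass a friendly name to --id, a personal ~/.s3curl file looks roughly like this (the entry name "openstack" is arbitrary; the keys are the ones from Horizon):

%awsSecretAccessKeys = (
    openstack => {
        id  => 'HorizonEC2AccessKeyA0919319',
        key => 'HorizonEC2SecretKeyS1121551',
    },
);

With that in place, --id openstack replaces the raw access key on the command line.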

The expected output of the listing command above is:

 <?xml version='1.0' encoding='UTF-8'?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>admin:admin</ID><DisplayName>admin:admin</DisplayName></Owner><Buckets><Bucket><Name>test_container</Name><CreationDate>2009-02-03T16:45:09.000Z</CreationDate></Bucket></Buckets></ListAllMyBucketsResult>

The above indicates that the root of our storage contains a bucket named test_container. Let's list the objects in that container (bucket).

Get the list of objects in the container

To get the list of objects inside the container, execute:

./s3curl.pl --id HorizonEC2AccessKeyA0919319 --key HorizonEC2SecretKeyS1121551 http://172.0.0.8:8080/test_container

The output will have something like:

. . . <Contents><Key>test_obj</Key><LastModified>. . .

In the above, Key is the object (file) name. If you simply want to stream the contents of test_obj:

./s3curl.pl --id HorizonEC2AccessKeyA0919319 --key HorizonEC2SecretKeyS1121551 http://172.0.0.8:8080/test_container/test_obj

You should see test_obj’s contents printed out.
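
s3curl can also write objects. Assuming your copy supports the --put option (the exact argument syntax may differ slightly between s3curl versions), something like this uploads a local file into the same container:

./s3curl.pl --id HorizonEC2AccessKeyA0919319 --key HorizonEC2SecretKeyS1121551 --put ./local_note.txt -- http://172.0.0.8:8080/test_container/local_note.txt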

This confirms that our setup is working fine.