On OpenStack Kilo, enabling the Swift Object Storage service through RDO's packstack leaves it partially misconfigured (or at least gives you very little control over it in the packstack answer file).
The Swift proxy is set up on the storage node, and I could not find a way to tell packstack which node to install it on. The problem is that the swift service endpoint points to the control node, while the swift proxy actually runs on the storage node.
Check Swift Endpoint details
Ensure the swift service is indeed created.
openstack service list
+----------------------------------+------------+---------------+
| ID                               | Name       | Type          |
+----------------------------------+------------+---------------+
. . .
| a43e0d3e0d3e0d3e0d3e0d3e0d3e0d3e | swift      | object-store  |
| a5a23a23a23a23a23a23a23a23a23a23 | swift_s3   | s3            |
. . .
+----------------------------------+------------+---------------+
If swift does not show up, you may not have installed it during the packstack run. Edit your packstack answer file to install only swift and re-run packstack, as sketched below.
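A minimal sketch of the answer-file change, assuming your answer file lives at /root/answers.txt (the path is just an example; CONFIG_SWIFT_INSTALL is the standard packstack option):

# In /root/answers.txt, enable Swift installation
CONFIG_SWIFT_INSTALL=y

# Re-run packstack against the edited answer file
packstack --answer-file=/root/answers.txt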
openstack endpoint show swift
+--------------+--------------------------------------------------+
| Field        | Value                                            |
+--------------+--------------------------------------------------+
| adminurl     | http://controller_ip:8080/                       |
| enabled      | True                                             |
| id           | c243243243243243243243243243243                  |
| internalurl  | http://controller_ip:8080/v1/AUTH_%(tenant_id)s  |
| publicurl    | http://controller_ip:8080/v1/AUTH_%(tenant_id)s  |
| region       | RegionOne                                        |
| service_id   | a43e0d3e0d3e0d3e0d3e0d3e0d3e0d3e                 |
| service_name | swift                                            |
| service_type | object-store                                     |
+--------------+--------------------------------------------------+
The above is wrong: nothing is listening on port 8080 on the controller. Check this with:
netstat -plnt | grep 8080
Luckily, everything seems to be set up on the storage node: port 8080 is up, the iptables rule for 8080 is in place, and the swift configuration files are mostly good to go.
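For example, you can verify this on the storage node with the following (ss shown as an alternative to netstat; output will vary with your setup):

sudo netstat -plnt | grep 8080    # or: sudo ss -plnt | grep 8080
sudo iptables -L -n | grep 8080   # the ACCEPT rule for the proxy port should be present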
Correcting the Endpoint
Delete the current swift endpoint by its ID. On the control node,
openstack endpoint delete c243243243243243243243243243243
Then create a new one pointing to the right server (remember, packstack set the proxy up on the storage server):
openstack endpoint create \
  --publicurl 'http://storage_ip:8080/v1/AUTH_%(tenant_id)s' \
  --internalurl 'http://storage_ip:8080/v1/AUTH_%(tenant_id)s' \
  --adminurl http://storage_ip:8080 \
  --region RegionOne \
  swift
Adjust the swift_s3 service endpoint as well if you plan to use the S3 API (replace swift_s3_id below with the endpoint ID shown by openstack endpoint show swift_s3).
openstack endpoint delete swift_s3_id
openstack endpoint create \
  --publicurl 'http://storage_ip:8080/v1/AUTH_%(tenant_id)s' \
  --internalurl 'http://storage_ip:8080/v1/AUTH_%(tenant_id)s' \
  --adminurl http://storage_ip:8080 \
  --region RegionOne \
  swift_s3
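Re-running the earlier command should now show the storage node in the URLs (the endpoint ID will have changed):

openstack endpoint show swift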
Adjusting the proxy-server.conf
The /etc/swift/proxy-server.conf file on the storage node must be edited as below. In particular, identity_uri and auth_uri must point to the Keystone (controller) IP. One other minor thing to check is that /var/cache/swift, which is used as the signing directory, has the correct SELinux context; you can try sudo restorecon -R /var/cache/ to fix it.
. . .
[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystoneauth container_sync bulk ratelimit staticweb tempurl slo formpost account_quotas container_quotas proxy-server
. . .
[filter:authtoken]
log_name = swift
signing_dir = /var/cache/swift
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
identity_uri = http://controller_ip:35357/
auth_uri = http://controller_ip:5000
admin_tenant_name = services
admin_user = swift
admin_password = secret_pass
delay_auth_decision = true
cache = swift.cache
include_service_catalog = False
. . .
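If you want to inspect the signing directory's SELinux context explicitly and restore just that path instead of all of /var/cache/, something like this works:

ls -Zd /var/cache/swift               # inspect the current SELinux context
sudo restorecon -Rv /var/cache/swift  # restore the default context for this path only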
Restart the proxy server
sudo service openstack-swift-proxy restart
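On systemd-based installs (RDO Kilo on CentOS/RHEL 7) the equivalent is systemctl; either way, confirm the proxy is listening again afterwards:

sudo systemctl restart openstack-swift-proxy
sudo ss -plnt | grep 8080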
On the controller, check that swift stat works.
swift stat
        Account: AUTH_idididid
     Containers: 1
        Objects: 0
          Bytes: 0
X-Put-Timestamp: 1400000000.00015
     Connection: keep-alive
    X-Timestamp: 1400000000.00015
     X-Trans-Id: tx12314141-12312312
   Content-Type: text/plain; charset=utf-8
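As a final sanity check, you can round-trip a small object through the proxy (the container and file names below are just examples):

echo "hello swift" > testfile.txt
swift upload test_container testfile.txt
swift list test_container
swift download test_container testfile.txt --output -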