Category Archives: Openstack

Long Term OpenStack Usage Summary

It's been about 9 months since we first kicked off a limited production install of OpenStack. There were a few variables we were very interested in: how much specialized support is needed to maintain a small OpenStack installation, and how stable it is.

The workload and management structure were as follows:

  • 3 different projects – production, development and test.
  • Each project had its own private network plus a network for access to the public network.
  • 10 long term instances – alive for nearly the entire duration of the 9 month run.
  • About 10 short term instances – alive for about a month each time.
  • Workloads in production were real – any disruption had serious consequences.


There is no doubt OpenStack is a huge beast. However, once it was running it rarely required constant maintenance. Over the last year, documentation and consistency of commands have improved, making life easier. So far everything has been stable – the most common issue is overflowing logs. In fact, this caused a critical control node failure only recently, which is described in the rest of this post.

Ceilometer Space Issue

The default setting of Ceilometer saves all events and alarms forever. Given that Ceilometer uses MongoDB, the hard disk space consumed increases monotonically. Over 9 months it occupied over 33 GB, or about 99% of the hard disk. One simple solution would be for RDO packstack and other distributions to set sane defaults in /etc/ceilometer/ceilometer.conf. For starters, in most installations where monitoring is not critical, we can change the unlimited retention to some limit:

  1. Change time to live from forever to something sane like a week or a month.
  2. Disable recording of alarm history – we can’t see a reason why it is needed in normal cases.
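Since the TTL is expressed in seconds, a quick sanity check of the two retention periods suggested above:

```shell
# The TTL option takes seconds; compute the week / ~month cutoffs mentioned above.
WEEK_TTL=$((7 * 24 * 3600))
MONTH_TTL=$((30 * 24 * 3600))
echo "week=${WEEK_TTL} month=${MONTH_TTL}"   # prints week=604800 month=2592000
```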

# Number of seconds that samples are kept in the database for
# (<= 0 means forever). (integer value)
time_to_live = 604800

# Record alarm change events. (boolean value)
record_history = false

Restart ceilometer.

Reduce elements in the database

Now we need to reduce the items (documents) in the database. For this we need the mongo tools; simply download them from the MongoDB website.

./mongo --host
>show dbs;
admin (empty)
ceilometer 0.125GB
local 0.031GB
test (empty)
>use ceilometer;
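From there, old samples can be removed by hand. A sketch, assuming the Kilo-era collection name meter (verify with show collections first); the remove command is printed for review rather than executed:

```shell
# Build a remove() command for samples older than 30 days; after reviewing it,
# pipe the output into: ./mongo --host <mongo_host> ceilometer
CUTOFF=$(date -u -d '30 days ago' +%Y-%m-%dT%H:%M:%SZ)
echo "db.meter.remove({ timestamp: { \$lt: ISODate(\"${CUTOFF}\") } });"
```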

But disk occupancy is still at 99%.

Ceilometer Space Issue – Continues

Enter MongoDB. It does not release disk space just because we deleted documents. And when MongoDB is at 99% disk occupancy, you can’t use db.repairDatabase() – which is supposed to return unused empty space – because it itself needs free disk space to run.

So, we have to take the hacky solution: dump the database, drop it, and restore it.
./mongodump --host=
./mongo --host
>use ceilometer;
>db.dropDatabase();
./mongorestore --host= --db=ceilometer dump/ceilometer/

This restores the database and reduces the consumed space. Given that we have now limited how long events are stored, we shouldn’t have to use this hack often.

Horizon Performance Optimizations

Some notes on OpenStack Horizon performance optimizations on a CentOS 7.1 install:
4 vCPU (2.3 GHz Intel Xeon E5 v3), 2 GB – 4 GB RAM, SSD backed 40 GB RAW image.

CentOS 7.1 ships with Apache 2.4.6 so there are some optimizations we’ll try.

Multi-Processing Module selection: the default is prefork (at least on an OpenStack-installed system).
The event MPM apparently gives better response times, so try:
LoadModule mpm_event_module modules/

Ensure exactly one MPM is enabled in the MPM config file, then restart httpd: systemctl restart httpd

The side effect is that if you have PHP code running under mod_php, it may stop working and need a php-fpm setup.
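On CentOS 7, the MPM selection lives in /etc/httpd/conf.modules.d/00-mpm.conf; a sketch of the edit described above (module paths are the stock ones):

```apache
# /etc/httpd/conf.modules.d/00-mpm.conf -- enable exactly one MPM
#LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
#LoadModule mpm_worker_module modules/mod_mpm_worker.so
LoadModule mpm_event_module modules/mod_mpm_event.so
```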

Multi master Database Cluster on OpenStack with Load Balancing

Multi Master Database Replication

Multi master database replication in a cluster allows applications to write to any database node, with the data becoming available at the other nodes within short order. The main advantages are high availability, high read performance and scalability.

Overall Design

We are aiming to have an application layer access a database cluster via a load balancer, as shown in the picture below:

Load Balancer for a Database Cluster

Fig. 1: Load Balancer for a Database Cluster


For providing database services on OpenStack we considered Trove. However, it’s broken on Kilo: there is no easy way to get a ‘Trove image’ and launch it. There is a nice automated script at the RDO page that actually creates an image; however, after the image is registered, it errors out upon DB instance launch. Given that the OpenStack Trove documentation was not helpful, there was no motivation for us to debug further, as it would be much riskier for us to maintain hacked code. Wish it worked. Moving on to other options… Enter the Galera Cluster and MySQL Cluster products.

Using other options

In the world of MySQL based multi master replication clusters, there are a few popular ones:

  • MariaDB Galera Cluster
  • Percona XtraDB Cluster
  • MySQL Cluster

Out of the three, we chose Percona XtraDB Cluster (PXC), mainly because of slightly better support for tables without primary keys [1] [2]. Note that Galera is used in both MariaDB Galera Cluster and PXC, and some users have still reported issues with tables lacking a primary key on MariaDB; generally, you should have a PK for every table. We could have used MariaDB Galera Cluster, but either its documentation is not maintained or it really does have a strict rule that primary keys are required, which is a significant restriction. MySQL Cluster, on the other hand, has a huge learning curve for setup and administration. It might be something to consider when scaling up to millions of queries per second, but it bears no resemblance to the MariaDB or Percona cluster counterparts, so it’s a completely different mindset.

Instance Preparation

We use CentOS 7.1 instances that create a new volume for the OS disk. The database itself is on a separate volume: vdb.

Swap File Preparation

Normally, the instances don’t have a swap file enabled (check with swapon --summary), so prepare one like so:

fallocate -l 1G /swapfile
dd if=/dev/zero of=/swapfile bs=1M count=1024   # rewrites the fallocated file with zeros
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon --summary

MySQL data directory preparation

Next, prepare the secondary hard disk that will hold the data directory of MySQL:

fdisk /dev/vdb
# n – new partition, extended
# n – new partition, logical
# w – write the partition table

Now make a file system. Ensure you have a valid partition created (vdb5 in this case).

mkfs.ext4 /dev/vdb5

Automount swap and data directory

Create the mysql directory (we have not yet installed MySQL) and set up /etc/fstab:

mkdir /var/lib/mysql
echo "/swapfile none swap defaults 0 0" >> /etc/fstab
echo "/dev/vdb5 /var/lib/mysql ext4 defaults 0 2" >> /etc/fstab

Mount everything in the fstab file and make a subdirectory for the data (I like to use non-default directories so I know what’s going on):

mount -av
mkdir /var/lib/mysql/mysql_data
touch /var/lib/mysql/mysql_data/test_file

Finally, restore the security context on the mysql directory:

restorecon -R /var/lib/mysql

Database Node List

In our case we have 3 database servers all with CentOS 7.1.

DBNode1 -
DBNode2 -
DBNode3 -

Security Groups, Iptables & Selinux

We need to open these ports for each of the database nodes:

 TCP 873 (rsync)
 TCP 3306 (Mysql)
 TCP 4444 (State Transfer)
 TCP 4567 (Group Communication - GComm_port)
 TCP 4568 (Incremental State Transfer port = GComm_port+1)

SELinux was set to permissive (setenforce 0) temporarily while the installation was done. Ensure the above ports are allowed by a security group applied to the database instances.
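If you also run host-level iptables on the nodes, the port list above can be turned into rules mechanically; this sketch only prints the rules so they can be reviewed before applying (or mirrored in the security group):

```shell
# Emit one ACCEPT rule per PXC port; run the output as root if host iptables is in use.
for port in 873 3306 4444 4567 4568; do
  echo "iptables -I INPUT -p tcp --dport ${port} -j ACCEPT"
done
```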
On every node, we need to install the PXC database software. Install it, but don’t start the mysql service yet.

Installing the Database Percona XtraDB Cluster Software

Before you install, there is a prerequisite: socat. This package should be installed from the base repository. If you have EPEL enabled, remove it (assuming this node is going to be used only for the database).

sudo yum remove epel-release
sudo yum install -y socat;


Install the Percona repo and software itself.

sudo yum install -y;

sudo yum install Percona-XtraDB-Cluster-56

First Node (Primary) in Cluster setup

In order to start a new cluster, the very first node must be started in a specific way – aka bootstrapping. This causes the node to assume it is the primary of the DB cluster that we are about to bring to life.

First, edit /etc/my.cnf to set up your requirements.

 # Edit to your requirements.
log_bin                        = mysql-bin
binlog_format                  = ROW
innodb_buffer_pool_size        = 200M
innodb_flush_log_at_trx_commit = 0
innodb_flush_method            = O_DIRECT
innodb_log_files_in_group      = 2
innodb_log_file_size           = 20M
innodb_file_per_table          = 1
wsrep_cluster_address          = gcomm://,,
wsrep_provider                 = /usr/lib64/galera3/
wsrep_slave_threads            = 2
wsrep_cluster_name             = SilverSkySoftDBClusterA
wsrep_node_name                = DBNode1
wsrep_node_address             =
wsrep_sst_method               = rsync
innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode       = 2
pid-file = /run/mysqld/

Start the bootstrap service
systemctl start mysql@bootstrap.service

This special service starts the MySQL server with an empty wsrep_cluster_address = gcomm:// (no IPs), making it the first node and creating a new cluster. Be sure to run this service only at cluster creation time and not when a node joins.

While this first node is running, log in to each of the other nodes (DBNode2 & DBNode3) and use the my.cnf above as a template. For each node, update wsrep_node_name and wsrep_node_address. Note that wsrep_cluster_address should contain the IP addresses of all the nodes.
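Deriving each joiner's config from the template can be scripted; a minimal sketch (the file name and values are ours for illustration, and the same substitution applies to wsrep_node_address):

```shell
# Start from the primary's my.cnf as a template, then rewrite the node-specific keys.
cat > my.cnf.dbnode2 <<'EOF'
wsrep_cluster_name = SilverSkySoftDBClusterA
wsrep_node_name    = DBNode1
EOF
sed -i 's/^wsrep_node_name.*/wsrep_node_name    = DBNode2/' my.cnf.dbnode2
grep '^wsrep_node_name' my.cnf.dbnode2   # prints: wsrep_node_name    = DBNode2
```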

Start the mysql service on each of the nodes 2 & 3 while node 1 is still running:
systemctl start mysql

Verify Cluster is up and nodes are joined

The following should show Value: 3 (indicating 3 nodes have joined):

mysql> select @@hostname\G show global status like 'wsrep_cluster_size' \G
*************************** 1. row ***************************
@@hostname: dbserver1.novalocal
1 row in set (0.00 sec)

*************************** 1. row ***************************
Variable_name: wsrep_cluster_size
Value: 3
1 row in set (0.00 sec)

Start Node 1 back in normal mode

On the Node 1, restart in normal mode:
systemctl stop mysql@bootstrap.service; systemctl start mysql

Verify database and replication actually happens

On one of the nodes, say DBNode3, create a sample database and table:

mysql -u root -p
CREATE DATABASE my_test_db;
USE my_test_db;
CREATE TABLE my_test_table (test_year INT, test_name VARCHAR(255));
INSERT INTO my_test_table (test_year, test_name) VALUES (1998, 'Hello year 1998');

On another node, say DBNode2, check that the table and rows are visible:

 mysql -u root -p 
 SELECT @@hostname\G SELECT * from my_test_db.my_test_table;
 *************************** 1. row ***************************
 @@hostname: dbserver2.novalocal
 1 row in set (0.00 sec)
 | test_year | test_name       |
 | 1998      | Hello year 1998 |
 1 row in set (0.00 sec)

This confirms our cluster is up and running.
Don’t forget to enable the mysql service to start automatically – systemctl enable mysql
Also set the root password for MySQL.

Managing Users in Clustered Database

In the cluster setup, the mysql.* system tables are not replicated, so manually creating a user by inserting into the mysql.* tables will remain local to that node. Instead, use CREATE USER statements, which are replicated across the cluster. A sample:

CREATE USER 'admin'@'%' IDENTIFIED BY 'plainpassword';
GRANT ALL ON *.* TO 'admin'@'%';

You can log into any other node to verify the new user is created.

In addition, you can use MySQL Workbench to manage the databases in the cluster.

OpenStack Load Balancer

OpenStack Load Balancer as a Service (LBaaS) is easily enabled in RDO packstack and other installs. To create a load balancer for the database cluster we created above, click on the Load Balancer menu under Network and click Add Pool, as shown in the figure below:

Image of how to add add a New Load Balancing Pool in OpenStack
Adding a New Load Balancing Pool in OpenStack

Then fill in the pool details as shown in the picture below:

image of Setting the details of the Load Balancing Pool
Setting the details of the Load Balancing Pool

Note that we are using the TCP protocol in this case, as we need to allow MySQL connections. For simplicity of testing, use the ROUND_ROBIN balancing method.

Next, add the VIP for the load balancer from the Actions column. In the VIP setup, choose protocol TCP and port 3306.

Next, add the members of the pool by selecting the ‘Members’ tab and then selecting the database nodes. For now you can keep the weight as 1.

Get the VIP address by clicking the VIP link on the load balancer pool. Once you have the IP, you can optionally associate a floating IP. This can be done by going to Compute -> Access & Security: allocate an IP to your project, then click Associate. In the drop-down, you should see the VIP’s name and the IP you provided.
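The same pool, members, and VIP can also be created from the CLI; a sketch using the Kilo-era LBaaS v1 commands (the subnet ID and member addresses are placeholders, and the commands are printed for review rather than run here):

```shell
# LBaaS v1 CLI equivalent of the Horizon steps above; substitute real IDs/IPs.
SUBNET_ID="<private-subnet-id>"
echo "neutron lb-pool-create --name db-pool --protocol TCP --lb-method ROUND_ROBIN --subnet-id ${SUBNET_ID}"
for ip in "<DBNode1-ip>" "<DBNode2-ip>" "<DBNode3-ip>"; do
  echo "neutron lb-member-create --address ${ip} --protocol-port 3306 db-pool"
done
echo "neutron lb-vip-create --name db-vip --protocol TCP --protocol-port 3306 --subnet-id ${SUBNET_ID} db-pool"
```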

This completes the Load balancer setup.

Testing the Load Balancer

A simple test is to query the load balancer’s VIP with the MySQL client. In our case the result is seen below.

[centos@client1 etc]$ mysql -u root -p -h -e "SHOW VARIABLES LIKE 'wsrep_node_name';"
Enter password: 
| Variable_name | Value     |
| wsrep_node_name | DBNode1 |
[centos@client1 etc]$ mysql -u root -p -h -e "SHOW VARIABLES LIKE 'wsrep_node_name';"
Enter password: 
| Variable_name | Value     |
| wsrep_node_name | DBNode2 |

You can see that successive queries are routed to different nodes.

Simplistic PHP Test App

On another VM, install Apache and PHP. Start Apache and add a PHP file as below. The database is the one we created above.

<?php
$user = "root";
$pass = "your_password";
// DSN host is the load balancer VIP
$db_handle = new PDO("mysql:host=;dbname=my_test_db", $user, $pass);
print "<pre>";
foreach ($db_handle->query("SELECT test_name FROM my_test_table") as $row) {
    print "Name from db " . $row['test_name'] . "<br />";
}
print "\n";
foreach ($db_handle->query("SHOW VARIABLES LIKE 'wsrep_%'") as $row) {
    print $row['Variable_name'] . " = " . $row['Value'];
    print "\n";
    print_r($row); // debug: dump the full row
}
print "</pre>";
$db_handle = null;

From the browser navigate to the URL where this file is.

This shows the data from the table and various wsrep variables. Each time you refresh the page you should see wsrep_node_address and wsrep_node_name change, so you know the load balancer is working.


In general, the cluster needs to be monitored for crashed databases and the like. The OpenStack load balancer can monitor the members in the pool and set failed members to an inactive state.

Crashed Node Recovery

Recovery of crashed nodes with little impact on the overall cluster is one of the main reasons to go with a cluster. A very nice article about the various ways to recover a crashed node is on Percona’s site.


We described how to create a database cluster and configure a load balancer on top. It’s not a very complex process. The entire environment was on OpenStack Kilo.

Enable SPICE HTML5 Console Access in OpenStack Kilo

Spice Console Access to Instances

Documentation is a bit sparse on which configuration parameters enable SPICE console access. This article provides our notes for enabling SPICE on CentOS 7.1 with OpenStack Kilo.

Essentially, the control node acts as a proxy to the compute node, which runs the SPICE server. The control node is a client of the compute node.

Required Packages

On both Control & Compute:

yum install spice-html5

On Control:

yum install openstack-nova-spicehtml5proxy

Config Files

The file to modify on both the compute and control nodes is /etc/nova/nova.conf.

In both config files, ensure vnc_enabled=False is explicitly set. If noVNC is enabled, ensure that is disabled too.

Control IP =
Compute IP =   [Internal IP - ports may need to be opened if not already there]

On Control Node

. . .

[spice]
# Enable spice related features (boolean value)
enabled = true
# Enable spice guest agent support (boolean value)
agent_enabled = true
# Keymap for spice (string value)
keymap = en-us

Iptables rule on control node

Since we are allowing access to the console via port 6082 on the control node, open this port in iptables:

iptables -I INPUT -p tcp -m multiport --dports 6082 -m comment --comment "Allow SPICE connections for console access " -j ACCEPT

You can make this permanent by adding the above rule to /etc/sysconfig/iptables (before the REJECT rules), saving, and restarting iptables.
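In /etc/sysconfig/iptables, the saved form of that rule (placed before the REJECT lines) would look roughly like:

```
-A INPUT -p tcp -m multiport --dports 6082 -m comment --comment "Allow SPICE connections for console access" -j ACCEPT
```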

Config Changes on Compute Node

. . .

[spice]
# Enable spice related features (boolean value)
enabled = true
# Enable spice guest agent support (boolean value)
agent_enabled = true
# Keymap for spice (string value)
keymap = en-us

Restart services

On Compute

# service openstack-nova-compute restart

On Control

# service httpd restart
# service openstack-nova-spicehtml5proxy start
# service openstack-nova-spicehtml5proxy status 
# systemctl enable openstack-nova-spicehtml5proxy


Here the control node is an HTML5 proxy that connects to the SPICE server+port that runs when a VM is instantiated.
Here are some notes on some of the unclear options:


This line (html5proxy_host) indicates the address the HTML5 proxy binds to – the control node in this case.


This (html5proxy_base_url) indicates the base URL used when you click ‘Console’ on the Horizon dashboard. Note that this URL must be accessible from the same network as the Horizon dashboard. In our case, this URL points to the control node.


server_listen specifies where the VM instances should listen for SPICE connections. This is the local IP address (compute node).


server_proxyclient_address is the address which clients such as the HTML5 proxy will use to connect to the VMs running on the compute node. This is an internal address, most likely not accessible to the outside world but accessible to the control node. This address is the internal IP address of the compute node.


Be sure about which config change goes in which node. Iptables is another thing to look out for; if you plan to use consoles regularly, make the iptables rules permanent.

If you see “Console is currently unavailable. Please try again later.” in Horizon, then under the hood you’ll see

“ERROR: Invalid console type spice-html5 (HTTP 400)”

when you do

nova get-spice-console spice-html5

This generally means the VM did not start with SPICE enabled; the usual cause is that one of the services was not restarted after the config change.
Double check the config file – make sure ‘enabled=true’ is set.


Adding another External network to the Multi Node OpenStack Setup

The external networks for an OpenStack deployment can be a combination of your corporate intranet IT network and an internet facing external network. For this common use case, we look at a way to add another external network to an OpenStack deployment.

The steps for creating multiple external networks are:

1. Create a routed virtual network with — this is needed if you are using multiple VM nodes on a single physical node.

2. Add a NIC connected to the new network to the neutron node.

3. Say the device shows up as eth3. Create ifcfg-eth3 and add it to a new bridge, say br-ex2, on the neutron node.
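A minimal sketch of the two network-scripts files, assuming the OVS variants of the initscripts are in use (device and bridge names as above):

```
# /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
ONBOOT=yes
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex2

# /etc/sysconfig/network-scripts/ifcfg-br-ex2
DEVICE=br-ex2
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE=ovs
```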

On the neutron nodes, update /etc/neutron/l3_agent.ini to have empty values:

 gateway_external_network_id =
 external_network_bridge =

On the neutron node, update /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini with the new bridge br-ex2:

 bridge_mappings =physnet1:br-ex,physnet2:br-ex2

physnet1 & physnet2 are labels that will be used to reference the external networks when we create them. This mapping from label to bridge specifies how packets are moved.

Restart the services:

service neutron-l3-agent restart
service neutron-openvswitch-agent restart

You can verify this setup has worked with ovs-vsctl show.
You should see br-ex2 with the new NIC eth3 added as a port. In addition, br-int should have int-br-ex2 as a port.

Back on the control node, in /etc/neutron/plugin.ini ensure the following:

. . .
type_drivers = vxlan,flat
flat_networks = physnet1,physnet2
network_vlan_ranges =physnet1:1000:2999,physnet2:3000:4999
. . .

(Note: plugin.ini usually is a link to /etc/neutron/plugins/ml2/ml2_conf.ini)

openstack-service restart neutron

Next, create the new network specifying the provider

neutron net-create public_intranet --router:external --provider:physical_network physnet2 --provider:network_type=flat 

Then add the subnet with a new allocation pool:

 neutron subnet-create --name public_intranet_subnet --enable_dhcp=False --allocation-pool=start=,end= --dns-nameserver= --gateway= public_intranet

Enabling Openstack Swift Object Storage Service

On OpenStack Kilo, when we use RDO to enable the Swift Object Storage service, it is partially misconfigured (or there is a lack of control in the packstack answer file).

The Swift proxy is set up on the storage node. I could not find whether I can control via packstack which node the Swift proxy is installed on. The issue is that the swift proxy service endpoint (which points to the control node) mismatches where the swift proxy really is (on the storage node).

Check Swift Endpoint details

Ensure the swift service is indeed created:

openstack service list
| ID                               | Name       | Type          |
. . .
| a43e0d3e0d3e0d3e0d3e0d3e0d3e0d3e | swift      | object-store  |
| a5a23a23a23a23a23a23a23a23a23a23 | swift_s3   | s3            |

If swift does not show up, then you may not have installed it during the packstack install. Edit your packstack answer file to install only swift.

openstack endpoint show swift
| Field        | Value                                          |
| adminurl     | http://controller_ip:8080/                     |
| enabled      | True                                           |
| id           | c243243243243243243243243243243                |
| internalurl  | http://controller_ip:8080/v1/AUTH_%(tenant_id)s|
| publicurl    | http://controller_ip:8080/v1/AUTH_%(tenant_id)s|
| region       | RegionOne                                      |
| service_id   | a43e0d3e0d3e0d3e0d3e0d3e0d3e0d3e               |
| service_name | swift                                          |
| service_type | object-store                                   |

The above is wrong: nothing is listening on port 8080 on the controller. Check this with: netstat -plnt | grep 8080
Luckily, everything seems to be set up on the storage node – port 8080 is up, the iptables rule for 8080 is set, and the swift files are almost all good to go.

Correcting the Endpoint

Delete the current swift endpoint ID. On the control node,

openstack endpoint delete c243243243243243243243243243243

And recreate a new one pointing to the right server (remember, the proxy was set up on the storage server by packstack):

openstack endpoint create \
 --publicurl 'http://storage_ip:8080/v1/AUTH_%(tenant_id)s' \
 --internalurl 'http://storage_ip:8080/v1/AUTH_%(tenant_id)s' \
 --adminurl http://storage_ip:8080 \
 --region RegionOne \
 swift
Adjust the swift_s3 service endpoint as well if you plan to use the S3 API.

openstack endpoint delete swift_s3_id
openstack endpoint create \
 --publicurl 'http://storage_ip:8080/v1/AUTH_%(tenant_id)s' \
 --internalurl 'http://storage_ip:8080/v1/AUTH_%(tenant_id)s' \
 --adminurl http://storage_ip:8080 \
 --region RegionOne \
 swift_s3

Adjusting the proxy-server.conf

The /etc/swift/proxy-server.conf file on the storage node must be edited as below. In particular, the identity_uri and auth_uri must point to the Keystone IP. One other minor thing to check is whether /var/cache/swift, which is used as the signing directory, has the correct SELinux context. You may try sudo restorecon -R /var/cache/

. . . 
pipeline = catch_errors healthcheck cache authtoken keystoneauth container_sync bulk ratelimit staticweb tempurl slo formpost account_quotas container_quotas proxy-server
. . . 
log_name = swift
signing_dir = /var/cache/swift
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
identity_uri = http://controller_ip:35357/
auth_uri = http://controller_ip:5000

admin_tenant_name = services
admin_user = swift
admin_password = secret_pass
delay_auth_decision = true
cache = swift.cache
include_service_catalog = False
. . .

Restart the proxy server

sudo service openstack-swift-proxy restart

On the controller, check that swift stat works:

 swift stat
 Account: AUTH_idididid
 Containers: 1
 Objects: 0
 Bytes: 0
 X-Put-Timestamp: 1400000000.00015
 Connection: keep-alive
 X-Timestamp: 1400000000.00015
 X-Trans-Id: tx12314141-12312312
 Content-Type: text/plain; charset=utf-8

Enabling S3 API for Swift Object Storage

This post shows the details of enabling the S3 API for Swift Object Storage on OpenStack Kilo on CentOS 7.
The main documentation is here:
As of July 2015, that page seems dated: some links are broken and the config steps are unclear.

Install Swift3 Middleware

The Swift3 middleware seems to have shifted to

So the correct git clone command is

git clone

python setup.py install

At the end of the above command’s execution, you should see:

Copying swift3.egg-info to /usr/lib/python2.7/site-packages/swift3-1.8.0.dev8-py2.7.egg-info
running install_scripts

Adjust proxy-server.conf

For Keystone setups, add “swift3” and “s3token” to the pipeline.

For others, add swauth instead of s3token (untested).


pipeline = catch_errors healthcheck cache swift3 s3token authtoken keystoneauth ...

[filter:swift3]
use = egg:swift3#swift3

[filter:s3token]
paste.filter_factory = keystonemiddleware.s3_token:filter_factory
auth_port = 35357
auth_host = keystone_ip_address
auth_protocol = http

The important part is the filter_factory: it’s keystonemiddleware and not keystone.middleware. Then restart the swift proxy service.

sudo service openstack-swift-proxy restart

Testing the Swift S3 API using S3Curl

S3Curl is a tool provided by Amazon. Also note the comment on its download page where you need to yum install the perl-Digest-HMAC package.
You can use Horizon to create a test container and upload a small text file into it.
In our example, we have created a container called “test_container” and simple text file called “test_obj” inside the container.

Make sure you edit the script to use OpenStack’s Swift proxy endpoint:

my @endpoints = ( '');

Retrieve the access keys from Horizon dashboard

Go to Project -> Compute -> Access & Security. Click on the API Access tab.
Note the S3 Service endpoint. In our case:

On the top right, click on View Credentials:
“EC2 Access Key” –> your id for S3 tools such as S3Curl.
“EC2 Secret Key” –> your key for S3 tools such as S3Curl.

For instance, let’s say:
EC2 Access Key = HorizonEC2AccessKeyA0919319
EC2 Secret Key = HorizonEC2SecretKeyS1121551
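For reference, the AWS v2 signature that s3curl computes from these keys is just a base64-encoded HMAC-SHA1 over a canonical request string; a minimal sketch (the date and resource are illustrative):

```shell
# signature = base64(HMAC-SHA1(secret_key, string_to_sign))
STRING_TO_SIGN=$'GET\n\n\nWed, 01 Jul 2015 00:00:00 +0000\n/'
SECRET='HorizonEC2SecretKeyS1121551'
SIG=$(printf '%s' "$STRING_TO_SIGN" | openssl dgst -sha1 -hmac "$SECRET" -binary | base64)
echo "Authorization: AWS HorizonEC2AccessKeyA0919319:${SIG}"
```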

Get the list of containers

The S3Curl command is:
./ --id HorizonEC2AccessKeyA0919319 --key HorizonEC2SecretKeyS1121551

Note: the above ID is the actual access key, not a reference to a personal .s3curl profile. The tool will give a few warnings, but that’s OK – we are just testing.

Expected output is:

 <?xml version='1.0' encoding='UTF-8'?>
<ListAllMyBucketsResult xmlns=""><Owner><ID>admin:admin</ID><DisplayName>admin:admin</DisplayName></Owner><Buckets><Bucket><Name>test_container</Name><CreationDate>2009-02-03T16:45:09.000Z</CreationDate></Bucket></Buckets></ListAllMyBucketsResult>

The above indicates the root of our storage contains a bucket named test_container. Let’s list the objects in that container (bucket).

Get the list of objects in the container

To get the list of objects inside the container, execute:

./ --id HorizonEC2AccessKeyA0919319 --key HorizonEC2SecretKeyS1121551

The output will have something like:

. . . <Contents><Key>test_obj</Key><LastModified>. . .

In the above, the Key is the object name. If you simply want to stream the contents of test_obj:

./ --id HorizonEC2AccessKeyA0919319 --key HorizonEC2SecretKeyS1121551

You should see test_obj’s contents printed out.

This confirms that our setup is working fine.
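Alternatively, the keys can be stored in a ~/.s3curl profile (Perl syntax, as the tool expects), after which passing --id with the profile name replaces the raw keys on the command line. A sketch; the profile name openstack is ours:

```perl
# ~/.s3curl
%awsSecretAccessKeys = (
    openstack => {
        id  => 'HorizonEC2AccessKeyA0919319',
        key => 'HorizonEC2SecretKeyS1121551',
    },
);
```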


Multi Node OpenStack Kilo On Single Physical Host


In our venture into IaaS, we considered a small experimental project to install OpenStack on a single host, with a view that when we get budget for multiple physical machines, we should be able to scale up from our 1 physical machine to N physical machines with ease, without fear of losing configuration – aka project OpenStack-in-a-box. In this article, we use OpenStack Kilo on the CentOS 7.1 release, installed using RDO Packstack.

What do we get with all-in-one or multi node RDO Packstack?

In this article, we describe a multi node OpenStack installation on a single physical node without using nested virtualization. In our implementation, all nodes except the compute node are virtual. RDO Packstack provides an all-in-one or multi node install option. If you deploy everything inside VMs, the lay of the land looks like the figure below:

Image of how OpenStack Deployment looks in an all VM Environment
Openstack Deployment in an all VM Environment

In the above picture, the individual control nodes such as Neutron & Compute are virtual machines themselves. The instances spawned by OpenStack run inside the Compute VM. While nested virtualization works fine, the performance is not the best.

So why not use Packstack all-in-one on bare metal? That solves performance but makes it hard to migrate out easily.

One theory we follow is that moving entire VMs or LVM blocks is one of the simpler and safer ways to migrate a system, compared to moving configuration files or databases. This is especially so for control nodes such as Horizon/Neutron.

What do we want?

With performance and ease of scale out in mind, we wanted all control nodes as VMs and the single compute node as the host itself. It’s easier to replace or add a new compute node than to move data to make a new control node; compute nodes are treated as more dispensable. So when we get our shiny new physical servers, theoretically, we just move our control nodes to the new machines. The environment we are aiming for is shown in the figure below:

Image showing OpenStack Environment we are aiming for
OpenStack Environment we are aiming for

In the above figure, we want the physical host to be the Nova compute node. The other control nodes such as Neutron, Horizon, Cinder etc. are VMs that are manually created and run on the physical host.

If we scale out, we leave the physical host as the compute node and move the VMs, still running as VMs, to the new hardware.

For the sake of pushing the limits, we wanted to keep the number of available physical NICs to exactly 1.

We had some requirements in this project:

  1. Instances must be as fast as running a VM in KVM.
    1. Nested KVM was considered, but many companies don’t support their software if run in a nested KVM environment.
  2. Easy to implement in enterprise environments.
  3. Easy to migrate to multiple physical machines when we get more hardware.
  4. Must use only one physical NIC and 1 physical host.
  5. Must be repeatable.

Security wise, iptables on the host is difficult to control because Nova updates iptables at instance launch – but at least it’s all in one server for now. One big thing we lose with this configuration is high availability, which by definition needs at least 2 physical nodes. While you can implement HA across VMs, a physical or environmental failure means the environment is lost. So beware of this case.

 We evaluated various options:

  • TripleO – Too complex for simple setups as it targets multiple hypervisors. Hard to justify for experiments unless in a heterogeneous environment.
  • All virtual machines with nested KVM – Performance limited. Not supported in enterprise environments (Red Hat and such).
  • Packstack all-in-one – We eventually use packstack with a multi node config, but this requires a little pre-configuration.

 We need a refined approach.


The solution has a key concept: use the physical host to do two things:

1. Be the compute host (Nova), and

2. Run a few virtual machines, manually created from XML via virsh or virt-manager, that are used as the controller, network, storage and authentication (FreeIPA) nodes. Everything else is driven by OpenStack.

…And the Trick for Packstack

Packstack’s install scripts need the name of the vNIC on the compute node to match the name of the NIC on the Neutron node. So we rename our virtual bridge to the vNIC’s name on the Neutron node.
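A minimal sketch of that rename (the network name and addressing here are assumptions, not from our actual setup); the `bridge name='eth1'` attribute is the key part:

```shell
# Hypothetical network name and addressing; the <bridge name='eth1'/>
# line is what makes the host-side bridge appear as 'eth1' instead of
# the auto-assigned 'virbr1'.
cat > openstack-data.xml <<'EOF'
<network>
  <name>openstack-data</name>
  <bridge name='eth1' stp='on' delay='0'/>
  <ip address='172.16.10.1' netmask='255.255.255.0'/>
</network>
EOF
virsh net-define openstack-data.xml
virsh net-autostart openstack-data
virsh net-start openstack-data
```

With this definition, the physical host itself gets an address on the bridge, which also makes the host part of the private network.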

The overall single-physical-host OpenStack setup is shown in the following diagram.

Image of Single Physical Host Network Architecture
Single Physical Host Network Architecture

In the above picture, the important pieces are:

  1. The network setup (virbrX) – manually defined networks in XML for virsh.
  2. Our OpenStack internal network usually gets named virbr1. We edited the network XML to set the bridge name to eth1. This is important to satisfy the packstack install requirement that the interface name on the control and compute nodes match.
  3. The manually created virtual machines used as OpenStack nodes, such as the Network, Storage and Controller nodes. These are simple CentOS templates with static IPs.
  4. After packstack is run, the OpenStack-spawned VMs (“Open Stack VM Instance x”) communicate via the virbrX that was set up for OpenStack.

Here are the things necessary for this setup:

Step 1: Create a few virtual machines by hand or with virt-manager.

Step 2: These virtual machines are the controller, network and storage nodes.

Step 3: Create LVs that you can attach to the storage VM.

Step 4: Create the networks that OpenStack uses via virsh.

Step 5: Importantly, the host itself should become a part of the private network.

External IP Addresses

In general, the external network is any network that is connected to the network/L3 host. It could be your upstream ISP network or another internal corporate network. Normally, you’ll have multiple IP addresses assigned so that they can be used as floating IP address ranges. How to connect these IP addresses to the Neutron host is a decision based on performance and feature requirements.

Note that we are using only one NIC. Sometimes we may need the NIC to access the host itself, so it may not be possible to do PCI device passthrough of the NIC to the Neutron host.

Here are few options for using the NIC:

  1. Bridging needs to be done carefully as it has a performance impact: the NIC becomes a slave to the bridge and enters promiscuous mode, listening to all traffic on the network.
  2. Use libvirt‘s routed network to route the traffic to the Neutron host. By itself this is not enough, because the routes for your public IP block need to point at your host as the router. If you have control over the upstream router, it’s best to add static routes that direct the public IPs to your physical host as the gateway. The physical host then forwards them to the internal virtual bridge serving the external network. This is a clean and more maintainable approach.
  3. If you have no control over the upstream router, the proxy ARP method can be used. This needs to be handled carefully.
    1. Proxy ARP is a mode of the NIC where it answers ARP requests for all IP addresses for which it has a non-default route (i.e., a static route that is not a default gateway).
    2. proxy_arp can easily lead to network issues if used without care. A common problem is the host answering ARP requests coming from the external network for internal (virtual) network addresses.
    3. To be a good citizen, you may add filter rules to drop outgoing ARP replies that are not sourced from your public IP list, or drop incoming ARP requests that don’t match the public IP addresses you intend to answer for.
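As a sketch of the last point (the 111.0.0.0/24 block and eno1 NIC name are placeholders): when proxy_arp is used without a bridge, ebtables does not see the traffic, so host-level arptables rules are one way to scope what gets answered:

```shell
# Placeholder public block 111.0.0.0/24 and NIC name eno1 - adjust to
# the addresses and interface you actually own.
# Refuse to answer ARP requests arriving on the external NIC for
# addresses outside the public block:
arptables -A INPUT -i eno1 --opcode Request ! --destination-ip 111.0.0.0/24 -j DROP
# And never emit ARP replies on it claiming addresses outside the block:
arptables -A OUTPUT -o eno1 --opcode Reply ! --source-ip 111.0.0.0/24 -j DROP
```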

 OpenStack Node layout

     Upstream Gateway
--peth0------------------------Physical Host---------------------------,
|    |                                                                 |
|    |                                                                 |
|    |----->Routing Table                                              |
|          _______________________                                     |
|         | e.x.t.0/24 -> virbr2 |           virbr1--rename--eth1      |
|         |    -> gateway        |                            |        |
|         |______________________|                            |        |
|                 |                                           |        |
|                 v  Packets to virbr2 and back to GW         |        |
|          ______________                              Nova PrivIF     |
|        |   virbr2    |                                Same Net as    |
|        |_____________|                               Data (veth1)    |
|                  ^                                          |        |
|                  |                                          v        |
|         br-ex <--'                                                   |
|         veth1                               veth1                    |
|         veth0 ___________                   veth0 ___________        |
|         |               |                   |               |        |
|         |   VM0 -Net    |                   |   VM2 -stor   |        |
|         |_______________|                   |_______________|        |
|                                                                      |
|                                                                      |
|        veth1                               veth1                     |
|        veth0 ___________                   veth0 ___________         |
|        |               |                   |               |         |
|        |   VM1 -ctrl   |                   |   VM3 -Auth   |         |
|        |_______________|                   |_______________|         |
|                                                                      |
|                                                                      |
| -------------------------------------------------------------------- |

When installing with packstack there are some requirements that make us do interesting things. The first is to rename virbr1 on the physical host to eth1 by editing the XML used for creating the network in virsh. This is needed because packstack expects the interface on the Nova and Controller nodes to have the same name (e.g. eth1 on both). If you’re uncomfortable with that naming, after the packstack install you could destroy the ‘eth1’ virtual network and rename it back to ‘virbr1’ before going back to full operations. For general operation, it does not matter what the virtual network is named.

 You’ll have to decide which way you want the naming to be. This depends mainly on what NIC you have and how many. We decided ‘eth1’ would be our NIC’s name on the control node. Our server did not use eth1, so this saved the additional step of renaming our physical NICs.

Planning the networks

The NIC to network mapping are provided as a reference:

eth2 -> or /28  [External network, IP address provided by data center]
eth1 -> 172.16.xx.0/24       [OpenStack Data network]
eth0 -> 192.168.xx.0/24      [Corporate NAT access network to internet ]

IP address Management – Floating and Internal

An overall network diagram is shown in the following diagram. This also marks out IP addresses that are used in various networks. 111.0.0.x is external IP (floating IP). Other addresses are internal.

Image of Open Stack Network With IP Assigned
Open Stack Network With IP Assigned

The key aspect to understand about the floating IP network is that the host owning the physical NIC (peth0) acts as a gateway. When the virtual network is started, libvirt adds its routes to the physical host’s routing table. The presence of a route to the network allows the physical NIC to accept packets arriving at the NIC and forward them towards their final destination: the Neutron host’s br-ex bridge, which further routes them to the OpenStack VM instance the IP has been assigned to. proxy_arp also causes peth0 to respond to any ARP request for the IP addresses in the route table. This means that for security, we need an iptables rule to drop packets arriving at peth0 that are not destined for the floating IP range or peth0’s own range.
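A sketch of such a rule (the floating range and the NIC name peth0 are placeholders taken from the diagram, not verbatim from a working config):

```shell
# Forward only traffic destined for the floating IP block (placeholder
# 111.0.0.0/24) arriving on the physical NIC peth0; drop the rest.
iptables -A FORWARD -i peth0 -d 111.0.0.0/24 -j ACCEPT
iptables -A FORWARD -i peth0 -j DROP
```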

Any request for the external network from the OpenStack instances goes all the way from the OpenStack router to the physical host, as they are really on a connected bridge:

OpenStack Router -> Br-Ex -> virbr0 -> routing table on physical host -> final interface selection.

 Any request for other IP addresses uses the gateways:

Router -> Neutron host (br-ex) -> Physical host -> Upstream gateway

Public IP address recovery:

 IF        IP          Recoverable?  Why?
 peth0                 No            Network (zero) address can't be used by VMs anyway.
 virbr0                No            Hop-0 routing gateway for OpenStack instances.
                                     This gateway has a route to the datacenter/ISP gateway.
                                     For security, we should have iptables rules to reject
                                     INPUT from instances.
 br-ex                 Yes           Neutron host br-ex does not need a public IP.
 OpStkRtr  111.0.0.x   No            Acts as gateway for instances.

How to recover IP addresses?

Permanently: on the Neutron host (after you install OpenStack with packstack), update ifcfg-br-ex to have no IP address and add a static route in route-br-ex. This means that unless there is another NIC with access to the external network, the Neutron node itself will not have external network access on br-ex. This is good for security: no one can access the Neutron host using an external IP. It also saves a precious external IP address.

Prepare the host for packstack

The host is CentOS 7.1 with SELinux enabled.

Disable network manager:

service NetworkManager stop
chkconfig NetworkManager off
service network start
chkconfig network on

Disable firewalld because many scripts are not yet firewalld ready.

systemctl disable firewalld
systemctl stop firewalld

Enable iptables:

yum install iptables-services
systemctl start iptables
systemctl enable iptables

Control Node Virtual Machines

The main control nodes of OpenStack – Controller/Horizon, Neutron, Storage – are all virtual machines. Before you begin, make sure you have at least the virsh command installed. These VMs are created using virt-manager. If you plan not to install the X Window System, you can edit the XMLs based on an existing template XML after dumping it out with the virsh dumpxml command.
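For example, a control-node VM can be cloned from a template without X roughly like this (the VM and file names are hypothetical):

```shell
# Dump an existing template VM's XML to use as a starting point.
virsh dumpxml centos-template > controller.xml
# Edit controller.xml: change <name>, delete <uuid> and the interface
# <mac> so libvirt regenerates them, and point the disk <source> at a
# fresh image copied from the template's disk.
virsh define controller.xml
virsh start controller
```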

VM Sizing Guide

Overall, the size of the OpenStack cloud is determined by the power of your CPU and memory. A dual socket with a copious amount of RAM usually gives a better runway for maintaining this setup a little longer. You’ll also need a lot of HDD space to hold all the VMs (including instances launched from OpenStack). We had 2 low-end Haswell Xeon CPUs with 6 cores each, giving us 24 cores with HT, 64GB of RAM and 1.5TB of HDD.

The controller is heavy on various communications and CPU. A general guideline is to allow at least 4 vCPUs and 4 to 6 GB of memory for the controller. Neutron is CPU heavy, so 2-4 vCPUs with 2 GB of memory for a small cloud.

Roughly, 12 to 14 vCPUs were used for the various control nodes (control, neutron, storage) with about 12 to 14 GB of memory.

In reality, only 1-2 control VMs are actively used; the others are generally idle.

So most of the vCPUs can be over-allocated. Memory may not be that flexible.


A separate VM has LVs attached that are created on the physical host. This allows easy management of storage space for Cinder and Swift. Once we have a separate storage server, it’s easy to move and add more storage to the VM.

In our scenario, we use another separate VM for NFS for internal reasons, but NFS could also be part of the storage VM. Note that for packstack, the attached LVM storage must already have a volume group named ‘cinder-volumes’.
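A sketch of the storage plumbing (the VG, LV, VM and device names are assumptions):

```shell
# On the physical host: carve out an LV and attach it to the storage VM.
lvcreate -L 200G -n cinder-disk vg_host
virsh attach-disk storage-vm /dev/vg_host/cinder-disk vdb --persistent
# Inside the storage VM: packstack expects a VG named 'cinder-volumes'.
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb
```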

Note that /etc/hosts and the hostname must be correctly set on each node.

External Network Setup

Conceptually, OpenStack connects to the external network via a bridge (usually OVS). For production-type usage, you may have multiple external networks, but the basic concept is the same: create a bridge to the NIC that is connected to the external network and tell OpenStack about it.

Most of the material in this section is from RDO project’s page on external network.

On the Neutron host, set up the external bridge.
External bridge name: br-ex
NIC used for external communications: eth2
File: ifcfg-br-ex
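A minimal ifcfg-br-ex sketch along the lines of the RDO external-network guide (the address is a placeholder; it can be omitted entirely to recover the IP as discussed earlier):

```shell
# /etc/sysconfig/network-scripts/ifcfg-br-ex (sketch; address is a placeholder)
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=111.0.0.2      # optional - omit to leave br-ex without a public IP
NETMASK=255.255.255.0
ONBOOT=yes
```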


Next, set how traffic is routed to br-ex. This is needed if Neutron has a different default gateway than the one that would carry the external traffic.
So create the following file.
File: route-br-ex (make it chmod +x)

# Network route
# Gateway (Note: can't add the GW at index 0 -
#   "route not found" error - probably a bug)
GATEWAY1=   # Note: this should be the reachable IP on virbr0

File: ifcfg-eth2

HWADDR=<your eth2's HW address>

Edit /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini to associate the physical network with the bridge mappings:

bridge_mappings = physnet1:br-ex

Then restart networking:

service network restart

Better yet, reboot the Neutron host VM. Then ensure that

ovs-vsctl show

lists br-ex with eth2 as a port.

Clean up any old / default routers

neutron subnet-delete public_subnet
neutron router-gateway-clear router1
neutron router-delete router1
neutron net-delete public

Setup the network, subnet and router

neutron net-create public_net --router:external
neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=,end=  --dns-nameserver= --gateway= public_net
neutron router-create public_router
neutron router-gateway-set public_router public_net

Opening for Business

If you used the static route method to forward packets, then you’re already in business.

If you used the proxy_arp method, open the main physical NIC for business by allowing it to respond to ARP requests:

sysctl net.ipv4.conf.eno1.proxy_arp=1

Note this makes the host respond to all ARP requests that match the routes in

ip route show

so make sure your routes are really for addresses you own.

Launch a VM

Launch an instance and assign a floating IP. For testing purposes, don’t forget to allow SSH/ping in a security group and apply it to the instance.

Ping the floating IP!
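A Juno/Kilo-era CLI sketch of that test (the flavor, image, net-id and addresses are placeholders):

```shell
# Allow ping and SSH in the default security group.
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# Boot a test instance on the private network.
nova boot --flavor m1.small --image centos-7 \
     --nic net-id=<private-net-id> test-vm
# Allocate a floating IP from the public pool and associate it.
nova floating-ip-create public_net
nova floating-ip-associate test-vm 111.0.0.5
ping 111.0.0.5
```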

Bugs/Things we hit
  • Packstack requires the ‘default’ network to be present so that it can delete it, else the install fails [Juno & Kilo]
  • Error 500 “Oops something went wrong” on Horizon when we try to login after session expiry [Kilo]
  • Error 500 OR ISCSI Type Error during install of storage node [Juno]
  • Cinder Failure Due to Authentication [Juno]


We learnt quite a bit from the following:

“Networking in Too Much Detail”. Retrieved Jun/7/2015.

“Diving into OpenStack Network Architecture”. Ronen Kofman. Retrieved Jun/7/2015.

“RDO Juno Set up Two Real Node…”. Boris Derzhavets. Retrieved Nov/20/2014.

“OpenStack Juno on CentOS 7”. Retrieved Nov/20/2014.

“OpenStack Documentation”. Retrieved Jun/7/2015.