In OroCRM 1.10 a new REST/JSON API was introduced. This post is about how to use it.
Basics:
1. You need to create an API key for the user that will be used for API access. Instructions are in OroCRM's "How to use WSSE authentication" documentation.
2. The full API documentation is auto-generated; go to [app_url]/api/doc/rest_json_api.
3. An API sandbox is available on that page. It is crucial for understanding and using the API from your own apps.
4. Log in as the API user so you can use the API sandbox.
5. Click on "Sandbox" to try the API.
6. Enter the parameters and click "Try It".
7. The response shows the JSON format you can use in your own API calls (a curl sketch follows).
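For illustration only, here is a hedged sketch of calling the API from the shell with curl. It follows the generic WSSE UsernameToken scheme; the URL, username, API key, and endpoint are placeholders, and the exact digest recipe should be checked against Oro's WSSE documentation.
# Placeholders: replace with your instance URL, API user, and API key
USERNAME="apiuser"
API_KEY="your_api_key"
NONCE=$(openssl rand -hex 16)
CREATED=$(date -u +%Y-%m-%dT%H:%M:%SZ)
# Generic WSSE: PasswordDigest = base64(sha1(nonce + created + secret))
DIGEST=$(printf '%s%s%s' "$NONCE" "$CREATED" "$API_KEY" | openssl dgst -sha1 -binary | base64)
curl -s \
  -H 'Content-Type: application/vnd.api+json' \
  -H 'Authorization: WSSE profile="UsernameToken"' \
  -H "X-WSSE: UsernameToken Username=\"$USERNAME\", PasswordDigest=\"$DIGEST\", Nonce=\"$(printf '%s' "$NONCE" | base64)\", Created=\"$CREATED\"" \
  "https://your-orocrm.example.com/api/users"
The sandbox response is the quickest way to confirm the exact headers and payload your instance expects.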
A short note about OroCRM 1.10 on CentOS 7.2. The original install instructions are good. If OroCRM sits behind a reverse proxy that is not using HTTPS, you need to configure trusted proxies; otherwise there will be errors.
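As a hedged sketch (OroCRM 1.10 is Symfony-based; the file path and proxy IP below are our assumptions, not from the Oro docs), the trusted proxies can be listed in the Symfony framework configuration:
# app/config/config.yml (assumed location; the proxy IP is a placeholder)
framework:
    trusted_proxies: [ "192.168.1.10" ]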
Also, change the sending address in System -> Email Configuration if it has picked up "localdomain" or something else not resolvable from outside; otherwise outgoing mail may be blocked.
Within an organization using FreeIPA there may be a need to create a subordinate CA. A subordinate CA can issue certificates on behalf of the organization's root CA, so it should be treated with the same security as the root CA itself. The main advantage is that a subordinate CA can be revoked if it goes rogue, keeping the rest of the organization working.
FreeIPA 4.2.0 allows the use of certificate profiles (certificate templates) when signing new certificate requests. This lets you create your own profiles for signing any type of certificate request. Before FreeIPA 4.2.0 this was not possible, although the underlying Dogtag PKI already supported it.
RFC 5280 describes X.509 certificates, including subordinate CA certificates. If you have OpenSSL installed, see man x509v3_config for a detailed description of the v3 extensions.
Characteristics of a Sub-CA
For all practical purposes, a Sub-CA is a CA that ideally:
Has the CA flag set to true
Does not issue further Sub-CA certificates (see the extension sketch below)
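For illustration, here is a hedged sketch of how these characteristics map to OpenSSL x509v3 extensions (the section name v3_sub_ca is our own choice; see man x509v3_config):
[ v3_sub_ca ]
# CA flag set to true; pathlen:0 prevents this CA from issuing further Sub-CAs
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always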
Creating the Certificate Profile
A subordinate CA is a very powerful element in the PKI's trust chain, so ensure that the people who have access to it understand what they are doing.
Here is the raw config file we used for our Sub CA certificate profile. Some notes follow this long listing.
[root@idm01 fedora]# ipa certprofile-import caSubCertAuth2 --store=true --file=caSubCACert2.cfg
Profile description: This certificate profile is for enrolling Subordinate Certificate Authority certificates (v2).
---------------------------------
Imported profile "caSubCertAuth2"
---------------------------------
Profile ID: caSubCertAuth2
Profile description: This certificate profile is for enrolling Subordinate Certificate Authority certificates (v2).
Store issued certificates: TRUE
[root@idm01 fedora]#
Don't forget to add a Certificate ACL allowing the appropriate groups access to the certificate profile, either from the Web UI or the command line.
Certificate Signing Request
The CSR is no different from any other request; you can use OpenSSL to create it. Just ensure the CN and other fields follow the restrictions set in the profile above.
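As a hedged example (the key size, file names, and subject are placeholders), a key and CSR can be generated with OpenSSL and then submitted against the new profile; the exact ipa cert-request options may vary with your FreeIPA version:
openssl genrsa -out sub-ca.key 4096
openssl req -new -key sub-ca.key -out sub-ca.csr -subj "/O=Example Org/CN=Example Sub CA"
# Submit against the profile imported above (the principal is a placeholder)
ipa cert-request sub-ca.csr --principal=host/subca.example.com --profile-id=caSubCertAuth2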
The green bar that shows up when you visit certain e-commerce websites is a very nice marketing thing, so some users would like to see it on their own sites. Our next quest is to see whether we can create a certificate profile that generates a certificate with the EV flags set. A cursory reading of the documentation shows it is not well documented and probably not possible.
Some notes on OpenStack Horizon performance optimizations on a CentOS 7.1 install:
4 vCPU (2.3 GHz Intel Xeon E5 v3), 2 GB – 4 GB RAM, SSD backed 40 GB RAW image.
CentOS 7.1 ships with Apache 2.4.6 so there are some optimizations we’ll try.
Multi-Processing Module selection: the default is prefork (at least on an OpenStack-installed system).
The event MPM is apparently better for response times, so in /etc/httpd/conf.modules.d/00-mpm.conf enable: LoadModule mpm_event_module modules/mod_mpm_event.so
Ensure exactly one MPM is enabled in this file, then restart httpd: systemctl restart httpd
The side effect is that PHP code running under mod_php may stop working and need a php-fpm setup (a sketch follows).
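A hedged sketch of that php-fpm setup on CentOS 7; the conf.d snippet and document root are our own assumptions and may need adjusting for your layout:
yum install -y php-fpm
systemctl enable php-fpm
systemctl start php-fpm
# Hand .php requests to php-fpm instead of mod_php, e.g. in /etc/httpd/conf.d/php-fpm.conf:
#   ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9000/var/www/html/$1
systemctl restart httpd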
Multi-master database replication in a cluster allows applications to write to any database node, with the data becoming available at the other nodes shortly afterwards. The main advantages are highly available deployments, high read performance, and scalability.
Overall Design
We are aiming to have an application layer access a database cluster via a load balancer, as shown in the picture below:
Fig. 1: Load Balancer for a Database Cluster
Trove
For providing database services on OpenStack we considered Trove. However, it is broken on Kilo. There is no easy way to get a "Trove image" and launch it. There is a nice, automated script on the RDO page that actually creates an image; however, after the image is registered, launching a DB instance errors out. Given that the OpenStack Trove documentation was not helpful, there was no motivation for us to debug further, as maintaining hacked code would be much riskier for us. Wish it worked. Moving on to other options: the Galera Cluster and MySQL Cluster products.
Using other options
In the world of MySQL-based multi-master replication clusters, there are a few popular options:
MariaDB Galera Cluster
Percona XtraDB Cluster
MySQL Cluster
Out of the three, we chose Percona XtraDB Cluster (PXC), mainly because of its slightly better support for tables without primary keys [1] [2]. Note that Galera is used in both MariaDB Galera Cluster and PXC, yet some users have still reported issues with missing primary keys on MariaDB; in general, you should have a primary key on every table. We could have used MariaDB Galera Cluster, but either its documentation is not maintained or it really does have a strict rule requiring primary keys, which is a significant restriction. MySQL Cluster, on the other hand, has a huge learning curve for setup and administration; it might be something to consider when scaling up to millions of queries per second. MySQL Cluster bears no resemblance to MariaDB's or Percona's cluster products, so it is a completely different mindset.
Instance Preparation
We use CentOS 7.1 instances with a newly created volume for the OS disk. The database data itself lives on a separate volume: vdb.
Swap File Preparation
Normally the instances don't have a swap file enabled (check with swapon --summary), so prepare one like so:
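A minimal sketch, assuming a 2 GB swap file is enough for these nodes:
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile swap swap defaults 0 0' >> /etc/fstab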
We need to open these ports for each of the database nodes:
TCP 873 (rsync)
TCP 3306 (Mysql)
TCP 4444 (State Transfer)
TCP 4567 (Group Communication - GComm_port)
TCP 4568 (Incremental State Transfer port = GComm_port+1)
SELinux was set to permissive (setenforce 0) temporarily while the installation was done. Ensure the ports above are allowed by a security group applied to the database instances. On every node we need to install the PXC database software. Install it, but don't start the mysql service yet.
Installing the Database Percona XtraDB Cluster Software
Before you install, there is a prerequisite: socat. This package should be installed from the base repository. If you have EPEL enabled, remove it (assuming this node is going to be used only for the database).
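A hedged sketch of the install steps; the percona-release RPM URL and the PXC 5.6 package name reflect what was current at the time and may differ for you:
yum install -y socat
yum install -y http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
yum install -y Percona-XtraDB-Cluster-56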
To start a new cluster, the very first node must be started in a specific way, known as bootstrapping. This causes the node to assume it is the primary of the DB cluster we are about to bring to life.
First edit /etc/my.cnf to set up your requirements; a sketch follows.
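A hedged sketch of /etc/my.cnf for the first node; the cluster name, node names, and IP addresses are placeholders to adapt:
[mysqld]
datadir=/var/lib/mysql
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=pxc-cluster
wsrep_cluster_address=gcomm://192.168.0.11,192.168.0.12,192.168.0.13
wsrep_node_name=DBNode1
wsrep_node_address=192.168.0.11
wsrep_sst_method=rsync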
Start the bootstrap service: systemctl start mysql@bootstrap.service
This special service uses my.cnf with wsrep_cluster_address = gcomm:// (no IPs) and starts the MySQL server as the first node, which creates a new cluster. Be sure to run this service only when creating the cluster, not when joining a node.
While this first node is running, log in to each of the other nodes, DBNode2 and DBNode3, and use the my.cnf from above as a template. For each node, update wsrep_node_name and wsrep_node_address. Note that wsrep_cluster_address should contain the IP addresses of all the cluster nodes.
Start the mysql service on each of the nodes 2 & 3 while node 1 is still running: systemctl start mysql
Verify the cluster is up and the nodes have joined
Run the query below on any node; it should show Value: 3, indicating that three nodes have joined.
mysql> select @@hostname\G show global status like 'wsrep_cluster_size' \G
*************************** 1. row ***************************
@@hostname: dbserver1.novalocal
1 row in set (0.00 sec)
*************************** 1. row ***************************
Variable_name: wsrep_cluster_size
Value: 3
1 row in set (0.00 sec)
Start Node 1 back in normal mode
On Node 1, restart in normal mode: systemctl stop mysql@bootstrap.service; systemctl start mysql
Verify that database replication actually happens
On one of the nodes, say DBNode3, create a sample database and table.
mysql -u root -p
CREATE DATABASE my_test_db;
USE my_test_db;
CREATE TABLE my_test_table (test_year INT, test_name VARCHAR(255));
INSERT INTO my_test_table (test_year, test_name) values (1998, 'Hello year 1998');
On another node, say DBNode2, check that the table and rows are visible:
mysql -u root -p
SELECT @@hostname\G SELECT * from my_test_db.my_test_table;
*************************** 1. row ***************************
@@hostname: dbserver2.novalocal
1 row in set (0.00 sec)
+-----------+-----------------+
| test_year | test_name |
+-----------+-----------------+
| 1998 | Hello year 1998 |
+-----------+-----------------+
1 row in set (0.00 sec)
This confirms our cluster is up and running.
Don’t forget to enable the mysql service to start automatically – systemctl enable mysql
Also set the root password for MySQL.
Managing Users in Clustered Database
In the cluster setup, the mysql.* system tables are not replicated, so manually inserting a user into the mysql.* tables stays local to that node. Instead, use CREATE USER statements, which are replicated across the cluster. A sample:
CREATE USER 'admin'@'%' IDENTIFIED BY 'plainpassword';
GRANT ALL ON *.* TO 'admin'@'%';
You can log into any other node to verify that the new user has been created.
In addition, you can use MySQL Workbench to manage databases in the cluster.
OpenStack Load Balancer
OpenStack Load Balancer as a Service (LBaaS) is easily enabled in RDO Packstack and other installs. To create a load balancer for the database cluster we created above, click the Load Balancer menu under Network and click Add Pool, as shown in the figure below:
Then fill in the pool details as shown in the picture below:
Note that we are using the TCP protocol in this case, as we need to allow MySQL connections. For simplicity of testing, use the ROUND_ROBIN balancing method.
Next, add the VIP for the load balancer from the Actions column. In the VIP setup, choose protocol TCP and port 3306.
Next, add the members of the pool by selecting the 'Members' tab and then selecting the database nodes. For now you can keep the weight as 1.
Get the VIP address by clicking the VIP link on the load balancer pool. Once you have the IP, you can optionally associate a floating IP. This is done under Compute -> Access & Security: allocate an IP to your project, then click Associate; in the drop-down you should see the VIP's name and the IP you provided.
This completes the Load balancer setup.
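For reference, a hedged neutron LBaaS v1 CLI equivalent of the dashboard steps above; the subnet ID and node addresses are placeholders:
neutron lb-pool-create --name db-pool --protocol TCP --lb-method ROUND_ROBIN --subnet-id <db-subnet-id>
neutron lb-member-create --address 192.168.0.11 --protocol-port 3306 db-pool
neutron lb-member-create --address 192.168.0.12 --protocol-port 3306 db-pool
neutron lb-member-create --address 192.168.0.13 --protocol-port 3306 db-pool
neutron lb-vip-create --name db-vip --protocol TCP --protocol-port 3306 --subnet-id <db-subnet-id> db-pool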
Testing the Load Balancer
A simple test is to query the load balancer's VIP with the MySQL client. In our case the VIP is 172.16.99.35, and the results are shown below.
[centos@client1 etc]$ mysql -u root -p -h 172.16.99.35 -e "SHOW VARIABLES LIKE 'wsrep_node_name';"
Enter password:
+-----------------+---------+
| Variable_name | Value |
+-----------------+---------+
| wsrep_node_name | DBNode1 |
+-----------------+---------+
[centos@client1 etc]$ mysql -u root -p -h 172.16.99.35 -e "SHOW VARIABLES LIKE 'wsrep_node_name';"
Enter password:
+-----------------+---------+
| Variable_name | Value |
+-----------------+---------+
| wsrep_node_name | DBNode2 |
+-----------------+---------+
You can see that each query is being routed to different nodes.
Simplistic PHP Test App
On another VM, install Apache and PHP. Start Apache and add a PHP file like the one below. The database is the one we created above.
<?php
// Connect through the load balancer; the hostname below resolves to the LB VIP.
$user = "root";
$pass = "your_password";
$db_handle = new PDO("mysql:host=dbcluster1.testdomain.com;dbname=my_test_db", $user, $pass);
print "<pre>";
// Show the replicated test data
foreach ($db_handle->query("SELECT test_name FROM my_test_table") as $row) {
    print "Name from db " . $row['test_name'] . "<br />";
}
print "\n";
// Show which cluster node served this request (wsrep_* variables)
foreach ($db_handle->query("SHOW VARIABLES LIKE 'wsrep_%'") as $row) {
    print $row['Variable_name'] . " = " . $row['Value'];
    print "\n";
}
print_r($row);
print "</pre>";
$db_handle = null;
?>
From the browser navigate to the URL where this file is.
This shows the data from the table and the various wsrep variables. Each time you refresh the page you should see wsrep_node_address and wsrep_node_name change, which tells you the load balancer is working.
Monitoring
In general, the cluster needs to be monitored for crashed databases and the like. The OpenStack load balancer can monitor the members of the pool and mark a failed member as inactive; a hedged CLI sketch follows.
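A sketch of adding such a health monitor with the LBaaS v1 CLI; the intervals are placeholder values, and the monitor ID comes from the create command's output:
neutron lb-healthmonitor-create --type TCP --delay 5 --timeout 3 --max-retries 3
neutron lb-healthmonitor-associate <monitor-id> db-pool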
Crashed Node Recovery
Recovering a crashed node with little impact on the overall cluster is one of the main reasons to go with a cluster in the first place. A very nice article about the various ways to recover a crashed node is on Percona's site.
Conclusion
We described how to create a database cluster and configure a load balancer on top of it. It is not a very complex process. The entire environment ran on OpenStack Kilo.
In the context of virtualization, backing up VM images to storage nodes involves moving very large files. Many VM images are just copies of the OS with data on top, so data deduplication and compression should offer great savings. In our search we found various utilities, which we list further down, but we settled on reviewing two popular ones: zbackup and attic. Another popular tool, bup, was also considered, but a few things, such as being unable to prune old versions, ruled it out for us.
The main requirements were data deduplication, compression, encryption, and easy scripting, all in one tool. In this article we give some background on their usage on CentOS 7.1. We don't plan an extensive evaluation of other capabilities, as we are looking for these basic features to be done well.
ZBackup
ZBackup describes itself as a globally-deduplicating backup tool, drawing its inspiration from the bup and rsync tools. As you add more files to the archive, it stores duplicate regions only once. It also supports AES-encrypted archives.
Installing ZBackup on CentOS 7.1
ZBackup is the easiest to install. It is available in the EPEL repos, and you can simply do yum install zbackup.
Usage
The archive is called a repository in ZBackup. It is nothing but a folder created for the tool's use, where it stores its metadata and all the files added to it for backup.
The first step is to initialize the folder, say zbak, with the metadata folders.
zbackup init --non-encrypted /kvmbackup/zbak
If you need encryption, you can enable it with a key file.
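A hedged sketch of creating an encrypted repository and doing a backup/restore round trip; the key file, repository path, and image names are placeholders:
head -c 64 /dev/urandom | base64 > /root/zbak.pass   # the key file contents are arbitrary
zbackup init --password-file /root/zbak.pass /kvmbackup/zbak-enc
zbackup backup --password-file /root/zbak.pass /kvmbackup/zbak-enc/backups/vm01.img < /kvmimages/vm01.img
zbackup restore --password-file /root/zbak.pass /kvmbackup/zbak-enc/backups/vm01.img > /tmp/vm01.img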
One immediate quirk of Attic is that, as of version 0.14, the extraction destination directory can't be specified. It extracts into the current directory but maintains the original path.
This makes scripted use of the tool a little inconvenient. The feature seems to be on their to-do list; we hope it arrives soon.
Which One to choose?
This is the subject of our next post. In the next part, we will compare the speeds of both these tools on backup and on restore path.
Other backup utilities we considered
bup
Duplicity
rsync
rdiff-backup
Bacula
ZFS (filesystem, not tool)
Most either did not have all the features we were looking for or were too complex. Do let us know your thoughts in the comments.
When constructing complex networks in the cloud, you run into situations where packets seem to be silently dropped. One possible reason is the Reverse Path Forwarding/Filtering (RPF) check at the routing decision step. In the RPF check, when the kernel makes the routing decision for a given packet, it (1) notes the interface the packet arrived on, (2) looks up in its routing table which interface it would use to reach the packet's source address, and (3) verifies that this interface is the same as the one the packet arrived on.
If the above check passes, the packet is allowed to go forward.
Why?
To reduce address spoofing. Reverse path forwarding is defined in IETF RFC 3704 as a best practice. It is most applicable to edge routers and internet-facing systems.
However, the default value in CentOS is set to strict. This causes packet drops.
In most cases, if you are not connected to the internet or to untrusted networks, it should be fine to disable it or switch to 'loose' source validation, which means a packet is accepted if it has a valid route via any interface, not just the arrival interface.
Turn it off?
If your node is not internet-facing, it is generally fine to turn it off. If it sits on the internet edge, think twice: needing to turn it off there suggests something about the network configuration isn't natural.
Here’s how to check current values:
sysctl -a | grep \\.rp_filter
How to temporarily turn it off:
for i in /proc/sys/net/ipv4/conf/*/rp_filter ; do echo 2 > $i ; done
In the above, we are setting a value of 2, which is the 'loose' source check.
Or set it for all interfaces with sysctl:
sysctl -w net.ipv4.conf.all.rp_filter=2
To make the change persist across reboots, also add net.ipv4.conf.all.rp_filter = 2 to /etc/sysctl.conf or a file under /etc/sysctl.d/.
Note: setting 'all' may not be enough; you may have to set individual interfaces as well.
How to detect RPF is dropping packets
Enable log_martians to see entries in /var/log/messages showing that packets are being dropped:
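For example (the 'all' key is shown; individual interfaces can be enabled the same way):
sysctl -w net.ipv4.conf.all.log_martians=1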
It is hard to detect with iptables alone; it falls into the class of 'missing log' entries. Typically the packet shows up exiting the last rule in PREROUTING but never appears at the first rule of POSTROUTING. You can add iptables LOG rules to trace this, but if you suspect a drop at the routing step, enabling log_martians is a much easier way.
Documentation is a bit sparse on what configuration parameters to enable for SPICE console access. This article provides our notes for enabling SPICE on CentOS 7.1 with OpenStack Kilo.
Essentially, the control node acts as a proxy to the compute node, which runs the SPICE server; the control node is a client of the compute node.
Required Packages
On both Control & Compute:
yum install spice-html5
On Control:
yum install openstack-nova-spicehtml5proxy
Config Files
The file to modify is /etc/nova/nova.conf on both the compute and control nodes.
In both config files, ensure vnc_enabled=False is explicitly set. If novnc is enabled, ensure that is disabled too.
Control IP = 192.168.1.100
Compute IP = 172.16.10.100 [internal IP; ports may need to be opened if not already open]
On Control Node
/etc/nova/nova.conf
[DEFAULT]
web=/usr/share/spice-html5
. . .
[spice]
html5proxy_host=0.0.0.0
html5proxy_port=6082
html5proxy_base_url=https://192.168.1.100:6082/spice_auto.html
# Enable spice related features (boolean value)
enabled=True
# Enable spice guest agent support (boolean value)
agent_enabled=true
# Keymap for spice (string value)
keymap=en-us
Iptables rule on control node
Since we are allowing access to the console via port 6082 on the control node, open this port in iptables.
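A hedged example of such a rule (inserted ahead of any reject rules in the INPUT chain):
iptables -I INPUT -p tcp --dport 6082 -j ACCEPT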
You can make it permanent by adding the above rule to /etc/sysconfig/iptables (before the reject rules), saving, and restarting iptables.
Config Changes on Compute Node
/etc/nova/nova.conf
[DEFAULT]
web=/usr/share/spice-html5
. . .
[spice]
html5proxy_base_url=https://192.168.1.100:6082/spice_auto.html
server_listen=0.0.0.0
server_proxyclient_address=172.16.10.100
# Enable spice related features (boolean value)
enabled=True
# Enable spice guest agent support (boolean value)
agent_enabled=true
# Keymap for spice (string value)
keymap=en-us
Restart services
On Compute
# service openstack-nova-compute restart
On Control
# service httpd restart
# service openstack-nova-spicehtml5proxy start
# service openstack-nova-spicehtml5proxy status
# systemctl enable openstack-nova-spicehtml5proxy
Concepts
Here the control node runs an HTML5 proxy that connects to the SPICE server and port that start when a VM is instantiated.
Here are some notes on some of the unclear options:
html5proxy_host
This sets the address the HTML5 proxy binds to; 0.0.0.0 means it listens on all interfaces of the host it runs on, the control node in this case.
html5proxy_base_url
This is the base URL used when you click 'Console' on the Horizon dashboard. Note that this URL must be reachable from the same network as the Horizon dashboard; in our case it points to the control node.
server_listen=0.0.0.0
server_listen specifies the address on which the VM instances listen for SPICE connections. This is a local address on the compute node.
server_proxyclient_address=172.16.10.100
server_proxyclient_address is the address that clients such as the HTML5 proxy use to connect to the VMs running on the compute node. It is an internal address, most likely not reachable from the outside world but reachable from the control node; here it is the internal IP address of the compute node.
Gotchas
Be sure about which config change goes on which node. Iptables is another thing to look out for; if you plan to use consoles regularly, make the iptables rules permanent.
“console is currently unavailable. Please try again later.”
Under the hood, you'll see
"ERROR: Invalid console type spice-html5 (HTTP 400)"
when you run
nova get-spice-console <instance> spice-html5
This generally means the VM did not start with SPICE enabled; a likely cause is that one of the services was not restarted after the config change.
Double-check the config file and make sure enabled=True is set in the [spice] section.