In OroCRM 1.10 a new REST/JSON API was introduced. This post is about how to use it.
Basics:
1. You need to create an API key for the user you are going to use for API-based access. Instructions are at OroCRM's How to use WSSE authentication.
2. The full documentation for the API is auto-generated; go to [app_url]/api/doc/rest_json_api.
3. An API sandbox is available on the same page. This is crucial for understanding and using the API from your own apps.
4. Log in as the API user so you can use the API sandbox.
5. Click on “Sandbox” to try the API.
6. Enter the parameters and click "Try It".
7. The response returned shows the JSON format you can use in your own API calls; a sample command-line call is sketched below.
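As a rough sketch of what a call looks like from the command line — the entity path and all header values below are placeholders, and the sandbox shows the exact URL and headers your install expects (the WSSE PasswordDigest is derived from a nonce, a timestamp and the user's API key):
curl -i "https://crm.example.com/api/contacts" \
     -H "Accept: application/vnd.api+json" \
     -H 'Authorization: WSSE profile="UsernameToken"' \
     -H 'X-WSSE: UsernameToken Username="apiuser", PasswordDigest="<digest>", Nonce="<nonce>", Created="<timestamp>"'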
Short post about OroCRM 1.10 on CentOS 7.2. The original install instructions are good. If you have OroCRM behind a reverse proxy that is not HTTPS, you need to configure trusted proxies; otherwise there will be errors.
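OroCRM is built on Symfony, so one way to declare the proxy is in the Symfony framework configuration. A sketch, assuming the stock app/config/config.yml layout and a proxy at 10.0.0.5 (both are placeholders; check your OroCRM/Symfony version for the exact setting):
# app/config/config.yml
framework:
    trusted_proxies: ['10.0.0.5']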
Also, change the sending address in System -> Email Configuration if it has picked up localdomain or something else not resolvable from outside; otherwise outgoing mail may be blocked.
It's been about 9 months since we first kicked off a limited production install of OpenStack. There were a few variables we were very interested in: how much specialized support is needed to maintain a small OpenStack deployment, and how stable it is.
The workload and management structure were as follows:
3 Different projects – production, development and test.
Each project had its own private network and a network to reach the public network.
10 long-term instances – alive for nearly the entire duration of the 9-month run.
About 10 short-term instances – alive for about a month each.
Workloads in production were real – any disruption had serious consequences.
Summary
There is no doubt OpenStack is a huge beast. However, once it was running it rarely required maintenance. Over the last year, documentation and the consistency of commands have improved, making life easier. So far everything has been stable; the most common issue is overflowing logs. In fact, that recently caused a critical control node failure, which is described in the rest of this post.
Ceilometer Space Issue
Ceilometer's default settings keep all samples, events and alarms forever. Since Ceilometer uses MongoDB, the disk space consumed grows monotonically. Over 9 months it occupied over 33 GB, about 99% of the hard disk. One simple improvement would be for RDO packstack and other distributions to ship saner defaults in /etc/ceilometer/ceilometer.conf. For starters, in most installations where monitoring is not critical, we can change the unlimited retention to some limit:
Change the time to live from forever to something sane like a week or a month.
Disable alarm change history – we can't see a reason why it is needed in normal cases.
# Number of seconds that samples are kept in the database for
# (<= 0 means forever). (integer value)
time_to_live=604800
# Record alarm change events. (boolean value)
#record_history=true
record_history=False
Restart ceilometer.
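On an RDO/CentOS 7 install this is roughly the following (service names vary by release; adjust to the ceilometer services actually present on your controller):
systemctl restart openstack-ceilometer-api openstack-ceilometer-central \
    openstack-ceilometer-collector openstack-ceilometer-notification \
    openstack-ceilometer-alarm-evaluator openstack-ceilometer-alarm-notifier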
Reduce elements in the database
Now we need to reduce the number of items (documents) in the database. For that we need the mongo client tools; simply download them from the MongoDB website.
./mongo --host 172.0.0.10
>show dbs;
admin (empty)
ceilometer 0.125GB
local 0.031GB
test (empty)
>use ceilometer;
>db.meter.storageSize();
>db.meter.count();
>db.meter.remove({"timestamp":{"$lt":ISODate("2016-03-18T19:30:07.805Z")}});
But the disk occupancy is still 99%.
Ceilometer Space Issue – Continued
Enter MongoDB: it does not release disk space just because we removed documents. And at 99% occupancy you can't run db.repairDatabase(), which is what normally returns unused space, because it needs free disk space to work with.
So we have to take the hacky route of dumping, dropping and restoring the database:
./mongodump --host=172.0.0.10
./mongo --host 172.0.0.10
>use ceilometer;
>db.dropDatabase();
>exit
./mongorestore --host=172.0.0.10 --db=ceilometer dump/ceilometer/
This restores the database and reduces the consumed space. Given that we have now limited the time for which samples are stored, we shouldn't have to use this hack often.
If you have your own FreeIPA install, you may be tempted to try the marketing thing and get the green bar for some of your SSL certificates. Sadly, issuing your own Extended Validation (EV) certificates and getting the green bar on all browsers is not simple (without specially recompiled browser code, it is not possible). The exact reason is described in this article. In a way, this is a good thing, as the browsers intend it to provide stronger security.
We verified 3 browsers: Firefox 40, Chrome 45 and IE 11.
Why internally issued EV certs and the green bar are not allowed
The answer simply boils down to trust. For EV certs, browsers don't trust anyone but a hardcoded list in their code. For example, in Firefox's code here: http://lxr.mozilla.org/firefox/source/security/manager/ssl/src/nsIdentityChecking.cpp
This is to ensure that no accidentally inserted root certificate in the store can verify EV certs.
Questionable ‘Higher’ Security/Trust model for hard coded EV Issuer Certificate Stores
Our opinion is that this isn't any real security or trust; it seems based on the notion that hiding the EV issuer certificates in the browser executable is somehow secure. We find this slightly worse than letting the OS manage the certificate store. Our argument is that hard-coding EV issuer certificates places 'highly trusted' certificates in the same trust zone as the browser process space, so malware does not have to mount a privilege escalation attack to mislead the user into believing a website is 'highly trusted'. This leads to an inversion where EV issuer certificates are arguably less protected than the ones in the OS store. The assumption we are making is that tampering with the OS certificate store takes a lot more sophistication and privilege escalation (compared to none at all) to trick the user with a fake EV certificate.
This design only limits naive user mistakes, not sophisticated or even above-average malware.
We examined the Firefox source code; one of the starting functions for checking whether a certificate is EV is nsNSSCertificate::hasValidEVOidTag.
This function checks whether the certificate's EV OID (Extended Validation policy object ID) matches an OID from the hard-coded EV issuer certs in getRootsForOid(SECOidTag oid_tag), which accesses the static array static struct nsMyTrustedEVInfo myTrustedEVInfos[]. This hard-coding of root certs precludes adding your EV-issuing CA certificate to the trusted roots in the Mozilla certificate store. So there is no green bar even if your certificate meets all the requirements of an EV certificate and you have added your CA as a trusted root in Mozilla.
Edit – Mozilla's security team was nice enough to comment on our analysis. Their take is that they hold this position to influence CA standards and EV expectations. Our analysis remains unchanged.
Chrome follows a similar policy. IE is different: if you can get to GPO on your CA system, you can set the external OIDs that will allow a green bar. Unfortunately, IE alone is not a full solution, as many people in our org use other browsers.
However, you can generate an ‘EV’ certificate in FreeIPA
Why would one care if we can't get the green bar? Perhaps you can become an intermediate CA of another CA that allows you to generate EV certs (I have not found any CA that allows this). Or you plan to become an EV-issuing CA yourself and want to embed your certificate in the browsers. If nothing else, for academic reasons.
FreeIPA Certificate Profile for EV Certificate Issuance
The main characteristics of an EV certificate are the presence of OCSP and AIA information, a certificate subject formatted in a particular way, and a policy ID plus a CPS statement line.
Start with IPA's default caIPAserviceCert profile (see ipa certprofile-find). The important change is adding the certificatePoliciesExt extension to the profile.
The policy ID should be an ANSI-assigned OID (which can get expensive to obtain). For testing, you can use one of the OIDs listed on Wikipedia as assigned to existing EV CAs. The cheaper way is to get a Private Enterprise Number from IANA and derive your OIDs from that point on.
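A hedged sketch of what the certificatePoliciesExt addition can look like in Dogtag profile syntax. The policyset index, policy OID and CPS URI below are placeholders (the OID shown is one of the publicly listed EV policy OIDs), the new index also has to be added to the profile's policyset list, and the exact parameter names should be checked against your Dogtag version:
policyset.serverCertSet.12.constraint.class_id=noConstraintImpl
policyset.serverCertSet.12.constraint.name=No Constraint
policyset.serverCertSet.12.default.class_id=certificatePoliciesExtDefaultImpl
policyset.serverCertSet.12.default.name=Certificate Policies Extension Default
policyset.serverCertSet.12.default.params.PoliciesExt.certPolicy0.enable=true
policyset.serverCertSet.12.default.params.PoliciesExt.certPolicy0.policyId=2.16.840.1.114412.2.1
policyset.serverCertSet.12.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.CPSURI.enable=true
policyset.serverCertSet.12.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.CPSURI.value=https://www.example.com/cps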
Then import the config file into FreeIPA: ipa certprofile-import caIPAWebServiceEVCert --store=true --file=websslprofile
Certificate Signing Request
In your CSR, the subject must contain fields such as businessCategory=Private Organization and serialNumber=000.
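A hedged example of generating such a CSR with OpenSSL; every subject value below is a placeholder, and the full list of required EV fields is in the CA/Browser Forum guidelines:
openssl req -new -newkey rsa:2048 -nodes -keyout ev-test.key -out ev-test.csr \
    -subj "/C=US/ST=California/L=Example City/O=Example Corp/businessCategory=Private Organization/serialNumber=000/CN=www.example.com"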
For an in-depth explanation of which fields are required, visit the CA/Browser Forum at https://cabforum.org and look for the EV SSL specifications.
Also, thanks to the very nice CSR decoder at CertLogik (https://certlogik.com/decoder/), which helped in our investigation.
Within an organization using FreeIPA there may be a need to create a subordinate CA. A subordinate CA can issue certificates on behalf of the organization's root CA, so it should be treated with the same security as the root CA itself. The main advantage is that a subordinate CA can be revoked if it goes rogue, while the rest of the organization keeps working.
FreeIPA 4.2.0 allows the use of certificate profiles (certificate templates) when signing new certificate requests. This lets you create your own profiles for signing any type of certificate request. Before FreeIPA 4.2.0 this was not possible, although the underlying Dogtag PKI already allowed it.
RFC 5280 describes X.509 certificates and, specifically, subordinate CAs. If you have OpenSSL installed, see man x509v3_config for a detailed description of the v3 extensions.
Characteristics of a Sub-CA
For all practical purposes a Sub-CA is a CA that ideally:
Has the CA flag set to true
Preferably does not issue further Sub-CA certificates
Creating the Certificate Profile
A subordinate CA is a very powerful element in the PKI’s trust chain. So ensure people who have access to it have adequate knowledge of what’s happening.
Here is the raw config file we used for our Sub CA certificate profile. Some notes follow this long listing.
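The parts of the profile that matter most for a Sub-CA are the basic constraints and key usage defaults. The excerpt below is an illustrative sketch rather than the literal listing; the class IDs and parameter names follow stock Dogtag conventions and should be checked against your installation:
policyset.caCertSet.5.default.class_id=basicConstraintsExtDefaultImpl
policyset.caCertSet.5.default.name=Basic Constraints Extension Default
policyset.caCertSet.5.default.params.basicConstraintsCritical=true
policyset.caCertSet.5.default.params.basicConstraintsIsCA=true
policyset.caCertSet.5.default.params.basicConstraintsPathLen=0
policyset.caCertSet.6.default.class_id=keyUsageExtDefaultImpl
policyset.caCertSet.6.default.params.keyUsageCritical=true
policyset.caCertSet.6.default.params.keyUsageKeyCertSign=true
policyset.caCertSet.6.default.params.keyUsageCrlSign=true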
[root@idm01 fedora]# ipa certprofile-import caSubCertAuth2 --store=true --file=caSubCACert2.cfg
Profile description: This certificate profile is for enrolling Subordinate Certificate Authority certificates (v2).
---------------------------------
Imported profile "caSubCertAuth2"
---------------------------------
Profile ID: caSubCertAuth2
Profile description: This certificate profile is for enrolling Subordinate Certificate Authority certificates (v2).
Store issued certificates: TRUE
[root@idm01 fedora]#
Don't forget to add a certificate ACL allowing the appropriate groups access to the certificate profile, either from the Web UI or the command line.
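From the command line, a sketch of such an ACL (the ACL and group names are placeholders):
ipa caacl-add subca_acl --desc="Who may request Sub-CA certificates"
ipa caacl-add-profile subca_acl --certprofiles=caSubCertAuth2
ipa caacl-add-user subca_acl --groups=pki-admins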
Certificate Signing Request
The CSR is no different from other requests; you can use OpenSSL to create it. Just ensure the CN and other fields follow the restrictions set in the profile above.
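For example (subject values are placeholders):
openssl req -new -newkey rsa:4096 -nodes -keyout subca.key -out subca.csr \
    -subj "/O=EXAMPLE.COM/CN=Example Department Sub-CA"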
The green bar that shows up when you visit certain e-commerce websites is a very nice marketing thing. As a result, some users would like to see the green bar. Our next quest is to see whether we can create a certificate profile that allows generating a certificate with the EV flags set. A cursory reading of the documentation shows it's not well documented and probably not possible.
Some notes on OpenStack Horizon performance optimizations on a CentOS 7.1 install:
4 vCPU (2.3 GHz Intel Xeon E5 v3), 2 GB – 4 GB RAM, SSD backed 40 GB RAW image.
CentOS 7.1 ships with Apache 2.4.6 so there are some optimizations we’ll try.
Multi-Processing Module selection: the default is prefork (at least on an OpenStack-installed system).
The event MPM is apparently better for response times, so try switching to it in /etc/httpd/conf.modules.d/00-mpm.conf.
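On CentOS 7 the stock file ships with prefork enabled; comment that line out and enable event instead (a sketch of the relevant lines):
# /etc/httpd/conf.modules.d/00-mpm.conf
#LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
LoadModule mpm_event_module modules/mod_mpm_event.so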
Ensure exactly one MPM is enabled in this file, then restart httpd: systemctl restart httpd
The side effect is that any PHP code running under mod_php may stop working and need a php-fpm setup.
Multi-master replication in a database cluster allows applications to write to any database node, with the data becoming available at the other nodes in short order. The main advantages are high availability, high read performance and scalability.
Overall Design
We are aiming to have an application layer accessing a database cluster via a load balancer, as shown in the picture below:
Fig. 1: Load Balancer for a Database Cluster
Trove
For providing database services on OpenStack we considered Trove. However, it's broken on Kilo: there is no easy way to get a 'Trove image' and launch it. There is a nice automated script on the RDO page that actually creates an image, but after the image is registered it errors out on DB instance launch. The OpenStack Trove documentation was not helpful, so there was no motivation for us to debug further; it would be much riskier for us to maintain hacked code. Wish it worked. Moving on to other options… enter the Galera Cluster and MySQL Cluster products.
Using other options
In the world of MySQL based multi master replication cluster databases, there are few popular ones:
MariaDB Galera Cluster
Percona XtraDB Cluster
MySQL Cluster
Out of the three, we chose Percona XtraDB Cluster (PXC), mainly because of its slightly better support for tables without primary keys [1] [2]. Note that Galera is used in both MariaDB Galera Cluster and PXC, and some users have still reported issues with missing primary keys on MariaDB; in general, you should have a primary key on every table. We could have used MariaDB Galera Cluster, but either its documentation is not maintained or it really does enforce a strict rule requiring primary keys, which is a significant restriction. MySQL Cluster, on the other hand, has a huge learning curve for setup and administration. It might be worth considering when scaling up to millions of queries per second, but it bears no resemblance to the MariaDB or Percona cluster products, so it is a completely different mindset.
Instance Preparation
We use CentOS 7.1 instances that create a new volume for the OS disk. The database data itself sits on a separate volume: vdb.
Swap File Preparation
Normally the instances don't have a swap file enabled (check with swapon --summary), so prepare one like so:
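A minimal sketch (the 2 GB size and the /swapfile path are arbitrary choices):
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile swap swap defaults 0 0' >> /etc/fstab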
We need to open these ports for each of the database nodes:
TCP 873 (rsync)
TCP 3306 (Mysql)
TCP 4444 (State Transfer)
TCP 4567 (Group Communication - GComm_port)
TCP 4568 (Incremental State Transfer port = GComm_port+1)
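One way to open these from the command line, assuming a security group named db-cluster is applied to the database instances (a sketch using the Kilo-era neutron client; repeat for the remaining ports):
neutron security-group-rule-create --direction ingress --protocol tcp \
    --port-range-min 3306 --port-range-max 3306 db-cluster
neutron security-group-rule-create --direction ingress --protocol tcp \
    --port-range-min 4567 --port-range-max 4568 db-cluster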
SELinux was set to Permissive (setenforce 0) temporarily while installation was done. Ensure the above ports are allowed by a security group applied to the database instances. On every node we need to install the PXC database software; install it, but don't start the mysql service yet.
Installing the Database Percona XtraDB Cluster Software
Before you install PXC, there is a prerequisite: socat. This package should be installed from the base repository; if you have EPEL enabled, remove it (assuming this node is going to be used only for the database).
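Roughly, the installation looks like this (the package name is for PXC 5.6 and may differ for your version):
yum install socat
# Add the Percona yum repository first (see Percona's install docs for the
# percona-release package), then:
yum install Percona-XtraDB-Cluster-56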
In order to start a new cluster, the very first node must be started in a specific way, aka bootstrapping. This causes the node to assume it is the primary of the DB cluster we are about to bring to life.
First edit /etc/my.cnf to set up your requirements.
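A minimal sketch of the wsrep-related settings for node 1; the IPs, cluster/node names and SST credentials are placeholders, and the Galera library path depends on your PXC version:
[mysqld]
datadir = /var/lib/mysql
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2

wsrep_provider = /usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name = db_cluster
wsrep_cluster_address = gcomm://192.168.10.11,192.168.10.12,192.168.10.13
wsrep_node_name = DBNode1
wsrep_node_address = 192.168.10.11
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sstuser:sstpassword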
Start the bootstrap service: systemctl start mysql@bootstrap.service
This special service starts the MySQL server with wsrep_cluster_address = gcomm:// (no IPs), making it the first node and creating a new cluster. Be sure to run this service only when creating the cluster, not when joining a node to an existing one.
While this first node is running, log in to each of the other nodes, DBNode2 and DBNode3, and use the my.cnf above as a template. For each node, update wsrep_node_name and wsrep_node_address. Note that wsrep_cluster_address should contain the IP addresses of all nodes in the cluster.
Start the mysql service on each of the nodes 2 & 3 while node 1 is still running: systemctl start mysql
Verify Cluster is up and nodes are joined
Run the query below; it should show a Value of 3, indicating the 3 nodes have joined.
mysql> select @@hostname\G show global status like 'wsrep_cluster_size' \G
*************************** 1. row ***************************
@@hostname: dbserver1.novalocal
1 row in set (0.00 sec)
*************************** 1. row ***************************
Variable_name: wsrep_cluster_size
Value: 3
1 row in set (0.00 sec)
Start Node 1 back in normal mode
On Node 1, restart MySQL in normal mode: systemctl stop mysql@bootstrap.service; systemctl start mysql
Verify database and replication actually happens
On one of the nodes, say DBNode3, create a sample database and table.
mysql -u root -p
CREATE DATABASE my_test_db;
USE my_test_db;
CREATE TABLE my_test_table (test_year INT, test_name VARCHAR(255));
INSERT INTO my_test_table (test_year, test_name) values (1998, 'Hello year 1998');
On another node, say DBNode2, check that the table and rows are visible:
mysql -u root -p
SELECT @@hostname\G SELECT * from my_test_db.my_test_table;
*************************** 1. row ***************************
@@hostname: dbserver2.novalocal
1 row in set (0.00 sec)
+-----------+-----------------+
| test_year | test_name |
+-----------+-----------------+
| 1998 | Hello year 1998 |
+-----------+-----------------+
1 row in set (0.00 sec)
This confirms our cluster is up and running.
Don’t forget to enable the mysql service to start automatically – systemctl enable mysql
Also set the root password for MySQL.
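For example (the password is a placeholder):
mysql_secure_installation
# or, directly:
mysqladmin -u root password 'YourStrongPassword'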
Managing Users in Clustered Database
In this cluster setup the mysql.* system tables are not replicated, so manually inserting a user row into the mysql.* tables only affects the local node. Instead, use CREATE USER statements, which are replicated across the cluster. A sample:
CREATE USER 'admin'@'%' IDENTIFIED BY 'plainpassword';
GRANT ALL ON *.* TO 'admin'@'%';
You can log in to any other node to verify that the new user has been created.
In addition, you can use MySQL Workbench to manage databases in the cluster.
OpenStack Load Balancer
OpenStack Load Balancer as a Service (LBaaS) is easily enabled in RDO packstack and other installs. To create a load balancer for the database cluster we created above, click on the Load Balancers menu under Network and click Add Pool, as shown in the figure below:
Then fill in the pool details as shown in the picture below:
Note that we are using the TCP protocol in this case, as we need to pass MySQL connections through. For simplicity of testing, use the ROUND_ROBIN balancing method.
Next, add the VIP for the load balancer from the Actions column. In the VIP setup, choose protocol TCP and port 3306.
Next, add the members of the pool by selecting the 'Members' tab and then selecting the database nodes. For now you can keep the weight as 1.
Get the VIP address by clicking the VIP link on the load balancer pool. Once you have the IP, you can optionally associate a floating IP. This can be done by going to Compute -> Access & Security, allocating an IP to your project, and clicking Associate. In the drop-down you should see the VIP's name and the IP you provided.
This completes the Load balancer setup.
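For reference, the same pool, members and VIP can be created with the Kilo-era neutron LBaaS v1 CLI, roughly as follows (names, subnet ID and member IPs are placeholders):
neutron lb-pool-create --name db-pool --protocol TCP --lb-method ROUND_ROBIN --subnet-id <db-subnet-id>
neutron lb-member-create --address 192.168.10.11 --protocol-port 3306 db-pool
neutron lb-member-create --address 192.168.10.12 --protocol-port 3306 db-pool
neutron lb-member-create --address 192.168.10.13 --protocol-port 3306 db-pool
neutron lb-vip-create --name db-vip --protocol TCP --protocol-port 3306 --subnet-id <db-subnet-id> db-pool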
Testing the Load Balancer
A simple test is to query the load balancer's VIP with the MySQL client. In our case the VIP is 172.16.99.35, and the result is seen below.
[centos@client1 etc]$ mysql -u root -p -h 172.16.99.35 -e "SHOW VARIABLES LIKE 'wsrep_node_name';"
Enter password:
+-----------------+---------+
| Variable_name | Value |
+-----------------+---------+
| wsrep_node_name | DBNode1 |
+-----------------+---------+
[centos@client1 etc]$ mysql -u root -p -h 172.16.99.35 -e "SHOW VARIABLES LIKE 'wsrep_node_name';"
Enter password:
+-----------------+---------+
| Variable_name | Value |
+-----------------+---------+
| wsrep_node_name | DBNode2 |
+-----------------+---------+
You can see that successive queries are routed to different nodes.
Simplistic PHP Test App
On another VM, install Apache and PHP. Start Apache and add a PHP file like the one below. The database is the one we created above.
<?php
$user = "root";
$pass = "your_password";
$db_handle = new PDO("mysql:host=dbcluster1.testdomain.com;dbname=my_test_db", $user, $pass);
print "<pre>";
foreach ($db_handle->query("SELECT test_name FROM my_test_table") as $row)
{
print "Name from db " . $row['test_name'] . "<br />";
}
print "\n";
foreach ($db_handle->query("SHOW VARIABLES LIKE 'wsrep_%'") as $row) {
print $row['Variable_name'] . " = " . $row['Value'];
print "\n";
}
print_r ($row);
print "</pre>";
$db_handle = null;
?>
From the browser navigate to the URL where this file is.
This shows the data from the table and the various wsrep variables. Each time you refresh the page you should see wsrep_node_address and wsrep_node_name change, so you know the load balancer is working.
Monitoring
In general, the cluster needs to be monitored for crashed databases and the like. The OpenStack load balancer can monitor the members in the pool and mark failed members inactive.
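A health monitor can be attached to the pool in Horizon or via the CLI, roughly like this (the delay/timeout/retry values are examples):
neutron lb-healthmonitor-create --type TCP --delay 5 --timeout 3 --max-retries 3
neutron lb-healthmonitor-associate <healthmonitor-id> db-pool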
Crashed Node Recovery
Recovery of crashed nodes with little impact on the overall cluster is one of the main reasons to go with a cluster in the first place. A very nice article about the various ways to recover a crashed node is on Percona's site.
Conclusion
We described how to create a database cluster and configure a load balancer on top of it. It's not a very complex process. The entire environment ran on OpenStack Kilo.
A short piece of code to return a list of DiskInfo tuples consisting of a disk's target device, source file and format (driver type). DiskInfo is a user-defined named tuple that allows easy access to the info about a disk device.
import libvirt
import sys
from xml.etree import ElementTree
from xml.dom import minidom
from collections import namedtuple
DiskInfo = namedtuple('DiskInfo', ['device', 'source_file', 'format'])
#Return a list of block devices used by the domain
def get_target_devices(dom):
    #Create an XML tree from the domain XML description.
    tree = ElementTree.fromstring(dom.XMLDesc(0))
    #The list of block devices found so far.
    devices = []
    #Iterate through all disks of the domain.
    for target in tree.findall("devices/disk"):
        #Within each disk found, get the source file,
        #for ex. /var/lib/libvirt/images/vmdisk01.qcow2
        for src in target.findall("source"):
            file = src.get("file")
        #The driver type, for ex. qcow2/raw
        for src in target.findall("driver"):
            type = src.get("type")
        #Target device like vda/vdb etc.
        for src in target.findall("target"):
            dev = src.get("dev")
        #Make them all into a tuple
        Disk = DiskInfo(dev, file, type)
        #Check if we have already recorded this device for the domain.
        if Disk not in devices:
            devices.append(Disk)
    #Completed device list.
    return devices
Here dom is the domain object returned by a call to conn.lookupByName() or an equivalent function.
The calling code was:
for dev in get_target_devices(dom):
    print("Processing Disk: %s" % (dev,))
These are short posts on how to connect to QEMU/KVM via libvirt using the Python binding. Being able to talk to the hypervisor helps in various automation tasks. In this post, we show how to connect to the hypervisor and display domain details. It's assumed that you have QEMU/KVM running and have installed the libvirt-python package. If not, yum install libvirt-python installs it.
import sys
import logging
import libvirt

#Logger used for error reporting
logging.basicConfig()
Logger = logging.getLogger(__name__)

#Open a readonly connection to libvirt
conn = libvirt.openReadOnly(None)
if conn is None:
    Logger.critical('Could not connect to the hypervisor')
    sys.exit(1)
try:
    dom = conn.lookupByName("centos_vm")
except libvirt.libvirtError:
    Logger.critical('Could not find the domain')
    sys.exit(1)
print("Domain : id %d running %s state = %d" % (dom.ID(), dom.OSType(), dom.state()[0]))
print(dom.info())
Output
Domain 0: id -1 running hvm state = 5
[5, 2097152L, 0L, 2, 0L]
From libvirt source code:
# virDomainState (dom.state()[0])
VIR_DOMAIN_NOSTATE = 0 # no state
VIR_DOMAIN_RUNNING = 1 # the domain is running
VIR_DOMAIN_BLOCKED = 2 # the domain is blocked on resource
VIR_DOMAIN_PAUSED = 3 # the domain is paused by user
VIR_DOMAIN_SHUTDOWN = 4 # the domain is being shut down
VIR_DOMAIN_SHUTOFF = 5 # the domain is shut off
VIR_DOMAIN_CRASHED = 6 # the domain is crashed
VIR_DOMAIN_PMSUSPENDED = 7 # the domain is suspended by guest power management
More Help
To explore further, you can print out the entire libvirt Python API.
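For example, from the shell:
python -c "import libvirt; help(libvirt)"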
We put both ZBackup and Attic through two main tests: backup and restore.
The input files were generally QEMU IMG or QCOW2 images containing CentOS installs or empty data. The disks were all-SSD RAID 1+0. The CPUs were two Haswell Xeon 2.3 GHz processors with 6 cores each.
Backup Test
Attic
Backup Number | Input Size (GB) | Num Files | Time (hh:mm:ss) | Size of folder (GB) | Effective Compression Ratio | Notes
1 | 50 | 3 | 00:09:54 | 2.1 | 23.81 |
2 | 50 | 3 | 00:00:18 | 2.1 | 23.81 | No new files. No updates
3 | 50 | 3 | 00:01:15 | 2.1 | 23.81 | No new files. But minor update to one of the larger files
4 | 470 | 5 | 00:50:16 | 2.16 | 217.59 | 2 new files
5 | 470 | 5 | 00:41:31 | 2.16 | 217.59 | No new files. But minor update to one of the larger files
Total data processed = 1,090 GB.
Total time for data = 6,194 seconds
Attic takes 5.68 seconds per GB for data that is largely duplicate, such as IMG/QCOW2 files containing CentOS installs.
ZBackup
Backup Number | Input Size (GB) | Num Files | Time (hh:mm:ss) | Size of folder (GB) | Effective Compression Ratio | Notes
1 | 50 | 3 | 00:45:43 | 1.6 | 31.25 |
2 | 50 | 3 | 00:08:17 | 1.6 | 31.25 | No new files. No updates
3 | 50 | 3 | 00:08:22 | 1.6 | 31.25 | No new files. But minor update to one of the larger files
4 | 470 | 5 | 04:10:13 | 1.6 | 293.75 | 2 new files
5 | 470 | 5 | 04:08:00 | 1.6 | 293.75 | No new files. But minor update to one of the larger files
Total data processed = 1,090 GB.
Total time for data = 33,635 seconds
ZBackup takes 30.86 seconds per GB for data that is largely duplicate, such as IMG/QCOW2 files containing CentOS installs.
Restore Test
For restore, every restored file must exactly match the SHA1 fingerprint of the original file. Both ZBackup and Attic passed this test.
Attic
Restore Number | Restore Size (GB) | Num Files | Time (hh:mm:ss)
1 | 350 | 1 | 00:39:11
2 | 25 | 1 | 00:00:20
3 | 48 | 2 | 00:05:18
Total data processed = 423 GB.
Total time for data = 2,689 seconds
Attic takes 6.35 seconds per GB to restore data.
ZBackup
Restore Number | Restore Size (GB) | Num Files | Time (hh:mm:ss)
1 | 350 | 1 | 00:24:29 (2 GB cache)
2 | 350 | 1 | 00:26:40 (40 MB cache)
3 | 25 | 1 | 00:01:19
4 | 48 | 2 | 00:06:02
Total data processed = 773 GB.
Total time for data = 3,510 seconds
ZBackup takes 4.54 seconds per GB to restore data.
Comparison
Metric | Attic | ZBackup | Attic vs ZBackup
Backup - seconds/GB | 5.68 | 30.86 | -443.31%
Backup compression ratio | 217 | 293 | 35.02%
Restore - seconds/GB | 6.35 | 4.54 | -28.50%
The final selection depends on which factor carries more weight for you. For instance, if storing a GB is cheap but you need fast backup times, Attic seems best. If you care about size, ZBackup seems best, at the expense of time. ZBackup has selectable compression algorithms, so it might even be faster if you choose the faster LZO compressor; however, the author mentions LZO with a caveat. Our quick tests show LZO is definitely faster, but its compression ratio is lower than Attic's.
Do let us know your thoughts in the comments.
Post Script – The Test script Files
Attic Create Backup Script
run=$1
if [ "$run" == "" ]; then
echo "Error run number is required."
exit
fi
attic create --stats /vm_backup/atticrepo.attic::$run /virtual_machines/images/file1.img /virtual_machines/images/file2.img . . .
du -h -d 1 /vm_backup/atticrepo.attic
echo "Done"
ZBackup Create Backup Script
. . . Preamble Same as attic . . .
zbackup backup --non-encrypted --threads 8 --cache-size 1024mb \
    /vm_backup/zbak/backups/file1.img.$run < /virtual_machines/images/file1.img
. . . other files . . .
sha1sum was used to calculate SHA1 on restored files.