Quick Update: Novajoin in the Undercloud

This is an update to the previous note, to account for changes in novajoin and puppet-novajoin as things have matured more and patches have landed.

In particular,

  • Novajoin has been simplified and caching is no longer done.
  • puppet-novajoin has been added to puppet-nova, rather than as a separate package.
  • puppet-ipa is no longer used.  This was primarily because Openstack would not accept puppet-ipa’s license.  Instead, some of the functionality needed (ability to register as a FreeIPA client) has been re-implemented in puppet-novajoin.
  • TripleO Quickstart contains some in-review changes to set the DNS server for the undercloud node.  We want this to be the IPA server.


The basic steps needed are the same as before, albeit achieved slightly differently.  More details are provided in the subsequent details section.

  1. Create a new undercloud image containing a repo file for novajoin and a modified puppet-nova.
  2. Upload the image and checksum file to an accessible location.
  3. Register your undercloud node in IPA.
  4. Check out OOO Quickstart with the relevant patches.
  5. Modify the config file to update your IPA server etc.
  6. Run quickstart!


These are details on each of the steps above:

  1. You can create your own image by running the following script or you can use the image I created here.
  2. Make sure to upload both the image and the md5sum files.
  3. To register the IPA node, log onto the IPA server node and:
    kinit admin
    ipa host-add undercloud.example.com --password=MySecret --force
  4. Do the following:
    git clone https://git.openstack.org/openstack/tripleo-quickstart
    cd tripleo-quickstart
    git fetch https://git.openstack.org/openstack/tripleo-quickstart refs/changes/72/398772/2 && git checkout FETCH_HEAD
  5. Modify the file config/general_config/ha_ipa.yml:
    1. Set the proper ipa_domain, ipa_server, ipa_otp, ipa_principal and ipa_password.
    2. Set the IP address of the ipa server for the undercloud_dns_servers.
    3. Set the location of the undercloud image through:
    undercloud_image_url: 'https://vakwetu.fedorapeople.org/novajoin/undercloud.qcow2'
  6. Run quickstart.
    ./quickstart.sh --config config/general_config/ha_ipa.yml -e undercloud_image_url='https://vakwetu.fedorapeople.org/novajoin/undercloud.qcow2' -R master --no-clone cloud-machine.example.com
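For step 2, it is worth confirming that the image and its checksum file agree before uploading them. A quick local sketch of that check, using a stand-in file (names and contents are illustrative):

```shell
# Create a stand-in "image" plus its .md5 file, then verify the pair
# the same way a consumer of the uploaded files would.
ckdir=$(mktemp -d)
cd "$ckdir"
echo "fake image data" > undercloud.qcow2
md5sum undercloud.qcow2 > undercloud.qcow2.md5
md5sum -c undercloud.qcow2.md5   # prints: undercloud.qcow2: OK
```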

Quick Note: NovaJoin in the Overcloud

In the last post, we discussed how to — starting from quickstart — get an undercloud which was enrolled with FreeIPA and which had the novajoin service up and running.  As part of the work to get that going, we had to create some puppet modules for novajoin.  We can reuse those same puppet modules to help us deploy overcloud controllers which are also enrolled with FreeIPA and on which novajoin is installed and configured.

As before, this is a quick note, so there is stuff here that will likely change as we iterate through this.  All the configuration steps below take place on an undercloud which has been registered with FreeIPA, and on which novajoin is running.  You could, for example, use the methods in the previous section to create this undercloud.

Prepare the Overcloud Image

The overcloud image needs to be customized in a number of ways:

  1. The image needs to have a recent enough cloud-init in order to retrieve the vendor metadata to register with IPA.
  2. By default, package installs are disabled during an overcloud install.  Essentially, any package operations are replaced by a no-op.  This means that all required packages (novajoin in particular) need to be installed in the image ahead of time.

As root, run the following:

cd ~stack
source ./stackrc

cat > novajoin.repo << EOF
[novajoin]
name=Copr repo for novajoin owned by rcritten
EOF

chmod 777 /home/stack
virt-copy-in -a overcloud-full.qcow2 novajoin.repo /etc/yum.repos.d
virt-customize -a overcloud-full.qcow2 --install https://vakwetu.fedorapeople.org/novajoin/cloud-init-0.7.6-9.el7.x86_64.rpm
virt-customize -a overcloud-full.qcow2 --install python-novajoin
openstack overcloud image upload --update-existing
chmod 700 /home/stack
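The novajoin.repo stanza above is abbreviated; a working yum repo file also needs a baseurl and an enabled flag. A sketch with a placeholder URL (substitute the actual copr repository URL for rcritten's novajoin repo):

```shell
# COPR_URL is a placeholder, not the real repository URL.
COPR_URL="https://copr.example.org/novajoin/epel-7-x86_64/"
cat > novajoin.repo << EOF
[novajoin]
name=Copr repo for novajoin owned by rcritten
baseurl=${COPR_URL}
enabled=1
gpgcheck=0
EOF
cat novajoin.repo
```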

Make sure DNS is set correctly on the undercloud

As the stack user,

source ./stackrc

# set nameserver to ipa server 
ID=$(openstack subnet list -f value -c ID)
openstack subnet set ${ID} --dns-nameserver  <ipa_server_address>

Add the relevant puppet modules

We need to add the puppet modules for IPA and novajoin to the overcloud image.  In addition, a new puppet manifest needs to be added to puppet-tripleo to call the novajoin puppet modules.  We’ll pull all these changes in from current gerrit reviews.

Rather than using virt-customize to copy in the relevant files, we will use the swift artifacts mechanism (https://hardysteven.blogspot.fi/2016/08/tripleo-deploy-artifacts-and-puppet.html) to deploy the puppet modules.

# set up puppet artifact mechanism
git config --global user.email "alee@redhat.com"
git config --global user.name "Ade Lee"

git clone https://git.openstack.org/openstack/tripleo-common
export PATH="$PATH:/home/stack/tripleo-common/scripts"

mkdir puppet-modules
cd puppet-modules
git clone https://github.com/openstack/puppet-tripleo tripleo

cd tripleo
git fetch https://git.openstack.org/openstack/puppet-tripleo \
  refs/changes/88/374288/1 && git cherry-pick FETCH_HEAD
cd ..
git clone https://github.com/purpleidea/puppet-ipa.git ipa
git clone https://github.com/vakwetu/puppet-novajoin.git novajoin
cd ~

upload-puppet-modules -d puppet-modules
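Under the hood, upload-puppet-modules roughly tars up the modules directory, uploads the tarball to swift on the undercloud, and wires it into the deploy via the DeployArtifactURLs parameter (see the blog post linked above). The packaging half can be sketched with a dummy module (names are illustrative):

```shell
# Reproduce just the packaging step with a dummy module: tar up a
# puppet-modules directory the way the artifacts mechanism expects.
moddir=$(mktemp -d)
mkdir -p "$moddir/puppet-modules/novajoin/manifests"
echo "class novajoin { }" > "$moddir/puppet-modules/novajoin/manifests/init.pp"
tar -C "$moddir/puppet-modules" -czf "$moddir/puppet-modules.tar.gz" .
tar -tzf "$moddir/puppet-modules.tar.gz"
```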

Get heat-templates

A new profile needs to be added to tripleo-heat-templates for the novajoin service.  This profile then needs to be included as an optional component.

git clone https://github.com/openstack/tripleo-heat-templates
cd tripleo-heat-templates
git fetch https://git.openstack.org/openstack/tripleo-heat-templates \
  refs/changes/85/374285/2 && git checkout FETCH_HEAD
cd ..

Create Environment Files

We create two environment files – one for joining IPA and one for novajoin.

cat > /home/stack/tripleo-heat-templates/environments/ipa-join.yaml << EOF
parameter_defaults:
  ExtraConfig:
    ipa_hostClass: app_server
    ipa_enroll: True
  CloudDomain: alee.test.com
EOF

cat > novajoin.yaml << EOF
resource_registry:
  OS::TripleO::Services::Novajoin: ./tripleo-heat-templates/puppet/services/novajoin.yaml

parameter_defaults:
  IpaDomain: 'alee.test.com'
  IpaPassword: 'redhat123'
  IpaPrincipal: 'admin'
  IpaServer: '<ipa server hostname>'
  NovaPassword: 'DxpwEd4bXxtCQgPan8QDHQQMT'
EOF

Deploy the Overcloud!

openstack overcloud deploy \
  --templates ./tripleo-heat-templates \
   -e ./tripleo-heat-templates/environments/ipa-join.yaml \
   -e novajoin.yaml


Quick Note: IPA Nova Join in the Undercloud

Note: The exact steps in this post have been superseded by this update.

Rob Crittenden has been working on a new nova microservice called novajoin that allows instances created by nova to be automatically registered with IPA during cloud-init (https://github.com/rcritten/novajoin).  It works using the newly provided vendor_metadata mechanism (https://review.openstack.org/#/c/310904/6/specs/newton/approved/vendordata-reboot.rst).  There is a short description of the design in the README.md at the source link.  This quick note is about the steps that I took to integrate novajoin into an undercloud install as started by quickstart.  In particular, the goal here was to create an undercloud node which is registered to an IPA server, installs novajoin, configures nova to use novajoin, and starts the novajoin service.  The undercloud novajoin service could then be used to deploy overcloud nodes that are automatically enrolled as IPA clients.
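Concretely, novajoin hooks into nova as a dynamic vendordata provider. The nova.conf settings involved look roughly like the following sketch (the section name and the dynamic target are illustrative; the exact values are set by the install script and the hieradata later in this note):

```ini
; sketch of the nova.conf settings novajoin relies on -- <novajoin-url>
; is a placeholder for the running novajoin service's endpoint
[api]
vendordata_jsonfile_path = /etc/nova/cloud-config.json
vendordata_providers = StaticJSON, DynamicJSON
vendordata_dynamic_targets = join@<novajoin-url>
```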

Create the undercloud image

First, it is necessary to create a new undercloud image, because the software we need is not yet present in the stock image.  The exact steps to create the undercloud image can be found here: https://vakwetu.fedorapeople.org/novajoin/create-undercloud-image.sh.  In particular, we do the following:

  • Start with an undercloud image.  In the script, I download one from the TripleO master branch.
  • Add some code to /usr/share/instack-undercloud/puppet-stack-config/puppet-stack-config.pp to get the instack puppet code to call the ipa and novajoin puppet modules.
  • Add a slightly modified novajoin-install script called ipa-novajoin-install-ipa.  This is a temporary step.  Right now, the novajoin install script does all the openstack configuration, as well as the IPA config.  We want to use the openstack puppet modules to do the required openstack configuration, and have the install script do the IPA configuration steps only.  Rob will refactor the install scripts shortly and this step will no longer be necessary.

All of the above steps will in time become unnecessary as the relevant modules and code are merged into the Ocata code base. The modified undercloud image can be downloaded from here: https://vakwetu.fedorapeople.org/novajoin/undercloud.qcow2, https://vakwetu.fedorapeople.org/novajoin/undercloud.qcow2.md5

Add the undercloud node to IPA

Before invoking quickstart, we need to register the undercloud node to IPA providing it with an OTP.  On the IPA server, I did:

kinit admin

ipa host-add undercloud.alee.test.com --password=MySecret --force

Modifying Quickstart

The puppet code that we added in instack-undercloud needs parameters which are provided by hieradata in quickstart-hieradata-overrides.yaml.  We need to make a small change to quickstart to allow it to pass these parameters to the instack-undercloud puppet modules.

We add the following to roles/tripleo/undercloud/templates/quickstart-hieradata-overrides.yaml.j2

{% if undercloud_ipa_client_install is defined %}
enable_ipa_client_install: true
ipa_domain: '{{ipa_domain}}'
ipa_server: '{{ipa_server}}'
ipa_otp: '{{ipa_otp}}'
{% endif %}

{% if undercloud_novajoin_install is defined %}
enable_novajoin_install: true
nova::api::vendordata_jsonfile_path: '/etc/nova/cloud-config.json'
nova::api::vendordata_providers: ['StaticJSON', 'DynamicJSON']
nova::api::vendordata_dynamic_targets: ['join@']
nova::notification_topics: 'notifications'
nova::notify_on_state_change: 'vm_state'
novajoin::api::hostname: "undercloud.%{hiera('ipa_domain')}"
novajoin::api::ipa_domain: "%{hiera('ipa_domain')}"
novajoin::api::ipa_password: "%{hiera('ipa_password')}"
novajoin::api::ipa_principal: "%{hiera('ipa_principal')}"
novajoin::api::ipa_server: "%{hiera('ipa_server')}"
novajoin::api::keystone_identity_uri: "%{hiera('keystone_identity_uri')}"
novajoin::api::keystone_auth_url: "%{hiera('keystone_auth_uri')}"
novajoin::api::keystone_auth_uri: "%{hiera('keystone_auth_uri')}"
novajoin::api::nova_password: "%{hiera('nova::keystone::authtoken::password')}"
novajoin::api::transport_url: "%{hiera('nova::default_transport_url')}"
ipa_principal: '{{ipa_principal}}'
ipa_password: '{{ipa_password}}'
{% endif %}

Then, we can create an environment file that specifies the required parameters (config/general_config/ha_ipa.yml)

undercloud_vcpu: 4

# Create three controller nodes and one compute node.
overcloud_nodes:
  - name: control_0
    flavor: control
  - name: control_1
    flavor: control
  - name: control_2
    flavor: control

  - name: compute_0
    flavor: compute

# We don't need introspection in a virtual environment (because we are
# creating all the "hardware" we really know the necessary
# information).
step_introspect: false

# Tell tripleo about our environment.
network_isolation: true
extra_args: >-
  --control-scale 3 --neutron-network-type vxlan
  --neutron-tunnel-types vxlan
  --ntp-server pool.ntp.org
test_tempest: false
test_ping: true
enable_pacemaker: true

# ipa settings
ipa_domain: 'alee.test.com'
ipa_server: 'ipa_server_host.alee.test.com'
ipa_otp: 'MySecret'

# novajoin settings
ipa_principal: 'admin'
ipa_password: 'password123'

undercloud_novajoin_install: true
undercloud_ipa_client_install: true
The new stuff is really the part at the bottom – from IPA settings downwards.  As you can see, you just need to pass in the IPA domain, server and OTP.

Run Quickstart

We can now run quickstart as follows:

./quickstart.sh --config config/general_config/ha_ipa.yml -e undercloud_image_url='https://vakwetu.fedorapeople.org/novajoin/undercloud.qcow2' -R master --no-clone cloud-machine.alee.test.com


Barbican and Volume Encryption

Setting up Barbican to authenticate with Keystone

Of course, the raison d’etre of Barbican is to interact with Openstack.
At this point, if you’ve been following along, you should have a
Barbican instance running on port 9312, with requests to the Barbican
endpoint of https://hostname:9311 proxied by haproxy. The Barbican
instance is configured to interact with the Dogtag CA and KRA on the
back-end, possibly in an IPA instance deployed in a container.

Now it's time to add the rest of Openstack. For simplicity, I’m just
going to use Packstack here. On the Openstack controller, then:

sudo yum install -y openstack-packstack 
packstack --allinone

Next, we’re going to add the required users, services and endpoints
for Barbican to keystone. Note, of course, that we are using the
HTTPS endpoint for Barbican.::

 source /root/keystonerc_admin 
 openstack user create --password=orange \
   --email=barbican@example.com barbican 
 openstack role add --user=barbican --project=services admin 
 openstack service create --name=barbican \
   --description="Barbican Key Management Service" key-manager 
 openstack endpoint create --region RegionOne \
   --publicurl https://`hostname`:9311 \
   --internalurl https://`hostname`:9311 barbican

Finally, we need to set up Barbican to use Keystone as an authentication
source. Out of the box, Barbican uses an unauthenticated context plugin which essentially just passes every request through.

crudini --set /etc/barbican/barbican-api-paste.ini \
  pipeline:barbican_api \
  pipeline "keystone_authtoken context apiapp" 
crudini --set /etc/barbican/barbican-api-paste.ini \
  filter:keystone_authtoken \
  identity_uri "http://`hostname`:35357" 
crudini --set /etc/barbican/barbican-api-paste.ini \
  filter:keystone_authtoken \
  admin_tenant_name services  
systemctl restart openstack-barbican-api.service
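After these edits, the relevant sections of barbican-api-paste.ini should look roughly like the following sketch (the paste.filter_factory line is the stock keystonemiddleware factory already present in the shipped file; <hostname> is a placeholder):

```ini
[pipeline:barbican_api]
pipeline = keystone_authtoken context apiapp

[filter:keystone_authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
identity_uri = http://<hostname>:35357
admin_tenant_name = services
```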

Setting up Cinder Volume Encryption Using Barbican

Cinder and Nova have integrated with Barbican to do volume encryption
in no small part due to the valiant efforts of the folks at Johns Hopkins
Applied Physics lab. The relevant blueprints are here [link].

I put together a short video on how the volume encryption works and
how to test it (encrypted_volumes_video).

Let's set it up.

1. Configure nova to use Barbican as a KeyManager.::

crudini --set /etc/nova/nova.conf keymgr \
  api_class "nova.keymgr.barbican.BarbicanKeyManager" 
crudini --set /etc/nova/nova.conf keymgr \
  encryption_auth_url "http://`hostname`:5000/v3" 
crudini --set /etc/nova/nova.conf barbican \
  catalog_info "key-manager:barbican:public" 
crudini --set /etc/nova/nova.conf barbican \
  endpoint_template "https://`hostname`:9311/v1" 
crudini --set /etc/nova/nova.conf barbican \
  os_region_name "RegionOne" 
systemctl restart openstack-nova-api.service

2. Configure cinder to use Barbican as a KeyManager. ::

crudini --set /etc/cinder/cinder.conf keymgr \
  api_class cinder.keymgr.barbican.BarbicanKeyManager 
crudini --set /etc/cinder/cinder.conf keymgr \
  encryption_auth_url "http://`hostname`:5000/v3" 
crudini --set /etc/cinder/cinder.conf keymgr \
  encryption_api_url "https://`hostname`:9311/v1" 
systemctl restart openstack-cinder-api.service
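After these edits, the key-manager sections of the two config files should look roughly like this sketch (<hostname> is a placeholder):

```ini
# nova.conf
[keymgr]
api_class = nova.keymgr.barbican.BarbicanKeyManager
encryption_auth_url = http://<hostname>:5000/v3

[barbican]
catalog_info = key-manager:barbican:public
endpoint_template = https://<hostname>:9311/v1
os_region_name = RegionOne

# cinder.conf
[keymgr]
api_class = cinder.keymgr.barbican.BarbicanKeyManager
encryption_auth_url = http://<hostname>:5000/v3
encryption_api_url = https://<hostname>:9311/v1
```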

Testing Cinder Volume Encryption using Barbican

The video shows you how to set things up using the Horizon Web UI.
Let's see how to set this up using the CLI.

1. Create a cinder volume encryption type. Right now the Openstack
client does not allow you to create encryption types, so we’ll
fall back to the cinder CLI. This operation needs to be done
as an admin user. ::

source /root/keystonerc_admin 
openstack volume type create LUKS 
cinder encryption-type-create --cipher aes-xts-plain64 \
  --key_size 512 --control_location front-end \
  LUKS nova.volume.encryptors.luks.LuksEncryptor

2. Create a volume with the encryption type. When this step is
performed, a symmetric encryption key should be generated by Barbican
and the reference to that encryption key should be stored in the
cinder metadata. You should be able to see this happening by tailing
the journal for the Barbican service. ::

openstack volume create --size 1 --type LUKS \
  --image cirros encrypted_volume

3. Create a compute VM.::

NET_ID=`openstack network list |awk '/ public / {print $2}'` 
openstack server create --flavor m1.tiny \
  --image cirros --nic "net-id=${NET_ID}" vm-test

4. Attach the volume to the VM. At this point, the hypervisor should
retrieve the encryption key from Barbican – and the volume should
be decrypted and attached. Again, the key retrieval should be visible
on the Barbican journal. ::

openstack server add volume --device /dev/vdc \
  vm-test encrypted_volume

Setting up Barbican Behind TLS/SSL

Given that we are passing secrets back and forth between clients and the
Barbican server, it is absolutely imperative that the communications be
encrypted using TLS/SSL.

To be even more assured that the secrets are secure, one could use the
transport key mechanism that had been added to Barbican for use with
the Dogtag plugin. With this mechanism, the secret is encrypted with
a backend transport key that can only be decrypted on the back-end.

This means that secrets are always encrypted – even when there is no
SSL connection – and double-encrypted when there is one.

We will not focus on transport keys here, but rather on securing the
Barbican endpoint using TLS/SSL using haproxy. Haproxy will serve
https://hostname:9311, and will proxy the requests to the Barbican server
which will be listening on port 9312.

Note that in general, you are going to want to protect all the Openstack endpoints behind haproxy.  In this post, I’m only focusing on barbican.  For instructions on how to set up all the services (including Barbican), I recommend looking at the ansible scripts in rippowam.

The steps are as follows:

  • Install haproxy::
sudo yum install haproxy
  • Get an SSL certificate for haproxy from IPA using certmonger.  There are many possible ways of doing this.  I’m going to document how to do it using IPA (registering the Openstack server as an IPA client).  Ultimately, we’re going to end up registering the Openstack services in IPA.
sudo yum install ipa-client
sudo systemctl start certmonger.service 
echo <ipa_admin_password> | kinit admin@<ipa_realm> 
ipa service-add HTTP/`hostname`@<ipa_realm> --force
setenforce 0 
ipa-getcert request -w -f /etc/haproxy/server.crt \
  -k /etc/haproxy/server.key -D "`hostname`" -K HTTP/`hostname`

cat /etc/haproxy/server.crt /etc/haproxy/server.key > /etc/haproxy/cert.pem 
chown haproxy: /etc/haproxy/cert.pem 
chmod 0600 /etc/haproxy/cert.pem
setenforce 1
  • You could also get the certificates needed by contacting Dogtag directly using the pki CLI, or by using Barbican/Dogtag to issue a server cert.
    • TODO – show how to use Barbican to get the cert from dogtag
  • Install the haproxy config file in /etc/haproxy/haproxy.cfg.  The bind and server addresses below reflect the 9311-to-9312 proxying described above.

global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #      local2.* /var/log/haproxy.log
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
defaults
    mode http
    timeout connect 10s
    timeout client 10s
    timeout server 10s
    maxconn 10000
    balance roundrobin
    option forwardfor

backend barbican-api
    server barbican-01 127.0.0.1:9312 check inter 10s

frontend barbican-api-frontend
    bind :9311 ssl crt /etc/haproxy/cert.pem
    default_backend barbican-api
  • Set up Barbican to bind locally to port 9312.::
crudini --set /etc/barbican/barbican.conf DEFAULT bind_host 127.0.0.1
crudini --set /etc/barbican/barbican.conf DEFAULT bind_port 9312 
crudini --set /etc/barbican/barbican.conf DEFAULT host_href https://`hostname`:9311
crudini --set /etc/barbican/vassals/barbican-api.ini uwsgi socket :9312 
sed -i "s/bind = .*/bind = '127.0.0.1:9312'/" /etc/barbican/gunicorn-config.py
systemctl restart openstack-barbican-api.service
systemctl restart haproxy.service

Barbican and Dogtag/IPA

The default back-end plugins that ship with Barbican are not really suitable for a production deployment. The simple storage plugin encrypts the secrets using a single symmetric key that is stored in a plain text file, and the snake oil plugin uses self-signed certificates.

Over the last 10 years, I have been part of the Dogtag Certificate System team. Dogtag is the upstream project for the Red Hat Certificate System, which has been deployed and battle-hardened in some of the largest PKI deployments in the world. We use it to generate certificates, store private keys (encryption keys), issue certificates to tokens etc. It has also passed Common Criteria, works with the major HSMs, and provides robust  HA and DR solutions.

Over the last few years, we have augmented one of its components, the Key Recovery Agent (KRA) to be able to securely store more than just encryption keys. Now we can store symmetric keys (for volume encryption, for example) and passphrases. This makes the KRA an ideal back-end for Barbican.

Moreover, we have added REST interfaces and python client code to the KRA and CA components. This makes these components ideal to act as backends for Barbican, and a Barbican plugin has been written (mostly by me) to interface with the KRA.

In this posting, I’m going to describe the simple steps needed to integrate
Dogtag with Barbican, so that you can create a Barbican deployment with a production-worthy back end.

Dogtag and IPA

Over the last few years, we have also integrated Dogtag into FreeIPA (that is Identity Management or IdM). IdM is part of RHEL and includes the Dogtag CA and KRA as key components used for issuing server certificates and providing the back-end store for the IPA Vault respectively.

While it is certainly possible to configure Barbican with a stand-alone Dogtag CA and KRA, there are many benefits that a full IPA instance provides to an Openstack deployment. A lot of those benefits were  explored in some of the demos we created for the Tokyo summit. <links>

In particular, we are able to:

  • Register each of the openstack services and service users in the
    undercloud with IPA providing them with kerberos credentials.
  • Use these credentials to issue SSL certs for haproxy, so that the service endpoints would be secured.
  • Secure communication with the database and the message queues.
  • Use a nova plugin (written by Rich Megginson) to register new VMs to IPA, providing them with an identity and kerberos credentials. This solves the problem that Kevin Fox is trying to solve in (Fox spec).  Certificates for each VM (and any other resources) can be retrieved using kerberos credentials. IPA can also be used to manage the host and provide useful features like access control.
  • IPA can be tied into existing identity stores like Active Directory.

The demos provide some compelling use cases for deploying Barbican
and Openstack with IPA. In this blog, I’ll describe how to configure
Barbican with IPA.

Creating an IPA instance

To use IPA, it is highly recommended that you set up IPA in a separate VM. IPA uses some of the same ports needed by Openstack, including ports 80 and 443 (which might be used by horizon), as well as ports 8080 and 8443 (used by Dogtag). A separate VM just makes things clearer and easier to debug, and keeps a crisp separation of concerns.

Using a container

One of the easiest ways to deploy IPA is through the use of a container. Once the container is created, any configuration and data are stored outside of it, so that the container can be stopped and started, or destroyed and recreated, without loss of data.

This also allows one to treat the IPA instance essentially like an appliance. Just as you would not necessarily care to know about the internal workings of your stove or fridge – you would not need to know about the internals of your IPA instance. Just turn it on and configure Barbican to work with it.

This is also useful for folks who are not as familiar with – or do not want to run – Fedora or RHEL machines. They could, for instance, run IPA on RHEL/Centos/Fedora in a container, and run Barbican on Ubuntu.

  • Install and start docker.
sudo yum install docker
systemctl start docker
  • Pull down the IPA Docker image.
docker pull adelton/freeipa-server
  • Create a directory which will contain the server data (/var/lib/ipa-data, for example).  This directory will be used during the initial install, and will hold any data needed to restore the container.
  • Create a file: ipa-server-install-options, which will contain command line parameters to the ipa-server-install command that is run after the container is created.  You want at least:

  • Run the container.  This will create the container and will execute ipa-server-install to install IPA and the Dogtag CA, and will start a console in which you are logged in to the server.
    docker run --name freeipa-server-container -ti \
       -h ipa.example.test \
       -v /var/lib/ipa-data:/data:Z adelton/freeipa-server
  • The current IPA containers do not yet install the KRA.  This is easy to do though.  While logged onto the ipa container:
ipa-kra-install
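For reference, the ipa-server-install-options file mentioned above might contain something like the following (illustrative values; pick your own realm and passwords, and -U runs the install unattended):

```
--realm=EXAMPLE.TEST
--ds-password=Secret123
--admin-password=Secret123
-U
```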

Using a separate VM

IPA can easily be installed on a RHEL 7 (Centos 7) or Fedora 22/23 VM. Specify RHEL 7 or Fedora 22/23 to ensure that the right version of Dogtag is installed. You need at least Dogtag 10.2 to ensure that the Dogtag REST API is present.

The steps are:

  • Install a VM with RHEL 7 (Centos 7) or Fedora 23/22.
  • Install the IPA software
sudo yum install freeipa-server (Fedora) OR
sudo yum install ipa-server (RHEL/Centos)
  • Install IPA with the CA
sudo ipa-server-install
  • Install the KRA:
sudo ipa-kra-install

Setting up Barbican to use Dogtag (in IPA)

Setting up Barbican to use Dogtag in IPA is straightforward.

  • Copy over a PEM file containing the certificate and key for the IPA
    CA and KRA agent from the IPA machine. Copy this file to
    /etc/barbican/kra-agent.pem and set the ownership to root:barbican.  On the IPA machine, this file is located at /etc/httpd/alias/kra-agent.pem.  In the docker example above, this file can be found on the host machine at /var/lib/ipa-data/etc/httpd/alias/kra-agent.pem.

If integrating with Dogtag directly, you can create this file by executing the following OpenSSL command to extract the agent certificate and key from the agent NSS database::

openssl pkcs12 -in ~/.dogtag/pki-tomcat/ca_admin_cert.p12 \
  -out ~/admin_cert.pem -nodes
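A quick way to sanity-check that the combined PEM really carries both a certificate and a private key, sketched here with a throwaway self-signed pair rather than the real agent credentials:

```shell
# Generate a throwaway key+cert pair, combine them the same way as
# kra-agent.pem, and confirm both PEM blocks are present.
pemdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=throwaway" \
  -keyout "$pemdir/key.pem" -out "$pemdir/cert.pem" 2>/dev/null
cat "$pemdir/cert.pem" "$pemdir/key.pem" > "$pemdir/agent.pem"
grep -c "BEGIN" "$pemdir/agent.pem"   # expect 2: certificate + private key
```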

Note: Ideally one should use the Dogtag admin tools to create a new agent specifically for Barbican usage. Better yet, we should use the kerberos credentials provided by IPA to interact with Dogtag and create this user. This requires some changes to Dogtag (kra acls design), which are currently in design and are slated to go out in Dogtag 10.3.

  • On the Barbican machine, install the Dogtag client code::
sudo yum install pki-base

Right now, this yum operation will pull in all the dependencies for both the Python and Java Dogtag clients. There is currently work underway to separate these clients, so that only the Python client would be installed. (Dogtag separation ticket). That work is expected to be completed soon.

For Debian systems, there should be Debian packages. Alternatively, one could try the pip-installable Dogtag client package at https://pypi.python.org/pypi/dogtag-pki/ .

Also, note that at the time I wrote this, Centos 7 had not yet updated to reflect the release of RHEL 7.2.  This means that Centos 7 pulled in RHEL 7.1 bits which includes pki-base 10.1.  This version does not have the required python client library calls.  We expect Centos 7 to update to pull in Dogtag 10.2 shortly.  In the meantime, Centos 7 builds can be obtained from the IPA copr repo.

  • Modify the /etc/barbican/barbican.conf configuration file to enable the Dogtag plugins for certificate generation and secret storage:
crudini --set /etc/barbican/barbican.conf secretstore enabled_secretstore_plugins dogtag_crypto
crudini --set /etc/barbican/barbican.conf dogtag_plugin dogtag_host ipa_host
crudini --set /etc/barbican/barbican.conf dogtag_plugin dogtag_port 8443
crudini --set /etc/barbican/barbican.conf dogtag_plugin pem_path /etc/barbican/kra-agent.pem
crudini --del --list /etc/barbican/barbican.conf certificate enabled_certificate_plugins
crudini --set /etc/barbican/barbican.conf certificate enabled_certificate_plugins dogtag
systemctl restart openstack-barbican-api.service
The final version of barbican.conf will contain sections that look like:

    [secretstore]
    namespace = barbican.secretstore.plugin 
    enabled_secretstore_plugins = dogtag_crypto 

    [dogtag_plugin]
    pem_path = /etc/barbican/kra-agent.pem 
    dogtag_host = ipa_host 
    dogtag_port = 8443 
    nss_db_path = '/etc/barbican/alias' 
    nss_db_path_ca = '/etc/barbican/alias-ca' 
    nss_password = 'password123' 
    simple_cmc_profile = 'caOtherCert' 
    ca_expiration_time = 1 
    plugin_working_dir = '/etc/barbican/dogtag' 

    [certificate]
    namespace = barbican.certificate.plugin 
    enabled_certificate_plugins = dogtag

Testing Barbican/Dogtag

To test out Barbican/Dogtag, I usually just end up performing the tests
described in the Testing Barbican section above (previous post).  The Dogtag plugin currently does not support subordinate CAs, but that should change soon.

To make sure the certificate and secret storage and retrieval requests
are actually being sent to the Dogtag CA and KRA, I usually tail the debug
logs of the CA and KRA while executing the tests.

On the IPA VM,

tail -f /var/log/pki/pki-tomcat/kra/debug 
tail -f /var/log/pki/pki-tomcat/ca/debug

Barbican in RDO

Introducing Barbican …

Barbican is an OpenStack project that has recently been packaged for RDO.

It provides two services:

  • the Key Management Service (KMS), which provides secure management and storage of secrets (like encryption keys or passwords), including the generation of symmetric and asymmetric keys.
  • the Certificate Management Service (CMS), which provides issuance of x509 certificates.

KMS has been gaining traction in OpenStack over the last few years.  Some of the current use cases include:

  • Generation and storage of volume encryption keys by cinder, and their subsequent retrieval by nova in the hypervisor when an encrypted volume is mounted.
  • Storage and retrieval of certificates and private keys in Neutron Load Balancing as a Service (LBaaS).
  • Storage and retrieval of keys used to sign Glance images. (in progress).
  • Storage and retrieval of keys used to encrypt Swift objects at rest (in progress).
  • Storage of generated and user provided passwords for Sahara (in progress).

CMS has not yet gained a lot of traction, although a lot of projects have expressed some interest.  There is still a bit of work to do there, but there have been some interesting features that have been added (like the ability to create subordinate CAs).

Installing and Starting Barbican

Installing Barbican is super-easy now that it's all packaged in RDO.  To be specific, we have packaged the Liberty version of Barbican, so you’ll need to use the Liberty RDO components if you install all the projects together on the same server.

To install Barbican:

sudo yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
sudo yum install -y openstack-barbican-api openstack-barbican

This will install all the python libraries and server classes and configuration for a basic, operational Barbican server.  I’ll explain what that means shortly.

To start the server,

sudo systemctl start openstack-barbican-api.service

This should start up a gunicorn server serving the Barbican WSGI application, listening for requests on port 9311::

[root@openstack2 ~]# ps -ef |grep gunicorn
 barbican  3695     1  0 01:00 ?        00:00:07 /usr/bin/python /usr/bin/gunicorn --pid /run/barbican/pid -c /etc/barbican/gunicorn-config.py --paste /etc/barbican/barbican-api-paste.ini
 barbican  3700  3695  0 01:00 ?        00:00:01 /usr/bin/python /usr/bin/gunicorn --pid /run/barbican/pid -c /etc/barbican/gunicorn-config.py --paste /etc/barbican/barbican-api-paste.ini
 barbican  3705  3695  0 01:00 ?        00:00:01 /usr/bin/python /usr/bin/gunicorn --pid /run/barbican/pid -c /etc/barbican/gunicorn-config.py --paste /etc/barbican/barbican-api-paste.ini
 barbican  3710  3695  0 01:00 ?        00:00:00 /usr/bin/python /usr/bin/gunicorn --pid /run/barbican/pid -c /etc/barbican/gunicorn-config.py --paste /etc/barbican/barbican-api-paste.ini
 barbican  3715  3695  0 01:00 ?        00:00:00 /usr/bin/python /usr/bin/gunicorn --pid /run/barbican/pid -c /etc/barbican/gunicorn-config.py --paste /etc/barbican/barbican-api-paste.ini

The Barbican WSGI application is basically:

  • A REST front-end, that can be configured to accept authentication through middleware modules.  Barbican currently expects to receive the same headers (X-Project-Id for instance) as would be provided by a keystone token.
  • An application layer which processes requests, routes them to the appropriate backend plugin, and stores metadata in a local database (an sqlite database at /var/lib/barbican/barbican.sqlite, by default).  It is possible to configure other databases, like postgres.
  • A backend plugin layer, which consists of plugins that interact with various back-end devices to actually store secrets or issue certificates.

For storing, retrieving and generating keys, for instance, there are
several plugins:

  • a simple storage plugin, which simply encrypts the secret using a static symmetric key stored in a configuration file.  This is great for testing but absolutely NOT for production.
  • a KMIP plugin, used to talk to a KMIP device.
  • a PKCS11 plugin, used to talk to an HSM using PKCS11.
  • a Dogtag plugin, used to talk to a Dogtag Key Recovery Authority (KRA).  I worked on this, and will have a lot more to say about this later.

For issuing certificates, we have:

  • A simple testing plugin that pretty much just pretends to send the
    certificate request somewhere, and sends back “Waiting for CA ..”.
  • Another simple testing plugin (called snakeoil) that processes the certificate request and returns a self-signed certificate using OpenSSL.  Snakeoil also implements the subCA feature.
  • A Dogtag CA plugin, suitable for contacting Dogtag CA servers. Again, I’ll have much more to say about this later.
  • A Symantec plugin, suitable for contacting Symantec public CAs. (in progress).
  • A Digicert plugin (in progress).
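
These defaults are wired together in /etc/barbican/barbican.conf.  The plugin selection looks roughly like the sketch below (based on the Liberty-era sample config; check the installed file for the exact option names and values):

```ini
[secretstore]
# Which secret store plugin handles storage requests
namespace = barbican.secretstore.plugin
enabled_secretstore_plugins = store_crypto

[crypto]
# store_crypto delegates the actual encryption to a crypto plugin
namespace = barbican.crypto.plugin
enabled_crypto_plugins = simple_crypto

[simple_crypto_plugin]
# The static KEK used by the simple plugin -- fine for testing, NOT production
kek = 'dGhpcnR5X3R3b19ieXRlX2tleWJsYWhibGFoYmxhaGg='

[certificate]
# Which certificate plugins handle certificate orders (multi-valued option)
namespace = barbican.certificate.plugin
enabled_certificate_plugins = simple_certificate
enabled_certificate_plugins = snakeoil_ca
```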

Out of the box, the Barbican server will be configured to not expect authentication (although you still need to provide the X-Project-ID as a request header), will send secret storage/retrieval/generation requests to the simple storage plugin, and will send certificate requests to the simple testing and snakeoil plugins.

It will also serve requests over HTTP (NOT https!) on port 9311.  This is useful for basic integration and sanity tests, as I’ll describe in the section below. In later posts, I’ll describe how to secure the front-end using haproxy, enable keystone authentication, and configure Barbican to use some production-ready back-ends (Dogtag).

There is a puppet module in progress that I will be helping to shepherd through (with a lot of help from Emilien) over the next few weeks.  This puppet module will install Barbican as an apache module, rather than a standalone app.  The repo for this project is here.

Testing Barbican

Testing Secret Storage/Generation/Retrieval

To store a secret, we’ll invoke the REST API directly:

 curl -X POST \
      -H 'content-type:application/json' \
      -H 'X-Project-Id:12345' \
      -d '{"payload": "my-secret-here",
           "payload_content_type": "text/plain"}' \
      http://localhost:9311/v1/secrets
{"secret_ref": "http://localhost:9311/v1/secrets/886c7e43-19c7-460b-b695-c71ea1dcf1df"}
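
The secret_ref in the response is the handle for all subsequent operations, so it is worth capturing when scripting against the API.  A minimal sketch, using the response above as canned input (in practice you would capture the curl output directly):

```shell
# Canned response from the store call above; normally: response=$(curl ...)
response='{"secret_ref": "http://localhost:9311/v1/secrets/886c7e43-19c7-460b-b695-c71ea1dcf1df"}'

# Pull out the secret_ref field with the stdlib json module
secret_ref=$(echo "$response" | python -c 'import json, sys; print(json.load(sys.stdin)["secret_ref"])')
echo "$secret_ref"
```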

To retrieve metadata about the secret::

curl -X GET \
    -H 'content-type:application/json' \
    -H 'X-Project-Id:12345' \
    http://localhost:9311/v1/secrets/886c7e43-19c7-460b-b695-c71ea1dcf1df
{"status": "ACTIVE",
 "secret_type": "opaque",
 "updated": "2015-11-21T14:09:52.498425",
 "name": null,
 "algorithm": null,
 "created": "2015-11-21T14:09:52.495135",
 "secret_ref": "http://localhost:9311/v1/secrets/886c7e43-19c7-460b-b695-c71ea1dcf1df",
 "content_types": {"default": "text/plain"},
 "creator_id": null, "mode": null,
 "bit_length": null, "expiration": null}

And to retrieve the secret itself::

curl -X GET -H 'accept:text/plain' -H 'X-Project-Id:12345' \
    http://localhost:9311/v1/secrets/886c7e43-19c7-460b-b695-c71ea1dcf1df
my-secret-here

To generate a secret, create an order; once the order is ACTIVE it points at the newly generated key::

curl -X POST \
  -H 'content-type:application/json' \
  -H 'X-Project-Id:12345' \
  -d '{"type": "key",
       "meta": {"algorithm": "aes",
                "bit_length": 256,
                "mode": "cbc",
                "payload_content_type": "application/octet-stream"}}' \
  http://localhost:9311/v1/orders
{"order_ref": "http://localhost:9311/v1/orders/..."}

A GET on the order_ref returns the secret_ref of the generated key.  To retrieve the key itself::

curl -X GET \
  -H 'accept:application/octet-stream' \
  -H 'X-Project-Id:12345' \
  http://localhost:9311/v1/secrets/<secret-uuid>
<<binary data>>
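
If you store a binary secret yourself (rather than having Barbican generate it), the JSON store request must carry the payload base64-encoded, with "payload_content_encoding" set accordingly.  A sketch of preparing such a request body (aes.key is a hypothetical local key file):

```shell
# Create a throwaway 256-bit key to stand in for a real binary secret
head -c 32 /dev/urandom > aes.key

# base64-encode it; the API requires the encoding to be declared
b64payload=$(base64 < aes.key | tr -d '\n')

# This body would be POSTed to /v1/secrets as in the earlier store example
printf '{"payload": "%s", "payload_content_type": "application/octet-stream", "payload_content_encoding": "base64"}\n' "$b64payload"
```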

Testing Certificate Generation

First, we need to determine which Certificate Authorities (CAs) are available.  A certificate plugin can provide access to many CAs. In this case, we expect to see at least two CAs (for the simple and snake oil plugins)::

curl -X GET \
  -H 'accept:application/json' \
  -H 'X-Project-Id:12345' \
  http://localhost:9311/v1/cas
{"cas": [
 "http://localhost:9311/v1/cas/c88c0178-1c48-4b40-abb7-5ffd9ac44110", ...],
 "total": 3}

You can get more details about each CA (including the plugin name, signing certificate and PKCS#7 certificate chain) by doing a GET on each CA::

curl -X GET \
  -H 'accept:application/json' \
  -H 'X-Project-Id:12345' \
  http://localhost:9311/v1/cas/72b53eb0-1d36-48f9-a26a-13449929003b
{"status": "ACTIVE",
 "updated": "2015-11-21T14:08:34.289206",
 "created": "2015-11-20T06:01:02.853237",
 "plugin_name": "barbican.plugin.snakeoil_ca.SnakeoilCACertificatePlugin",
 "meta": [
   {"ca_signing_certificate": "-----BEGIN CERTIFICATE-----\nMIIC+zCC..."},
   {"intermediates": "-----BEGIN PKCS7-----\nMIIDLAYJKoZIhvcNAQcCoII..."},
   {"name": "Snakeoil CA"}],
 "ca_id": "72b53eb0-1d36-48f9-a26a-13449929003b",
 "plugin_ca_id": "Snakeoil CA",
 "expiration": "2015-11-22T14:08:34.282407"}

The UUID of the certificate authority (the ca_id) can be used to specify the CA to which certificate orders should be directed. When the expiration time occurs, the server will query the certificate plugin to determine if the CA data is still valid. This is to allow Barbican to remain in sync with changes in the CA back-ends.

To generate a certificate, we will generate a certificate request (CSR), and order a certificate from the snake oil back-end, which always returns a self-signed certificate. To avoid having to construct all the HTTP requests explicitly, we’ll use the Barbican client. By adding the verbose (-v) option, you can see the exact requests sent to the server and the responses.

Note: As of now, the version of python-barbicanclient on Fedora 23 is too old to work with Liberty. Install version 3.3.0-1 from koji instead (package)

sudo yum install python-barbicanclient
openssl genrsa -out rsa.key 2048
openssl req -new -sha256 -key rsa.key \
  -subj "/CN=server1/O=example.com" -out csr.txt
cat csr.txt | base64 > csr.b64
barbican --os-project-id 12345 --no-auth \
  --endpoint http://localhost:9311 \
  order create --type certificate \
  --ca-id 72b53eb0-1d36-48f9-a26a-13449929003b \
  --request-file csr.b64
| Field          | Value                                   | 
| Order href     | http://localhost:9311/v1/orders/73f69   |
|                | 01c-ffee-4e11-adfd-f7c6ce8ebe88         | 
| Type           | Certificate                             | 
| Container href | None                                    | 
| Secret href    | N/A                                     | 
| Created        | None                                    | 
| Status         | None                                    | 
| Error code     | None                                    | 
| Error message  | None                                    | 
barbican --os-project-id 12345 --no-auth --endpoint http://localhost:9311 \ 
 order get http://localhost:9311/v1/orders/73f6901c-ffee-4e11-adfd-f7c6ce8ebe88
| Field          | Value                                   | 
| Order href     | http://localhost:9311/v1/orders/73f6901 |
|                |   c-ffee-4e11-adfd-f7c6ce8ebe88         | 
| Type           | Certificate                             | 
| Container href | http://localhost:9311/v1/containers/31  |
|                |   d9166c-14b8-461c-b09c-b207bdd877a9    | 
| Secret href    | N/A                                     | 
| Created        | 2015-11-25 21:52:57.945367+00:00        | 
| Status         | ACTIVE                                  | 
| Error code     | None                                    | 
| Error message  | None                                    | 
barbican --os-project-id 12345 --no-auth --endpoint http://localhost:9311 \ 
 container get http://localhost:9311/v1/containers/31d9166c-14b8-461c-b09c-b207bdd877a9
| Field          | Value                                      | 
| Container href | http://localhost:9311/v1/containers/31d9   |
|                |    166c-14b8-461c-b09c-b207bdd877a9        | 
| Name           | None                                       | 
| Created        | 2015-11-25 21:52:58.038753+00:00           | 
| Status         | ACTIVE                                     | 
| Type           | certificate                                | 
| Certificate    | http://localhost:9311/v1/secrets/7dc81be5  |
|                |   -a513-4371-91be-6a6cfcc059af             | 
| Intermediates  | http://localhost:9311/v1/secrets/ab844561  |
|                |    -84cd-437a-b12f-c38b2cbc4b62            | 
| Private Key    | None                                       | 
| PK Passphrase  | None                                       | 
| Consumers      | None                                       | 
curl -X GET -H 'accept:application/pkix-cert' -H 'X-Project-Id:12345' \
  http://localhost:9311/v1/secrets/7dc81be5-a513-4371-91be-6a6cfcc059af

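The payload returned by that last GET is the PEM-encoded certificate, which can be inspected with openssl.  A sketch, substituting a locally generated self-signed certificate for the one fetched from Barbican:

```shell
# Stand-in for the PEM fetched above: a throwaway self-signed certificate
openssl req -x509 -newkey rsa:2048 -nodes -keyout tmp.key \
  -subj "/CN=server1/O=example.com" -days 1 -out cert.pem

# Print the subject and issuer of the certificate
openssl x509 -in cert.pem -noout -subject -issuer
```
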
In a later post, I will describe how to request a certificate using …

Testing subordinate CAs

One exciting new feature added in Liberty is the ability to create
subordinate CAs.  These are CAs that have their signing certificate signed by a designated parent CA.  This opens up the possibility of project-specific CAs, and of tightly controlled security domains where only a specific certificate chain is installed and accepted.

This is currently used in Magnum to ensure TLS security and authorization for containers belonging to a specific bay.  It can also be used to tightly
control puppet domains.

At this point, only the snakeoil CA supports this feature.  The latest
master branch of Dogtag (Dogtag 10.3) does support subCA creation, but this support has not yet been extended to the Barbican Dogtag plugin.  I expect to add this support early in the Mitaka cycle.

To create a new subordinate CA, do a POST on the /cas interface, specifying
a snake oil CA as the parent CA::

curl -X POST \
  -H 'content-type:application/json' \
  -H 'X-Project-Id:12345' \
  -d '{"name": "Subordinate CA",
       "description": "Test Subordinate CA",
       "parent_ca_ref": "http://localhost:9311/v1/cas/72b53eb0-1d36-48f9-a26a-13449929003b",
       "subject_dn": "CN=Subordinate CA, O=example.com"}' \
  http://localhost:9311/v1/cas
{"ca_ref": "http://localhost:9311/v1/cas/78ee20b6-4330-46c1-b747-8be9c1dd60e6"}

This returns a new subordinate CA.  You can then get the signing certificate, and PKCS#7 certificate chain from the CA metadata, and use the ca_id to request certificates.
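
The intermediates come back as a PKCS#7 bundle, which openssl can unpack into individual PEM certificates.  A sketch, with a locally generated self-signed certificate standing in for the chain Barbican returns:

```shell
# Build a stand-in chain: one self-signed cert wrapped in PKCS#7
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key \
  -subj "/CN=Snakeoil CA/O=example.com" -days 1 -out ca.pem
openssl crl2pkcs7 -nocrl -certfile ca.pem -out chain.p7b

# Unpack the bundle back into PEM certificates
openssl pkcs7 -in chain.p7b -print_certs
```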