Interactive OpenStack RC files

In my own environment, I like to keep the password out of my overcloudrc file by removing the line that exports the OS_PASSWORD value in plain text. To do this, you'll first need to change your password to something more human-memorable than the default.

openstack user set admin --password-prompt

Then change the line that looks something like the following:
export OS_PASSWORD=a268105c2af73c85

To something like this:

read -s -p 'password: ' PASSWD && echo ''
export OS_PASSWORD=$PASSWD

This way, when you source the rc file, you're asked once for your password, which is then immediately exported as an environment variable. Keep in mind that your password can still be viewed in plain text by inspecting the environment variables until you either log out or manually run 'unset OS_PASSWORD'. With some further modifications, you can have all of the environment variables unset, along with a visual indication of your authentication status.
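
For example, you can see the exposure (and clean it up by hand) like so:

env | grep OS_PASSWORD
unset OS_PASSWORD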

A visual authentication cue is provided by default if you install OpenStack from scratch. If deployed with director, however, the resulting overcloudrc (or stackrc) files do no such thing. The cue also helps distinguish sessions that are authenticated to the undercloud vs. the overcloud – I've seen many folks too mired in configuration to realize they're configuring the wrong environment!

Your default PS1 variable probably looks something like this:

export PS1="[\u@\h \W]\\$ "

I like to add the following to stackrc (uc is for ‘undercloud’):

export PS1="[\u@\h \W](uc)\\$ "

Here’s what I add to overcloudrc:

export PS1="[\u@\h \W](oc)\\$ "

Finally, arrange for all of the variables to be unset when you exit your session, without closing out your current shell.

Start by adding this line at the bottom of the file:

/bin/bash

This opens up a new shell and stops execution of the script. Below this, you can then unset all of your variables. Remember to also reset your prompt to its original state, or else the visual indicator you set up will cause more confusion than clarification.

The result? Source the file, type your password, and get visual confirmation that you're authenticated. Type exit, and the shell remains open with none of the cloud's authentication information set. As an example, here's what my stackrc file looks like in my lab:

[stack@undercloud ~](uc)$ cat stackrc
export PS1="[\u@\h \W](uc)\\$ "
NOVA_VERSION=1.1
export NOVA_VERSION
read -s -p 'password: ' PASSWD && echo ''
export OS_PASSWORD=$PASSWD
OS_AUTH_URL=http://192.168.99.100:5000/v2.0
export OS_AUTH_URL
OS_USERNAME=admin
OS_TENANT_NAME=admin
COMPUTE_API_VERSION=1.1
OS_BAREMETAL_API_VERSION=1.15
OS_NO_CACHE=True
OS_CLOUDNAME=undercloud
OS_IMAGE_API_VERSION=1
export OS_USERNAME
export OS_TENANT_NAME
export COMPUTE_API_VERSION
export OS_BAREMETAL_API_VERSION
export OS_NO_CACHE
export OS_CLOUDNAME
export OS_IMAGE_API_VERSION

/bin/bash

export PS1="[\u@\h \W]\\$ "
unset NOVA_VERSION
unset OS_PASSWORD
unset OS_AUTH_URL
unset OS_USERNAME
unset OS_TENANT_NAME
unset COMPUTE_API_VERSION
unset OS_BAREMETAL_API_VERSION
unset OS_NO_CACHE
unset OS_CLOUDNAME
unset OS_IMAGE_API_VERSION

Here’s what it looks like:
[stack@undercloud ~]$ source stackrc
password:
[stack@undercloud ~](uc)$ exit
exit
[stack@undercloud ~]$

Keep in mind, this isn’t real authentication – that happens when you attempt to run commands, and the password is read from the environment. So if you have trouble after sourcing your RC file, you may have entered the wrong password. Simply exit and re-source.
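
A quick sanity check after sourcing: requesting a token forces a real authentication round trip and fails immediately on a bad password.

openstack token issue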

Bringing infrastructure-as-code to the home lab

I'm not new to automating the creation or configuration of my own VMs, but recently I've become interested in finding the best way to combine the two. The idea is to have a "push button" process to recreate any sufficiently documented environment.

I've found that too often I need some common service (DNS, for example) that I don't have, and that I don't much care to keep unused VMs around longer than needed.

Enter infrastructure as code:

The first step in bootstrapping any given service is to create the VM. I avoid installing an OS by creating the virtual machine from a template QCOW2, like this:

/usr/bin/qemu-img create -f qcow2 -b ${MY_TEMPLATE} \
/var/lib/libvirt/images/dns-server.qcow2
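
One caveat: newer qemu-img releases want the backing file's format declared explicitly, so a slightly more future-proof form of the same command would be:

/usr/bin/qemu-img create -f qcow2 -F qcow2 -b ${MY_TEMPLATE} \
/var/lib/libvirt/images/dns-server.qcow2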

Before Ansible can take over, however, I need an IP address and SSH keys injected. There are two tools to choose from here, cloud-init and virt-customize (via the libguestfs-tools-c package); I use both. For me, virt-customize is best for creating customized templates for re-use, while more specific one-off changes are injected with cloud-init.

I start with the default Fedora 25 cloud image, and run the following:

virt-customize \
--format qcow2 \
-a ./Fedora-Cloud-Base-25-1.3.x86_64.qcow2 \
--run-command 'sudo dnf -y install python libselinux-python python2-dnf' \
--selinux-relabel

You may want to copy and rename the default image before running the above, depending on your modus operandi.

Getting metadata together for cloud-init:

You'll need two files to combine into a metadata configuration ISO: meta-data and user-data. This metadata will apply the desired IP address as well as inject SSH keys. Here's what these files might look like.

meta-data
———
instance-id: 1
local-hostname: dns-server
public-keys:
  - [PASTE SSH PUB KEY 1 HERE]
  - [PASTE SSH PUB KEY 2 HERE]
  - [...]

Strictly speaking, you need just a single public key injected; I like to copy both my user’s and root’s public ssh key into the image. This way if I need to ssh into the VM from either account, I won’t break my workflow.

user-data
———
#cloud-config
write_files:
  - path: /etc/sysconfig/network-scripts/ifcfg-eth0
    content: |
      DEVICE="eth0"
      BOOTPROTO="none"
      ONBOOT="yes"
      TYPE="Ethernet"
      USERCTL="yes"
      IPADDR="192.168.122.101"
      NETMASK="255.255.255.0"
      GATEWAY="192.168.122.1"
      DNS1="192.168.122.1"
runcmd:
  - ifdown eth0; ifup eth0

Combine these files into an ISO file that will be the configuration drive using the following command:

genisoimage -output config.iso -volid cidata -joliet -rock meta-data user-data

This concludes the prep work – now we just tie it in with a script that will do two things.

    1. It should create the VM and check for network responsiveness.

    2. After receiving confirmation that the VM is on the network, hand the logic over to your Ansible playbook.

Because the terminal will hang waiting for the creation of the VM to finish, separate the VM creation into its own file:

/usr/bin/qemu-img create -f qcow2 -b ${MY_TEMPLATE} /var/lib/libvirt/images/dns-server.qcow2
/usr/bin/virt-install \
--disk path=/var/lib/libvirt/images/dns-server.qcow2 \
--disk path=./config.iso,device=cdrom \
--network network=default \
--name ${VM_NAME} \
--ram 1024 \
--import \
--os-type=linux \
--os-variant=rhel7

Then call it using a separate runme.sh script:

IMAGE_CREATE=/home/rheslop/offlineKB/Lab/automation/scripts/mkimage
INSTANCE_IP=192.168.122.101
$IMAGE_CREATE &
echo -n "Waiting for instance..."
function INIT_WAIT {
if ping -c 1 $INSTANCE_IP > /dev/null 2>&1 ; then
    sleep 2 && ansible-playbook dns-default-init.yaml
else
    echo -n "."
    sleep 2
    INIT_WAIT
fi
}
INIT_WAIT

Assuming your Ansible playbook fully configures the DNS server, it should be safe to delete the VM and rerun 'runme.sh' to get the same result from scratch every time – with the following caveat: the host will be cached in your user's ~/.ssh/known_hosts file. If that entry is not deleted, Ansible will fail to authenticate and run your configured playbook.
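
One way to clear that stale entry before each rebuild – assuming the VM keeps the 192.168.122.101 address configured earlier – is:

ssh-keygen -R 192.168.122.101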

To sum:
    1. Start by getting a qcow2 image you can work with; customize it if necessary.
    2. Create a config.iso file for metadata injection with cloud-init.
    3. Create a runme.sh bash script to:
        a. Create the VM
        b. Launch the playbook
    4. Test deleting and recreating; document if necessary.

Route to libvirt networks WITHOUT bridging or NAT’ing

It took a few resources to figure this out, so I figured I’d document this in one place for anyone else looking to achieve something similar. The basic goal is to have a method to route out of libvirt virtual networks using the host as a router, and without using NAT, so that VMs can exist in their own network yet still be able to accept traffic initiated from another network. Some modification of the below steps may enable routing between virtual networks as well.

The first steps are basic preparations for normal routing – start by configuring your system to route by editing /etc/sysctl.conf:

net.ipv4.ip_forward=1

Run sysctl -p to make these changes take effect immediately.

Make firewalld aware of the virtual network by adding the libvirt bridge to the trusted zone. Based on experiences shared by ‘mrguitar‘, this may be necessary for guests to be able to speak to the host – something they'll need to do in order for the host to route for them.1

firewall-cmd --permanent --zone=trusted --add-interface=virbr1
firewall-cmd --reload

You'll want iptables-like rules inserted so that firewalld will allow routing between the desired interfaces. The first example I found was provided by ‘banjo67xxx‘ over at fedoraforum.org2 (you can find the official documentation here).

This amounts to placing an XML file with iptables rules into firewalld's configuration directory.

/etc/firewalld/direct.xml:
<?xml version="1.0" encoding="utf-8"?>
<direct>
  <rule ipv="ipv4" table="filter" chain="FORWARD_direct" priority="0"> -i wlp2s0 -o virbr1 -j ACCEPT </rule>
  <rule ipv="ipv4" table="filter" chain="FORWARD_direct" priority="0"> -i virbr1 -o wlp2s0 -j ACCEPT </rule>
</direct>
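
After reloading firewalld, the direct rules can be confirmed with its standard query options:

firewall-cmd --reload
firewall-cmd --direct --get-all-rules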

Next, change the forward mode to ‘open’. This is new as of libvirt 2.2.0, released this last September. The virtual network driver in libvirt uses iptables to describe behaviors for forwarding modes like ‘route’ and ‘nat’.3 The new ‘open’ forwarding mode simply connects guest network traffic to the virtual bridge without adding any firewall rules.4

Here's what my network ‘net0‘ looks like. This can be easily replicated by creating an internal network with virt-manager, then using virsh net-edit <network> to add <forward mode='open'/>.

virsh net-dumpxml net0
<network>
  <name>net0</name>
  <uuid>a5c64edd-e395-4303-91f3-c1c23ed6c401</uuid>
  <forward mode='open'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:2e:1e:2d'/>
  <domain name='net0'/>
  <ip address='192.168.110.1' netmask='255.255.255.0'>
  </ip>
</network>

Any host that will be connecting to the VM needs to know where to send packets destined for your libvirt network. This means the workstation running the VM will need a static IP. The laptop my VM runs on is 192.168.0.4, so the route on the remote node (stored in /etc/sysconfig/network-scripts/route-<interface>) looks like this:

192.168.110.0/24 via 192.168.0.4
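
The route-<interface> file only takes effect when the interface is restarted; to add the same route on the remote node immediately, something like this works:

ip route add 192.168.110.0/24 via 192.168.0.4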

Once this is done, VMs on a node – on their own network – can send and receive inbound traffic. This does have limitations: if you cannot insert a return route into your gateway, for example, these systems won't be able to get online. The ability to cleanly route between virtual networks, however, opens up all kinds of neat possibilities. 🙂

1. http://mrguitar.net/blog/?p=720
2. http://forums.fedoraforum.org/showthread.php?t=294446
3. https://libvirt.org/firewall.html
4. https://libvirt.org/formatnetwork.html

KVM and Dynamic DNS

OR: Configuring a lab server, part III

Having a PXE install server is a traditional convenience for provisioning baremetal, but if you’re a KVM warrior, cloning is the more sensible option.

Using a handful of scripts, we can combine cloning and hostname assignment with dynamic DNS to have a full network configuration for any VM on first boot – effectively cloudifying things a bit. (By cloudify I only mean automating and orchestrating the creation of VMs, treating them less like pets and more like cattle; this is not to say that KVM is by itself a proper cloud platform.)

I’m sure there are a few ways to go about all of this, but first, you need your base image. Many vendors provide a qcow2 image for just this purpose, a few links have been curated by openstack.org here.

The image can then be customized using the virt-customize command provided by libguestfs-tools. I like to have a simple default login that is allowed via ssh; I'll also remove cloud-init, as I'm not using it for this setup:

virt-customize -a \
/var/lib/libvirt/images/templates/CentOS-7-x86_64-template.qcow2 \
--root-password password:redhat \
--edit /etc/ssh/sshd_config:s/PasswordAuthentication\ no/PasswordAuthentication\ yes/g \
--run-command '/bin/yum -y remove cloud-init'

Once this image is modified, another image can be cloned from it using the -b option of qemu-img, as shown:
qemu-img create -f qcow2 -b base_image.qcow2 image_you_are_creating.qcow2

This ensures that the only disk space used by the newly created disk is the difference between it and the base. This part will of course be scripted, as will the creation of the VM. By using another virt-customize command, the hostname of the VM will match the VM's name – specified by the first argument provided to the script. Nice!
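
You can confirm the backing relationship – and see how little space the overlay actually uses – with qemu-img info (the filename here is just the placeholder from the command above):

qemu-img info image_you_are_creating.qcow2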

This is what I normally use, but experiment and see what works for you.

#!/bin/bash

if [ -z "$1" ]; then
echo "Please provide the VM's name. Example:"
echo "$0 cent-1"
exit 1
fi

VM_NAME=$1
FILENAME=${VM_NAME}.qcow2
DOMAIN=workstation.local

# Use template stored in /var/lib/libvirt/images/templates/
TEMPLATE=/var/lib/libvirt/images/templates/CentOS-7-x86_64-template.qcow2

# Create image in /var/lib/libvirt/images/
IMAGES_PATH=/var/lib/libvirt/images

if [ -e $IMAGES_PATH/$FILENAME ]; then
echo "$FILENAME already exists in $IMAGES_PATH"
echo "Please choose another name"
exit
fi

/usr/bin/qemu-img create -f qcow2 -b $TEMPLATE $IMAGES_PATH/$FILENAME

# Set hostname
virt-customize -a $IMAGES_PATH/$FILENAME --hostname $1.$DOMAIN

# Create the virtual machine
/usr/bin/virt-install \
--disk path=/var/lib/libvirt/images/$FILENAME \
--network network=net1 \
--name $VM_NAME \
--ram 1024 \
--import \
--os-type=linux \
--os-variant=rhel7

With this, we have just about all of the qualifications mentioned above except for DNS. For that I'll set up a dynamic DNS server, so that once an instance boots, it inserts its name into the record when it gets its address.

For this I developed a bash/ansible solution that will deploy such a server with minimal input. Before setting this up, I hadn’t used Dynamic DNS, so I based my installation steps on this instructional video posted by Lewis from SSN.

Here's how it works. I've created a new network – an isolated network for my lab. I'll edit the script above (called mkcent) and change this:

--network network=net1 \

to this:

--network network=default \
--network network=net1 \

That way the VM will have its primary NIC on the outside, and one NIC on the inside – it’ll route for the other VMs. Then just call the script to spin it up.

mkcent router-net1

Next clone the following project:

git clone https://github.com/rheslop/dynamic-dns.git

Navigating to the project's root directory, I'll create a hosts file for Ansible and populate it with the IP address the VM has received from the default network. (You can get this via virsh net-dhcp-leases default.)

echo "router-net1 ansible_host=192.168.124.62" > hosts

Make sure your ssh key is copied over for Ansible communications:

ssh-copy-id root@192.168.124.62

Finally, configure the dynamic DNS server by running the Ansible playbook via the included script. (Ansible will need to be installed on your system for this to work.)

deploy-ddns

Remember to remove the aforementioned extra network line from the mkcent script, so additional VMs boot with only net1. Because ‘router-net1’ also routes, the VMs will have internet access even while on an internal network.
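
To check that registrations are actually landing in the zone, you can query the new server directly – the hostname, domain, and IP below are just examples from my lab, so substitute your own:

dig @192.168.124.62 cent-1.workstation.local +short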

Configuring a lab server (part II)

Continuing where we left off, the next services I'll want to configure are routing and DNS. It'll be important going forward that the server have the correct FQDN, as well as NICs assigned to the appropriate zones.

I'm going to set the ens4 interface to point to itself as the DNS server. DNS forwarding will be configured so that the lab server will be able to resolve both lab hosts and names on the outside network. Once more for reference: the ens4 interface in this setup faces our lab network, and eth0 connects to the greater external network.


Configure routing:

Our ens4 interface will remain in the firewalld zone ‘public’, and eth0 will be moved to ‘external’. Because the external zone has masquerading enabled by default, the only thing left will be enabling the kernel’s routing capabilities.

hostnamectl set-hostname pxe-boot.testlab.local
echo "ZONE=external" >> /etc/sysconfig/network-scripts/ifcfg-eth0
cat >> /etc/sysconfig/network-scripts/ifcfg-ens4 << EOF
DNS1=192.168.112.254
ZONE=public
EOF
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
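
A quick sanity check at this point – confirm the kernel flag took effect, and (once the interfaces are bounced further down) that each NIC landed in its intended zone:

sysctl net.ipv4.ip_forward
firewall-cmd --get-active-zones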

Install bind:

yum -y install bind

Configure DNS:

Because name services can be a bit involved, take note of the key fields you may want to customize for your own setup (the listen-on addresses, allowed query network, forwarders, and zone names).

cat > /etc/named.conf << EOF
options {
listen-on port 53 { 127.0.0.1; 192.168.112.254; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { localhost; 192.168.112.0/24; };
recursion yes;
    forwarders {
        8.8.8.8;
        8.8.4.4;
    };

dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto;

/* Path to ISC DLV key */
bindkeys-file "/etc/named.iscdlv.key";
};

logging {
channel default_debug {
file "data/named.run";
severity dynamic;
    };
};

zone "." IN {
type hint;
file "named.ca";
};

include "/etc/named.rfc1912.zones";

zone "testlab.local" IN {
type master;
file "/var/named/testlab.local.zone";
};

zone "112.168.192.in-addr.arpa" IN {
type master;
file "/var/named/zone.112.168.192.in-addr.arpa";
};
EOF

The station10 host entered in the configuration below exists only to test name resolution – this host does not currently exist on my network, so I expect pings to fail, but the name should correctly resolve to 192.168.112.10.

cat > /var/named/testlab.local.zone << EOF
\$ORIGIN testlab.local.
\$TTL 86400
testlab.local. IN SOA pxe-boot.testlab.local. root.pxe-boot.testlab.local. (
    20120710 ; serial
    21600 ;refresh after 6 hours
    3600 ; retry after 1 hour
    604800 ; expire after 1 week
    86400 ) ; minimum TTL of 1 day

;DNS Server
testlab.local. IN NS pxe-boot.testlab.local.

;Clients
station10.testlab.local. IN A 192.168.112.10
pxe-boot.testlab.local. IN A 192.168.112.254
EOF

cat > /var/named/zone.112.168.192.in-addr.arpa << EOF
\$ORIGIN 112.168.192.in-addr.arpa.
\$TTL 86400
112.168.192.in-addr.arpa. IN SOA pxe-boot.testlab.local. root.pxe-boot.testlab.local. (
    20120710 ; serial
    21600 ; refresh after 6 hours
    3600 ; retry after 1 hour
    604800 ; expire after 1 week
    86400 ) ; minimum TTL of 1 day

;DNS Server
112.168.192.in-addr.arpa. IN NS pxe-boot.testlab.local.

;Clients
10 IN PTR station10.testlab.local.
254 IN PTR pxe-boot.testlab.local.
EOF

It will be important that permissions are set correctly on the following files:
chown root:named /var/named/zone.112.168.192.in-addr.arpa
chown root:named /var/named/testlab.local.zone
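
Before restarting anything, bind can validate the configuration and zone files itself; named-checkconf and named-checkzone ship alongside named:

named-checkconf /etc/named.conf
named-checkzone testlab.local /var/named/testlab.local.zone
named-checkzone 112.168.192.in-addr.arpa /var/named/zone.112.168.192.in-addr.arpa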

We can now restart NetworkManager, as well as the network interfaces, so that the zone and other configuration changes take effect.

systemctl restart NetworkManager
ifdown eth0 && ifup eth0
ifdown ens4 && ifup ens4

Finally, let’s add DNS to our firewall rules and start named.

firewall-cmd --permanent --zone=public --add-service="dns"
firewall-cmd --reload

systemctl enable named
systemctl start named
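
With named running, the station10 test record mentioned earlier can be checked directly against the new server (dig comes from the bind-utils package):

dig @192.168.112.254 station10.testlab.local +short
dig @192.168.112.254 -x 192.168.112.10 +short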

Go to: Part I

Configuring a lab server (part I)

Or: PXE install services

The example installation will be done on a KVM-hosted virtual machine and will mimic a setup wherein a single server acts as a gateway and kickstart server, as well as providing any other lab-dependent services.


The below will focus exclusively on the configuration of kickstart and DHCP – further configuration will be done in another post.

Starting with a vanilla installation, the first task is configuring the network. Since DHCP and other default settings are fine for the public port, we'll just set 'ONBOOT' to yes on that one.

sed -i 's/ONBOOT=no/ONBOOT=yes/' \
/etc/sysconfig/network-scripts/ifcfg-eth0
systemctl restart NetworkManager
ifdown eth0 && ifup eth0


cat > /etc/sysconfig/network-scripts/ifcfg-ens4 << EOF
DEVICE="ens4"
BOOTPROTO="none"
HWADDR="$(ip addr show ens4 | awk '/ether/ {print $2}')"
IPV6INIT="no"
MTU="1500"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR="192.168.112.254"
NETMASK="255.255.255.0"
DOMAIN="testlab.local"
EOF

systemctl restart NetworkManager; ifdown ens4

Install services:
yum -y install vsftpd dhcp tftp tftp-server* xinetd* syslinux

Next you'll need a copy of the CentOS (or Red Hat) install media to copy over to the FTP server. Create the /var/ftp/inst and /var/ftp/pub directories, which will hold installation packages and kickstart files respectively. I would recommend additional folders inside /var/ftp/inst for each operating system that will be offered by the server.

mkdir -p /var/ftp/{pub,inst/centos7}
mount -t iso9660 /dev/cdrom /mnt
cp -var /mnt/. /var/ftp/inst/centos7/
chcon -R -t public_content_t /var/ftp/inst/

Kickstart file:

cat > /var/ftp/pub/centos-7_ks.cfg << EOF
#version=DEVEL
install
url --url ftp://192.168.112.254/inst/centos7
lang en_US.UTF-8
keyboard us
network --onboot yes --device eth0 --bootproto dhcp
rootpw --plaintext redhat
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone US/Central --isUtc --nontp
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=vda
zerombr
clearpart --all --drives=vda
autopart --type=lvm
%packages --nobase
@core
%end
EOF
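
If you have the pykickstart package installed, ksvalidator can catch syntax mistakes in the kickstart before a client ever boots from it:

ksvalidator /var/ftp/pub/centos-7_ks.cfg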

DHCP configuration:

cat > /etc/dhcp/dhcpd.conf << EOF
subnet 192.168.112.0 netmask 255.255.255.0 {
range 192.168.112.100 192.168.112.200 ;
option routers 192.168.112.254;
option domain-name "testlab.local";
option domain-name-servers 192.168.112.254;
default-lease-time 86400;
max-lease-time 129600;
allow booting;
allow bootp;
class "pxeclients" {
    match if substring(option vendor-class-identifier, 0,9) = "PXEClient";
    next-server 192.168.112.254;
    filename "pxelinux.0";
    }
}
EOF
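
dhcpd can also test its own configuration before the service is started – a typo here would otherwise keep dhcpd from coming up:

dhcpd -t -cf /etc/dhcp/dhcpd.conf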

TFTP server configuration:

mkdir -p /var/lib/tftpboot/centos7
mkdir -p /var/lib/tftpboot/pxelinux.cfg
cp /usr/share/syslinux/{pxelinux.0,menu.c32} /var/lib/tftpboot/
cp /var/ftp/inst/centos7/images/pxeboot/* /var/lib/tftpboot/centos7

sed -i '/disable/s/yes/no/' /etc/xinetd.d/tftp
cat > /var/lib/tftpboot/pxelinux.cfg/default << EOF
default menu.c32
prompt 0
timeout 300
ONTIMEOUT local
LABEL local
    MENU LABEL Boot to hard drive
    LOCALBOOT 0
LABEL centos7
    MENU LABEL CentOS 7.2 x64
    kernel centos7/vmlinuz
    append initrd=centos7/initrd.img noipv6 ks=ftp://192.168.112.254/pub/centos-7_ks.cfg
EOF

Finally, open the firewall and launch services.

firewall-cmd --permanent --add-service="ftp"
firewall-cmd --permanent --add-service="tftp"
firewall-cmd --permanent --add-service="dhcp"
firewall-cmd --reload
ifup ens4
systemctl enable vsftpd
systemctl enable dhcpd
systemctl enable xinetd
systemctl enable tftp
systemctl start vsftpd
systemctl start dhcpd
systemctl start xinetd
systemctl start tftp
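
As an optional check before PXE booting anything, the boot loader and kickstart can be fetched from another machine on the lab network (curl speaks both ftp and tftp on most builds):

curl -s tftp://192.168.112.254/pxelinux.0 -o /dev/null && echo "tftp OK"
curl -s ftp://192.168.112.254/pub/centos-7_ks.cfg | head -5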

The only step left is to test the setup by network booting a system on the 192.168.112.0/24 network.


🙂

Go to: Part II

Default restrictions on non-admin roles

Or: Where are role permissions even documented?

I'm not sure such information is documented anywhere. This may be due to the increasing number of OpenStack services, each with tens, if not hundreds, of API calls. It could also be because additional API calls are added every six months, or because each non-vanilla distribution has its own variances. This doesn't mean the information cannot be gotten – just that Google hasn't been the place to find it.

The way policies work in OpenStack, permissions are granted from a policy.json file that exists per service in /etc/$SERVICE/policy.json. Using these files, one can get a good idea of what kind of permissions, generally speaking, a non-admin user will have. If more inclined, one could also document in detail the exact permissions lacking in a default non-admin role for a specific OpenStack deployment, version, or distribution.

Here's what I found when looking through the default policy files for RHEL-OSP 7, a Kilo-based OpenStack distribution.

For most use cases, users can modify objects within their own project, or things that they own. Looking at the header of many of the services there is a line that looks like the following:

"admin_or_owner": "is_admin:True or project_id:%(project_id)s"

In short, the admin role is allowed, as is someone within the scope of the project, as determined by comparing the API caller's project ID to the project ID of the object.
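
If you want to compare how a given rule is defined across services, grepping the policy files directly is a quick way to do it:

grep -H '"admin_or_owner"' /etc/{cinder,keystone,nova,neutron,glance,heat}/policy.json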

It’s important to note that it’s possible for defined rules to vary from policy file to policy file. For example, the admin_or_owner rule defined above was from the Cinder service – here’s what the same rule looks like for Keystone:

"admin_or_owner": "rule:admin_required or rule:owner"

The referenced ‘owner’ rule is defined as:

"owner" : "user_id:%(user_id)s"

While these types of restrictions make sense, the default limitations of a non-admin role cover more than simply objects outside of one's scope.

A user cannot, for example:

  • Perform a reset-state on a volume, even if that user created the volume.
  • Update his/her own port's MAC address – an understandable security measure.
  • Create a shared image in glance, or update the properties of a glance image to make it shareable.

These are the very restrictions I'm interested in, so here's how I compiled a comprehensive list:

First, check out the header of each policy file, paying attention to the definitions that describe an admin-only or a service-only rule. Here are those definitions from /etc/cinder/policy.json:

    {
    "context_is_admin": "role:admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "default": "rule:admin_or_owner",
    "admin_api": "is_admin:True",
    ...

Both ‘context_is_admin’ and ‘admin_api’ are rules I want to search for, so I add each rule to a file I've named only_admin.txt. After going through each policy file, only_admin.txt has the following:

    admin_api
    admin_only
    admin_required
    context_is_admin
    context_is_advsvc
    service_or_admin

Then I populate another file, services.txt, with the services I'm interested in:

    cinder
    glance
    heat
    keystone
    neutron
    nova

After this is done, I search through each policy file and identify all of the rules that allow only admin or service access:

    for i in $(cat services.txt); do for j in $(cat only_admin.txt); do cat \
    /etc/${i}/policy.json | grep \"rule:${j}\" | awk -F\" '{print $2}' ; done ; done

Below is a comprehensive list of rules (from the above services) that are restricted to admin roles in a default installation of RHEL-OSP 7 (Kilo).

    volume:get_volume_admin_metadata
    volume:delete_volume_admin_metadata
    volume:update_volume_admin_metadata
    volume_extension:types_manage
    volume_extension:types_extra_specs
    volume_extension:volume_type_access:addProjectAccess
    volume_extension:volume_type_access:removeProjectAccess
    volume_extension:volume_type_encryption
    volume_extension:quotas:update
    volume_extension:volume_admin_actions:reset_status
    volume_extension:snapshot_admin_actions:reset_status
    volume_extension:backup_admin_actions:reset_status
    volume_extension:volume_admin_actions:force_delete
    volume_extension:volume_admin_actions:force_detach
    volume_extension:snapshot_admin_actions:force_delete
    volume_extension:volume_admin_actions:migrate_volume
    volume_extension:volume_admin_actions:migrate_volume_completion
    volume_extension:volume_host_attribute
    volume_extension:volume_mig_status_attribute
    volume_extension:hosts
    volume_extension:services
    volume_extension:volume_manage
    volume_extension:volume_unmanage
    volume:services
    volume_extension:replication:promote
    volume_extension:replication:reenable
    backup:backup-import
    backup:backup-export
    scheduler_extension:scheduler_stats:get_pools
    service:index
    default
    identity:create_region
    identity:update_region
    identity:delete_region
    identity:get_service
    identity:list_services
    identity:create_service
    identity:update_service
    identity:delete_service
    identity:get_endpoint
    identity:list_endpoints
    identity:create_endpoint
    identity:update_endpoint
    identity:delete_endpoint
    identity:get_domain
    identity:list_domains
    identity:create_domain
    identity:update_domain
    identity:delete_domain
    identity:get_project
    identity:list_projects
    identity:create_project
    identity:update_project
    identity:delete_project
    identity:get_user
    identity:list_users
    identity:create_user
    identity:update_user
    identity:delete_user
    identity:get_group
    identity:list_groups
    identity:create_group
    identity:update_group
    identity:delete_group
    identity:list_users_in_group
    identity:remove_user_from_group
    identity:check_user_in_group
    identity:add_user_to_group
    identity:get_credential
    identity:list_credentials
    identity:create_credential
    identity:update_credential
    identity:delete_credential
    identity:get_role
    identity:list_roles
    identity:create_role
    identity:update_role
    identity:delete_role
    identity:check_grant
    identity:list_grants
    identity:create_grant
    identity:revoke_grant
    identity:list_role_assignments
    identity:get_policy
    identity:list_policies
    identity:create_policy
    identity:update_policy
    identity:delete_policy
    identity:check_token
    identity:create_consumer
    identity:get_consumer
    identity:list_consumers
    identity:delete_consumer
    identity:update_consumer
    identity:authorize_request_token
    identity:list_access_token_roles
    identity:get_access_token_role
    identity:list_access_tokens
    identity:get_access_token
    identity:delete_access_token
    identity:list_projects_for_endpoint
    identity:add_endpoint_to_project
    identity:check_endpoint_in_project
    identity:list_endpoints_for_project
    identity:remove_endpoint_from_project
    identity:create_endpoint_group
    identity:list_endpoint_groups
    identity:get_endpoint_group
    identity:update_endpoint_group
    identity:delete_endpoint_group
    identity:list_projects_associated_with_endpoint_group
    identity:list_endpoints_associated_with_endpoint_group
    identity:get_endpoint_group_in_project
    identity:add_endpoint_group_to_project
    identity:remove_endpoint_group_from_project
    identity:create_identity_provider
    identity:list_identity_providers
    identity:get_identity_providers
    identity:update_identity_provider
    identity:delete_identity_provider
    identity:create_protocol
    identity:update_protocol
    identity:get_protocol
    identity:list_protocols
    identity:delete_protocol
    identity:create_mapping
    identity:get_mapping
    identity:list_mappings
    identity:delete_mapping
    identity:update_mapping
    identity:create_service_provider
    identity:list_service_providers
    identity:get_service_provider
    identity:update_service_provider
    identity:delete_service_provider
    identity:create_policy_association_for_endpoint
    identity:check_policy_association_for_endpoint
    identity:delete_policy_association_for_endpoint
    identity:create_policy_association_for_service
    identity:check_policy_association_for_service
    identity:delete_policy_association_for_service
    identity:create_policy_association_for_region_and_service
    identity:check_policy_association_for_region_and_service
    identity:delete_policy_association_for_region_and_service
    identity:get_policy_for_endpoint
    identity:list_endpoints_for_policy
    identity:create_domain_config
    identity:get_domain_config
    identity:update_domain_config
    identity:delete_domain_config
    identity:validate_token
    identity:validate_token_head
    identity:revocation_list
    create_subnetpool:shared
    get_network:segments
    get_network:provider:network_type
    get_network:provider:physical_network
    get_network:provider:segmentation_id
    get_network:queue_id
    create_network:shared
    create_network:router:external
    create_network:segments
    create_network:provider:network_type
    create_network:provider:physical_network
    create_network:provider:segmentation_id
    update_network:segments
    update_network:shared
    update_network:provider:network_type
    update_network:provider:physical_network
    update_network:provider:segmentation_id
    update_network:router:external
    create_port:binding:host_id
    create_port:binding:profile
    get_port:queue_id
    get_port:binding:vif_type
    get_port:binding:vif_details
    get_port:binding:host_id
    get_port:binding:profile
    update_port:binding:host_id
    update_port:binding:profile
    get_router:ha
    create_router:external_gateway_info:enable_snat
    create_router:distributed
    create_router:ha
    get_router:distributed
    update_router:external_gateway_info:enable_snat
    update_router:distributed
    update_router:ha
    create_router:external_gateway_info:external_fixed_ips
    update_router:external_gateway_info:external_fixed_ips
    create_firewall:shared
    get_firewall:shared
    update_firewall:shared
    create_qos_queue
    get_qos_queue
    update_agent
    delete_agent
    get_agent
    create_dhcp-network
    delete_dhcp-network
    get_dhcp-networks
    create_l3-router
    delete_l3-router
    get_l3-routers
    get_dhcp-agents
    get_l3-agents
    get_loadbalancer-agent
    get_loadbalancer-pools
    get_agent-loadbalancers
    get_loadbalancer-hosting-agent
    create_floatingip:floating_ip_address
    create_network_profile
    update_network_profile
    delete_network_profile
    update_policy_profiles
    create_metering_label
    delete_metering_label
    get_metering_label
    create_metering_label_rule
    delete_metering_label_rule
    get_metering_label_rule
    get_lsn
    create_lsn
    admin_only
    compute:unlock_override
    compute_extension:accounts
    compute_extension:admin_actions
    compute_extension:admin_actions:resetNetwork
    compute_extension:admin_actions:injectNetworkInfo
    compute_extension:admin_actions:migrateLive
    compute_extension:admin_actions:resetState
    compute_extension:admin_actions:migrate
    compute_extension:aggregates
    compute_extension:agents
    compute_extension:baremetal_nodes
    compute_extension:cells
    compute_extension:cells:create
    compute_extension:cells:delete
    compute_extension:cells:update
    compute_extension:cells:sync_instances
    compute_extension:cloudpipe
    compute_extension:cloudpipe_update
    compute_extension:evacuate
    compute_extension:extended_server_attributes
    compute_extension:fixed_ips
    compute_extension:flavor_access:addTenantAccess
    compute_extension:flavor_access:removeTenantAccess
    compute_extension:flavorextraspecs:create
    compute_extension:flavorextraspecs:update
    compute_extension:flavorextraspecs:delete
    compute_extension:flavormanage
    compute_extension:floating_ips_bulk
    compute_extension:fping:all_tenants
    compute_extension:hosts
    compute_extension:hypervisors
    compute_extension:instance_actions:events
    compute_extension:instance_usage_audit_log
    compute_extension:networks
    compute_extension:networks_associate
    compute_extension:quotas:update
    compute_extension:quotas:delete
    compute_extension:security_group_default_rules
    compute_extension:server_diagnostics
    compute_extension:services
    compute_extension:shelveOffload
    compute_extension:simple_tenant_usage:list
    compute_extension:users
    compute_extension:availability_zone:detail
    compute_extension:used_limits_for_admin
    compute_extension:migrations:index
    compute_extension:os-assisted-volume-snapshots:create
    compute_extension:os-assisted-volume-snapshots:delete
    compute_extension:console_auth_tokens
    compute_extension:os-server-external-events:create
    network:attach_external_network
    os_compute_api:os-admin-actions
    os_compute_api:os-admin-actions:reset_network
    os_compute_api:os-admin-actions:inject_network_info
    os_compute_api:os-admin-actions:reset_state
    os_compute_api:os-aggregates:index
    os_compute_api:os-aggregates:create
    os_compute_api:os-aggregates:show
    os_compute_api:os-aggregates:update
    os_compute_api:os-aggregates:delete
    os_compute_api:os-aggregates:add_host
    os_compute_api:os-aggregates:remove_host
    os_compute_api:os-aggregates:set_metadata
    os_compute_api:os-agents
    os_compute_api:os-baremetal-nodes
    os_compute_api:os-cells
    os_compute_api:os-cells:create
    os_compute_api:os-cells:delete
    os_compute_api:os-cells:update
    os_compute_api:os-cells:sync_instances
    os_compute_api:os-cloudpipe
    os_compute_api:os-evacuate
    os_compute_api:os-extended-server-attributes
    os_compute_api:os-fixed-ips
    os_compute_api:os-flavor-access:remove_tenant_access
    os_compute_api:os-flavor-access:add_tenant_access
    os_compute_api:os-flavor-extra-specs:create
    os_compute_api:os-flavor-extra-specs:update
    os_compute_api:os-flavor-extra-specs:delete
    os_compute_api:os-flavor-manage
    os_compute_api:os-floating-ips-bulk
    os_compute_api:os-fping:all_tenants
    os_compute_api:os-hosts
    os_compute_api:os-hypervisors
    os_compute_api:os-instance-actions:events
    os_compute_api:os-instance-usage-audit-log
    os_compute_api:os-migrate-server:migrate
    os_compute_api:os-migrate-server:migrate_live
    os_compute_api:os-networks
    os_compute_api:os-networks-associate
    os_compute_api:os-pci:index
    os_compute_api:os-pci:detail
    os_compute_api:os-pci:show
    os_compute_api:os-quota-sets:update
    os_compute_api:os-quota-sets:delete
    os_compute_api:os-quota-sets:detail
    os_compute_api:os-security-group-default-rules
    os_compute_api:os-server-diagnostics
    os_compute_api:os-services
    os_compute_api:os-shelve:shelve_offload
    os_compute_api:os-simple-tenant-usage:list
    os_compute_api:os-availability-zone:detail
    os_compute_api:os-used-limits
    os_compute_api:os-migrations:index
    os_compute_api:os-assisted-volume-snapshots:create
    os_compute_api:os-assisted-volume-snapshots:delete
    os_compute_api:os-console-auth-tokens
    os_compute_api:os-server-external-events:create

Metering for block storage

Or: What the hell is control_exchange?

If you're looking at Ceilometer for the first time, you might notice that there is no information about volumes or disk usage by default – at least that is the case with RHEL-OSP 7. This particular release being based on Kilo, the meters are certainly available.1

There are two settings that need to be changed to enable cinder messaging.2 The first is to modify notification_driver in /etc/cinder/cinder.conf and set it equal to messagingv2.

notification_driver = messagingv2

By default this setting is commented out and without a value; cinder is not even configured to have its notifications sent out on the bus. The 2.0 message format is an updated format that specifies messages be sent with what the documentation calls an envelope.3

The next thing to adjust is setting control_exchange to cinder. This is set to openstack by default.

control_exchange = cinder

The explanation for this, as read from the comment in the configuration file, is:

The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. (string value)

This note looks like it's aimed more at developers than administrators, and has to do with how oslo_messaging works. There is a class, oslo_messaging.Target, that encapsulates information regarding a message's destination – what I would think of as a channel. One of the parameters of oslo_messaging.Target is exchange, a string.4 Here's what the class looks like:

class oslo_messaging.Target(exchange=None, topic=None, namespace=None, version=None, server=None, fanout=None, legacy_namespaces=None)

The exchange parameter is defined as "a scope for topics. Leave unspecified to default to the control_exchange configuration option."4 This is exactly what we're configuring.

This is a lot to take in at once, so the examples provided by wiki.openstack.org help a bit.5

  • nova-api invokes the ‘run_instance’ method on the ‘scheduler’ topic in the ‘nova’ exchange, and one of the nova-scheduler services handles it
  • cinder-api invokes the ‘run_instance’ method on the ‘scheduler’ topic in the ‘cinder’ exchange, and one of the cinder-scheduler services handles it
  • heat-api invokes the ‘identify_stack’ method on the ‘engine’ topic in the ‘heat’ exchange, and the heat-engine service handles it

Notice how each service invokes a method on a specific topic and signals an exchange equivalent to the service type that handles it. It should be no surprise, then, that ‘cinder’ is the exchange Ceilometer listens to for block storage telemetry by default, sans additional configuration.

Does anything listen on an openstack exchange (remember the default)? In short, not that I've found. This appears to be a default exchange for unconfigured notification settings, possibly a remnant of backwards compatibility – but I'd love to hear from anyone with historic knowledge on this topic.

All you will need to do then is restart the cinder-volume services, and enjoy telemetry 🙂
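
For reference, both settings can be applied and the services bounced like so – openstack-config comes from the openstack-utils package, and the unit names below assume a RHEL-OSP style install:

openstack-config --set /etc/cinder/cinder.conf DEFAULT notification_driver messagingv2
openstack-config --set /etc/cinder/cinder.conf DEFAULT control_exchange cinder
systemctl restart openstack-cinder-volume openstack-cinder-api openstack-cinder-scheduler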


References

1. http://docs.openstack.org/admin-guide/telemetry-measurements.html
2. http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-cinder.html
3. http://docs.openstack.org/admin-guide/networking_adv-operational-features.html
4. http://docs.openstack.org/developer/oslo.messaging/target.html
5. https://wiki.openstack.org/wiki/Oslo/Messaging#Use_Cases