OpenStack AIO Deployment Guide – Train Release

In an OpenStack AIO (all-in-one) deployment, all the OpenStack services such as Keystone, Nova, Neutron, Cinder, Horizon, Swift, and Heat are installed on a single node.

OpenStack Release: Train

Deployment Method: OpenStack-Ansible

OS: CentOS Linux release 7.8.2003 (Core)

Server: Azure Virtual Machine

OpenStack reference deployment guide:

Steps to deploy OpenStack AIO

Log in to your CentOS instance as the root user.

Execute the following commands and scripts as the root user.

Prepare the host

## CentOS
# yum upgrade
# yum install git
# reboot

Note: Before rebooting, make sure that in /etc/sysconfig/selinux, SELINUX=enforcing is changed to SELINUX=disabled. SELinux enabled is not currently supported in OpenStack-Ansible for CentOS/RHEL due to a lack of maintainers for the feature.
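The SELinux edit from the note can be scripted with sed. This is a hedged sketch demonstrated on a scratch copy so it is safe to dry-run anywhere; on the real host the target file is /etc/sysconfig/selinux and the command must run as root before the reboot.

```shell
# Demonstrate the substitution on a throwaway copy (assumption: the real
# target is /etc/sysconfig/selinux, edited as root before rebooting).
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux.demo
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /tmp/selinux.demo
grep '^SELINUX=' /tmp/selinux.demo
```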

Bootstrap Ansible and the required roles

Start by cloning the OpenStack-Ansible repository and changing into the repository root directory:

# git clone \
# cd /opt/openstack-ansible

Next, switch to the applicable branch/tag to be deployed from. Note that deploying from the head of a branch may result in an unstable build due to changes in flight and upstream OpenStack changes.

# # List all existing tags.
# git tag -l

# # Checkout the stable branch and find just the latest tag
# git checkout stable/train
# git describe --abbrev=0 --tags

# # Checkout the latest tag from either method of retrieving the tag.
# git checkout 20.1.2
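The describe/checkout pair above can be combined into one step. The sketch below exercises the same tag logic in a scratch repository so it can be verified anywhere; in the real deployment run the describe and checkout inside /opt/openstack-ansible on stable/train.

```shell
# Scratch-repo demo of "check out the newest tag" (the repo path and the
# 20.1.2 tag here are illustrative, mirroring the guide's example).
rm -rf /tmp/osa-tag-demo
git init -q /tmp/osa-tag-demo
git -C /tmp/osa-tag-demo -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m init
git -C /tmp/osa-tag-demo tag 20.1.2
# Same two commands as in the guide, just pointed at the demo repo.
tag=$(git -C /tmp/osa-tag-demo describe --abbrev=0 --tags)
git -C /tmp/osa-tag-demo checkout -q "$tag"
echo "$tag"
```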

The next step is to bootstrap Ansible and the Ansible roles for the development environment.

Run the following to bootstrap Ansible and the required roles:

# scripts/

Bootstrap the AIO configuration

By default the AIO bootstrap scripts deploy a base set of OpenStack services with sensible defaults for the purpose of a gate check, development or testing system.

For the default AIO scenario, the AIO configuration preparation is completed by executing:

# scripts/

How to add OpenStack services beyond the default services

To add OpenStack services over and above the bootstrap-aio default services for the applicable scenario, copy the conf.d files with the .aio file extension into /etc/openstack_deploy and rename them to .yml files. For example, to enable the OpenStack Heat services, execute the following:

# cd /opt/openstack-ansible/
# cp etc/openstack_deploy/conf.d/heat.yml.aio /etc/openstack_deploy/conf.d/
# mv /etc/openstack_deploy/conf.d/heat.yml.aio /etc/openstack_deploy/conf.d/heat.yml

Once you have copied the additional services' .yml files to /etc/openstack_deploy/conf.d/, you can run the bootstrap script.
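The copy-and-rename step can be done for several services in one loop. This sketch runs against scratch paths so it can be dry-run anywhere; on the real host the source is /opt/openstack-ansible/etc/openstack_deploy/conf.d and the target is /etc/openstack_deploy/conf.d, and the service names below are just examples.

```shell
# Scratch paths standing in for the real conf.d directories (assumptions).
src=/tmp/osa-confd-src
dst=/tmp/osa-confd-dst
mkdir -p "$src" "$dst"
touch "$src/heat.yml.aio" "$src/swift.yml.aio"
# Copy every .aio stub across, dropping the .aio suffix as we go.
for f in "$src"/*.yml.aio; do
    cp "$f" "$dst/$(basename "$f" .aio)"   # heat.yml.aio -> heat.yml
done
ls "$dst"
```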

# scripts/

Enable all OpenStack endpoints with the HTTP protocol

The /etc/openstack_deploy/user_variables.yml file defines the global overrides for the default variables.

For this environment, if you want to use the same IP address for the internal and external endpoints, you need to ensure that the internal and public OpenStack endpoints are served with the same protocol. This is done with the following content; add it to your /etc/openstack_deploy/user_variables.yml:

# This file contains an example of the global variable overrides
# which may need to be set for a production environment.
## OpenStack public endpoint protocol
openstack_service_publicuri_proto: http

This configures all endpoints (internal and public) with the HTTP protocol only.
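If the public endpoints still come up over https after deployment (see the haproxy issue described later in this guide), the haproxy role's TLS toggle can be overridden in the same file. This is a hedged sketch: haproxy_ssl is taken from the OpenStack-Ansible haproxy_server role defaults, so confirm the variable name against your Train checkout before relying on it.

```yaml
## /etc/openstack_deploy/user_variables.yml
## OpenStack public endpoint protocol
openstack_service_publicuri_proto: http
## Assumption: disables TLS termination in the haproxy_server role;
## verify this variable exists in your checkout before using it.
haproxy_ssl: false
```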


Run playbooks

Finally, run the playbooks by executing:

# cd /opt/openstack-ansible/playbooks
# openstack-ansible setup-hosts.yml
# openstack-ansible setup-infrastructure.yml
# openstack-ansible setup-openstack.yml

The installation process will take a while to complete, but here are some general estimates:

  • Bare metal systems with SSD storage: ~ 30-50 minutes
  • Virtual machines with SSD storage: ~ 45-60 minutes
  • Systems with traditional hard disks: ~ 90-120 minutes

Rebooting an AIO

After a reboot, all the OpenStack services might not come up by themselves, so we need to execute the following commands.

# cd /opt/openstack-ansible/playbooks
# openstack-ansible -e galera_ignore_cluster_state=true galera-install.yml

As the AIO includes all three cluster members of MariaDB/Galera, the cluster has to be re-initialized after the host is rebooted.

If this fails to get the database cluster back into a running state, then please make use of the Galera Cluster Recovery section (admin/maintenance-tasks.html#galera-cluster-recovery) in the operations guide.

Rebuilding an AIO

Sometimes it may be useful to destroy all the containers and rebuild the AIO. While it is preferred that the AIO be entirely destroyed and rebuilt, this isn’t always practical. As such, the following may be executed instead:

Destroy all OpenStack LXC containers, and delete service directories and logs

# # Move to the playbooks directory.
# cd /opt/openstack-ansible/playbooks

# # Destroy all of the running containers.
# openstack-ansible lxc-containers-destroy.yml
# # Uninstall the core services that were installed.
# for i in $(pip freeze | grep -e "nova\|neutron\|keystone\|swift\|cinder"); do \
    pip uninstall -y $i; done

# # Remove crusty directories.
# rm -rf /openstack /etc/{neutron,nova,swift,cinder}

# # Remove the pip configuration files on the host
# rm -rf /root/.pip

# # Remove the apt package manager proxy
# rm /etc/apt/apt.conf.d/00apt-cacher-proxy

Delete all the Volume groups, mount points and loopback devices

During the AIO installation, the setup creates some loopback devices, mounts some files and directories on those loopback devices, and creates a volume group for the Cinder volumes.

Run the command below to list the volume groups:

# vgdisplay

It lists all the volume groups that were created by the AIO setup.

Now delete these volume groups using the command below:

# vgremove <volume-group-name>

Note: You get the volume-group-name from the vgdisplay output.
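The vgdisplay/vgremove pair can be scripted. The sketch below builds the vgremove commands from a captured listing instead of live vgs output, so the parsing can be checked on any machine; on the AIO host, replace the printf with `vgs --noheadings -o vg_name`. The "cinder-volumes" name is the usual AIO volume group, but confirm it with vgdisplay first.

```shell
# Build vgremove commands from a sample listing (assumption: on the real
# host the listing comes from:  vgs --noheadings -o vg_name).
printf '  cinder-volumes\n' | while read -r vg; do
    echo "vgremove -y $vg"
done > /tmp/vgremove-demo.txt
cat /tmp/vgremove-demo.txt
```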

Now we need to unmount all the files and directories mounted by the AIO setup.

Use the command below to list all the mount points:

# df -h

Look for all the directories that are mounted on loopback devices. Below is an example snippet.

[opnfv@aio1 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        252G     0  252G   0% /dev
tmpfs           252G     0  252G   0% /dev/shm
tmpfs           252G  3.2M  252G   1% /run
tmpfs           252G     0  252G   0% /sys/fs/cgroup
/dev/sda2       880G   23G  813G   3% /
/dev/sda1      1022M   12M 1011M   2% /boot/efi
/dev/loop1      1.0T   49M  1.0T   1% /var/lib/nova/instances
/dev/loop2      1.0T   36M  1.0T   1% /srv/swift1.img
/dev/loop3      1.0T   36M  1.0T   1% /srv/swift2.img
/dev/loop4      1.0T   36M  1.0T   1% /srv/swift3.img
tmpfs            51G     0   51G   0% /run/user/0
/dev/loop6      879G  225M  877G   1% /var/lib/machines
tmpfs            51G     0   51G   0% /run/user/1000
[opnfv@aio1 ~]$
[opnfv@aio1 ~]$

Now use the umount command to unmount all the files and directories from the loopback devices.

# umount /var/lib/machines

After unmounting all the files and directories, we need to delete them.

Use the rm command to do so:

# rm -rf /var/lib/machines

After deleting these files and directories, delete the loopback devices using the command below.

# losetup -d /dev/loop0

Likewise, delete all the remaining loopback devices.
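The umount/rm/losetup sequence above can be sketched as a loop. The version below only prints the commands so the sequence can be reviewed (and tested) before running it as root; the mount points mirror the df example above, and the sample losetup line stands in for real `losetup -a` output.

```shell
{
  # Print an unmount-and-delete command per mount point (examples from df).
  for mnt in /var/lib/nova/instances /srv/swift1.img /var/lib/machines; do
      echo "umount $mnt && rm -rf $mnt"
  done
  # Derive the device name from a sample losetup line; on the real host use:
  #   losetup -a | cut -d: -f1
  sample='/dev/loop1: []: (/opt/nova.img)'
  echo "losetup -d ${sample%%:*}"
} > /tmp/loop-cleanup-demo.txt
cat /tmp/loop-cleanup-demo.txt
```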

Delete all the virtual network interfaces created by the AIO setup

# # Command to list all the network interfaces
# ifconfig

# # Command to delete the network interfaces
# ip link delete <Interface-name>


# ip link delete br-vlan

Note: Don’t try to delete the physical interface, e.g. eth0 or an ens interface, i.e. the interface through which you are accessing the machine via its IP address.

You might see that some of the interfaces cannot be deleted; you can leave them as they are.
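The interface cleanup can be sketched as a loop over the bridges the AIO bootstrap usually creates (br-mgmt, br-vxlan, br-storage and br-vlan; verify the list with ifconfig on your host). Printing the commands first makes them easy to review before running any of them as root.

```shell
# Print one delete command per AIO bridge (drop the echo to execute as root;
# the bridge list is the usual AIO set, verify with ifconfig first).
for br in br-mgmt br-vxlan br-storage br-vlan; do
    echo "ip link delete $br"
done > /tmp/brdel-demo.txt
cat /tmp/brdel-demo.txt
```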

That’s it; the cleanup is done. You can now follow the steps above to redeploy the AIO setup.

Install OpenStack clients

You need to install the OpenStack client on the machine manually in order to run openstack commands.

First, install pip, which is required to install the OpenStack client.

1. Add the EPEL Repository

Pip is not available in the CentOS 7 core repositories. To install pip, we need to enable the EPEL repository:

# sudo yum install epel-release

2. Install pip

Once the EPEL repository is enabled, we can install pip and all of its dependencies with the following command:

# yum install python-pip

3. Upgrade pip

# pip install pip --upgrade

4. Now Install openstack client

The following example shows the command for installing the OpenStack client with pip, which supports multiple services.

# pip install python-openstackclient

The following individual clients are deprecated in favor of a common client. Instead of installing and learning all these clients, we recommend installing and using the OpenStack client. You may still need to install an individual project’s client because coverage is not yet sufficient in the OpenStack client. If you need to install an individual project’s client, replace <project> in the pip install command below using the list that follows.

# pip install python-<project>client
  • barbican – Key Manager Service API
  • ceilometer – Telemetry API
  • cinder – Block Storage API and extensions
  • cloudkitty – Rating service API
  • designate – DNS service API
  • fuel – Deployment service API
  • glance – Image service API
  • gnocchi – Telemetry API v3
  • heat – Orchestration API
  • keystone – Identity service API and extensions
  • magnum – Containers service API
  • manila – Shared file systems API
  • mistral – Workflow service API
  • monasca – Monitoring API
  • murano – Application catalog API
  • neutron – Networking API
  • nova – Compute API and extensions
  • sahara – Data Processing API
  • senlin – Clustering service API
  • swift – Object Storage API
  • trove – Database service API
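The python-<project>client pattern above lends itself to a loop. The sketch below only prints the install commands (drop the echo to install for real); the three project names are examples picked from the list.

```shell
# Expand example project names into pip package names and print the
# resulting install commands for review.
for project in heat cinder neutron; do
    echo "pip install python-${project}client"
done > /tmp/pip-clients-demo.txt
cat /tmp/pip-clients-demo.txt
```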

Install heat client

With the openstack client you might not be able to run the OpenStack Heat commands, so you need to install the Heat client separately using the command below.

# pip install python-heatclient

Issues Faced and their workarounds

  1. OpenStack client issue

After installing the OpenStack client, try to run the openstack commands. If you are able to run them without any issue, that’s good. We faced an issue while running the OpenStack CLI commands; the error is described below.

This issue is caused by a Python module named “queue”, which Python is unable to import. After looking at the errors, we found out which Python files were trying to import it and edited those files, replacing “import queue” with “from multiprocessing import Queue”.

We edited the two files mentioned below, and the OpenStack client started working.

# /home/opnfv/.local/lib/python2.7/site-packages/openstack/


  2. VM launch failed with the error “No valid host was found”

If you face this issue, follow the steps below.

  1. If you are running everything on a virtual machine, set virt_type=qemu (in the [libvirt] section) in /etc/nova/nova.conf.
  2. Modify /etc/nova/nova.conf to increase the allocation ratios (cpu_allocation_ratio and ram_allocation_ratio) on the node, then reboot the node.

These changes should resolve the issue.
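A minimal sketch of the nova.conf overrides described above, assuming nova's stock option layout (virt_type belongs under [libvirt], and nova's memory option is named ram_allocation_ratio; the ratio values here are arbitrary examples):

```ini
[DEFAULT]
# Overcommit ratios; the values below are illustrative only.
cpu_allocation_ratio = 8.0
ram_allocation_ratio = 4.0

[libvirt]
# Needed when the host itself is a VM without nested KVM support.
virt_type = qemu
```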

3. Not able to run openstack commands with Keystone’s public endpoint

We were not able to run openstack commands using the stackrc file downloaded from Horizon. We had made all public and internal URLs use HTTP during the AIO installation.

We were getting the error as mentioned below

[opnfv@aio1 ~]$ openstack stack list
Failed to discover available identity versions when contacting Attempting to parse version from URL.
Unable to establish connection to ('Connection aborted.', BadStatusLine("''",))
[opnfv@aio1 ~]$ 

Thus, after debugging, we found that haproxy was still pointing to https for all the public endpoints. We edited the haproxy files and it worked properly.

Workaround steps

  1. Log in to the node as the root user and go to /etc/haproxy/conf.d.
  2. Open the keystone_service file using the vi editor: vi keystone_service
  3. In that file you will find “frontend keystone_service-front-1”, and below it a bind line followed by your public endpoint IP and SSL information. Remove this SSL information, and change “reqadd X-Forwarded-Proto:\ https” to use http instead.


The original file contents will look like this

# Ansible managed

frontend keystone_service-front-1
    bind ssl crt /etc/ssl/private/haproxy.pem ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    option httplog
    option forwardfor except
    reqadd X-Forwarded-Proto:\ https
    mode http
    default_backend keystone_service-back

frontend keystone_service-front-2
    option httplog
    option forwardfor except
    mode http
    default_backend keystone_service-back

backend keystone_service-back
    mode http
    balance leastconn
    stick store-request src
    stick-table type ip size 256k expire 30m
    option forwardfor
    option httplog
    option httpchk HEAD / HTTP/1.0\r\nUser-agent:\ osa-haproxy-healthcheck

    server aio1_keystone_container-27a17f33 check port 5000 inter 12000 rise 1 fall 1

After making the changes it will look like this:

# Ansible managed

frontend keystone_service-front-1
    option httplog
    option forwardfor except
    reqadd X-Forwarded-Proto:\ http
    mode http
    default_backend keystone_service-back

frontend keystone_service-front-2
    option httplog
    option forwardfor except
    mode http
    default_backend keystone_service-back

backend keystone_service-back
    mode http
    balance leastconn
    stick store-request src
    stick-table type ip size 256k expire 30m
    option forwardfor
    option httplog
    option httpchk HEAD / HTTP/1.0\r\nUser-agent:\ osa-haproxy-healthcheck

    server aio1_keystone_container-27a17f33 check port 5000 inter 12000 rise 1 fall 1

Note: The same change needs to be made in /etc/haproxy/haproxy.cfg. Make the same changes for all other services except Horizon.
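The manual edit above can be scripted with sed. This is a hedged sketch demonstrated on a scratch copy so it can be dry-run anywhere; on the AIO host the targets are the files under /etc/haproxy/conf.d/ and /etc/haproxy/haproxy.cfg (edit as root, then restart haproxy as described below).

```shell
# Scratch copy standing in for /etc/haproxy/conf.d/keystone_service.
cat > /tmp/keystone_service.demo <<'EOF'
frontend keystone_service-front-1
    bind ssl crt /etc/ssl/private/haproxy.pem ciphers ECDH+AESGCM
    reqadd X-Forwarded-Proto:\ https
EOF
# Strip the ssl/cipher material from the bind line and downgrade the
# forwarded protocol from https to http.
sed -i -e 's/ ssl crt [^ ]*//' -e 's/ ciphers .*$//' -e 's/https$/http/' \
    /tmp/keystone_service.demo
cat /tmp/keystone_service.demo
```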

After making the changes, restart the haproxy service using the command below.

# systemctl restart haproxy

After implementing this workaround, you will be able to run all openstack commands against the public endpoints over the HTTP protocol.
