[openstack-announce] [new][openstackansible] openstack-ansible 14.0.0 release (newton)

no-reply at openstack.org no-reply at openstack.org
Thu Oct 20 18:24:06 UTC 2016


We are eager to announce the release of:

openstack-ansible 14.0.0: Ansible playbooks for deploying OpenStack

This release is part of the newton release series.

The source is available from:

    http://git.openstack.org/cgit/openstack/openstack-ansible

Download the package from:

    https://tarballs.openstack.org/openstack-ansible/

For more details, please see below.

14.0.0
^^^^^^


New Features
************

* LXC containers will now have a proper RFC1034/5 hostname set
  during post build tasks. A localhost entry for 127.0.1.1 will be
  created by converting all of the "_" in the "inventory_hostname" to
  "-". Containers will be created with a default domain of
  *openstack.local*. This domain name can be customized to meet your
  deployment needs by setting the option "lxc_container_domain".
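
  For example, a deployer could set a site-specific domain in
  "/etc/openstack_deploy/user_variables.yml" (the domain value shown is
  illustrative only):

     lxc_container_domain: "cloud.example.com"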

* The option "openstack_domain" has been added to the
  **openstack_hosts** role. This option is used to set up proper
  hostname entries for all hosts within a given OpenStack deployment.

* The **openstack_hosts** role will set up an RFC1034/5 hostname and
  create an alias for all hosts in inventory.

* Added new parameter "cirros_img_disk_format" to support disk
  formats other than qcow2.

* Ceilometer can now use Gnocchi for storage. By default this is
  disabled. To enable the service, set "ceilometer_gnocchi_enabled:
  yes". See the Gnocchi role documentation for more details.

* The os_horizon role now has support for the horizon ironic-ui
  dashboard. The dashboard may be enabled by setting
  "horizon_enable_ironic_ui" to "True" in
  "/etc/openstack_deploy/user_variables.yml".

* Adds support for the horizon ironic-ui dashboard. The dashboard
  will be automatically enabled if any ironic hosts are defined.

* The os_horizon role now has support for the horizon magnum-ui
  dashboard. The dashboard may be enabled by setting
  "horizon_enable_magnum_ui" to "True" in
  "/etc/openstack_deploy/user_variables.yml".

* Adds support for the horizon magnum-ui dashboard. The dashboard
  will be automatically enabled if any magnum hosts are defined.

* The "horizon_keystone_admin_roles" variable is added to support
  the "OPENSTACK_KEYSTONE_ADMIN_ROLES" list in the
  horizon_local_settings.py file.

* A new variable has been added to allow a deployer to control the
  restart of containers via the handler. This new option is
  "lxc_container_allow_restarts" and has a default of "yes". If a
  deployer wishes to disable the auto-restart functionality they can
  set this value to "no" and automatic container restarts that are not
  absolutely required will be disabled.

* Experimental support has been added to allow the deployment of the
  OpenStack Magnum service when hosts are present in the host group
  "magnum-infra_hosts".

* Deployers can now blacklist certain Nova extensions by providing a
  list of such extensions in "horizon_nova_extensions_blacklist"
  variable, for example:

     horizon_nova_extensions_blacklist:
       - "SimpleTenantUsage"

* The os_nova role can now deploy the nova-lxd hypervisor. This can
  be achieved by setting "nova_virt_type" to "lxd" on a per-host basis
  in "openstack_user_config.yml" or on a global basis in
  "user_variables.yml".

* The os_nova role can now deploy a custom
  /etc/libvirt/qemu.conf file by defining "qemu_conf_dict".
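
  A minimal sketch of such an override, assuming each key in the
  dictionary is rendered as a setting in qemu.conf (the setting shown
  is illustrative only):

     qemu_conf_dict:
       stdio_handler: "file"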

* The role now enables auditing during early boot to comply with the
  requirements in V-38438. By default, the GRUB configuration
  variables in "/etc/default/grub.d/" will be updated and the active
  "grub.cfg" will be updated.

  Deployers can opt-out of the change entirely by setting a variable:

     security_enable_audit_during_boot: no

  Deployers may opt-in for the change without automatically updating
  the active "grub.cfg" file by setting the following Ansible
  variables:

     security_enable_audit_during_boot: yes
     security_enable_grub_update: no

* A task was added to disable secure ICMP redirects per the
  requirements in V-38526. This change can cause problems in some
  environments, so it is disabled by default. Deployers can enable the
  task (which disables secure ICMP redirects) by setting
  "security_disable_icmpv4_redirects_secure" to "yes".

* A new task was added to disable ICMPv6 redirects per the
  requirements in V-38548. However, since this change can cause
  problems in running OpenStack environments, it is disabled by
  default. Deployers who wish to enable this task (and disable ICMPv6
  redirects) should set "security_disable_icmpv6_redirects" to "yes".

* AIDE is configured to skip the entire "/var" directory when it
  does the database initialization and when it performs checks. This
  reduces disk I/O and allows these jobs to complete faster.

  This also allows the initialization to become a blocking process and
  Ansible will wait for the initialization to complete prior to
  running the next task.

* In order to reduce the time taken for fact gathering, the default
  subset gathered has been reduced to a smaller set than the Ansible
  default. This may be changed by the deployer by setting the
  "ANSIBLE_GATHER_SUBSET" variable in the bash environment prior to
  executing any ansible commands.

* A new option has been added to "bootstrap-ansible.sh" to set the
  role fetch mode. The environment variable "ANSIBLE_ROLE_FETCH_MODE"
  sets how role dependencies are resolved.

* The auditd rules template included a rule that audited changes to
  the AppArmor policies, but the SELinux policy changes were not being
  audited. Any changes to SELinux policies in "/etc/selinux" are now
  being logged by auditd.

* The container cache preparation process now allows "copy-on-write"
  to be set as the "lxc_container_backing_method" when the
  "lxc_container_backing_store" is set to "lvm". When this is set a
  base container will be created using a name of the form
  *<linux-distribution>*-*<distribution-release>*-*<host-cpu-architecture>*.
  The container will be stopped as it is not used for anything except
  to be a backing store for all other containers which will be based
  on a snapshot of the base container.

* When using copy-on-write backing stores for containers, the base
  container name may be set using the variable
  "lxc_container_base_name", which defaults to
  *<linux-distribution>*-*<distribution-release>*-*<host-cpu-architecture>*.

* The container cache preparation process now allows "overlayfs" to
  be set as the "lxc_container_backing_store". When this is set a base
  container will be created using a name of the form
  *<linux-distribution>*-*<distribution-release>*-*<host-cpu-architecture>*.
  The container will be stopped as it is not used for anything except
  to be a backing store for all other containers which will be based
  on a snapshot of the base container. The "overlayfs" backing store
  is not recommended for production use unless the host kernel
  version is 3.18 or higher.

* Containers will now bind mount all logs to the physical host
  machine in the "/openstack/log/{{ inventory_hostname }}" location.
  This change will ensure containers using a block backed file system
  (lvm, zfs, btrfs) do not run into issues with full file systems due
  to logging.

* Added new variable "tempest_img_name".

* Added new variable "tempest_img_url". This variable replaces
  "cirros_tgz_url" and "cirros_img_url".

* Added new variable "tempest_image_file". This variable replaces
  the hard-coded value for the "img_file" setting in tempest.conf.j2.
  This will allow users to specify images other than cirros.

* Added new variable "tempest_img_disk_format". This variable
  replaces "cirros_img_disk_format".

* The "rsyslog_server" role now has support for CentOS 7.

* Support has been added to install the ceph_client packages and
  dependencies from Ceph.com, Ubuntu Cloud Archive (UCA), or the
  operating system's default repository.

  The "ceph_pkg_source" variable controls the install source for the
  Ceph packages. Valid values include:

  * "ceph": This option installs Ceph from a ceph.com repo.
    Additional variables to adjust items such as Ceph release and
    regional download mirror can be found in the variables files.

  * "uca": This option installs Ceph from the Ubuntu Cloud Archive.
    Additional variables to adjust items such as the OpenStack/Ceph
    release can be found in the variables files.

  * "distro": This options installs Ceph from the operating system's
    default repository and unlike the other options does not attempt
    to manage package keys or add additional package repositories.
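
  For example, to install the Ceph client packages from the Ubuntu
  Cloud Archive, the following could be set in
  "/etc/openstack_deploy/user_variables.yml":

     ceph_pkg_source: "uca"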

* The pip_install role can now configure pip to be locked down to
  the repository built by OpenStack-Ansible. To enable the lockdown
  configuration, deployers may set "pip_lock_to_internal_repo" to
  "true" in "/etc/openstack_deploy/user_variables.yml".

* The *dynamic_inventory.py* file now takes a new argument, "--
  check", which will run the inventory build without writing any files
  to the file system. This is useful for checking to make sure your
  configuration does not contain known errors prior to running Ansible
  commands.

* The ability to support MultiStrOps has been added to the
  config_template action plugin. This change updates the parser to use
  the "set()" type to determine if values within a given key are to be
  rendered as "MultiStrOps". If an override is used in an INI config
  file the set type is defined using the standard yaml construct of
  "?" as the item marker.

     # Example Override Entries
     Section:
       typical_list_things:
         - 1
         - 2
       multistrops_things:
         ? a
         ? b

     # Example Rendered Config:
     [Section]
     typical_list_things = 1,2
     multistrops_things = a
     multistrops_things = b

* Although the STIG requires martian packets to be logged, the
  logging is now disabled by default. The logs can quickly fill up a
  syslog server or make a physical console unusable.

  Deployers that need this logging enabled will need to set the
  following Ansible variable:

     security_sysctl_enable_martian_logging: yes

* The "rabbitmq_server" now supports a configurable inventory host
  group. Deployers can override the "rabbitmq_host_group" variable if
  they wish to use the role to create additional RabbitMQ clusters on
  a custom host group.

* The "lxc-container-create" role now consumes the variable
  "lxc_container_bind_mounts" which should contain a list of bind
  mounts to apply to a newly created container. The appropriate host
  and container directory will be created and the configuration
  applied to the container config. This feature is designed to be used
  in group_vars to ensure that containers are fully prepared at the
  time they are created, thus cutting down the number of times
  containers are restarted during deployments and upgrades.

* The "lxc-container-create" role now consumes the variable
  "lxc_container_config_list" which should contain a list of the
  entries which should be added to the LXC container config file when
  the container is created. This feature is designed to be used in
  group_vars to ensure that containers are fully prepared at the time
  they are created, thus cutting down the number of times containers
  are restarted during deployments and upgrades.

* The "lxc-container-create" role now consumes the variable
  "lxc_container_commands" which should contain any shell commands
  that should be executed in a newly created container. This feature
  is designed to be used in group_vars to ensure that containers are
  fully prepared at the time they are created, thus cutting down the
  number of times containers are restarted during deployments and
  upgrades.
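
  An illustrative group_vars sketch combining these three variables
  follows. The bind mount key names and the exact value formats are
  assumptions made for illustration and may differ from the role's
  actual schema:

     lxc_container_bind_mounts:
       - host_directory: "/openstack/backup/{{ inventory_hostname }}"
         container_directory: "/var/backup"
     lxc_container_config_list:
       - "lxc.start.auto=1"
     lxc_container_commands: |
       echo "container prepared at create time" > /root/.prepared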

* The container creation process now allows "copy-on-write" to be
  set as the "lxc_container_backing_method" when the
  "lxc_container_backing_store" is set to "lvm". When this is set it
  will use a snapshot of the base container to build the containers.

* The container creation process now allows "overlayfs" to be set as
  the "lxc_container_backing_store". When this is set it will use a
  snapshot of the base container to build the containers. The
  "overlayfs" backing store is not recommended for production use
  unless the host kernel version is 3.18 or higher.

* LXC containers will now generate a fixed MAC address on all
  network interfaces when the option *lxc_container_fixed_mac* is set
  to **true**. This feature was implemented to resolve issues with
  dynamic MAC addresses in containers, generally experienced at scale
  with network intensive services.

* All of the database and database user creation tasks have been
  moved from the roles into the playbooks. This allows the roles to be
  tested independently of the deployed database and also allows the
  roles to be used independently of infrastructure choices made by the
  integrated OSA project.

* Host security hardening is now applied by default using the
  "openstack-ansible-security" role. Deployers can opt out by setting
  the "apply_security_hardening" Ansible variable to "false". For more
  information about the role and the changes it makes, refer to the
  openstack-ansible-security documentation
  (http://docs.openstack.org/developer/openstack-ansible-security/).

* If there are swift hosts in the environment, then the value for
  "cinder_service_backup_program_enabled" will automatically be set to
  "True". This negates the need to set this variable in
  "user_variables.yml", but the value may still be overridden at the
  deployer's discretion.

* If there are swift hosts in the environment, then the value for
  "glance_default_store" will automatically be set to "swift". This
  negates the need to set this variable in "user_variables.yml", but
  the value may still be overridden at the deployer's discretion.

* The os_nova role can now detect a PowerNV environment and set the
  virtualization type to 'kvm'.

* The security role now has tasks that will disable the graphical
  interface on a server using upstart (Ubuntu 14.04) or systemd
  (Ubuntu 16.04 and CentOS 7). These changes take effect after a
  reboot.

  Deployers that need a graphical interface will need to set the
  following Ansible variable:

     security_disable_x_windows: no

* YAML files used for ceilometer configuration will now allow a
  deployer to override a given list. If an override is provided that
  matches an already defined list in one of the ceilometer default
  YAML files, the entire list will be replaced by the provided
  override. Previously, nested lists of lists within the default
  ceilometer configuration files would be extended should a deployer
  provide an override matching an existing pipeline. Extending the
  defaults had a high probability of causing undesirable outcomes and
  was very unpredictable.

* An Ansible task was added to disable the "rdisc" service on CentOS
  systems if the service is installed.

  Deployers can opt-out of this change by setting
  "security_disable_rdisc" to "no".

* Whether ceilometer should be enabled by default for each service
  is now dynamically determined based on whether there are any
  ceilometer hosts/containers deployed. This behaviour can still be
  overridden by toggling "<service>_ceilometer_enabled" in
  "/etc/openstack_deploy/user_variables.yml".

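  For example, to force the integration on for one service regardless
  of inventory (using cinder as an assumed value for the "<service>"
  prefix):

     cinder_ceilometer_enabled: True
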
* The "os_neutron" role now determines the default configuration for
  openvswitch-agent "tunnel_types" and the presence or absence of
  "local_ip" configuration based on the value of
  "neutron_ml2_drivers_type". Deployers may directly control this
  configuration by overriding the "neutron_tunnel_types" variable.

* The "os_neutron" role now configures neutron ml2 to load the
  "l2_population" mechanism driver by default based on the value of
  "neutron_l2_population". Deployers may directly control the neutron
  ml2 mechanism drivers list by overriding the "mechanisms" variable
  in the "neutron_plugins" dictionary.

* LBaaSv2 is now enabled by default in all-in-one (AIO) deployments.

* The Linux Security Module (LSM) that is appropriate for the Linux
  distribution in use will be automatically enabled by the security
  role by default. Deployers can opt out of this change by setting the
  following Ansible variable:

     security_enable_linux_security_module: False

  The documentation for STIG V-51337 has more information about how
  each LSM is enabled along with special notes for SELinux.

* An export flag has been added to the "inventory-manage.py" script.
  This flag allows exporting of host and network information from an
  OpenStack-Ansible inventory for import into another system, or an
  alternate view of the existing data. See the developer docs for more
  details.

* Variable "ceph_extra_confs" has been expanded to support
  retrieving additional ceph.conf and keyrings from multiple ceph
  clusters automatically.

* Additional libvirt ceph client secrets can be defined to support
  attaching volumes from different ceph clusters.

* New variable "ceph_extra_confs" may be defined to support
  deployment of extra Ceph config files.  This is useful for cinder
  deployments that utilize multiple Ceph clusters as cinder backends.

* The "py_pkgs" lookup plugin now has strict ordering for
  requirement files discovered. These files are used to add additional
  requirements to the python packages discovered. The order is defined
  by the constant "REQUIREMENTS_FILE_TYPES", which contains the
  following entries: 'test-requirements.txt', 'dev-requirements.txt',
  'requirements.txt', 'global-requirements.txt', 'global-requirement-
  pins.txt'. The items in this list are arranged from lowest to
  highest priority.

* The "openstack-ansible-galera_server" role will now prevent
  deployers from changing the "galera_cluster_name" variable on
  clusters that already have a value set in a running galera cluster.
  You can set the new "galera_force_change_cluster_name" variable to
  "True" to force the "galera_cluster_name" variable to be changed. We
  recommend setting this by running the galera-install.yml playbook
  with "-e galera_force_change_cluster_name=True", to avoid changing
  the "galera_cluster_name" variable unintentionally. Use with
  caution, changing the "galera_cluster_name" value can cause your
  cluster to fail, as the nodes won't join if restarted sequentially.

* The repo build process is now able to make use of a pre-staged git
  cache. If the "/var/www/repo/openstackgit" folder on the repo server
  is found to contain existing git clones then they will be updated if
  they do not already contain the required SHA for the build.

* The repo build process is now able to synchronize a git cache from
  the deployment node to the repo server. The git cache path on the
  deployment node is set using the variable "repo_build_git_cache". If
  the deployment node hosts the repo container, then the folder will
  be symlinked into the bind mount for the repo container. If the
  deployment node does not host the repo container, then the contents
  of the folder will be synchronised into the repo container.

* The "os_glance" role now supports Ubuntu 16.04 and SystemD.

* Gnocchi is available for deploy as a metrics storage service. At
  this time it does not integrate with Aodh or Ceilometer. To deploy
  Aodh or Ceilometer to use Gnocchi as a storage / query API, each
  must be configured appropriately with the use of overrides as
  described in the configuration guides for each of these services.

* CentOS 7 and Ubuntu 16.04 support have been added to the "haproxy"
  role.

* The "haproxy" role installs *hatop* from source to ensure that the
  same operator tooling is available across all supported
  distributions. The download URL for the source can be set using the
  variable "haproxy_hatop_download_url".

* Added a boolean var *haproxy_service_enabled* to the
  *haproxy_service_configs* dict to support toggling haproxy endpoints
  on/off.

* Added a new "haproxy_extra_services" var which will allow extra
  haproxy endpoint additions.

* The repo server will now be used as a package manager cache.

* The HAProxy role provided by OpenStack-Ansible now terminates SSL
  using a self-signed certificate by default. While this can be
  disabled the inclusion of SSL services on all public endpoints as a
  default will help make deployments more secure without any
  additional user interaction. More information on SSL and certificate
  generation can be found here (http://docs.openstack.org/developer
  /openstack-ansible/install-guide/configure-haproxy.html#securing-
  haproxy-communication-with-ssl-certificates).

* The "rabbitmq_server" role now supports configuring HiPE
  compilation of the RabbitMQ server Erlang code. This configuration
  option may improve server performance for some workloads and
  hardware. Deployers can override the "rabbitmq_hipe_compile"
  variable, setting a value of "True" if they wish to enable this
  feature.

* Horizon now has the ability to set arbitrary configuration options
  using global option "horizon_config_overrides" in YAML format. The
  overrides follow the same pattern found within the other OpenStack
  service overrides. General documentation on overrides can be found
  here (http://docs.openstack.org/developer/openstack-ansible/install-
  guide/configure-openstack.html#overriding-openstack-configuration-
  defaults).
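
  A minimal illustrative override, following the standard override
  pattern (the setting and value shown are examples only):

     horizon_config_overrides:
       SESSION_TIMEOUT: 1800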

* The "os_horizon" role now supports configuration of custom themes.
  Deployers can use the new "horizon_custom_themes" and
  "horizon_default_theme" variables to configure the dashboard with
  custom themes and default to a specific theme respectively.

* CentOS 7 support has been added to the "galera_server" role.

* Implemented support for Ubuntu 16.04 (Xenial). The
  percona-xtrabackup packages will be installed from distro
  repositories instead of upstream Percona repositories, due to the
  lack of available packages upstream at the time of implementing
  this feature.

* A task was added that restricts ICMPv4 redirects to meet the
  requirements of V-38524 in the STIG. This configuration is disabled
  by default since it could cause issues with LXC in some
  environments.

  Deployers can enable this configuration by setting an Ansible
  variable:

     security_disable_icmpv4_redirects: yes

* The audit rules added by the security role now have key fields
  that make it easier to link the audit log entry to the audit rule
  that caused it to appear.

* pip can be installed via the deployment host using the new
  variable "pip_offline_install". This can be useful in environments
  where the containers lack internet connectivity. Please refer to the
  limited connectivity installation guide
  (http://docs.openstack.org/developer/openstack-ansible/install-
  guide/app-no-internet-connectivity.html#install-pip-through-
  deployment-host) for more information.

* The env.d directory included with OpenStack-Ansible is now used as
  the first source for the environment skeleton, and
  "/etc/openstack_deploy/env.d" will be used only to override values.
  Deployers without customizations will no longer need to copy the
  env.d directory to /etc/openstack_deploy. As a result, the env.d
  copy operation has been removed from the node bootstrap role.

* A new debug flag has been added to "dynamic_inventory.py". This
  should make it easier to understand what's happening with the
  inventory script, and provide a way to gather output for more
  detailed bug reports. See the developer docs for more details.

* The "ironic" role now supports Ubuntu 16.04 and SystemD.

* Experimental support has been added to allow the deployment of the
  OpenStack Bare Metal Service (Ironic). Details for how to set it up
  are available in the OpenStack-Ansible Install Guide for Ironic
  (http://docs.openstack.org/developer/openstack-ansible/install-guide
  /configure-ironic.html).

* To ensure the deployment system remains clean the Ansible
  execution environment is contained within a virtual environment. The
  virtual environment is created at "/opt/ansible-runtime" and the
  "ansible.*" CLI commands are linked within /usr/local/bin to ensure
  there is no interruption in the deployer workflow.

* There is a new default configuration for keepalived, supporting
  more than 2 nodes.

* In order to make use of the latest stable keepalived version, the
  variable "keepalived_use_latest_stable" must be set to "True".

* The ability to support login user domain and login project domain
  has been added to the keystone module.

     # Example usage
     - keystone:
         command: ensure_user
         endpoint: "{{ keystone_admin_endpoint }}"
         login_user: admin
         login_password: admin
         login_project_name: admin
         login_user_domain_name: custom
         login_project_domain_name: custom
         user_name: demo
         password: demo
         project_name: demo
         domain_name: custom

* The new LBaaS v2 dashboard is available in Horizon. Deployers can
  enable the panel by setting the following Ansible variable:

     horizon_enable_neutron_lbaas: True

* The LBaaSv2 service provider configuration can now be adjusted
  with the "neutron_lbaasv2_service_provider" variable. This allows a
  deployer to choose to deploy LBaaSv2 with Octavia in a future
  version.

* The config_template action plugin now has a new option to toggle
  list extension for JSON or YAML formats. The new option is
  "list_extend" and is a boolean. The default is True which maintains
  the existing API.

* The lxc_hosts role can now make use of a primary and secondary gpg
  keyserver for gpg validation of the downloaded cache. Setting the
  servers to use can be done using the
  "lxc_image_cache_primary_keyserver" and
  "lxc_image_cache_secondary_keyserver" variables.

* The "lxc_container_create" role will now build a container based
  on the distro of the host OS.

* The "lxc_container_create" role now supports Ubuntu 14.04, 16.04,
  and RHEL/CentOS 7

* The LXC container creation process now has a configurable delay
  for the task which waits for the container to start. The variable
  "lxc_container_ssh_delay" can be set to change the default delay of
  five seconds.

* The "lxc_host" cache prep has been updated to use the LXC download
  template. This removes the last remaining dependency the project has
  on the rpc-trusty-container.tgz image (http://rpc-
  repo.rackspace.com/container_images/rpc-trusty-container.tgz).

* The "lxc_host" role will build lxc cache using the download
  template built from images found here
  (https://images.linuxcontainers.org). These images are upstream
  builds from the greater LXC/D community.

* The "lxc_host" role introduces support for CentOS 7 and Ubuntu
  16.04 container types.

* The inventory script will now dynamically populate the "lxc_hosts"
  group based on which machines have container affinities
  defined. This group is not allowed in user-defined configuration.

* Neutron HA router capabilities in Horizon will be enabled
  automatically if the neutron plugin type is ML2 and the environment
  has two or more L3 agent nodes.

* Horizon now has a boolean variable named
  "horizon_enable_ha_router" to enable Neutron HA router management.

* Horizon's IPv6 support is now enabled by default. This allows
  users to manage subnets with IPv6 addresses within the Horizon
  interface. Deployers can disable IPv6 support in Horizon by setting
  the following variable:

     horizon_enable_ipv6: False

  Please note: Horizon will still display IPv6 addresses in various
  panels with IPv6 support disabled. However, it will not allow any
  direct management of IPv6 configuration.

* memcached now logs with multiple levels of verbosity, depending on
  the user variables. Setting "debug: True" enables maximum verbosity
  while setting "verbose: True" logs with an intermediate level.

* The openstack-ansible-memcached_server role includes a new
  override, "memcached_connections", which is automatically calculated
  as the memcached connection limit plus an additional 1k and is used
  to configure the OS nofile limit. Without a proper nofile limit,
  memcached can crash when handling higher parallel TCP/memcache
  connection counts.

* The repo build process is now able to support building and
  synchronizing artifacts for multiple CPU architectures. Build
  artifacts are now tagged with the appropriate CPU architecture by
  default, and synchronization of build artifacts from secondary,
  architecture-specific repo servers back to the primary repo server
  is supported.

* The repo install process is now able to support building and
  synchronizing artifacts for multiple CPU architectures. To support
  multiple architectures, one or more repo servers must be created for
  each CPU architecture in the deployment. When multiple CPU
  architectures are detected among the repo servers, the repo-
  discovery process will automatically assign a repo master to perform
  the build process for each architecture.

* CentOS 7 support has been added to the "galera_client" role.

* Whether the Neutron DHCP Agent, Metadata Agent or LinuxBridge
  Agent should be enabled is now dynamically determined based on the
  "neutron_plugin_type" and the "neutron_ml2_mechanism_drivers" that
  are set. This aims to simplify the configuration of Neutron services
  and eliminate the need for deployers to override the entire
  "neutron_services" dict variable to disable these services.

* Neutron BGP dynamic routing plugin can now optionally be deployed
  and configured. Please see OpenStack Networking Guide: BGP dynamic
  routing (http://docs.openstack.org/networking-guide/config-bgp-
  dynamic-routing.html) for details about what the service is and what
  it provides.

* The Project Calico Neutron networking plugin is now integrated
  into the deployment. For setup instructions please see "os_neutron"
  role documentation.

* A conditional has been added to the "_local_ip" settings used in
  the "neutron_local_ip" which removes the hard requirement for an
  overlay network to be set within a deployment. If no overlay network
  is set within the deployment the "local_ip" will be set to the value
  of "ansible_ssh_host".

* Deployers can now configure tempest public and private networks by
  setting the following variables: 'tempest_private_net_provider_type'
  to either vxlan or vlan, and 'tempest_public_net_provider_type' to
  flat or vlan. Depending on what the deployer sets these variables
  to, they may also need to update other variables accordingly; this
  mainly involves 'tempest_public_net_physical_type' and
  'tempest_public_net_seg_id'. Please refer to
  http://docs.openstack.org/mitaka/networking-guide/intro-basic-
  networking.html for more neutron networking information.
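
  An illustrative combination of these variables in
  "user_variables.yml" (values are examples only and must match the
  deployment's actual network configuration):

     tempest_private_net_provider_type: "vxlan"
     tempest_public_net_provider_type: "flat"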

* The Project Calico Neutron networking plugin is now integrated
  into the "os_neutron" role. This can be activated using the
  instructions located in the role documentation.

* The "os_neutron" role will now default to the OVS firewall driver
  when "neutron_plugin_type" is "ml2.ovs" and the host is running
  Ubuntu 16.04 on PowerVM. To override this default behavior,
  deployers should define "neutron_ml2_conf_ini_overrides" and
  "neutron_openvswitch_agent_ini_overrides" in "user_variables.yml",
  for example:

     neutron_ml2_conf_ini_overrides:
       securitygroup:
         firewall_driver: neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
     neutron_openvswitch_agent_ini_overrides:
       securitygroup:
         firewall_driver: iptables_hybrid

* Neutron VPN as a Service (VPNaaS) can now optionally be deployed
  and configured. Please see the OpenStack Networking Guide
  (http://docs.openstack.org/mitaka/networking-guide/) for details
  about what the service is and what it provides. See the VPNaaS
  Install Guide (http://docs.openstack.org/developer/openstack-ansible
  /install-guide/configure-network-services.html#virtual-private-
  network-service-optional) for implementation details.

* Support for Neutron distributed virtual routing has been added to
  the "os_neutron" role. This includes the implementation of
  Networking Guide's suggested agent configuration. This feature may
  be activated by setting "neutron_plugin_type: ml2.ovs.dvr" in
  "/etc/openstack_deploy/user_variables.yml".

* The horizon next generation instance management panels have been
  enabled by default. This changes horizon to use the upstream
  defaults instead of the legacy panels. Documentation can be found
  here
  (http://docs.openstack.org/developer/horizon/topics/settings.html
  #launch-instance-ng-enabled).

* The nova SSH public key distribution has been made significantly
  faster, especially when deploying against very large clusters. To support
  larger clusters the role has moved away from the "authorized_key"
  module and is now generating a script to insert keys that may be
  missing from the authorized keys file. The script is saved on all
  nova compute nodes and can be found at "/usr/local/bin/openstack-
  nova-key.sh". If ever there is a need to reinsert keys or fix issues
  on a given compute node the script can be executed at any time
  without directly running the ansible playbooks or roles.

* The os_nova role can now detect and support basic deployment of a
  PowerVM environment. This sets the virtualization type to 'powervm'
  and installs/updates the PowerVM NovaLink package and nova-powervm
  driver.

* Nova UCA repository support is implemented by default. This allows
  users to benefit from the updated packages for KVM. The
  "nova_uca_enable" variable controls the install source for the KVM
  packages. By default this value is set to "True" to make use of the
  UCA repository. Users can set it to "False" to disable this.

* A new configuration parameter "security_ntp_bind_local_interfaces"
  was added to the security role to restrict the network interface to
  which chronyd will listen for NTP requests.

* The LXC container creation and modification process now supports
  online network additions. This ensures a container remains online
  when additional networks are added to a system.

* Open vSwitch driver support has been implemented. This includes
  the implementation of the appropriate Neutron configuration and
  package installation. This feature may be activated by setting
  "neutron_plugin_type: ml2.ovs" in
  "/etc/openstack_deploy/user_variables.yml".

* An opportunistic Ansible execution strategy has been implemented.
  This allows the Ansible linear strategy to skip tasks with
  conditionals faster by never queuing the task when the conditional
  is evaluated to be false.

* The Ansible SSH plugin has been modified to support running
  commands within containers without having to directly ssh into them.
  The change will detect presence of a container. If a container is
  found the physical host will be used as the SSH target and commands
  will be run directly. This will improve system reliability and speed
  while also opening up the possibility for SSH to be disabled from
  within the container itself.

* Added "horizon_apache_custom_log_format" tunable to the os-horizon
  role for changing CustomLog format. Default is "combined".

* Added keystone_apache_custom_log_format tunable for changing
  CustomLog format. Default is "combined".

* Apache MPM tunable support has been added to the os-keystone role
  in order to allow MPM thread tuning. Default values reflect the
  current Ubuntu default settings:

     keystone_httpd_mpm_backend: event
     keystone_httpd_mpm_start_servers: 2
     keystone_httpd_mpm_min_spare_threads: 25
     keystone_httpd_mpm_max_spare_threads: 75
     keystone_httpd_mpm_thread_limit: 64
     keystone_httpd_mpm_thread_child: 25
     keystone_httpd_mpm_max_requests: 150
     keystone_httpd_mpm_max_conn_child: 0

* Introduced option to deploy Keystone under Uwsgi. A new variable
  "keystone_mod_wsgi_enabled" is introduced to toggle this behavior.
  The default is "true" which continues to deploy with mod_wsgi for
  Apache. The ports used by Uwsgi for socket and http connection for
  both public and admin Keystone services are configurable (see also
  the "keystone_uwsgi_ports" dictionary variable). Other Uwsgi
  configuration can be overridden by using the
  "keystone_uwsgi_ini_overrides" variable as documented under
  "Overriding OpenStack configuration defaults" in the OpenStack-
  Ansible Install Guide. Federation features should be considered
  _experimental_ with this configuration at this time.

* Introduced option to deploy Keystone behind Nginx. A new variable
  "keystone_apache_enabled" is introduced to toggle this behavior. The
  default is "true" which continues to deploy with Apache. Additional
  configuration can be delivered to Nginx through the use of the
  "keystone_nginx_extra_conf" list variable. Federation features are
  not supported with this configuration at this time. Use of this
  option requires "keystone_mod_wsgi_enabled" to be set to "false"
  which will deploy Keystone under Uwsgi.

* The "os_cinder" role now supports Ubuntu 16.04.

* CentOS7/RHEL support has been added to the os_cinder role.

* CentOS7/RHEL support has been added to the os_glance role.

* CentOS7/RHEL support has been added to the os_keystone role.

* The "os_magnum" role now supports deployment on Ubuntu 16.04 using
  systemd.

* The galera_client role now supports the ability to configure
  whether apt/yum tasks install the latest available package, or just
  ensure that the package is present. The default action is to ensure
  that the latest package is present. The action taken may be changed
  to only ensure that the package is present by setting
  "galera_client_package_state" to "present".

* The ceph_client role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "ceph_client_package_state" to "present".

* The os_ironic role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "ironic_package_state" to "present".

* The os_nova role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "nova_package_state" to "present".

* The memcached_server role now supports the ability to configure
  whether apt/yum tasks install the latest available package, or just
  ensure that the package is present. The default action is to ensure
  that the latest package is present. The action taken may be changed
  to only ensure that the package is present by setting
  "memcached_package_state" to "present".

* The os_heat role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "heat_package_state" to "present".

* The rsyslog_server role now supports the ability to configure
  whether apt/yum tasks install the latest available package, or just
  ensure that the package is present. The default action is to ensure
  that the latest package is present. The action taken may be changed
  to only ensure that the package is present by setting
  "rsyslog_server_package_state" to "present".

* The pip_install role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "pip_install_package_state" to "present".

* The repo_build role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "repo_build_package_state" to "present".

* The os_rally role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "rally_package_state" to "present".

* The os_glance role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "glance_package_state" to "present".

* The security role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "security_package_state" to "present".

* A new global option to control all package install states has been
  implemented. The default action for all distribution package
  installations is to ensure that the latest package is installed.
  This may be changed to only verify if the package is present by
  setting "package_state" to "present".
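
  For example, the global behaviour can be pinned in
  "/etc/openstack_deploy/user_variables.yml":

     package_state: "present"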

* The os_keystone role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "keystone_package_state" to "present".

* The os_cinder role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "cinder_package_state" to "present".

* The os_gnocchi role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "gnocchi_package_state" to "present".

* The os_magnum role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "magnum_package_state" to "present".

* The rsyslog_client role now supports the ability to configure
  whether apt/yum tasks install the latest available package, or just
  ensure that the package is present. The default action is to ensure
  that the latest package is present. The action taken may be changed
  to only ensure that the package is present by setting
  "rsyslog_client_package_state" to "present".

* The os_sahara role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "sahara_package_state" to "present".

* The repo_server role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "repo_server_package_state" to "present".

* The haproxy_server role now supports the ability to configure
  whether apt/yum tasks install the latest available package, or just
  ensure that the package is present. The default action is to ensure
  that the latest package is present. The action taken may be changed
  to only ensure that the package is present by setting
  "haproxy_package_state" to "present".

* The os_aodh role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "aodh_package_state" to "present".

* The openstack_hosts role now supports the ability to configure
  whether apt/yum tasks install the latest available package, or just
  ensure that the package is present. The default action is to ensure
  that the latest package is present. The action taken may be changed
  to only ensure that the package is present by setting
  "openstack_hosts_package_state" to "present".

* The galera_server role now supports the ability to configure
  whether apt/yum tasks install the latest available package, or just
  ensure that the package is present. The default action is to ensure
  that the latest package is present. The action taken may be changed
  to only ensure that the package is present by setting
  "galera_server_package_state" to "present".

* The rabbitmq_server role now supports the ability to configure
  whether apt/yum tasks install the latest available package, or just
  ensure that the package is present. The default action is to ensure
  that the latest package is present. The action taken may be changed
  to only ensure that the package is present by setting
  "rabbitmq_package_state" to "present".

* The lxc_hosts role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "lxc_hosts_package_state" to "present".

* The os_ceilometer role now supports the ability to configure
  whether apt/yum tasks install the latest available package, or just
  ensure that the package is present. The default action is to ensure
  that the latest package is present. The action taken may be changed
  to only ensure that the package is present by setting
  "ceilometer_package_state" to "present".

* The os_swift role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "swift_package_state" to "present".

* The os_neutron role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "neutron_package_state" to "present".

* The os_horizon role now supports the ability to configure whether
  apt/yum tasks install the latest available package, or just ensure
  that the package is present. The default action is to ensure that
  the latest package is present. The action taken may be changed to
  only ensure that the package is present by setting
  "horizon_package_state" to "present".

* The PATH environment variable that is configured on the remote
  system can now be set using the "openstack_host_environment_path"
  list variable.

* The repo build process now has the ability to store the pip
  sources within the build archive. This ability is useful when
  deploying environments that are "multi-architecture", "multi-
  distro", or "multi-interpreter" where specific pre-built wheels may
  not be enough to support all of the deployment. To enable the
  ability to store the python source code within a given release, set
  the new option "repo_build_store_pip_sources" to "true".

* The repo server now has a Package Cache service for distribution
  packages. To leverage the cache, deployers will need to configure
  the package manager on all hosts to use the cache as a proxy. If a
  deployer would prefer to disable this service, the variable
  "repo_pkg_cache_enabled" should be set to "false".

* The "rabbitmq_server" role now supports deployer override of the
  RabbitMQ policies applied to the cluster. Deployers can override the
  "rabbitmq_policies" variable, providing a list of desired policies.

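  An illustrative override sketch; the policy fields shown mirror
  common RabbitMQ policy attributes and are assumptions, so check the
  role defaults for the exact structure expected:

     rabbitmq_policies:
       - name: "HA"
         pattern: '^(?!amq\.).*'
         priority: 0
         tags:
           ha-mode: all
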
* The RabbitMQ Management UI is now available through HAProxy on
  port 15672. The default userid is "monitoring". This user can be
  modified by changing the parameter "rabbitmq_monitoring_userid" in
  the file "user_variables.yml". Please note that ACLs have been added
  to this HAProxy service by default, such that it may only be
  accessed by common internal clients. See
  "playbooks/vars/configs/haproxy_config.yml" for reference.

* Added a playbook for deploying Rally in the utility containers.

* Our general config options are now stored in a
  "/usr/local/bin/openstack-ansible.rc" file and will be sourced when
  the "openstack-ansible" wrapper is invoked. The RC file will read in
  BASH environment variables and, should any Ansible option be set
  that overlaps with our defaults, the provided value will be used.

* The LBaaSv2 device driver is now set by the Ansible variable
  "neutron_lbaasv2_device_driver". The default is set to use the
  "HaproxyNSDriver", which allows for agent-based load balancers.

* The GPG key checks for package verification in V-38476 are now
  working for Red Hat Enterprise Linux 7 in addition to CentOS 7. The
  checks only look for GPG keys from Red Hat and any other GPG keys,
  such as ones imported from the EPEL repository, are skipped.

* CentOS7 support has been added to the "rsyslog_client" role.

* The options of application logrotate configuration files are now
  configurable. "rsyslog_client_log_rotate_options" can be used to
  provide a list of directives, and
  "rsyslog_client_log_rotate_scripts" can be used to provide a list of
  postrotate, prerotate, firstaction, or lastaction scripts.
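
  For example, extra directives could be supplied through the options
  list (the directives shown are standard logrotate options and are
  illustrative only):

     rsyslog_client_log_rotate_options:
       - "compress"
       - "notifempty"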

* Experimental support has been added to allow the deployment of the
  Sahara data-processing service. To deploy Sahara, hosts should be
  present in the host group "sahara-infra_hosts".

* The Sahara dashboard is available in Horizon. Deployers can enable
  the panel by setting the following Ansible variable:

     horizon_enable_sahara_ui: True

* Tasks were added to search for any device files without a proper
  SELinux label on CentOS systems. If any such device files are
  found, the playbook execution will stop with an error message.

* The repo build process now selectively clones git repositories
  based on whether each OpenStack service group has any hosts in it.
  If there are no hosts in the group, the git repo for the service
  will not be cloned. This behaviour can be optionally changed to
  force all git repositories to be cloned by setting
  "repo_build_git_selective" to "no".

* The repo build process now selectively builds venvs based on
  whether each OpenStack service group has any hosts in it. If there
  are no hosts in the group, the venv will not be built. This
  behaviour can be optionally changed to force all venvs to be built
  by setting "repo_build_venv_selective" to "yes".

* The repo build process now selectively builds python packages
  based on whether each OpenStack service group has any hosts in it.
  If there are no hosts in the group, the list of python packages for
  the service will not be built. This behaviour can be optionally
  changed to force all python packages to be built by setting
  "repo_build_wheel_selective" to "no".

* A new variable is supported in the "neutron_services" dictionary
  called "service_conf_path". This variable enables services to deploy
  their config templates to paths outside of /etc/neutron by
  specifying a directory using the new variable.

* The openstack-ansible-security role supports the application of
  the Red Hat Enterprise Linux 6 STIG configurations to systems
  running CentOS 7 and Ubuntu 16.04 LTS.

* The "fallocate_reserve" option can now be set (in bytes or as a
  percentage) for swift by using the "swift_fallocate_reserve"
  variable in "/etc/openstack_deploy/user_variables.yml". This value
  is the amount of space to reserve on a disk to prevent a situation
  where swift is unable to remove objects due to a lack of available
  disk space to work with. The default value is 1% of the total disk
  size.
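
  For example, to reserve a larger margin than the default (the value
  shown is illustrative; a byte count may be used instead of a
  percentage):

     swift_fallocate_reserve: "2%"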

* The "openstack-ansible-os_swift" role will now prevent deployers
  from changing the "swift_hash_path_prefix" and
  "swift_hash_path_suffix" variables on clusters that already have a
  value set in "/etc/swift/swift.conf". You can set the new
  "swift_force_change_hashes" variable to "True" to force the
  "swift_hash_path_" variables to be changed. We recommend setting
  this by running the os-swift.yml playbook with "-e
  swift_force_change_hashes=True", to avoid changing the
  "swift_hash_path_" variables unintentionally. Use with caution,
  changing the "swift_hash_path_" values causes end-user impact.

* The "os_swift" role has 3 new variables that will allow a deployer
  to change the hard, soft and fs.file-max limits. The hard and soft
  limits are added to the limits.conf file for the swift system
  user. The fs.file-max settings are added to storage hosts via kernel
  tuning. The new options are "swift_hard_open_file_limits" (default
  10240), "swift_soft_open_file_limits" (default 4096), and
  "swift_max_file_limits" (default 24 times the value of
  "swift_hard_open_file_limits").

* The "pretend_min_part_hours_passed" option can now be passed to
  swift-ring-builder prior to performing a rebalance. This is set by
  the "swift_pretend_min_part_hours_passed" boolean variable. The
  default for this variable is False. We recommend setting this by
  running the os-swift.yml playbook with "-e
  swift_pretend_min_part_hours_passed=True", to avoid resetting
  "min_part_hours" unintentionally on every run. Setting
  "swift_pretend_min_part_hours_passed" to True will reset the clock
  on the last time a rebalance happened, thus circumventing the
  min_part_hours check. This should only be used with extreme caution.
  If you run this command and deploy rebalanced rings before a
  replication pass completes, you may introduce unavailability in your
  cluster. This has an end-user impact.

* While the default Python interpreter for swift is CPython, pypy is
  now an option. This change adds the ability to greatly improve swift
  performance without core code modifications. These changes have
  been implemented using the documentation provided by Intel and
  Swiftstack. Notes about the performance increase can be seen here
  (https://software.intel.com/en-us/blogs/2016/05/06/doubling-the-
  performance-of-openstack-swift-with-no-code-changes).

* Change the port for devices in the ring by adjusting the port
  value for services, hosts, or devices. This will not involve a
  rebalance of the ring.

* Changing the port for a device, or group of devices, carries a
  brief period of downtime to the swift storage services for those
  devices. The devices will be unavailable during the period between
  when the storage service restarts after the port update and when
  the ring is updated to match the new port.

* Enable the rsync module per object server drive by setting the
  "swift_rsync_module_per_drive" setting to "True". Set this to
  configure rsync and swift to use individual configuration per drive.
  This is required when disabling rsync to individual disks, for
  example in a disk-full scenario.
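
  A minimal override enabling this behaviour, set for instance in
  "/etc/openstack_deploy/user_variables.yml", would be:

     swift_rsync_module_per_drive: True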

* The "os_swift" role will now include the swift "staticweb"
  middleware by default.

* The os_swift role now allows the permissions for the log files
  created by the swift account, container and object servers to be
  set. The variable is "swift_syslog_log_perms" and is set to "0644"
  by default.

* Support added to allow deploying on ppc64le architecture using the
  Ubuntu distributions.

* Support has been added to allow the functional tests to pass when
  deploying on ppc64le architecture using the Ubuntu distributions.

* Support for the deployment of Unbound caching DNS resolvers has
  been added as an optional replacement for /etc/hosts management
  across all hosts in the environment. To enable the Unbound DNS
  containers, add "unbound_hosts" entries to the environment.

* The "repo_build" role now provides the ability to override the
  upper-constraints applied which are sourced from OpenStack and from
  the global-requirements-pins.txt file. The variable
  "repo_build_upper_constraints_overrides" can be populated with a
  list of upper constraints. This list will take the highest
  precedence in the constraints process, with the exception of the
  pins set in the git source SHAs.
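
  For example (the package pins shown are purely illustrative):

     repo_build_upper_constraints_overrides:
       - "lxml==3.6.4"
       - "pycrypto==2.6.1"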


Known Issues
************

* Deployments on ppc64le are limited to Ubuntu 16.04 for the Newton
  release of OpenStack-Ansible.

* The variables "haproxy_keepalived_(internal|external)_cidr" now
  has a default set to "169.254.(2|1).1/24". This is to prevent
  Ansible undefined variable warnings. Deployers must set values for
  these variables for a working haproxy with keepalived environment
  when using more than one haproxy node.
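
  Using the variable names given above, a sketch of such an override
  in "/etc/openstack_deploy/user_variables.yml" (the addresses are
  placeholders for the deployment's own VIPs) would be:

     haproxy_keepalived_external_cidr: "203.0.113.5/32"
     haproxy_keepalived_internal_cidr: "172.29.236.9/32"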

* In the latest stable version of keepalived there is a problem with
  the priority calculation when a deployer has more than five
  keepalived nodes. The problem causes the whole keepalived cluster to
  fail to work. To work around this issue it is recommended that
  deployers limit the number of keepalived nodes to no more than five
  or that the priority for each node is set as part of the
  configuration (cf. "haproxy_keepalived_vars_file" variable).

* Paramiko version 2.0 requires the Python cryptography library. New
  system packages must be installed for this library. For
  OpenStack-Ansible versions <12.0.12, <11.2.15, <13.0.2 the system
  packages must be installed on the **deployment host** manually by
  executing "apt-get install -y build-essential libssl-dev libffi-
  dev".


Upgrade Notes
*************

* LXC containers will now have a proper RFC1034/5 hostname set
  during post build tasks. A localhost entry for 127.0.1.1 will be
  created by converting all of the "_" in the "inventory_hostname" to
  "-". Containers will be created with a default domain of
  *openstack.local*. This domain name can be customized to meet your
  deployment needs by setting the option "lxc_container_domain".

* A new global variable has been created named "openstack_domain".
  This variable has a default value of "openstack.local".

* The "ca-certificates" package has been included in the LXC
  container build process in order to prevent issues related to trying
  to connect to public websites which make use of newer certificates
  than exist in the base CA certificate store.

* In order to reduce the time taken for fact gathering, the default
  subset gathered has been reduced to a smaller set than the Ansible
  default. This may be changed by the deployer by setting the
  "ANSIBLE_GATHER_SUBSET" variable in the bash environment prior to
  executing any ansible commands.

* The environment variable "FORKS" is no longer used. The standard
  Ansible environment variable "ANSIBLE_FORKS" should be used instead.

* The Galera client role now has a dependency on the apt package
  pinning role.

* The variable "security_audit_apparmor_changes" is now renamed to
  "security_audit_mac_changes" and is enabled by default. Setting
  "security_audit_mac_changes" to "no" will disable syscall auditing
  for any changes to AppArmor policies (in Ubuntu) or SELinux policies
  (in CentOS).

* When upgrading, deployers will need to ensure they have a backup of
  all logging from within the container prior to running the
  playbooks. If the logging node is present within the deployment, all
  logs should already be synced with the logging server and no action
  is required. As a pre-step it is recommended that deployers clean up
  logging directories from within containers prior to running the
  playbooks. After the playbooks have run, the bind mount will be in
  effect at "/var/log", which will mount over all previous log files
  and directories.

* Due to a new bind mount at "/var/log" all containers will be
  restarted. This is a required restart. It is recommended that
  deployers run the container restarts in serial to not impact
  production workloads.

* The default value of "service_credentials/os_endpoint_type" within
  ceilometer's configuration file has been changed to **internalURL**.
  This may be overridden through the use of the
  "ceilometer_ceilometer_conf_overrides" variable.

* The default database collation has changed from *utf8_unicode_ci*
  to *utf8_general_ci*. Existing databases and tables will need to be
  converted.

* The LXC container cache preparation process now copies package
  repository configuration from the host instead of implementing its
  own configuration. The following variables are therefore unnecessary
  and have been removed:

  * "lxc_container_template_main_apt_repo"

  * "lxc_container_template_security_apt_repo"

  * "lxc_container_template_apt_components"

* The LXC container cache preparation process now copies DNS
  resolution configuration from the host instead of implementing its
  own configuration. The "lxc_cache_resolvers" variable is therefore
  unnecessary and has been removed.

* The MariaDB wait_timeout setting is decreased to 1h to match the
  SQL Alchemy pool recycle timeout, in order to prevent unnecessary
  database session buildups.

* The variable "repo_server_packages" that defines the list of
  packages required to install a repo server has been replaced by
  "repo_server_distro_packages".

* If there are swift hosts in the environment, then the value for
  "cinder_service_backup_program_enabled" will automatically be set to
  "True". This negates the need to set this variable in
  "user_variables.yml", but the value may still be overridden at the
  deployer's discretion.

* If there are swift hosts in the environment, then the value for
  "glance_default_store" will automatically be set to "swift". This
  negates the need to set this variable in "user_variables.yml", but
  the value may still be overridden at the deployer's discretion.
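
  For example, deployers who wish to retain the previous defaults even
  with swift hosts present could set the following in
  "/etc/openstack_deploy/user_variables.yml" (the values shown assume
  the usual non-swift defaults):

     cinder_service_backup_program_enabled: False
     glance_default_store: file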

* The variable "security_sysctl_enable_tcp_syncookies" has replaced
  "security_sysctl_tcp_syncookies" and it is now a boolean instead of
  an integer.  It is still enabled by default, but deployers can
  disable TCP syncookies by setting the following Ansible variable:

     security_sysctl_enable_tcp_syncookies: no

* The "glance_apt_packages" variable has been renamed to
  "glance_distro_packages" so that it applies to multiple operating
  systems.

* Within the "haproxy" role *hatop* has been changed from a package
  installation to a source-based installation. This has been done to
  ensure that the same operator tooling is available across all
  supported distributions. The download URL for the source can be set
  using the variable "haproxy_hatop_download_url".

* Haproxy has a new backend to support using the repo server nodes
  as a git server. The new backend is called "repo_git" and uses port
  "9418". Default ACLs have been created to lock down the port's
  availability to only internal networks originating from an RFC1918
  address.

* Haproxy has a new backend to support using the repo server nodes
  as a package manager cache. The new backend is called "repo_cache"
  and uses port "3142" and a single active node. All other nodes
  within the pool are backups and will be promoted if the active node
  goes down. Default ACLs have been created to lock down the port's
  availability to only internal networks originating from an RFC1918
  address.

* SSL termination is assumed enabled for all public endpoints by
  default. If this is not needed it can be disabled by setting the
  "openstack_external_ssl" option to **false** and the
  "openstack_service_publicuri_proto" to **http**.

* If HAProxy is used as the loadbalancer for a deployment it will
  generate a self-signed certificate by default. If HAProxy is NOT
  used, an SSL certificate should be installed on the external
  loadbalancer. The installation of an SSL certificate on an external
  load balancer is not covered by the deployment tooling.

* In previous releases, connections to Horizon terminated SSL at the
  Horizon container. While that is still an option, SSL is now assumed
  to be terminated at the load balancer. If you wish to terminate SSL
  at the Horizon node, change the "horizon_external_ssl" option to
  **false**.

* Public endpoints will need to be updated using the Keystone admin
  API to support secure endpoints. The Keystone ansible module will
  not recreate the endpoints automatically. Documentation on the
  Keystone service catalog can be found here
  (http://docs.openstack.org/developer/keystone/configuration.html
  #service-catalog).

* Upgrades will not replace entries in the
  /etc/openstack_deploy/env.d directory, though new versions of
  OpenStack-Ansible will now use the shipped env.d as a base, which
  may alter existing deployments.

* The variable used to store the mysql password used by the ironic
  service account has been changed.  The following variable:

     ironic_galera_password: secrete

  has been changed to:

     ironic_container_mysql_password: secrete

* There is a new default configuration for keepalived. When running
  the haproxy playbook, the configuration change will cause a
  keepalived restart unless the deployer has used a custom
  configuration file. The restart will cause the virtual IP addresses
  managed by keepalived to be briefly unconfigured, then reconfigured.

* A new version of keepalived will be installed on the haproxy nodes
  if the variable "keepalived_use_latest_stable" is set to "True" and
  more than one haproxy node is configured. The update of the package
  will cause keepalived to restart and therefore will cause the
  virtual IP addresses managed by keepalived to be briefly
  unconfigured, then reconfigured.

* A new nova.conf entry, live_migration_uri, has been added. This
  entry will default to a "qemu-ssh://" uri, which uses the ssh keys
  that have already been distributed between all of the compute hosts.

* The "lxc_container_create" role no longer uses the distro specific
  lxc container create template.

* The following variable changes have been made in the  "lxc_host"
  role:

  * **lxc_container_template**: Removed because the template option
    is now contained within the operating system specific variable
    file loaded at runtime.

  * **lxc_container_template_options**: This option was renamed to
    *lxc_container_download_template_options*. The deprecation filter
    was not used because the values provided from this option have
    been fundamentally changed and old overrides will cause problems.

  * **lxc_container_release**: Removed because image is now tied
    with the host operating system.

  * **lxc_container_user_name**: Removed because the default users
    are no longer created when the cached image is created.

  * **lxc_container_user_password**: Removed because the default
    users are no longer created when the cached image is created.

  * **lxc_container_template_main_apt_repo**: Removed because this
    option is now being set within the cache creation process and is
    no longer needed here.

  * **lxc_container_template_security_apt_repo**: Removed because
    this option is now being set within the cache creation process and
    is no longer needed here.

* The "lxc_host" role no longer uses the distro specific lxc
  container create template.

* The following variable changes have been made in the  "lxc_host"
  role:

  * **lxc_container_user_password**: Removed because the default lxc
    container user is no longer created by the lxc container template.

  * **lxc_container_template_options**: This option was renamed to
    *lxc_cache_download_template_options*. The deprecation filter was
    not used because the values provided from this option have been
    fundamentally changed and potentially old overrides will cause
    problems.

  * **lxc_container_base_delete**: Removed because the cache will be
    refreshed upon role execution.

  * **lxc_cache_validate_certs**: Removed because the Ansible
    "get_url" module is no longer used.

  * **lxc_container_caches**: Removed because the container create
    process will build a cached image based on the host OS.

* LXC package installation and cache preparation will now occur by
  default only on hosts which will actually implement containers.

* The dynamic_inventory script previously set the provider network
  attributes "is_container_address" and "is_ssh_address" to True for
  the management network regardless of whether a deployer had them
  configured this way or not. Now, these attributes must be configured
  by deployers and the dynamic_inventory script will fail if they are
  missing or not True.
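
  A sketch of a management provider network entry in
  "openstack_user_config.yml" with both attributes set explicitly
  (bridge and interface names are placeholders) follows:

     global_overrides:
       provider_networks:
         - network:
             container_bridge: "br-mgmt"
             container_type: "veth"
             container_interface: "eth1"
             ip_from_q: "container"
             type: "raw"
             group_binds:
               - all_containers
               - hosts
             is_container_address: true
             is_ssh_address: true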

* During upgrades, container and service restarts for the
  mariadb/galera cluster were being triggered multiple times and
  causing the cluster to become unstable and often unrecoverable. This
  situation has been improved immensely, and we now have tight control
  such that restarts of the galera containers only need to happen
  once, and are done so in a controlled, predictable and repeatable
  way.

* The memcached log is removed from /var/log/memcached.log and is
  now stored in the /var/log/memcached folder.

* The variable "galera_client_apt_packages" has been replaced by
  "galera_client_distro_packages".

* Whether the Neutron DHCP Agent, Metadata Agent or LinuxBridge
  Agent should be enabled is now dynamically determined based on the
  "neutron_plugin_type" and the "neutron_ml2_mechanism_drivers" that
  are set. This aims to simplify the configuration of Neutron services
  and eliminate the need for deployers to override the entire
  "neutron_services" dict variable to disable these services.

* Database migration tasks have been added for the dynamic routing
  neutron plugin.

* As described in the Mitaka release notes
  (http://docs.openstack.org/releasenotes/neutron/mitaka.html) Neutron
  now correctly calculates for and advertises the MTU to instances.
  The default DHCP configuration to advertise an MTU to instances has
  therefore been removed from the variable "neutron_dhcp_config".

* As described in the Mitaka release notes
  (http://docs.openstack.org/releasenotes/neutron/mitaka.html) Neutron
  now correctly calculates for and advertises the MTU to instances. As
  such the "neutron_network_device_mtu" variable has been removed and
  the hard-coded values in the templates for "advertise_mtu",
  "path_mtu", and "segment_mtu" have been removed to allow upstream
  defaults to operate as intended.

* The new host group "neutron_openvswitch_agent" has been added to
  the "env.d/neutron.yml" and "env.d/nova.yml" environment
  configuration files in order to support the implementation of Open
  vSwitch. Deployers must ensure that their environment configuration
  files are updated to include the above group name. Please see the
  example implementations in env.d/neutron.yml
  (https://github.com/openstack/openstack-
  ansible/blob/stable/newton/etc/openstack_deploy/env.d/neutron.yml)
  and env.d/nova.yml (https://github.com/openstack/openstack-
  ansible/blob/stable/newton/etc/openstack_deploy/env.d/nova.yml).

* The variable "neutron_agent_mode" has been removed from the
  "os_neutron" role. The appropriate value for "l3_agent.ini" is now
  determined based on the "neutron_plugin_type" and host group
  membership.

* The default horizon instance launch panels have been changed to
  the next generation panels. To enable legacy functionality set the
  following options accordingly:

     horizon_launch_instance_legacy: True
     horizon_launch_instance_ng: False

* A new nova admin endpoint will be registered with the suffix
  "/v2.1/%(tenant_id)s". The nova admin endpoint with the suffix
  "/v2/%(tenant_id)s" may be manually removed.

* Cleanup tasks are added to remove the nova console git directories
  "/usr/share/novnc" and "/usr/share/spice-html5", prior to cloning
  these inside the nova vnc and spice console playbooks. This is
  necessary to guarantee that local modifications do not break git
  clone operations, especially during upgrades.

* The variable "neutron_linuxbridge" has been removed as it is no
  longer used.

* The variable "neutron_driver_interface" has been removed. The
  appropriate value for "neutron.conf" is now determined based on the
  "neutron_plugin_type".

* The variable "neutron_driver_firewall" has been removed. The
  appropriate value for "neutron.conf" is now determined based on the
  "neutron_plugin_type".

* The variable "neutron_ml2_mechanism_drivers" has been removed. The
  appropriate value for ml2_conf.ini is now determined based on the
  "neutron_plugin_type".

* Installation of glance and its dependent pip packages will now
  only occur within a Python virtual environment. The
  "glance_venv_bin", "glance_venv_enabled", "glance_venv_etc_dir", and
  "glance_non_venv_etc_dir" variables have been removed.

* Installation of gnocchi and its dependent pip packages will now
  only occur within a Python virtual environment. The
  "gnocchi_venv_bin", "gnocchi_venv_enabled", "gnocchi_venv_etc_dir",
  and "gnocchi_non_venv_etc_dir" variables have been removed.

* Installation of heat and its dependent pip packages will now only
  occur within a Python virtual environment. The "heat_venv_bin" and
  "heat_venv_enabled" variables have been removed.

* Installation of horizon and its dependent pip packages will now
  only occur within a Python virtual environment. The
  "horizon_venv_bin", "horizon_venv_enabled", "horizon_venv_lib_dir",
  and "horizon_non_venv_lib_dir" variables have been removed.

* Installation of ironic and its dependent pip packages will now
  only occur within a Python virtual environment. The
  "ironic_venv_bin" and "ironic_venv_enabled" variables have been
  removed.

* Installation of keystone and its dependent pip packages will now
  only occur within a Python virtual environment. The
  "keystone_venv_enabled" variable has been removed.

* The Neutron L3 Agent configuration for the
  handle_internal_only_routers variable is removed in order to use the
  Neutron upstream default setting. The current default for
  handle_internal_only_routers is True, which allows Neutron L3
  routers without external networks attached (as discussed in
  https://bugs.launchpad.net/neutron/+bug/1572390).

* Installation of aodh and its dependent pip packages will now only
  occur within a Python virtual environment. The "aodh_venv_enabled"
  and "aodh_venv_bin" variables have been removed.

* Installation of ceilometer and its dependent pip packages will now
  only occur within a Python virtual environment. The
  "ceilometer_venv_enabled" and "ceilometer_venv_bin" variables have
  been removed.

* Installation of cinder and its dependent pip packages will now
  only occur within a Python virtual environment. The
  "cinder_venv_enabled" and "cinder_venv_bin" variables have been
  removed.

* Installation of magnum and its dependent pip packages will now
  only occur within a Python virtual environment. The
  "magnum_venv_bin", "magnum_venv_enabled" variables have been
  removed.

* Installation of neutron and its dependent pip packages will now
  only occur within a Python virtual environment. The
  "neutron_venv_enabled", "neutron_venv_bin",
  "neutron_non_venv_lib_dir" and "neutron_venv_lib_dir" variables have
  been removed.

* Installation of nova and its dependent pip packages will now only
  occur within a Python virtual environment. The "nova_venv_enabled",
  "nova_venv_bin" variables have been removed.

* Installation of rally and its dependent pip packages will now only
  occur within a Python virtual environment. The "rally_venv_bin",
  "rally_venv_enabled" variables have been removed.

* Installation of sahara and its dependent pip packages will now
  only occur within a Python virtual environment. The
  "sahara_venv_bin", "sahara_venv_enabled", "sahara_venv_etc_dir", and
  "sahara_non_venv_etc_dir" variables have been removed.

* Installation of swift and its dependent pip packages will now only
  occur within a Python virtual environment. The "swift_venv_enabled",
  "swift_venv_bin" variables have been removed.

* The variable "keystone_apt_packages" has been renamed to
  "keystone_distro_packages".

* The variable "keystone_idp_apt_packages" has been renamed to
  "keystone_idp_distro_packages".

* The variable "keystone_sp_apt_packages" has been renamed to
  "keystone_sp_distro_packages".

* The variable "keystone_developer_apt_packages" has been renamed to
  "keystone_developer_mode_distro_packages".

* The variable "glance_apt_packages" has been renamed to
  "glance_distro_packages".

* The variable "horizon_apt_packages" has been renamed to
  "horizon_distro_packages".

* The variable "aodh_apt_packages" has been renamed to
  "aodh_distro_packages".

* The variable "cinder_apt_packages" has been renamed to
  "cinder_distro_packages".

* The variable "cinder_volume_apt_packages" has been renamed to
  "cinder_volume_distro_packages".

* The variable "cinder_lvm_volume_apt_packages" has been renamed to
  "cinder_lvm_volume_distro_packages".

* The variable "ironic_api_apt_packages" has been renamed to
  "ironic_api_distro_packages".

* The variable "ironic_conductor_apt_packages" has been renamed to
  "ironic_conductor_distro_packages".

* The variable "ironic_conductor_standalone_apt_packages" has been
  renamed to "ironic_conductor_standalone_distro_packages".

* The variable "galera_pre_packages" has been renamed to
  "galera_server_required_distro_packages".

* The variable "galera_packages" has been renamed to
  "galera_server_mariadb_distro_packages".

* The variable "haproxy_pre_packages" has been renamed to
  "haproxy_required_distro_packages".

* The variable "haproxy_packages" has been renamed to
  "haproxy_distro_packages".

* The variable "memcached_apt_packages" has been renamed to
  "memcached_distro_packages".

* The variable "neutron_apt_packages" has been renamed to
  "neutron_distro_packages".

* The variable "neutron_lbaas_apt_packages" has been renamed to
  "neutron_lbaas_distro_packages".

* The variable "neutron_vpnaas_apt_packages" has been renamed to
  "neutron_vpnaas_distro_packages".

* The variable "neutron_apt_remove_packages" has been renamed to
  "neutron_remove_distro_packages".

* The variable "heat_apt_packages" has been renamed to
  "heat_distro_packages".

* The variable "ceilometer_apt_packages" has been renamed to
  "ceilometer_distro_packages".

* The variable "ceilometer_developer_mode_apt_packages" has been
  renamed to "ceilometer_developer_mode_distro_packages".

* The variable "swift_apt_packages" has been renamed to
  "swift_distro_packages".

* The variable "lxc_apt_packages" has been renamed to
  "lxc_hosts_distro_packages".

* The variable "openstack_host_apt_packages" has been renamed to
  "openstack_host_distro_packages".

* The galera_client role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "galera_client_package_state" should be set to "present".

* The ceph_client role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "ceph_client_package_state" should be set to "present".

* The os_ironic role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "ironic_package_state" should be set to "present".

* The os_nova role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "nova_package_state" should be set to "present".

* The memcached_server role always checks whether the latest package
  is installed when executed. If a deployer wishes to change the check
  to only validate the presence of the package, the option
  "memcached_package_state" should be set to "present".

* The os_heat role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "heat_package_state" should be set to "present".

* The rsyslog_server role always checks whether the latest package
  is installed when executed. If a deployer wishes to change the check
  to only validate the presence of the package, the option
  "rsyslog_server_package_state" should be set to "present".

* The pip_install role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "pip_install_package_state" should be set to "present".

* The repo_build role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "repo_build_package_state" should be set to "present".

* The os_rally role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "rally_package_state" should be set to "present".

* The os_glance role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "glance_package_state" should be set to "present".

* The security role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "security_package_state" should be set to "present".

* All roles always check whether the latest package is installed
  when executed. If a deployer wishes to change the check to only
  validate the presence of the package, the option "package_state"
  should be set to "present".

* The os_keystone role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "keystone_package_state" should be set to "present".

* The os_cinder role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "cinder_package_state" should be set to "present".

* The os_gnocchi role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "gnocchi_package_state" should be set to "present".

* The os_magnum role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "magnum_package_state" should be set to "present".

* The rsyslog_client role always checks whether the latest package
  is installed when executed. If a deployer wishes to change the check
  to only validate the presence of the package, the option
  "rsyslog_client_package_state" should be set to "present".

* The os_sahara role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "sahara_package_state" should be set to "present".

* The repo_server role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "repo_server_package_state" should be set to "present".

* The haproxy_server role always checks whether the latest package
  is installed when executed. If a deployer wishes to change the check
  to only validate the presence of the package, the option
  "haproxy_package_state" should be set to "present".

* The os_aodh role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "aodh_package_state" should be set to "present".

* The openstack_hosts role always checks whether the latest package
  is installed when executed. If a deployer wishes to change the check
  to only validate the presence of the package, the option
  "openstack_hosts_package_state" should be set to "present".

* The galera_server role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "galera_server_package_state" should be set to "present".

* The rabbitmq_server role always checks whether the latest package
  is installed when executed. If a deployer wishes to change the check
  to only validate the presence of the package, the option
  "rabbitmq_package_state" should be set to "present".

* The lxc_hosts role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "lxc_hosts_package_state" should be set to "present".

* The os_ceilometer role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "ceilometer_package_state" should be set to "present".

* The os_swift role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "swift_package_state" should be set to "present".

* The os_neutron role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "neutron_package_state" should be set to "present".

* The os_horizon role always checks whether the latest package is
  installed when executed. If a deployer wishes to change the check to
  only validate the presence of the package, the option
  "horizon_package_state" should be set to "present".

* The variable "rsyslog_client_packages" has been replaced by
  "rsyslog_client_distro_packages".

* The variable "rsyslog_server_packages" has been replaced by
  "rsyslog_server_distro_packages".

* The variable "rabbitmq_monitoring_password" has been added to
  "user_secrets.yml". If this variable does not exist, the RabbitMQ
  monitoring user will not be created.

* All of the discretionary access control (DAC) auditing is now
  disabled by default. This reduces the amount of logs generated
  during deployments and minor upgrades.  The following variables are
  now set to "no":

     security_audit_DAC_chmod: no
     security_audit_DAC_chown: no
     security_audit_DAC_lchown: no
     security_audit_DAC_fchmod: no
     security_audit_DAC_fchmodat: no
     security_audit_DAC_fchown: no
     security_audit_DAC_fchownat: no
     security_audit_DAC_fremovexattr: no
     security_audit_DAC_lremovexattr: no
     security_audit_DAC_fsetxattr: no
     security_audit_DAC_lsetxattr: no
     security_audit_DAC_setxattr: no

* The container property "container_release" has been removed as
  this is automatically set to the same version as the host in the
  container creation process.

* The variable "lxc_container_release" has been removed from the
  "lxc- container-create.yml" playbook as it is no longer consumed by
  the container creation process.

* LBaaSv1 has been removed from the "neutron-lbaas" project in the
  Newton release and it has been removed from OpenStack-Ansible as
  well.

* The LVM configuration tasks and "lvm.conf" template have been
  removed from the "openstack_hosts" role since they are no longer
  needed. All of the LVM configuration is properly handled in the
  "os_cinder" role.

* In the "rsyslog_client" role, the variable "rsyslog_client_repos"
  has been removed as it is no longer used.

* Percona Xtrabackup has been removed from the Galera client role.

* The "infra_hosts" and "infra_containers" inventory groups have
  been removed. No containers or services were assigned to these
  groups exclusively, and the usage of the groups has been supplanted
  by the "shared-infra_*" and "os-infra_*" groups for some time.
  Deployers who were using the groups should adjust any custom
  configuration in the "env.d" directory to assign containers and/or
  services to other groups.

* The variable "verbose" has been removed. Deployers should rely on
  the "debug" var to enable higher levels of memcached logging.

* The variable "verbose" has been removed. Deployers should rely on
  the "debug" var to enable higher levels of logging.

* The aodh-api init service is removed since aodh-api is deployed as
  an apache mod_wsgi service.

* The "ceilometer-api" init service is removed since "ceilometer-
  api" is deployed as an apache "mod_wsgi" service.

* The database create and user creates have been removed from the
  "os_heat" role. These tasks have been relocated to the playbooks.

* The database create and user creates have been removed from the
  "os_nova" role. These tasks have been relocated to the playbooks.

* The database create and user creates have been removed from the
  "os_glance" role. These tasks have been relocated to the playbooks.

* The database and user creates have been removed from the
  "os_horizon" role. These tasks have been relocated to the playbooks.

* The database create and user creates have been removed from the
  "os_cinder" role. These tasks have been relocated to the playbooks.

* The database create and user creates have been removed from the
  "os_neutron" role. These tasks have been relocated to the playbooks.

* The Neutron HA tool written by AT&T is no longer enabled by
  default. This tool was providing HA capabilities for networks and
  routers that were not using the native Neutron L3HA. Because native
  Neutron L3HA is stable, compatible with the Linux Bridge Agent, and
  is a better means of enabling HA within a deployment this tool is no
  longer being setup by default. If legacy L3HA is needed within a
  deployment the deployer can set *neutron_legacy_ha_tool_enabled* to
  **true** to enable the legacy tooling.

* The "repo_build_apt_packages" variable has been renamed.
  "repo_build_distro_packages" should be used instead to override
  packages required to build Python wheels and venvs.

* The "repo_build" role now makes use of Ubuntu Cloud Archive by
  default. This can be disabled by setting "repo_build_uca_enable" to
  "False".

* New overrides are provided to allow for better customization
  around logfile retention and rate limiting for UDP/TCP sockets:

  * "rsyslog_server_logrotation_window" defaults to 14 days

  * "rsyslog_server_ratelimit_interval" defaults to 0 seconds

  * "rsyslog_server_ratelimit_burst" defaults to 10000

* The rsyslog.conf is now using v7+ style configuration settings.

* The "swift_fallocate_reserve" default value has changed from
  10737418240 (10GB) to 1% in order to match the OpenStack swift
  default setting.

* A new option *swift_pypy_enabled* has been added to enable or
  disable the pypy interpreter for swift. The default is "false".

* A new option *swift_pypy_archive* has been added to allow a pre-
  built pypy archive to be downloaded and moved into place to support
  swift running under pypy. This option is a dictionary and contains
  the URL and SHA256 as keys.
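
  A minimal sketch of enabling this, with a placeholder archive URL
  and checksum, would be:

     swift_pypy_enabled: true
     swift_pypy_archive:
       url: "https://example.com/pypy2-v5.4.1-linux64.tar.bz2"
       sha256: "replace-with-the-sha256-checksum-of-the-archive"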

* The "swift_max_rsync_connections" default value has changed from 2
  to 4 in order to match the OpenStack swift documented value.

* When upgrading a Swift deployment from Mitaka to Newton it should
  be noted that the enabled middleware list has changed. In Newton the
  "staticweb" middleware will be loaded by default. While the change
  adds a feature it is non-disruptive in upgrades.

* All variables in the security role are now prepended with
  "security_" to avoid collisions with variables in other roles. All
  deployers who have used the security role in previous releases will
  need to prepend all security role variables with "security_".

  For example, a deployer could have disabled direct root ssh logins
  with the following variable:

     ssh_permit_root_login: yes

  That variable would become:

     security_ssh_permit_root_login: yes

* Ceilometer no longer manages alarm storage when Aodh is enabled.
  It now redirects alarm-related requests to the Aodh API. This is now
  auto-enabled when Aodh is deployed.

* Overrides for ceilometer "aodh_connection_string" will no longer
  work. Specifying an Aodh connection string in Ceilometer was
  deprecated within Ceilometer in a prior release so this option has
  been removed.

* Hosts running LXC on Ubuntu 14.04 will now need to enable the
  "trusty-backports" repository. The backports repo on Ubuntu 14.04 is
  now required to ensure LXC is updated to the latest stable version.

* The Aodh data migration script should be run to migrate alarm data
  from MongoDB storage to Galera due to the pending removal of MongoDB
  support.

* Neutron now makes use of Ubuntu Cloud Archive by default. This can
  be disabled by setting "neutron_uca_enable" to "False".

* The "utility-all.yml" playbook will no longer distribute the
  deployment host's root user's private ssh key to all utility
  containers. Deployers who desire this behavior should set the
  "utility_ssh_private_key" variable.

* The following variables have been renamed in order to make the
  variable names neutral for multiple operating systems.

     * nova_apt_packages -> nova_distro_packages

     * nova_spice_apt_packages -> nova_spice_distro_packages

     * nova_novnc_apt_packages -> nova_novnc_distro_packages

     * nova_compute_kvm_apt_packages ->
       nova_compute_kvm_distro_packages


Deprecation Notes
*****************

* Removed "cirros_tgz_url" and in most places replaced with
  "tempest_img_url".

* Removed "cirros_img_url" and in most places replaced with
  "tempest_img_url".

* Removed deprecated variable "tempest_compute_image_alt_ssh_user"

* Removed deprecated variable "tempest_compute_image_ssh_password"

* Removed deprecated variable
  "tempest_compute_image_alt_ssh_password"

* Renamed "cirros_img_disk_format" to "tempest_img_disk_format"

* Downloading and unarchiving a .tar.gz has been removed.  The
  related tempest options "ami_img_file", "aki_img_file", and
  "ari_img_file" have been removed from tempest.conf.j2.

* The "[boto]" section of tempest.conf.j2 has been removed.  These
  tests have been completely removed from tempest for some time.

* The "openstack_host_apt_packages" variable has been deprecated.
  "openstack_host_packages" should be used instead to override
  packages required to install on all OpenStack hosts.

* The "rabbitmq_apt_packages" variable has been deprecated.
  "rabbitmq_dependencies" should be used instead to override
  additional packages to install alongside rabbitmq-server.

* Moved "haproxy_service_configs" var to
  "haproxy_default_service_configs" so that "haproxy_service_configs"
  can be modified and added to without overriding the entire default
  service dict.
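
  With this split, additional services can be supplied through
  "haproxy_service_configs" while the defaults remain in
  "haproxy_default_service_configs". The entry below is purely
  illustrative and assumes the key names mirror the existing default
  service entries:

     haproxy_service_configs:
       - service:
           haproxy_service_name: example_service
           haproxy_backend_nodes: "{{ groups['example_all'] | default([]) }}"
           haproxy_port: 8080
           haproxy_balance_type: http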

* galera_package_url changed to percona_package_url for clarity

* galera_package_sha256 changed to percona_package_sha256 for
  clarity

* galera_package_path changed to percona_package_path for clarity

* galera_package_download_validate_certs changed to
  percona_package_download_validate_certs for clarity

* The "main" function in "dynamic_inventory.py" now takes named
  arguments instead of dictionary. This is to support future code
  changes that will move construction logic into separate files.

* Installation of Ansible on the root system, outside of a virtual
  environment, will no longer be supported.

* The variables "`galera_client_package_*`" and
  "`galera_client_apt_percona_xtrabackup_*`" have been removed from
  the role as Xtrabackup is no longer deployed.

* The Neutron HA tool written by AT&T has been deprecated and will
  be removed in the Ocata release.


Security Issues
***************

* A sudoers entry has been added to the repo_servers in order to
  allow the nginx user to stop and start nginx via the init script.
  This is implemented in order to ensure that the repo sync process
  can shut off nginx while synchronising data from the master to the
  slaves.

* A self-signed certificate will now be generated by default when
  HAProxy is used as a load balancer. This certificate is used to
  terminate the public endpoint for Horizon and all OpenStack API
  services.

* Horizon disables password autocompletion in the browser by
  default, but deployers can now enable autocompletion by setting
  "horizon_enable_password_autocomplete" to "True".

* The admin_token_auth middleware presents a potential security risk
  and will be removed in a future release of keystone. Its use can be
  removed by setting the "keystone_keystone_paste_ini_overrides"
  variable.

     keystone_keystone_paste_ini_overrides:
       pipeline:public_api:
         pipeline: cors sizelimit osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension public_service
       pipeline:admin_api:
         pipeline: cors sizelimit osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension s3_extension admin_service
       pipeline:api_v3:
         pipeline: cors sizelimit osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension_v3 s3_extension service_v3


Bug Fixes
*********

* This role assumes that there is a network named "public|private"
  and a subnet named "public|private-subnet". These names are made
  configurable by the addition of two sets of variables:
  "tempest_public_net_name" and "tempest_public_subnet_name" for
  public networks, and "tempest_private_net_name" and
  "tempest_private_subnet_name" for private networks. This addresses
  bug 1588818 (https://bugs.launchpad.net/openstack-
  ansible/+bug/1588818).

* The "/run" directory is excluded from AIDE checks since the files
  and directories there are only temporary and often change when
  services start and stop.

* AIDE initialization is now always run on subsequent playbook runs
  when "security_initialize_aide" is set to "yes". The initialization
  will be skipped if AIDE isn't installed or if the AIDE database
  already exists.

  See bug 1616281 (https://launchpad.net/bugs/1616281) for more
  details.

* Add architecture-specific locations for percona-xtrabackup and
  qpress, with alternate locations provided for ppc64el due to package
  unavailability from the current provider.

* The role previously did not restart the audit daemon after
  generating a new rules file. The bug
  (https://launchpad.net/bugs/1590916) has been fixed and the audit
  daemon will be restarted after any audit rule changes.

* Logging within the container has been bind mounted to the hosts.
  This resolves issue 1588051 (https://bugs.launchpad.net/openstack-
  ansible/+bug/1588051).

* Removed various deprecated / no longer supported features from
  tempest.conf.j2.  Some variables have been moved to their new
  sections in the config.

* The standard collectstatic and compression process in the
  os_horizon role now happens after horizon customizations are
  installed, so that all static resources will be collected and
  compressed.

* LXC containers will now have the ability to use a fixed mac
  address on all network interfaces when the option
  *lxc_container_fixed_mac* is set **true**. This change will assist
  in resolving a long standing issue where network intensive services,
  such as neutron and rabbitmq, can enter a confused state for long
  periods of time and require rolling restarts or internal system
  resets to recover.

* The dictionary-based variables in "defaults/main.yml" are now
  individual variables. The dictionary-based variables could not be
  changed as the documentation instructed. Instead it was required to
  override the entire dictionary. Deployers must use the new variable
  names to enable or disable the security configuration changes
  applied by the security role. For more information, see Launchpad
  Bug 1577944 (https://bugs.launchpad.net/openstack-
  ansible/+bug/1577944).

* Failed access logging is now disabled by default and can be
  enabled by changing "security_audit_failed_access" to "yes". The
  rsyslog daemon checks for the existence of log files regularly and
  this audit rule was triggered very frequently, which led to very
  large audit logs.

* An Ansible task was added to disable the "netconsole" service on
  CentOS systems if the service is installed on the system.

  Deployers can opt-out of this change by setting
  "security_disable_netconsole" to "no".

* In order to ensure that the appropriate data is delivered to
  requesters from the repo servers, the slave repo_server web servers
  are taken offline during the synchronisation process. This ensures
  that the right data is always delivered to the requesters through
  the load balancer.

* The pip_install_options variable is now honored during repo
  building.  This variable allows deployers to specify trusted CA
  certificates by setting the variable to "--cert /etc/ssl/certs/ca-
  certificates.crt"

* The security role previously set the permissions on all audit log
  files in "/var/log/audit" to "0400", but this prevents the audit
  daemon from writing to the active log file. This will prevent
  "auditd" from starting or restarting cleanly.

  The task now removes any permissions that are not allowed by the
  STIG. Any log files that meet or exceed the STIG requirements will
  not be modified.

* When the security role was run in Ansible's check mode and a tag
  was provided, the "check_mode" variable was not being set. Any tasks
  which depend on that variable would fail. This bug is fixed
  (https://bugs.launchpad.net/openstack-ansible/+bug/1590086) and the
  "check_mode" variable is now set properly on every playbook run.

* The security role now handles "ssh_config" files that contain
  "Match" stanzas. A marker is added to the configuration file and any
  new configuration items will be added below that marker. In
  addition, the configuration file is validated for each change to the
  ssh configuration file.

* Horizon deployments were broken due to an incorrect hostname
  setting being placed in the apache ServerName configuration. This
  caused Horizon startup failure any time debug was disabled.

* Changed the way host container groups are named in
  dynamic_inventory.py: for a given hostname, the group name changes
  from hostname_containers to hostname-host_containers. This prevents
  failures in the case where container groups have the same name as a
  host's container group when hostnames are derived from container
  group names. This change fixes the following bugs:
  https://bugs.launchpad.net/openstack-ansible/+bug/1512883 and
  https://bugs.launchpad.net/openstack-ansible/+bug/1528953.

* The ability to support login user domain and login project domain
  has been added to the keystone module. This resolves
  https://bugs.launchpad.net/openstack-ansible/+bug/1574000

     # Example usage
     - keystone:
         command: ensure_user
         endpoint: "{{ keystone_admin_endpoint }}"
         login_user: admin
         login_password: admin
         login_project_name: admin
         login_user_domain_name: custom
         login_project_domain_name: custom
         user_name: demo
         password: demo
         project_name: demo
         domain_name: custom

* LXC package installation and cache preparation will now occur by
  default only on hosts which will actually implement containers.

* When upgrading it is possible for an old "neutron-ns-metadata-
  proxy" process to remain running in memory. If this happens the old
  version of the process can cause unexpected issues in a production
  environment. To fix this a task has been added to the os_neutron
  role that will execute a process lookup and kill any "neutron-ns-
  metadata-proxy" processes that are not running the current release
  tag. Once the old processes are removed the metadata agent running
  will respawn everything needed within 60 seconds.

* Assigning multiple IP addresses to the same host name will now
  result in an inventory error before running any playbooks.

* The nova admin endpoint is now correctly registered as
  "/v2.1/%(tenant_id)s" instead of "/v2/%(tenant_id)s".

* The auditd rules for auditing V-38568 (filesystem mounts) were
  incorrectly labeled in the auditd logs with the key of
  "export-V-38568". They are now correctly logged with the key
  "filesystem_mount-V-38568".

* Deleting variable entries from the "global_overrides" dictionary
  in "openstack_user_config.yml" now properly removes those variables
  from the "openstack_inventory.json" file. See Bug

* The "pip_packages_tmp" variable has been renamed
  "pip_tmp_packages" to avoid unintended processing by the py_pkgs
  lookup plugin.

* The "repo_build" role now correctly applies OpenStack requirements
  upper-constraints when building Python wheels. This resolves
  https://bugs.launchpad.net/openstack-ansible/+bug/1605846

* The check to validate whether an appropriate ssh public key is
  available to copy into the container cache has been corrected to
  check the deployment host, not the LXC host.

* Static route information for provider networks now *must* include
  the *cidr* and *gateway* information. If either key is missing, an
  error will be raised and the dynamic_inventory.py script will halt
  before any Ansible action is taken. Previously, if either key was
  missing, the inventory script would continue silently without adding
  the static route information to the networks. Note that this check
  does not validate the CIDR or gateway values, just that the
  values are present.

* The repo_build play now correctly evaluates environment variables
  configured in /etc/environment.  This enables deployments in an
  environment with http proxies.

* Previously, the "ansible_managed" var was being used to insert a
  header into the "swift.conf" that contained date/time information.
  This meant that swift.conf across different nodes did not have the
  same MD5SUM, causing "swift-recon --md5" to break. We now insert a
  piece of static text instead to resolve this issue.

* The XFS filesystem is excluded from the daily mlocate crond job in
  order to conserve disk IO for large IOPS bursts due to
  updatedb/mlocate file indexing.

* The "/var/lib/libvirt/qemu/save" directory is now a symlink to "{{
  nova_system_home_folder }}/save" to resolve an issue where the
  default location used by the libvirt managed save command can result
  with the root partitions on compute nodes becoming full when "nova
  image-create" is run on large instances.

* Aodh has deprecated support for NoSQL storage (MongoDB and
  Cassandra) in Mitaka with removal scheduled for the O* release. This
  causes warnings in the logs. The default of using MongoDB storage
  for Aodh is replaced with the use of Galera. Continued use of
  MongoDB will require the use of vars to specify a correct
  "aodh_connection_string" and add pymongo to the "aodh_pip_packages"
  list.

* The "--compact" flag has been removed from xtrabackup options.
  This had been shown to cause crashes in some SST situations.


Other Notes
***********

* "nova_libvirt_live_migration_flag" is now phased out. Please
  create a nova configuration override with "live_migration_tunnelled:
  True" if you want to force the flag "VIR_MIGRATE_TUNNELLED" to
  libvirt. Nova "chooses a sensible default" otherwise.
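
  A minimal override sketch, assuming the standard
  "nova_nova_conf_overrides" mechanism is used
  (live_migration_tunnelled belongs to nova's "[libvirt]" section):

     nova_nova_conf_overrides:
       libvirt:
         live_migration_tunnelled: True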

* "nova_compute_manager" is now phased out.

* The in-tree "ansible.cfg" file in the playbooks directory has been
  removed. This file was making compatibility difficult for deployers
  who need to change these values. Additionally, this file's very
  existence forced Ansible to ignore any other config file in either a
  user's home directory or in the default "/etc/ansible" directory.

* Mariadb version upgrade gate checks removed.

* The "run-playbooks.sh" script has been refactored to run all
  playbooks using our core tool set and run order. The refactor work
  updates the old special case script to a tool that simply runs the
  integrated playbooks as they've been designed.

Changes in openstack-ansible 13.0.0..14.0.0
-------------------------------------------

98303ce Update role SHAs for 14.0.0 2016-10-19
c8e38f0 [Docs] Fix the alignment
f1e2070 Update role SHAs for 14.0.0 2016-10-18
e8aa2d2 Enable fixed mac address generation
a40428a Add full path to inventory
dffd265 updated Appendix H to C
97aa8ed Remove 'ignore_errors: true' in favor of 'failed_when: false'
b973a7d [docs] Clarify the 'Network configuration' section
fa21e2b Remove the rabbitmq deterministic sort
f7adda9 Create log aggregation parent directory
93a7aa6 Add support for the Ceph storage driver in Gnocchi
9572ffd [docs] Provide example configurations
6bbb3d5 Prevent overlayfs use in test when kernel < 3.18 or release == trusty
a0bfb6c [docs] Add network config example for test and prod
a87ab1a Fix container log bind mount regression
2d558be Change the common proxy cache manage tasks to be stateful
d5eb321 Configure Calico specific BIRD settings in OSA
d2b7195 [DOCS] Update manual upgrade guide
9ef988a Set default keepalived cidr if none is provided
18de1b4 [Docs] Removed extra grave accent(`)
0d824ce Update role SHAs for 14.0.0 2016-10-12
8668096 Fix value for openstack_host_manage_hosts_file
83dc13e Use UCA for keepalived by default
f3965d0 Add missing double quote
4664922 [DOCS] Edits to the target hosts chaps
b021ad2 [docs] Add Introduction heading to Appendix B
04605ca [Docs] There was typo mistake
dbef51e [docs] Alignment gets corrected
d30a5e3 Ensure that repo_server/repo_build use same user:group
5aee4a3 Rename ironic database password during upgrades
cfbd4f4 Set calico wheel name for py_pkgs lookup
7052150 [DOCS] Edits to installation chapter
b268540 [Docs] RabbitMQ is an AMQP server
ff436eb [Docs] Make the security note readable.
a3dd33a Update role SHAs for 14.0.0 2016-10-07
410d915 [Docs] Fix space typo with effect on rendered page
36ab2b3 [Docs] Fix Ansible link
5d151d6 Update all SHAs for 14.0.0 2016-10-06
1375f8c [DOCS] Applies edits to the OSA install guide appendix C
e6094c8 [DOCS] Applying edits to the OSA install guide: deployment host
d02fd99 [DOCS] Applying edits to the OSA install guide: configure
786b31e [DOCS] Applies edits to the OSA install guide appendix D
b638321 [DOCS] Edits to appendix E
abc1526 [DOCS] Adjust watermark color
4928c37 [DOCS] Edits to appendix F
8d9601b [DOCS] Edits to appendix G
34efaf9 Update all SHAs for 14.0.0 2016-10-05
fe99dad [DOCS] Applies edits to the OSA install guide appendix B
ca727cc [docs] Applying edits to the OSA install guide: overview
3c9d3d1 [DOCS] Applies edits to the OSA install guide appendix A
bc3c32c Fix a few grammatical errors
539f8fc Checksum all traffic traveling though the bridges
8f4c0c4 [docs] Update Newton doc index
342e147 Revises yaml to YAML
ad7231b [install-guide] Aligned properly at Test environment
ac14d06 Update UPPER_CONSTRAINTS_FILE for stable/newton
f709469 Update .gitreview for stable/newton
57fef57 Move base environment directory to an argument
1443268 Mock file system when testing duplicate IPs
ca7df2f Use detailed arguments for main function
62028d0 [DOCS] Added release-name as a watermark to Docs.
d98eb49 Change default log bind mount to be optional
ee40a8a Fix CentOS Ansible bootstrap
195b505 Update all SHAs for 14.0.0 2016-09-27
1604bba Add sshd_config to the bootstrap AIO process
b5806d7 Remove swift_repl|storage_address calculation
b091fff Update the order of release note page
2d1ebb3 Add ironic_rabbitmq settings to group_vars/all.yml
41f42be Filter pre_release versions of packages
6a8a41c Log the ansible runtime report to a file
57baef2 Remove the redundant exit tasks from the gate
671a092 Reduce config file IO and date coupling in tests
f94b2d4 Create complete AIO inventory config for tests
99b2747 Use lineinfile to add missing user secrets
0c8bd97 Update run-playbooks to support playbook logic
e857c43 Add Ironic service info to group_vars/all.yml
83ce564 [docs] Minor edit to the install guide
51a134d Remove use of venv_enabled variables
8d711ae Add debug logging to dynamic inventory
e279676 Force Ansible to use dynamic includes
a981b31 Add files to .gitignore
bd6a0d8 Add curl to utility distro packages
7ac2dd6 [DOC] Better clarification for container_interface in user_config.
350c061 Fix br-vlan port in multi-node bootstrap-host
18c9e1f Add export host option for inventory-manage script
773dc54 Fix deprecation warning for undefined variables
f40ecde Update all SHAs for 14.0.0
5098998 Update AIO script to support ubuntu-ports
1b0f020 [docs] Add links to example configuration files
6e50d69 os_ironic mysql password variable not updated
504bffa Add Swift telemetry notification consumer to Ceilometer
10a7d80 [docs] Resolve errors and simplify sphinx config
d980d26 Update testing bits for consistency
55de3ed [DOCS] Update to installation guide
5e3f0ba Remove search_regex from mariadb port check
f904d75 Fix bootstrap-host authorized_key transfer for multi-nodes
1926e6e Run haproxy playbook earlier within upgrade script
c8791e0 Move inventory management code to enable imports
ce26d14 Remove repo-server-pip-conf-removal from upgrade script
51f4dec Define networking for Multi-node environments
7a70d25 Remove existing MariaDB HTTPS repos during upgrade
39280be [DOCS] Added dynamic content to Upgrade guide from conf.py
6496c66 Retain apt sources options during host bootstrap
3917510 [install-guide] remove redundant part for security hardening
2e7a2b8 [install-guide] complete commands in prepare target hosts
bc8b321 Update all SHAs for Newton 2016-09-16
0bbf801 Modify use of assertTrue(A in B)
518fb38 Add lxc_host dynamic group to inventory.
867bf11 [DOCS] Added missing level info for haproxy_hosts
ec2656e Aodh should inherit service_region
2c2fe6c Updated from global requirements
2ec74e9 load variables as a simple var for upgrades
ecd81b9 Cleanup/standardize usage of tags in plays
e7f37f9 Remove assumption that the neutron_lbaas var is set
27a41d1 [DOCS] Added HAProxy example to Production environment document.
eff7914 Gnocchi identities created before Swift playbook
f7a50a2 Implement scenario capability for AIO
a29d5df Configure Ceilometer middleware for Gnocchi-Swift
968d893 [docs] Merge install guide configure content into a single page
cc250b7 Ensure that gnocchi uses keystone authentication
ab80b0a Fix log path option
863bedb [Docs] Add explanations about our bug triage process
e588a52 Enable the opportunistic strategy
eb55da0 Update all SHAs for Newton 2016-09-12
bceb1e1 Make the file name for user_secrets a variable
527e22f Correct Magnum issues found in AIO testing
5a878af Added the option to copy changes between stock env.d and repo env.d[M->N]
084fc69 Ensure file modes are 644 for inventory and group_vars for Magnum
1da4f02 [DOCS] Reorders ToC for upgrade guide
cbc9234 Properly namespace cinder_storage_address
5be187d Add ansible_host to dynamic inventory
3978647 Enabled conversion of existing db  and tables to utf8_general_ci [M->N]
64708ec Compress all gathered logs for CI
1083dc9 bump the keystone sha for changes to keystone
8cc0125 [docs] added a necessary arg for ansible command after removal of ansible.cfg
d0448be [docs] Move all example configurations to Appendix
7c5f177 Add a doc example for yaml file overrides
3b6895b Move network_entry function to top level
0ba4f8b [docs] Minor edits to the overview chapter
128242a Derive the OpenStack service list from the service file
eeaa433 Reduce the default fact gather subset
e14d359 [docs] Replace 'Host Layout' with 'Service Architecture'
5cc5277 [docs] Split Network Architecture page
4f94bf7 Set file mode to 644 on os-magnum-install.yml
a5a5bf8 Skip V-38471 for CI execution
f4290c0 [Docs] Update security appendix
4f07ac6 [DOC]Added Xenial support in install guide
2d2b732 [docs] Revise Storage Architecture Overview
55b62e8 [DOCS] Renaming sections for install guide
28544d6 [DOCS] IA movement of the install guide
6b7e646 Ensure that repo build arch grouping always runs
4c18f3d Fixed assumed utility pip install for specific clients
a06d93d [DOCS] Clarify is_metal is required if using iSCSI
3210274 Revert "Disable SSL use for RabbitMQ"
c496fc5 Move storage diagrams
2e355b3 [DOCS] Rename upgrade documentation to upgrade guide
02ca69a [DOC]Added missing yml file and example for test and production environments.
e2341fc Optimize and fix known container hosts group
abec787 Move pip_lock_to_internal_repo to group_vars
cf52fa7 Ensure tempest always has an Ansible config export
752c9c9 Revert role SHA pins for Newton RC prep
6bd8f3c Add vars for Swift telemetry settings
9096b0d Disable SSL use for RabbitMQ
8d3b3e2 [docs] Add storage diagrams
91032ef Enable Gnocchi and Aodh when inv groups non-empty
73ee3eb Address missing variable in common tasks
46b662d [DOCS] Moving the draft install guide to the install-guide folder
851ac18 Fix role SHA's for Newton-3 release
7669501 Fixed hosts inclusion when requiring the lxc_hosts role
5ee185e Ensure that the filters_path is correctly updated
992e616 Implement container bind mount for all logs
58e9c8d Fix deprecation warning for undefined variables
a648951 Add RC source to scripts library
943676b Adding a playbook for deploying Sahara
d651ed7 Unbound DNS resolution containers
981db90 Remove pip.conf during upgrade on all hosts
c51fe9b Updated from global requirements
192efa5 [DOCS} Further edits, corrects to draft install
8be9f55 Update all SHAs for Newton 2016-08-25
29f29e6 Add Magnum deployment to setup-openstack playbook
a889885 Updated from global requirements
75b3628 Support multiple rabbitmq clusters
fe55aa2 DOC - note that stable/mitaka on Ubuntu works at most with 14.04
9767d12 Add the BGP dynamic routing neutron plugin
423c409 Remove the ansible.cfg file
c89f277 Add play to deploy Rally to the utility containers
cebce0c Project Calico integration
16bccd9 [DOCS] Add interface configuration content
df49eeb [DOCS] Correcting the appendix letters
92ad610 Tell existing shell about upgraded pip
f37351d DOC - remove quotation for code-blocks
6bcfc47 Removed variable changes table from the doc.
6f028ff Fix error when repo_build_git_cache is undefined
382f4be Set default/fix version numbers in upgrade script
8b2cdb3 Automatically detect whether to test all integrated roles
07123eb Fix wrong version of pip used in bootstrap
b1483c8 Automatically detect whether to test ceilometer/aodh
5ec339c Automatically enable the cinder backup service
ea7e218 Allow the repo-build to index utility pip packages
6a647d5 Remove security hardening toggle from AIO user_variables.yml
64c6307 Automatically set swift as the glance default store
704246d DOC - use 'shell-session' to render root user commands
c4efadd [DOCS] Clean up of draft install guide
adebdb8 Allow the use of a pre-staged git cache
ac35c1d [DOCS] Remove ceph and HAProxy from dev docs
ecf73b3 Move ceph_client and haproxy_server to IRR
31280a3 Make all linting tests use upper-constraints
5f396dd Loopback cinder image must insert before exit 0
2683082 Updated from global requirements
f7babef Implement inventory API docs
0103e0d Set a long package cache timeout for OpenStack-CI
89f088e Add aodh-api init removal upgrade docs and script
d68e65b Add an inventory config check option
9d8177f Update all SHAs for Newton 2016-08-15
b32b5d5 Support pulling architecture-tagged venv artifacts
ebc9af1 Remove old inventory conditional support
4049357 Reduce minimum data disk size for the AIO to 50GB
8d2caac Restrict Ansible fact gathering to base subset
45b9642 Limit LXC hosts playbook to container hosts only
2f87f8c Robust base dir support for bootstrapping
49c303a Create config test base class
619b40c Print remaining tasks on failed upgrade correctly
abc1663 Add ability to change apt/yum package state globally
07ef158 [docs] fix invalid hyperlink in overview-security.rst
36cd1de [DOC] Add cinder service when cleaning up aio host
28b1fc7 [docs] fix a link in overview-host-layout.rst
199e33c [DOCS] Updates to deploy config
f87b141 [DOCS] Update gate job names
d493444 [DOCS]Edited the path to installation workflow diagram in install-guide-draft
b5dc44c [DOCS] Removing and moving nova and neutron docs
ab887ee Include python requirements to resolve SNI issue for Ansible venv
b6d9220 Remove "optional" in the o_u_c example for repos
0d1c6ec Fix deprecation warning for ceph_client role.
12d4c7e Do not discard when creating XFS loopback
0461d79 Adding Magnum-UI Horizon support
55de7dd Move package cache configuration to common tasks
627429b [DOCS] Add storage architecture information
7146e82 Relocate Swift configuration docs to os_swift role
e9dd96e [DOC] Added automatic fetched latest tags.
57ea99a [DOCS] Ensuring deploy-config accurately reflects changes
c278267 [DOCS] Delete horizon docs
085e57d Enable Gnocchi by default
412b863 [DOCS] Delete ceilometer and aodh dev docs
6e432e8 Fix keepalived sync groups var name
19d9064 Remove SSL protocol/cipher from AIO user_variables.yml
5aa7998 Add haproxy_service_enabled boolean
162f530 Add ability to change apt/yum package state for the ceph_client role
62bcac9 Add ability to change apt/yum package state for the haproxy_server role
e18c636 Split package update/install commands
ab3a192 [DOCS] Remove apt proxy when rebuilding AIO
9b314bf Move other-requirements.txt to bindep.txt
e9e79a6 Manual upgrade doc fix
108ea96 Add discovery and build for multiple CPU architectures
b921676 [DOC] Modified conf.py to fetch the latest tag automatically
32b0b54 [DOCS] Remove ops-logging Doc
343d9d6 [DOCS] Remove and move keystone federation documentation
bc7b0a7 [DOCS] Move RabbitMQ configuration info
2e53019 [DOCS] Fix the appendix order
656543d Ensure that the LXC container log is also destroyed
b19f783 [Docs] Remove and move cinder config docs
783ad41 [DOCS] Remove and move ironic role docs
90884cd [DOCS] Remove and move glance documentation
37e7700 Adding support for Magnum
c69c031 [docs] Revise deployment host and target hosts chapter
d95eaf5 Add test for setting existing used IPs.
6535ec3 Disable V-38660 for OpenStack-CI
233eb80 Refactor "get_instance_info" gathering
d3e5487 Fix incorrect operations link in run-playbooks.sh
5dc89d8 Do not override Horizon ServerName in playbook
6cb3b1e Updated from global requirements
fd690e1 Better control of mariadb restarts in upgrades
6ae2266 Docs: Minor security overview update
85fde9f [DOCS] - Cleanup Telemetry docs
c69a07c [docs] Move ops content and fix build errors
5bf8c53 Update all SHAs for Newton 2016-08-01
b2629de Define retries on ceph keyring fetch task
6c7c870 Remove return_netmask function
c790092 Docs: Enabling LBaaSv2 horizon panel
5cc9d0b [docs] Revise deployment configuration chapter
a2ed5c3 [docs] Edit the installation chapter
59bdfb9 Add options to allow for selective git clone
17db059 Add SNI support via OS/python packages
cf875c8 Update Ansible pin to v2.1.1.0
f2f280b Update the home-page info with the developer documentation
d3f240e [docs] Modify host layout diagrams
48ed46e Add Horizon Ironic dashboard plugin
24403e7 Add openrc_region_name to define the service region for openrc files.
ccb1036 Added docs for removing compute host
2d965dc Add SNI support via OS packages
b52c21e [docs] Remove duplicated content
2c20e5f Revert role SHA pin for Newton-3 development
4cbd2f4 Restore telemetry service deployment
d2a2b72 Confirm container data destroys
767662b Fix get_url SNI issues in CentOS 7
9137874 Allow empty container dicts in env overrides
30c0ca3 Add nova-lxd virt driver git repo
c9925be Enable the use of a package manager cache
a4c836c Fix override of ANSIBLE_PACKAGE variable
b3def9d Update all SHAs for Newton 2016-07-27
dcfdc93 Fix role SHA's for Newton-2 release
b8b1491 If /var/log/lxc exists, move it to the log aggregation parent
99ffcf3 Implement git server HAP backend
2b422db [DOCS] Fix up validation failures
e3526a8 Fix distribution detection in bootstrap
e481744 Remove pip_install role execution from RabbitMQ playbook
9812d6d Fix deprecation warning for undefined variables
ae8bc70 Address Ansible bare variable usage
a7884ba Move UCA repo URL var to role defaults
95adb62 Optimise pip install tasks
413151f Change pip install task state to 'latest'
3fa780d Move LXC AppArmor profile setting to the inventory
ade366a Disable ansible retry files
48eedc7 Remove callback plugins
8f1b33d Update the sources-branch-updater
993515c Fix 'D000: Check RST validity' documentation lint failures
30dacdf Dynamically determine whether ceilometer should be enabled
c54736f Removing the infra_hosts and infra_containers groups
e2663c6 Support other architectures in apt sources.list
6bf159f Move LXC logs to /openstack/log
ed4bc6b Add CentOS7 support to the utility playbook
7b75c22 Updated from global requirements
91deb13 Cleanup/standardize common tasks
5455543 Moving neutron play vars to the group_vars for neutron
e37f524 Test LBaaSv2 in AIO
43f585a Support for Open vSwitch Distributed Virtual Routing
e6ad4cf Update all SHAs for Newton 2016-07-20
5510103 Implement overlayfs as the backing store for the AIO
88ae508 Install Ansible from pypi instead of from a git clone
5c4d8b2 [DOCS] Update 'Practice B' with note
e9f6acc Fix 'D001 Line too long' documentation lint failures
a982e3a [DOCS] Adding storage arch to install guide draft
8af2c12 Add other-requirements.txt
60bad86 Change requirements pin method
47bd970 Updated from global requirements
7836911 Resolve 'E501 line too long' linters error
4709455 Ensure that gate test does not remove ~/.ansible/tmp
439831f [DOCS] Adding in note for pretend_min_part_hours_passed
9427a99 [DOCS] Clarify variable usage in global_overrides
24e63ab Fix 'D001 Line too long' documentation lint failures
3ef1297 Fix tox functional test
68d68c2 Remove os-detection script
63b4989 Update mongodb bootstrap tasks
d312147 Removed the default pip install options from upgrade.sh
63012f0 Add upgrade playbook to update database collations
8e9b800 Added git package to the utility container
98e77ca Remove excessive tags
7518743 Adding requests to bootstrap ansible
57fa0b9 Decouple galera client role from OSA inventory groups
0304137 Docs: Implement Manuals Theme and doc8 checks
89d82b0 Fix 'D002 Trailing whitespace' doc errors
14f6650 Introduce a playbook for deploying Gnocchi
31e6cd0 Remove pip_lock_down requirement
430cb36 [DOCS] PIP install via deployment host
8fb6a3b Fix skipping Ceph client linking
a146b7a Document env.d changes in install guide
35c4b55 Fix Neutron local_ip fallback
08beb6f glance_api_servers must contain a valid url with protocol
8d73290 Fail Fast when trying to upgrade with LBaaS v1 enabled
bb2d4ad [docs] Remove duplicated content in the current install guide
f870707 Doc: Update documentation for lxc_net_mtu config
255de98 Update tox configuration
cba8bd5 Document swift in the host layout section
8a49b0c Remove aodh vars present in group_vars
3403f05 Remove duplicate exit_early execution
693911d HAProxy extra endpoints
b511791 HAProxy: configure either novnc or spice
7b288ea Use in-tree env.d files, provide override support
523822b Fix memcached flush if -l is in hostname
976d62f Remove cinder vars present in group_vars
5c31795 Remove ceilometer vars present in group_vars
a4053a9 [docs] Revise overview chapter in OSA install guide
2a03ba0 Enable OpenStack-Infra Ubuntu Cloud Archive mirror
59e5a7c Define galera_address in the all group_vars
8972dec [docs] Address tox errors
dff646c Remove _net_address_search from dynamic inventory
3b51d07 Fix HAProxy config and install version when ssl is disabled
d524386 Flush memcached on first listen IP only
4a7e954 Confirm container destroys
912de0f Make pip_lock_to_internal_repo a playbook var
42f1e4b DOC - Fix YAML format in cidr_networks example
e84cc94 Remove references to unused heat vars
fe9fc36 Trivial typo fixes to dynamic_inventory.py
9be0662 Refactor run-playbooks
23708a6 Docs: Enable LBaaS v2 Horizon panel
a471cbc Removing duplicate gather_facts in playbooks
e46c1c5 Enable human readable logging
fbac5f8 [docs] Fix build errors
cc7bd49 Change USED_IPS variable to a Python set
2321424 Address Ansible bare variable usage
97eb3b3 Remove remaining container_release properties
6ac4aa1 Ignore Ansible .retry files
4aa13d4 DOC - Remove instructions to run haproxy-install.yml play
5f06376 Fix typos in openstack-ansible/doc
ad69389 Update lists of skipped security role tasks
a232676 Gate: Restrict Ansible fact gathering to base subset
2a5a2a1 Add an easy way to run cmds in utility container
7b2a995 Disable root private key distribution by the utility playbook
d3f7e80 Remove libvirt bootstrapping from AIO
b8802f5 Add conditional for overlay network settings
f68bebd Auto-enable Ceilometer + Aodh integration
bdb856c [docs] Migrate ops and appendix content
effa83d Fix keystone DB Access variable
f426eb9 Remove deleted override vars from inventory
53bb55d Ensure that AIO extra indexes config is well formed
875c5e4 [docs] Revise upgrade guide structure
19ac766 Address Ansible bare variable usage
f5b39a0 Docs: Fix missing instructions for newton manual upgrades
ade33a3 Temporarily disable UCA usage in OpenStack-CI
fe60f1e Add release file prep script
e5622ad Speed up gate: avoid gathering facts more than necessary
238257b [docs] Migrate deployment configuration options
5db330d Configuring AODH DB now that it uses MySQL.
4402fd3 Docs: Add role development maturity guidelines
e679951 Actually remove Ironic container creation from AIO preparation
3fafd24 DOC - fix links in upgrade-playbooks
49c5d1d [docs] Add draft install guide directory
d59a2ff Reduce and organize group vars
1133ea8 Remove the AIO metadata checksum fix from run-playbooks
4970801 DOC - Adjust tag usage instructions for VPNaaS
18d4350 Switch Ironic role repo to use git.o.o
07357dd Remove Ironic container creation from AIO preparation
12ba130 Extract and test inventory and backup I/O
7c8533a DOC add note about building AIO more than once
f479a21 Do not use cpu_map_update.py anymore
16c0193 change host_containers group names in inventory
06e5aba Revert to test role master branches for Newton-2 development
bb69b66 Update all SHAs for Newton-1 2016-06-02
810e0a7 Use combined pip_install role
dbdc1c7 Update ansible to version 2.1
3d62933 Consistency for multi-os in the includes
d790aa8 DOC New Appendix - custom component layouts
1d29082 RFC1034/5 hostname upgrade
e31dee1 Remove AIO cache resolver configuration
b5b2bb9 Add RabbitMQ mgmt UI through HAProxy
b3683de Remove unneeded playbook vars
bd33008 Clarify static route requirements check
729c06c Correct nova admin endpoint version
e32d850 Note to deployers overriding MTUs
246d10e DOCS: Clarify guidance for deploy hosts
4303174 Cleanup horizon vars in hosts.yml
8ae5127 Update HAProxy for multi-OS support
f7369d9 lxc_cache_resolvers [u'nameserver1',u'nameserver2'] fixing
1211668 [DOCS] Adding missing kernel modules for VPNaaS
c904de2 Isolate Ansible from the deployment host
3bb1c40 DOCS: Clean up of the Newton upgrade guide
7f70ca7 Set AIO to use an OpenStack-Infra wheel mirror
4b051d7 Test _ensure_inventory_uptodate function
7ebe085 Reduce reliance on global state for testing
5a1bf48 Ensure all role dependencies are consistently specified
96443f5 Automatically enable neutron ha router capabilities
b4f5e13 Expose upgrade guide in base index
86fbc79 Add tests for the _net_address_search function
894e0c4 Test static route settings
213d028 Create ceph python library symlinks
30c59b2 Updates all SHAs for Newton 2016-05-19
96d0dd0 Added option to set the role fetch mode
d49494b Add nova-powervm repo for initial PowerVM support
b15363c Remove paramiko restriction
69f60a8 Remove AIO container cache apt configuration
edee94d Change to using ANSIBLE_FORKS and update related tip
25bb84a Ignore the .coverage temp file
085c31d Bump swift SHA
c52755e Docs: Add note about slow galera recovery
59694c7 Network service docs cleanup
6a61321 Add docs for LBaaSv2 Horizon panels
0cdaa5c Various fixes to the proxy default conf and doc
2f81ec1 Updated the link as per comments
6e9db90 Verbose option has been deprecated from oslo.log
0984490 Fix install guide link in contributor guide
a372277 Docs: Add note about releasenotes local build
fbd1f3f Docs: Troubleshooting info for 3.13 kernel upgrade
602ddac DOC - AIO build expected to be performed as root user
3e4c9df Docs: Cross ref local tests on contribution guide
f0c46ca Docs: Document SSH key exchange for Ceph client
a44d075 Docs: Fix bulleted lists and spacing
3a0523a DOC: Change swift mountpoint to /srv/node
12d9ef2 Docs: Update Liberty & Mitaka release status
a88778f Initial commit to enable mitaka>newton upgrades
601487b Add documentation guidance to the contributor guide
45d5ee5 Refactor ceph_client for multi-OS and ceph
0d3b531 Document the Release Notes build
28340ab DOCS: Deployment host  section - cleanup
71554ca DOCS: Configuration section - cleanup
cff6ea0 Added the DB and DB user create to the plays
1124a5e Test inventory backup file creation
bb5b306 Doc: Correct the note about the LXC host ssh key check
e78c9e3 DOCS: Configuration section - cleanup
02f8d3d Test and refactor argument parsing
8b6fb77 DOC - Removing incorrect doc about installation workflow
8399965 Check for two IP addresses assigned to same host
dfc642c Docs: Mandatory ssh public key
6be15b8 Isolate Ansible bootstrap from repo servers
60247f2 Add group vars to prep for os_tempest role changes
8e663d7 Add neutron_openvswitch_agent to env.d files
2f45772 Revert "Fix container hostname for RFC 1034/1035"
21c2478 install rabbitmq-server in serial
349e134 DOCS - Installation with limited network connectivity
9a2df7c Mention of the supported locales in the documentation
4b84a8c Use task state instead of output to create haproxy log directory
87e32dc Automatically increment the patch ver with sources-branch-updater
7e8d629 Doc: Configuring the network refactor
55155f3 DOCS: Configuration section - cleanup
0086227 Doc: Configuring the network on target refactor
c441849 Ensure that the sources-branch-updater updates the Ironic role
32fd0e7 Fix dynamic_inventory.py test bug
a0bceb9 Removed container_release property from environment files
12f0c68 Doc: Notice to disable security hardening role during minor upgrades
4330b4c DOC - Adding footer to Nuage Appendix doc
18ddf7b Add .swp files to .gitignore
d80d6f9 Make tox use python2.7 more specifically
eee35cc Build wheel for neutron-lbaas-dashboard
e971e15 Integrated updates after the multi-distro changes
6bcb3d1 Add release note for paramiko issue workaround
afc9ec9 Docs: Appendix section - cleanup
7a82904 Docs: Ops section - cleanup
89963f6 Docs: Installation section - cleanup
2703e4b Docs: Target hosts section - cleanup
3a5672b Check for IP addresses assigned to multiple hosts
5443833 Docs: Overview section - cleanup
51441fe Disable security role during major upgrades
1b4550b Add dependencies for paramiko 2.0
2c5edcf Docs: Clean up multiple make html warnings
ef347ab Remove unused var pip_no_index
5a931c7 Add error test coverage and adjust test setup
0cf2c9b Fix typo in overview-hostlayout.rst
2a2ad3a Remove teardown.sh and update related docs
92eb98e Enable SSL termination for all services
c361fae Improved logging for memcached (OSA calling part)
dbcfdec Add docs for limiting access to services
34ddd52 Fix LBaaSv2 neutron_plugin_base entry in docs
e22641a Execute rabbitmq sorts for config tags
909bf76 Set test python executable to python2
644c57b Docs: Update PLUMgrid neutron services dict override
3107fdd Docs: Cleanup page to configure to docs standards
43ff983 [User Guides] Link Updates - openstack-ansible
2fc728d Update Newton SHA's 2016-04-22
72c593c Add docs for HAProxy ping checks
8387b68 Change keystone admin/internal insecure flags
e8ae4cb Update sources-branch-updater to handle release note copying
608640c Add missing line number report, fix coverage dep
ebdff9e DOCS: Update aio docs for Mitaka
edda55e Docs: Split Network Services section into multiple files
7280c90 Docs: Add pip configuration removal to AIO re-deployment process
aa1f09f [DOCS] Adding Ironic configuration docs to Ansible install guide
b50a190 Nuage Neutron Plugin OSA Install guide
2ffb776 Change keystone admin/internal insecure flags
928e907 Refactor main inventory function for testability
559d2dc Add coverage reporting to inventory testing
9a737ad Fix container hostname for RFC 1034/1035
27e65b2 DOC - Adding warning about changing passwords/secrets
ca73998 Add option to auto enable from VPNaaS in Horizon
dfe4f10 Docs: Change invalid reference to FWaaS in VPNaaS documentation
ae99efd Adding modularity to keepalived configuration
5fceb78 Added horizon documentation section for cinder
bb1db35 Doc: Improved documentation about LVM overwrite behaviour
f8c30f0 make hostname,network and ip-address on all examples consistent
4604950 blacklist Ansible 1.9.6
0d9530c Add `ironic_swift_temp_url_secret_key` the secrets
60603c3 Adjust ansible-role-requirements-editor file open options
12555d7 ceph configuration for nova glance and cinder
2d82d41 Move inventory environment loading to function
fac5030 Update source-branch-updater to work with IRR's
35ed804 Add installation support for os_ironic
12a3fba Fixing keepalived bug when 2+ backup nodes have the same priority
fa7218d Minor fix to correct passive to active voice
13de5ff Fix idempotency bug in AIO bootstrap
a2c1d8c Fix configuration string for haproxy
4f3b266 Refactor user config loading into function
f56c9c6 Modify the haproxy play for ansible2 compat
6d3eea3 Add project scoped token when obtaning token
2288151 Add convenience links for install workflow doc
dde53b1 Add tempest_log_dir variable
b6a5c9a Apply host security hardening by default
d87fdf2 Specify allocation pool for public subnet
cc416a1 Doc index update
bb61cc0 Add debug and verbose to user variables
6485728 Add trusty_backports note to requirements
beafa5b Add Ceilometer instructions to new compute node instructions
36a8151 Update documentation index page
1cc4c11 Set SHA's for master to OpenStack master SHA's
4317c3e Update reno for stable/mitaka
4e5e52a set up the unreleased page for reno
d72e3a6 Fix typo in swift.yml.example file
6eb3c34 Ensure the OpenStack gate has access to the logs
797dbb6 Remove hard-coded pip indexes from repo-build playbook
fa063b9 removed duplicate key
6f9ef5f Set lxc_container_caches not to use repo_pip_default_index
496bc49 Removing unneeded is_metal param from user_defined_setup
00207d3 Include security role in setup-hosts.yml


Diffstat (except docs and test files)
-------------------------------------

.gitignore                                         |  16 +-
.gitreview                                         |   2 +-
ansible-role-requirements.yml                      | 154 ++--
bindep.txt                                         |  23 +
.../installation-hosts-limited-connectivity.rst    | 182 ++++
.../developer-docs/ops-remove-computehost.rst      |  51 ++
.../install-guide/app-advanced-config-affinity.rst |  50 ++
.../install-guide/app-advanced-config-options.rst  |  15 +
.../install-guide/app-advanced-config-override.rst | 267 ++++++
.../install-guide/app-advanced-config-security.rst |  39 +
.../app-advanced-config-sslcertificates.rst        | 141 +++
.../install-guide/app-advanced-role-docs.rst       |  92 ++
.../install-guide/configure-cinder-backup.rst      |  79 --
.../configure-configurationintegrity.rst           |  29 -
.../configure-federation-idp-adfs.rst              |  42 -
.../install-guide/configure-federation-idp.rst     |  77 --
.../install-guide/configure-federation-mapping.rst | 168 ----
.../configure-federation-sp-overview.rst           |  60 --
.../install-guide/configure-federation-sp.rst      | 124 ---
.../configure-federation-use-case.rst              | 298 -------
.../install-guide/configure-federation-wrapper.rst |  78 --
.../install-guide/configure-network-services.rst   | 191 -----
.../install-guide/configure-sslcertificates.rst    | 137 ---
.../install-guide/configure-swift-config.rst       | 328 -------
.../install-guide/configure-swift-devices.rst      | 106 ---
.../install-guide/configure-swift-glance.rst       |  70 --
.../install-guide/configure-swift-overview.rst     |  23 -
.../install-guide/configure-swift-policies.rst     |  51 --
.../figures/arch-layout-production.png             | Bin 0 -> 217767 bytes
.../figures/arch-layout-production.svg             |   3 +
.../install-guide/figures/arch-layout-test.png     | Bin 0 -> 220515 bytes
.../install-guide/figures/arch-layout-test.svg     |   3 +
.../install-guide/figures/arch-layout.graffle      | Bin 0 -> 6161 bytes
.../install-guide/figures/environment-overview.png | Bin 72806 -> 0 bytes
.../installation-workflow-configure-deployment.png | Bin 0 -> 49639 bytes
.../installation-workflow-deploymenthost.png       | Bin 0 -> 48857 bytes
.../figures/installation-workflow-overview.png     | Bin 0 -> 46557 bytes
.../installation-workflow-run-playbooks.png        | Bin 0 -> 48037 bytes
.../figures/installation-workflow-targethosts.png  | Bin 0 -> 48201 bytes
.../installation-workflow-verify-openstack.png     | Bin 0 -> 50368 bytes
.../figures/installation-workflow.graffle          | Bin 0 -> 2583 bytes
.../figures/production-storage-cinder.png          | Bin 0 -> 102217 bytes
.../production-storage-cinder.svg/image3.wmf       | Bin 0 -> 19378 bytes
.../production-storage-cinder.svg                  |   3 +
.../figures/production-storage-glance.png          | Bin 0 -> 87006 bytes
.../production-storage-glance.svg/image3.wmf       | Bin 0 -> 19378 bytes
.../production-storage-glance.svg                  |   3 +
.../figures/production-storage-nova.png            | Bin 0 -> 84263 bytes
.../figures/production-storage-nova.svg/image3.wmf | Bin 0 -> 19378 bytes
.../production-storage-nova.svg                    |   3 +
.../figures/production-storage-swift.png           | Bin 0 -> 108150 bytes
.../figures/production-storage-swift.svg           |   3 +
.../figures/production-storage.graffle/data.plist  | Bin 0 -> 8497 bytes
.../figures/production-storage.graffle/image3.wmf  | Bin 0 -> 19378 bytes
.../install-guide/figures/production-storage.svg   |   3 +
.../figures/workflow-configdeployment.png          | Bin 29232 -> 0 bytes
.../figures/workflow-deploymenthost.png            | Bin 28635 -> 0 bytes
.../figures/workflow-foundationplaybooks.png       | Bin 29126 -> 0 bytes
.../figures/workflow-infraplaybooks.png            | Bin 29198 -> 0 bytes
.../figures/workflow-openstackplaybooks.png        | Bin 28949 -> 0 bytes
.../install-guide/figures/workflow-overview.png    | Bin 26878 -> 0 bytes
.../install-guide/figures/workflow-targethosts.png | Bin 28892 -> 0 bytes
.../install-guide/install-infrastructure.rst       |  96 ---
.../overview-service-architecture.rst              | 122 +++
.../install-guide/targethosts-networkconfig.rst    |  26 +
.../install-guide/targethosts-networkexample.rst   | 166 ----
.../install-guide/targethosts-networkrefarch.rst   | 140 ---
.../upgrade-guide/reference-upgrade-playbooks.rst  | 125 +++
.../interfaces.d/openstack_interface.cfg.example   | 123 ---
.../openstack_interface.cfg.prod.example           | 132 +++
.../openstack_interface.cfg.test.example           |  94 ++
etc/openstack_deploy/conf.d/ceilometer.yml.example |   7 +-
etc/openstack_deploy/conf.d/cinder.yml.aio         |  16 +
etc/openstack_deploy/conf.d/glance.yml.aio         |   4 +
etc/openstack_deploy/conf.d/gnocchi.yml.aio        |  19 +
etc/openstack_deploy/conf.d/heat.yml.aio           |   4 +
etc/openstack_deploy/conf.d/horizon.yml.aio        |   4 +
etc/openstack_deploy/conf.d/ironic.yml.aio         |   4 +
etc/openstack_deploy/conf.d/keystone.yml.aio       |   4 +
etc/openstack_deploy/conf.d/magnum.yml.aio         |   3 +
etc/openstack_deploy/conf.d/magnum.yml.example     |   8 +
etc/openstack_deploy/conf.d/neutron.yml.aio        |   5 +
etc/openstack_deploy/conf.d/nova.yml.aio           |   8 +
etc/openstack_deploy/conf.d/sahara.yml.aio         |  16 +
etc/openstack_deploy/conf.d/swift.yml.example      |   8 +-
etc/openstack_deploy/conf.d/unbound.conf.aio       |   3 +
etc/openstack_deploy/conf.d/unbound.conf.example   |   8 +
etc/openstack_deploy/env.d/aodh.yml                |  35 -
etc/openstack_deploy/env.d/ceilometer.yml          |  60 --
.../env.d/cinder-volume.yml.container.example      |  12 +
etc/openstack_deploy/env.d/cinder.yml              |  79 --
.../env.d/extra_container.yml.example              |   2 -
etc/openstack_deploy/env.d/galera.yml              |  32 -
etc/openstack_deploy/env.d/glance.yml              |  36 -
etc/openstack_deploy/env.d/haproxy.yml             |  39 -
etc/openstack_deploy/env.d/heat.yml                |  51 --
etc/openstack_deploy/env.d/horizon.yml             |  31 -
etc/openstack_deploy/env.d/infra.yml               |  22 -
etc/openstack_deploy/env.d/keystone.yml            |  40 -
etc/openstack_deploy/env.d/memcache.yml            |  31 -
etc/openstack_deploy/env.d/neutron.yml             |  74 --
etc/openstack_deploy/env.d/nova.yml                | 113 ---
etc/openstack_deploy/env.d/os-infra.yml            |  22 -
etc/openstack_deploy/env.d/pkg_repo.yml            |  39 -
etc/openstack_deploy/env.d/rabbitmq.yml            |  31 -
etc/openstack_deploy/env.d/rsyslog.yml             |  39 -
etc/openstack_deploy/env.d/shared-infra.yml        |  22 -
etc/openstack_deploy/env.d/swift-remote.yml        |  40 -
etc/openstack_deploy/env.d/swift.yml               |  81 --
etc/openstack_deploy/env.d/utility.yml             |  31 -
etc/openstack_deploy/openstack_user_config.yml.aio |  46 +-
.../openstack_user_config.yml.example              |  86 +-
.../openstack_user_config.yml.prod.example         | 282 ++++++
.../openstack_user_config.yml.test.example         | 144 ++++
etc/openstack_deploy/user_secrets.yml              |  35 +-
etc/openstack_deploy/user_variables.yml            |  40 +-
.../user_variables.yml.prod.example                |   9 +
global-requirement-pins.txt                        |  16 +-
playbooks/ansible.cfg                              |  24 -
playbooks/common-tasks/mysql-db-user.yml           |  36 +
playbooks/common-tasks/os-log-dir-setup.yml        |  42 +
playbooks/common-tasks/os-lxc-container-setup.yml  | 128 +++
playbooks/common-tasks/package-cache-proxy.yml     |  49 ++
playbooks/common-tasks/rabbitmq-vhost-user.yml     |  36 +
playbooks/defaults/repo_packages/gnocchi.yml       |  38 +
playbooks/defaults/repo_packages/nova_consoles.yml |  39 +
.../defaults/repo_packages/openstack_other.yml     |  43 -
.../defaults/repo_packages/openstack_services.yml  | 104 ++-
.../defaults/repo_packages/openstack_testing.yml   |  39 +
playbooks/defaults/repo_packages/projectcalico.yml |  23 +
playbooks/etcd-install.yml                         |  31 +
playbooks/galera-install.yml                       |  65 +-
playbooks/haproxy-install.yml                      | 119 +--
playbooks/inventory/dynamic_inventory.py           | 601 +++++++++----
playbooks/inventory/env.d/aodh.yml                 |  34 +
playbooks/inventory/env.d/ceilometer.yml           |  57 ++
playbooks/inventory/env.d/cinder.yml               |  75 ++
playbooks/inventory/env.d/galera.yml               |  40 +
playbooks/inventory/env.d/glance.yml               |  44 +
playbooks/inventory/env.d/gnocchi.yml              |  41 +
playbooks/inventory/env.d/haproxy.yml              |  38 +
playbooks/inventory/env.d/heat.yml                 |  58 ++
playbooks/inventory/env.d/horizon.yml              |  39 +
playbooks/inventory/env.d/ironic.yml               |  64 ++
playbooks/inventory/env.d/keystone.yml             |  38 +
playbooks/inventory/env.d/magnum.yml               |  39 +
playbooks/inventory/env.d/memcache.yml             |  39 +
playbooks/inventory/env.d/neutron.yml              |  80 ++
playbooks/inventory/env.d/nova.yml                 | 115 +++
playbooks/inventory/env.d/os-infra.yml             |  22 +
playbooks/inventory/env.d/pkg_repo.yml             |  38 +
playbooks/inventory/env.d/rabbitmq.yml             |  39 +
playbooks/inventory/env.d/rsyslog.yml              |  38 +
playbooks/inventory/env.d/sahara.yml               |  38 +
playbooks/inventory/env.d/shared-infra.yml         |  22 +
playbooks/inventory/env.d/swift-remote.yml         |  39 +
playbooks/inventory/env.d/swift.yml                |  77 ++
playbooks/inventory/env.d/unbound.yml              |  36 +
playbooks/inventory/env.d/utility.yml              |  39 +
playbooks/inventory/group_vars/all.yml             | 425 ++++++++-
playbooks/inventory/group_vars/all_containers.yml  |  24 +-
playbooks/inventory/group_vars/aodh_all.yml        |  20 +
playbooks/inventory/group_vars/ceilometer_all.yml  |  29 +
playbooks/inventory/group_vars/cinder_all.yml      |  28 +
playbooks/inventory/group_vars/cinder_volume.yml   |  17 +
playbooks/inventory/group_vars/galera_all.yml      |  19 +
playbooks/inventory/group_vars/glance_all.yml      |  23 +
playbooks/inventory/group_vars/gnocchi_all.yml     |  29 +
playbooks/inventory/group_vars/haproxy_all.yml     |  20 +
playbooks/inventory/group_vars/heat_all.yml        |  20 +
playbooks/inventory/group_vars/horizon_all.yml     |  34 +
playbooks/inventory/group_vars/hosts.yml           | 279 +-----
playbooks/inventory/group_vars/ironic_all.yml      |  21 +
playbooks/inventory/group_vars/keystone_all.yml    |  23 +
playbooks/inventory/group_vars/magnum_all.yml      |  28 +
playbooks/inventory/group_vars/memcached.yml       |  19 +
playbooks/inventory/group_vars/neutron_agent.yml   |  20 +
playbooks/inventory/group_vars/neutron_all.yml     |  24 +
.../group_vars/neutron_calico_dhcp_agent.yml       |  99 +++
playbooks/inventory/group_vars/nova_all.yml        |  23 +
playbooks/inventory/group_vars/rabbitmq_all.yml    |  22 +
playbooks/inventory/group_vars/repo_all.yml        |  47 +
playbooks/inventory/group_vars/rsyslog.yml         |  19 +
playbooks/inventory/group_vars/sahara_all.yml      |  17 +
playbooks/inventory/group_vars/swift_all.yml       |  27 +
playbooks/inventory/group_vars/utility_all.yml     |  54 ++
playbooks/lxc-containers-create.yml                |   9 +-
playbooks/lxc-containers-destroy.yml               |  39 +-
playbooks/lxc-hosts-setup.yml                      |  35 +-
playbooks/memcached-install.yml                    |  44 +-
playbooks/openstack-hosts-setup.yml                |   5 +-
playbooks/os-aodh-install.yml                      |  95 +-
playbooks/os-ceilometer-install.yml                | 103 +--
playbooks/os-cinder-install.yml                    | 201 ++---
playbooks/os-glance-install.yml                    | 162 ++--
playbooks/os-gnocchi-install.yml                   |  70 ++
playbooks/os-heat-install.yml                      | 134 +--
playbooks/os-horizon-install.yml                   |  86 +-
playbooks/os-ironic-install.yml                    |  63 ++
playbooks/os-keystone-install.yml                  | 172 ++--
playbooks/os-magnum-install.yml                    |  64 ++
playbooks/os-neutron-install.yml                   | 187 ++--
playbooks/os-nova-install.yml                      | 181 ++--
playbooks/os-rally-install.yml                     |  34 +
playbooks/os-sahara-install.yml                    |  74 ++
playbooks/os-swift-install.yml                     | 164 +---
playbooks/os-swift-setup.yml                       | 156 ----
playbooks/os-swift-sync.yml                        |   5 +-
playbooks/os-tempest-install.yml                   |  16 +-
playbooks/rabbitmq-install.yml                     |  73 +-
playbooks/repo-build.yml                           | 102 ++-
playbooks/repo-server.yml                          |  87 +-
playbooks/roles/ceph_client/defaults/main.yml      |  89 --
playbooks/roles/ceph_client/handlers/main.yml      |  25 -
playbooks/roles/ceph_client/meta/main.yml          |   6 -
playbooks/roles/ceph_client/tasks/ceph_all.yml     |  43 -
playbooks/roles/ceph_client/tasks/ceph_auth.yml    | 151 ----
playbooks/roles/ceph_client/tasks/ceph_config.yml  |  61 --
.../roles/ceph_client/tasks/ceph_get_mon_host.yml  |  40 -
playbooks/roles/ceph_client/tasks/ceph_install.yml |  47 -
.../roles/ceph_client/tasks/ceph_preinstall.yml    |  77 --
playbooks/roles/ceph_client/tasks/main.yml         |  29 -
.../ceph_client/templates/ceph.client.keyring.j2   |   2 -
playbooks/roles/ceph_client/templates/ceph.conf.j2 |   7 -
.../roles/ceph_client/templates/ceph_pin.pref.j2   |   5 -
.../roles/ceph_client/templates/secret.xml.j2      |   7 -
playbooks/roles/ceph_client/vars/main.yml          |  51 --
playbooks/roles/haproxy_server/CONTRIBUTING.rst    |  85 --
playbooks/roles/haproxy_server/LICENSE             | 202 -----
playbooks/roles/haproxy_server/README.rst          |  26 -
playbooks/roles/haproxy_server/defaults/main.yml   |  86 --
.../roles/haproxy_server/files/haproxy-logging.cfg |   6 -
.../roles/haproxy_server/files/haproxy.default     |   8 -
playbooks/roles/haproxy_server/files/haproxy.sh    | 171 ----
playbooks/roles/haproxy_server/handlers/main.yml   |  33 -
playbooks/roles/haproxy_server/meta/main.yml       |  32 -
.../haproxy_server/tasks/haproxy_add_ppa_repo.yml  | 103 ---
.../roles/haproxy_server/tasks/haproxy_install.yml |  66 --
.../haproxy_server/tasks/haproxy_post_install.yml  |  44 -
.../haproxy_server/tasks/haproxy_pre_install.yml   |  41 -
.../tasks/haproxy_service_config.yml               |  23 -
.../tasks/haproxy_ssl_configuration.yml            |  69 --
playbooks/roles/haproxy_server/tasks/main.yml      |  26 -
.../roles/haproxy_server/templates/haproxy.cfg.j2  |  36 -
.../haproxy_server/templates/haproxy_pin.pref.j2   |   5 -
.../roles/haproxy_server/templates/service.j2      |  56 --
playbooks/rsyslog-install.yml                      |  62 +-
playbooks/security-hardening.yml                   |  10 +-
playbooks/setup-hosts.yml                          |   1 +
playbooks/setup-infrastructure.yml                 |   4 +-
playbooks/setup-openstack.yml                      |  11 +
playbooks/unbound-install.yml                      |  94 ++
playbooks/utility-install.yml                      | 129 ++-
playbooks/vars/configs/haproxy_config.yml          | 219 +++--
playbooks/vars/configs/keepalived_haproxy.yml      |  70 +-
...and-1035-container-update-6e880e4b45e11cf0.yaml |  15 +
.../notes/RFC1034-5_hostname-1ee18e06e8f57853.yaml |   8 +
...FC1034-5_hostname_upgrade-677da788600edbca.yaml |   5 +
.../notes/add-ca-certs-2398cb4856356028.yaml       |   6 +
.../add-disk-image-type-932898aca944f14a.yaml      |   4 +
.../add-gnocchi-integrations-40eef52bf255ab0b.yaml |   7 +
...-ironic-dashboard-support-3eb5168d71e4dddd.yaml |   5 +
...-ironic-dashboard-support-769d60881f0e12d9.yaml |   5 +
...-magnum-dashboard-support-4fcddedffb83bc28.yaml |   5 +
...-magnum-dashboard-support-e41ac6fb6bc14946.yaml |   5 +
...stone-admin-roles-setting-83198a721c64ee3c.yaml |   5 +
...-container-restart-option-8c7f5b20b9414ead.yaml |   8 +
.../notes/add-magnum-to-repo-548f243b3a253b04.yaml |   5 +
...dd-network-name-variables-d658745d7113110e.yaml |   8 +
...nova-extensions-blacklist-8ed18f45aba6a7fb.yaml |  11 +
.../notes/add-nova-lxd-f094438e4bf36d52.yaml       |   6 +
.../notes/add-qemu-conf-d42337dfd42bac6f.yaml      |   4 +
.../notes/add-v38438-3f7e905892be4b4f.yaml         |  21 +
.../notes/add-xenial-support-3dc3711e5b1bdc34.yaml |   4 +
.../notes/add-xenial-support-5c117335b7b7b407.yaml |   3 +
.../notes/add-xenial-support-7c24aa813289aa40.yaml |   3 +
.../notes/add-xenial-support-e285a643a39f0438.yaml |   4 +
.../notes/adding-v38526-381a407caa566b14.yaml      |   8 +
.../notes/adding-v38548-9c51b30bf9780ff3.yaml      |   8 +
.../notes/aide-exclude-run-4d3c97a2d08eb373.yaml   |   6 +
.../aide-initialization-fix-16ab0223747d7719.yaml  |  17 +
.../ansible-fact-subset-08e582fcf7ba4e4e.yaml      |  13 +
.../notes/ansible-forks-fa70caf5155c5d25.yaml      |   4 +
.../ansible-role-fetch-mode-cd163877e96d504a.yaml  |   5 +
...ackage-pinning-dependency-6e2e94d829508859.yaml |   4 +
...pecific-package-locations-e76512288aaf6fa0.yaml |   8 +
...diting-mac-policy-changes-fb83e0260a6431ed.yaml |  15 +
.../notes/augenrules-restart-39fe3e1e2de3eaba.yaml |   5 +
.../base-container-lvm-cow-2faa824f6cd4b083.yaml   |  14 +
.../base-container-overlayfs-ec7eeda2f5807e96.yaml |  11 +
.../notes/bindmount-logs-3c23aab5b5ed3440.yaml     |  25 +
.../broader-image-support-69241983e5a36018.yaml    |  30 +
...-default-os-endpoint-type-3adf9db32764ddf3.yaml |   6 +
.../notes/centos-7-support-d96233f41f63cfb8.yaml   |   3 +
.../ceph-from-uca-and-distro-2fa04e03c39a61bc.yaml |  21 +
.../change-default-collation-260d932780ef4553.yaml |   5 +
.../notes/combine_pip_roles-ba524dbaa601e1a1.yaml  |   6 +
.../compress-customization-a7d03162d837085f.yaml   |   5 +
.../config_check_argument-5a1126c779e3e8f5.yaml    |   7 +
...plate-MultiStrOps-support-c28e33fd5044e14d.yaml |  29 +
...figurable-martian-logging-370ede40b036db0b.yaml |  13 +
...figurable_inventory_group-9f5b193221b7006d.yaml |   7 +
.../container-bind-mounts-1a3a763178255841.yaml    |  12 +
.../container-config-list-a98937ae0ff94cf0.yaml    |  10 +
...container-create-commands-b3aa578309fa665b.yaml |   8 +
.../container-create-lvm-cow-77c049188b8a2676.yaml |   6 +
...ontainer-create-overlayfs-46f3c4c0ecacaadf.yaml |   7 +
...container-repo-host-match-2be99b14642e0591.yaml |  12 +
...ntainer-resolv-host-match-c6e3760cf4a8e5cd.yaml |   6 +
...iner-static-mac-addresses-9aae098fdc8a57cc.yaml |  15 +
.../db-create-in-playbooks-6fb8232da53fe1e1.yaml   |   8 +
...riadb-waittimeout-setting-ddaae0f2e1d31ee1.yaml |   5 +
...e-host-security-hardening-eb73923218abbc2c.yaml |   7 +
...enstack-host-apt-packages-b4d7af53d55d980d.yaml |   5 +
...ate-rabbitmq_apt_packages-b85ea1b449dc136e.yaml |   5 +
...precate-repo-apt-packages-f8c4a22fc60828bf.yaml |   5 +
...ect-cinder-backup-service-7dc68f532741be87.yaml |  13 +
...lance-default-store-swift-b9c36f4a2fe05ec4.yaml |  11 +
.../notes/detect_power-a6a679c8c3dd3262.yaml       |   4 +
...tionary-variables-removed-957c7b7b2108ba1f.yaml |   9 +
...iled-access-audit-logging-789dc01c8bcbef17.yaml |   6 +
...sable-graphical-interface-5db89cd1bef7e12d.yaml |  13 +
.../disable-list-extend-3a9547de9034f9ba.yaml      |  10 +
...isable-netconsole-service-915bb33449b4012c.yaml |   7 +
...le_slave_repo_during_sync-2aaabf90698221e3.yaml |   9 +
.../disabling-rdisc-centos-75115b3509941bfa.yaml   |   8 +
...mic-ceilometer-enablement-18be0bb994ede62a.yaml |   7 +
.../dynamic_tunnel_types-3eb1aa46a0ca9a19.yaml     |  12 +
.../notes/enable-lbaas-aio-9a428c459a10aeda.yaml   |   3 +
.../notes/enable-lsm-bae903e463079a3f.yaml         |  14 +
...ble-tcp-syncookes-boolean-4a884a66a3a0e4d7.yaml |  11 +
...nable_pip_install_options-7c2131c89f90b2c6.yaml |   6 +
.../notes/export-hosts-flag-9c9f170eb89798ea.yaml  |   6 +
.../extra-ceph-clusters-00ad154ffb0589a6.yaml      |   7 +
.../notes/extra-ceph-conf-337b9371b49219ff.yaml    |   5 +
...-audit-log-permission-bug-81a772e2e6d0a5b3.yaml |  10 +
.../fix-check-mode-with-tags-bf798856a27c53eb.yaml |   7 +
.../notes/force-dep-order-2c529683509e45da.yaml    |   9 +
...force-cluster-name-change-b4ce1e225daa840c.yaml |  15 +
releasenotes/notes/git-cache-df0afe90d4029f68.yaml |   6 +
.../notes/git-cache-staged-b9cb0e277478b19a.yaml   |   9 +
.../glance-1604-support-e65870170a925bfe.yaml      |   3 +
.../glance-packages-rename-abd348b0725e4b7b.yaml   |   4 +
.../gnocchi-metrics-service-6a7bdda8e7e71dda.yaml  |   9 +
...ndling-sshd-match-stanzas-fa40b97689004e46.yaml |   7 +
.../haproxy-centos-support-de39c19d6a89b6a5.yaml   |  11 +
.../haproxy-endpoint-toggle-aa9e7e3efc4d6861.yaml  |   4 +
.../haproxy-extra-configs-67a77803494d3e97.yaml    |   8 +
...aproxy-git-server-backend-862e004e61a43292.yaml |   8 +
...oxy-package-cache-backend-da096228387bc1f4.yaml |  13 +
.../haproxy_ssl_terminiation-cdf0092a5bfa34b5.yaml |  31 +
.../hipe-compile-option-c100e8676a806950.yaml      |   7 +
.../horizon-arbitrary-config-8a36e4bd6818afe1.yaml |   6 +
...ble-password-autocomplete-5f8f78a6c8f1edb3.yaml |   5 +
.../horizon-servername-fix-1ac632f205c45ee9.yaml   |   5 +
.../horizon_custom_themes-4ee1fd9444b8a5ae.yaml    |   6 +
...implement-centos7-support-cf6b6ee0d606223f.yaml |   3 +
.../implement-xenial-support-0de6444c53337d46.yaml |  12 +
.../notes/implemented-v38524-b357edec95128307.yaml |  12 +
.../improved-audit-rule-keys-9fa85f758386446c.yaml |   5 +
.../notes/install-local-019edab04ffc8347.yaml      |   8 +
.../intree-and-override-envd-371cf9a809b51fe0.yaml |  14 +
.../inventory-debug-flag-ead0ae2a2a1d7b90.yaml     |   6 +
...y-main-function-arguments-8c43e4c7175937d3.yaml |   6 +
...ry_host_containers_naming-d1f42a0c91d68154.yaml |  11 +
.../ironic-1604-support-b9ebb12ee4d78275.yaml      |   3 +
.../notes/ironic-integration-264c4ed622a3a04e.yaml |   6 +
...e-mysql-password-variable-ec33f37ba6c4fac1.yaml |  16 +
.../notes/isolate-ansible-3e8fcfdff9962a9b.yaml    |   9 +
...d-default-cidr-workaround-8f2b5a0b074898e1.yaml |   9 +
.../notes/keepalived-upgrade-e63a11b7d4dcba20.yaml |  22 +
..._user_and_project_support-e35b0b335b6522e9.yaml |  42 +
.../lbaasv2-horizon-panel-8f99026b025ca2fd.yaml    |   9 +
...2-service-provider-config-57d394bdc64f632e.yaml |   5 +
.../notes/list-extend-toggle-46a75ded97b7ce02.yaml |   6 +
...ration-default-set-to-ssh-6add1dbdeea43509.yaml |   5 +
.../notes/lxc-cache-gpg-156169a867d4653f.yaml      |   7 +
...xc-container-multi-distro-f495f73951fafd1a.yaml |  29 +
...lxc-container-start-delay-d7917f69d9469316.yaml |   6 +
.../lxc-host-setup-refactor-e43559764af67fea.yaml  |  29 +
.../notes/lxc-hosts-limit-9784050b888ea7c8.yaml    |   7 +
.../notes/lxc_hosts-group-a5643e7010d6b151.yaml    |   6 +
.../make-ha-router-a-toggle-9d87d688e8d506c9.yaml  |   4 +
.../make-ha-router-a-toggle-eefd61fc7978240d.yaml  |   4 +
.../notes/make-ipv6-a-toggle-63d9c839e204cdda.yaml |  14 +
...ment_network_config_check-66778387f38b9e0c.yaml |   8 +
.../mariadb-rolling-upgrades-323510425c3c7751.yaml |   8 +
.../memcached-logging-change-8825c2bdbcf824b9.yaml |  10 +
...server-add-nofile-setting-504e0c50e10a4ea6.yaml |   9 +
.../metadata-proxy-cleanup-eed6ff482035dc83.yaml   |  10 +
.../mitaka-deprecations-72bec69c1395261d.yaml      |  10 +
.../notes/multi-arch-build-1ad512acdf6cabb9.yaml   |   7 +
.../notes/multi-arch-support-a8762f6ea7fdbcef.yaml |   8 +
.../notes/multi-distro-add-0e53560f66394691.yaml   |   6 +
.../multiple-ips-for-host-f27cb1f1e878640d.yaml    |   4 +
...tron-agent-dynamic-enable-47f0c709ef0dfe55.yaml |  15 +
.../notes/neutron-bgp-552e6e1f6d37f38d.yaml        |   9 +
.../notes/neutron-calico-2332b0972708af8a.yaml     |   5 +
...n-conditional-overlay-net-eeb6ffefbe01c188.yaml |   7 +
.../notes/neutron-dhcp-mtu-8767de6f541b04c1.yaml   |   8 +
.../neutron-mtu-cleanup-ce73693b4f7aef0d.yaml      |   9 +
...neutron-network-variables-ff6d2c7f8c7c3ccd.yaml |  10 +
...neutron-networking-calico-b05b08f989f768ee.yaml |   5 +
...n-openvswitch-agent-group-a63da4af11202790.yaml |   9 +
.../neutron-ovs-powervm-116662f169e17175.yaml      |  18 +
.../notes/neutron-vpnaas-5c7c6508f2cc05c5.yaml     |   8 +
.../notes/neutron_ovs_dvr-7fca77cac0545441.yaml    |  11 +
.../ng-instance-management-f9134fc283aa289c.yaml   |  16 +
.../nova-admin-endpoint-fix-d52cc00caa5ab5dd.yaml  |   6 +
...console-proxy-git-cleanup-cdeffd3f0d040275.yaml |   8 +
...-largecluster-key-inserts-afc8cac63af41087.yaml |  12 +
.../notes/nova-powervm-b4eddae30abbd08e.yaml       |   5 +
.../notes/nova-uca-support-409b2e6afbce47b1.yaml   |  10 +
...ind-local-interfaces-only-05f03de632e81097.yaml |   5 +
.../online-lxc-network-add-3cfc84ea28e5eab0.yaml   |   5 +
.../openvswitch-support-1b71ae52dde81403.yaml      |  14 +
...egy-and-connection-plugin-bc476fa3607dcc4a.yaml |  11 +
...-glance-only-install-venv-0271d3238c0d561c.yaml |   6 +
...gnocchi-only-install-venv-4e532f44fcf5cda5.yaml |   6 +
...os-heat-only-install-venv-e3e8e466dd67c2bc.yaml |   5 +
...apache-log-format-support-34c9ef74b3bcce31.yaml |   5 +
...horizon-only-install-venv-0fd3292d2b61e840.yaml |   6 +
...-ironic-only-install-venv-0da32fc36bfeae2b.yaml |   5 +
...in-token-auth-deprecation-24e84a18f8a56814.yaml |  17 +
...apache-log-format-support-7232177f835222ee.yaml |   4 +
...pache-mpm-tunable-support-1c72f2f99cd502bc.yaml |  17 +
...eystone-only-install-venv-b766568ee8d40354.yaml |   5 +
...e-uwsgi-and-nginx-options-2157f8e40a7a8156.yaml |  22 +
...dle_internal_only_routers-e46092d6f1f7c4b0.yaml |   7 +
...os_aodh-only-install-venv-3c80a0a66824fcd7.yaml |   5 +
...lometer-only-install-venv-f3cd57b4a1d025c5.yaml |   5 +
releasenotes/notes/os_cinder-1604-support.yaml     |   3 +
...os_cinder-centos7-support-732f8feac7241e2a.yaml |   4 +
..._cinder-only-install-venv-914d5655dd645213.yaml |   5 +
...os_glance-centos7-support-21cb81e361831c9f.yaml |   4 +
..._keystone-centos7-support-0a5d97f81ac42e44.yaml |   4 +
.../os_magnum-install-venv-30263e29e51a2610.yaml   |   5 +
...um-xenial-systemd-support-2e1ee4253dff2b5c.yaml |   4 +
...neutron-only-install-venv-ca3bf63ed0507e4b.yaml |   6 +
.../os_nova-install-venv-6c6c2ba28f67a891.yaml     |   5 +
.../os_rally-install-venv-71cbd1f6ce4fd983.yaml    |   5 +
..._sahara-only-install-venv-8ead48687897ce0b.yaml |   6 +
...s_swift-only-install-venv-fdd5d41759433cf8.yaml |   5 +
...package-list-name-changes-007cacee4faf8ee6.yaml |  10 +
...package-list-name-changes-38f1554097b6bbe9.yaml |   4 +
...package-list-name-changes-4a42f561dac5754e.yaml |   4 +
...package-list-name-changes-4d5ad2e6ff5ecae2.yaml |   4 +
...package-list-name-changes-6f74fbf336030242.yaml |   8 +
...package-list-name-changes-7c8a6dd652b271cf.yaml |   8 +
...package-list-name-changes-7fcd5583f0db0eb6.yaml |   6 +
...package-list-name-changes-a26d94a44c24de2f.yaml |   6 +
...package-list-name-changes-a5571c0b72faadf2.yaml |   4 +
...package-list-name-changes-a86f7e7c805c2d81.yaml |  10 +
...package-list-name-changes-b484be7645bbe66a.yaml |   4 +
...package-list-name-changes-e351db8b482f1326.yaml |   6 +
...package-list-name-changes-e6f88d12f3bd9fa0.yaml |   4 +
...package-list-name-changes-e7a3fc551d742d23.yaml |   4 +
...package-list-name-changes-fdf9c6573bfa1083.yaml |   4 +
.../notes/package-state-003ff33c557af3b5.yaml      |  13 +
.../notes/package-state-1d27f4c7f8618cef.yaml      |  13 +
.../notes/package-state-2e8e2eb4b24475c4.yaml      |  13 +
.../notes/package-state-38187ec5242a005b.yaml      |  13 +
.../notes/package-state-3bf07796262fc9b9.yaml      |  13 +
.../notes/package-state-441864557ee5d75b.yaml      |  13 +
.../notes/package-state-48e933a395bbdc0c.yaml      |  13 +
.../notes/package-state-505f9772bb0d668e.yaml      |  14 +
.../notes/package-state-55fceaf0cd23147e.yaml      |  13 +
.../notes/package-state-63a870de53dd5cd8.yaml      |  13 +
.../notes/package-state-646b25638f523411.yaml      |  13 +
.../notes/package-state-6684c5634bdf127a.yaml      |  13 +
.../notes/package-state-6f5ce66be8ddf119.yaml      |  12 +
.../notes/package-state-711a1eb4814311cc.yaml      |  13 +
.../notes/package-state-7caea8f1db708a2e.yaml      |  13 +
.../notes/package-state-7cbc7179b51ecdde.yaml      |  13 +
.../notes/package-state-7d62ea1e50ad391b.yaml      |  13 +
.../notes/package-state-8b0189f8824b7568.yaml      |  13 +
.../notes/package-state-979c963fb18f7a25.yaml      |  13 +
.../notes/package-state-9a2f60adb4ab68cd.yaml      |  13 +
.../notes/package-state-ab251d8987422f59.yaml      |  13 +
.../notes/package-state-b032231a3cc99ee0.yaml      |  13 +
.../notes/package-state-b41a0e911ad95d1c.yaml      |  13 +
.../notes/package-state-b7a3d3c242e2c3aa.yaml      |  13 +
.../notes/package-state-bb93a1d4b272425d.yaml      |  13 +
.../notes/package-state-c9c7e01e77b596d0.yaml      |  14 +
.../notes/package-state-ed22b9a6683690b3.yaml      |  13 +
.../notes/package-state-f2309b07440d0ae8.yaml      |  13 +
.../notes/package-state-fb7d26a4b7c41a77.yaml      |  13 +
.../notes/package-state-fda322f5e667bbec.yaml      |  13 +
.../notes/package-var-rename-6ec3af6242073a2e.yaml |   4 +
.../notes/package_var_rename-9a55f7030595fdef.yaml |   4 +
...paramiko-2-0-dependencies-9a7c7fe9aeb394e4.yaml |   6 +
.../notes/path-customization-e7e0ae0f93e5283b.yaml |   4 +
.../notes/pip-source-store-d94ff2b68a99481a.yaml   |  10 +
.../notes/pkg-cacher-cfeae8fb990904a4.yaml         |   6 +
.../notes/policy-override-522df5699f09c417.yaml    |   6 +
...tmw-management-ui-haproxy-e9f9ec0343484f2d.yaml |  17 +
.../notes/rally_play-82fa27d8ba2ce22d.yaml         |   3 +
.../reduce-auditd-logging-633677a74aee5481.yaml    |  25 +
.../notes/remove-ansible.cfg-e65e4f17bc30cce7.yaml |  17 +
.../remove-container-release-fa49ff23ca8c1324.yaml |   6 +
.../notes/remove-lbaasv1-26044c48b5d3b508.yaml     |   8 +
...nfig-from-openstack-hosts-efb7d0b3a22d49df.yaml |   6 +
.../notes/remove-overrides-17ef7d0496f6a6c7.yaml   |   5 +
...move-rsyslog_client_repos-055ce574bee8bd14.yaml |   4 +
...emove-upgrade-gate-checks-3fbe339e06094681.yaml |   3 +
.../notes/remove-xtrabackup-0513a40593f2d0e3.yaml  |   7 +
.../notes/remove_infra_group-45e7747e341d97cf.yaml |   9 +
.../notes/remove_verbose_var-c22f4946eedbc5f2.yaml |   5 +
.../notes/remove_verbose_var-e88f65e0c7c440f4.yaml |   4 +
.../removed-aodh-api-init-9e2406629196efff.yaml    |   4 +
...moved-ceilometer-api-init-a4bfc4cbabcbcb16.yaml |   4 +
.../removed-db-create-tasks-276095a2293ed4ee.yaml  |   5 +
.../removed-db-create-tasks-3deea562441871c6.yaml  |   5 +
.../removed-db-create-tasks-4560d4b960383c4e.yaml  |   5 +
.../removed-db-create-tasks-8ae301041fe46cfb.yaml  |   5 +
.../removed-db-create-tasks-8d931286d6347bc6.yaml  |   5 +
.../removed-db-create-tasks-eed527e915f23ee0.yaml  |   5 +
.../removed-neutron-ha-tool-dd7a4717e03163f9.yaml  |  13 +
.../rename-pip-packages-tmp-f40dc7599684466a.yaml  |   5 +
...e-repo-build-apt-packages-df1ca334b857787a.yaml |   5 +
...ild-fix-upper-constraints-9e24c56520538df2.yaml |   5 +
...-build-use-uca-by-default-bde8ded7d72cd42c.yaml |   4 +
.../notes/rhel-gpg-check-0b483a824314d1b3.yaml     |   7 +
...og-client-centos7-support-bf5dd55ef6488a20.yaml |   4 +
...-client-logrotate-options-02dde942779493bb.yaml |   6 +
...log-remote-log-separation-76de4b64f0c18edb.yaml |   8 +
.../run-playbooks-refactor-c89400feb692cd91.yaml   |   6 +
...a-data-processing-service-8e63ebed6baf08bc.yaml |   5 +
.../sahara-horizon-panel-d80d17da528b4c07.yaml     |   9 +
...rch-for-unlabeled-devices-cb047c5f767e93ce.yaml |   6 +
.../selective-git-clone-77d766cc0eaa2175.yaml      |   8 +
.../selective-venv-build-dd9f0e40cd1cc076.yaml     |   8 +
.../selective-wheel-build-34b1c154bb548ed7.yaml    |   8 +
.../notes/service-conf-path-b27cab31dbc72ad4.yaml  |   6 +
.../notes/ssh-pub-key-check-c42309653dbe3493.yaml  |   5 +
.../static_route_error_check-5e7ed6ddf9eb1d1f.yaml |  11 +
...support-for-centos-xenial-2b89c318cc3df4b0.yaml |   5 +
...bal_environment_variables-46cd4d90279fd0e9.yaml |   5 +
.../support-ubuntu-xenial-958e8128ed6578cd.yaml    |   3 +
.../notes/swift-conf-b8dd5e1199f8e4a8.yaml         |   9 +
.../swift-fallocate-reserve-ff513025da68bfed.yaml  |  11 +
.../swift-force-hash-change-45b09eeb8b0368a6.yaml  |  14 +
.../swift-fs-file-limits-a57ab8b4c3c944e4.yaml     |  11 +
.../swift-pretend-mph-passed-7e5c15eeb35861c3.yaml |  17 +
.../notes/swift-pypy-support-9706519c4b88a571.yaml |  15 +
...onfigure-xfs-from-mlocate-e4844e6c0469afd6.yaml |   5 +
.../swift-rings-port-change-4a95bbd9b63fb201.yaml  |  11 +
...ft-rsync-module-per-drive-79b05af8276e7d6e.yaml |  12 +
.../swift-staticweb-support-b280fbebf271820b.yaml  |   9 +
.../swift-syslog-log-perms-5a116171a1adeae3.yaml   |   6 +
...virt_save_dir_to_nova_dir-3b1b278cb7e5831f.yaml |   8 +
.../notes/ubuntu-ppc64le-cab45e63dca77017.yaml     |   4 +
.../notes/ubuntu_ppc64le-581e5fcd5950186e.yaml     |   6 +
.../notes/unbound-dns-e0b591be4fa2b050.yaml        |   6 +
...unique-variable-migration-c0639030b495438f.yaml |  20 +
.../update-aodh-integration-fd2a27e8864bd8ff.yaml  |  10 +
...dated-neutron-plugin_base-25b5dcacc87acd0f.yaml |   2 +-
.../notes/upgrade-lxc-4750ba9aea7b5cd1.yaml        |   6 +
...pper-constraints-override-6853ffec6c07d7f5.yaml |   9 +
.../notes/use-galera-storage-d1a51c051d2740ad.yaml |  14 +
.../notes/use-uca-by-default-070751b0b388fcbe.yaml |   4 +
...utility_container_ssh_key-44b1d15a1c06395e.yaml |   6 +
.../notes/var-deprecations-417d87b9d386466a.yaml   |  11 +
...trabackup-compact-disable-8ae9215207147ebc.yaml |   4 +
releasenotes/source/conf.py                        |  42 +-
releasenotes/source/index.rst                      |   3 +-
releasenotes/source/mitaka.rst                     |   7 +-
releasenotes/source/unreleased.rst                 |   5 +
requirements.txt                                   |  29 +-
scripts/ansible-role-requirements-editor.py        | 104 +++
scripts/bootstrap-aio.sh                           |   2 +-
scripts/bootstrap-ansible.sh                       | 110 ++-
scripts/fastest-infra-wheel-mirror.py              | 170 ++++
scripts/federated-login.sh                         |   2 +-
scripts/gate-check-commit.sh                       |  52 +-
scripts/get-pypi-pkg-version.py                    |   8 +-
scripts/inventory-manage.py                        | 309 +------
scripts/manage_inventory.py                        | 370 ++++++++
scripts/openstack-ansible.rc                       |  49 ++
scripts/os-cmd                                     |  56 ++
scripts/os-detection.py                            |  25 -
scripts/release-yaml-file-prep.py                  | 133 +++
scripts/run-playbooks.sh                           | 222 ++---
scripts/run-tempest.sh                             |   8 +-
scripts/run-upgrade.sh                             | 130 ++-
scripts/scripts-library.sh                         | 189 ++--
scripts/sources-branch-updater.sh                  | 260 ++++--
scripts/teardown.sh                                | 282 ------
.../playbooks/ansible_fact_cleanup.yml             |  25 +
.../playbooks/aodh-api-init-delete.yml             |  47 +
.../playbooks/db-collation-alter.yml               |  57 ++
.../playbooks/deploy-config-changes.yml            |  64 ++
.../playbooks/galera-cluster-rolling-restart.yml   |  58 ++
.../playbooks/lbaas-version-check.yml              |  27 +
.../playbooks/mariadb-apt-cleanup.yml              |  24 +
.../playbooks/memcached-flush.yml                  |  23 +
.../playbooks/old-hostname-compatibility.yml       | 145 ++++
.../playbooks/pip-conf-removal.yml                 |  24 +
.../playbooks/user-secrets-adjustment.yml          |  45 +
.../scripts/ansible_fact_cleanup.sh                |  18 +
.../upgrade-utilities/scripts/make_rst_table.py    |  45 +
.../scripts/migrate_openstack_vars.py              |  70 ++
.../scripts/test_migrate_openstack_vars.py         |  86 ++
setup.cfg                                          |   2 +-
setup.py                                           |  11 +-
test-requirements.txt                              |  28 +-
.../bootstrap-host/tasks/check-requirements.yml    |  16 +-
.../bootstrap-host/tasks/prepare_aio_config.yml    | 144 ++--
.../bootstrap-host/tasks/prepare_data_disk.yml     |  10 +-
.../tasks/prepare_libvirt_service.yml              |  53 --
.../tasks/prepare_loopback_cinder.yml              |   1 +
.../tasks/prepare_loopback_swift.yml               |   1 +
.../tasks/prepare_mongodb_service.yml              |  61 --
.../bootstrap-host/tasks/prepare_mongodb_users.yml |  41 -
.../bootstrap-host/tasks/prepare_networking.yml    |  38 +-
.../bootstrap-host/tasks/prepare_ssh_keys.yml      |  14 +-
.../templates/osa_interfaces_multinode.cfg.j2      |  28 +
.../templates/user_variables.aio.yml.j2            |  75 +-
tox.ini                                            | 186 ++--
721 files changed, 17195 insertions(+), 13538 deletions(-)


Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index 0d5fad6..2938075 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,13 +1,16 @@
-Jinja2>=2.6             # ansible
-netaddr>=0.7.12         # playbooks/inventory/dynamic_inventory.py
-paramiko>=1.13.0        # ansible
-PrettyTable>=0.7,<0.8   # scripts/inventory-manage.py
-pycrypto>=2.6           # ansible
-PyYAML>=3.1.0           # ansible
-###
-### These are pinned to ensure exactly the same behaviour forever!   ###
-### These pins are updated through the sources-branch-updater script ###
-###
-pip==8.1.1
-setuptools==20.6.7
-wheel==0.29.0
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
+pip>=6.0 # MIT
+setuptools!=24.0.0,>=16.0 # PSF/ZPL
+wheel # MIT
+pyasn1 # BSD
+pyOpenSSL>=0.14 # Apache-2.0
+requests>=2.10.0 # Apache-2.0
+ndg-httpsclient>=0.4.2;python_version<'3.0' # BSD
+netaddr!=0.7.16,>=0.7.13 # BSD
+PrettyTable<0.8,>=0.7 # BSD
+pycrypto>=2.6 # Public Domain
+python-memcached>=1.56 # PSF
+PyYAML>=3.1.0 # MIT
+virtualenv # MIT
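
As a side note for readers unfamiliar with the pin syntax above: the
requirements now use standard pip version specifiers and environment
markers rather than hard "==" pins. A minimal sketch, assuming the
third-party "packaging" library is installed (it is not part of this
release), shows how such pins are evaluated:

     from packaging.requirements import Requirement

     setuptools_pin = Requirement("setuptools!=24.0.0,>=16.0")
     # The specifier set accepts anything at or above 16.0 except the
     # excluded 24.0.0 release.
     print(setuptools_pin.specifier.contains("24.0.0"))   # False
     print(setuptools_pin.specifier.contains("20.6.7"))   # True

     ndg_pin = Requirement("ndg-httpsclient>=0.4.2;python_version<'3.0'")
     # The environment marker decides whether the pin applies to the
     # current interpreter at all (here: only on Python 2).
     print(ndg_pin.marker.evaluate())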
diff --git a/test-requirements.txt b/test-requirements.txt
index 1e3f8b5..86fae7a 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -1,8 +1,12 @@
-ansible>1.9,<2.0
-ansible-lint>=2.0.3,<=2.3.6
-bashate==0.5.0 # Apache-2.0
-flake8==2.2.4
-hacking>=0.10.0,<0.11
-mccabe==0.2.1 # capped for flake8
-pep8==1.5.7
-pyflakes==0.8.1
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
+bashate>=0.2 # Apache-2.0
+coverage>=3.6 # Apache-2.0
+flake8<2.6.0,>=2.5.4 # MIT
+hacking<0.11,>=0.10.0
+mccabe==0.2.1 # MIT License
+mock>=2.0 # BSD
+pep8==1.5.7 # MIT
+pyflakes==0.8.1 # MIT
+virtualenv # MIT
@@ -11,3 +15,5 @@ pyflakes==0.8.1
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
-oslosphinx>=2.5.0 # Apache-2.0
-reno>=0.1.1 # Apache-2.0
+sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
+oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
+openstackdocstheme>=1.5.0 # Apache-2.0
+doc8 # Apache-2.0
+reno>=1.8.0 # Apache2
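
Deployers who want to confirm that an existing Python environment still
satisfies the updated pins can do so with a short script along the
following lines. This is a hypothetical sketch, not something shipped
with openstack-ansible; it assumes setuptools (for pkg_resources) and
the third-party "packaging" library are available:

     import pkg_resources
     from packaging.requirements import Requirement

     with open("requirements.txt") as handle:
         for line in handle:
             line = line.split("#", 1)[0].strip()  # drop comments/blanks
             if not line:
                 continue
             req = Requirement(line)
             if req.marker and not req.marker.evaluate():
                 continue  # pin does not apply to this interpreter
             try:
                 installed = pkg_resources.get_distribution(req.name).version
             except pkg_resources.DistributionNotFound:
                 print("%s: not installed" % req.name)
                 continue
             print("%s %s satisfies %s: %s" % (
                 req.name, installed, req.specifier,
                 req.specifier.contains(installed, prereleases=True)))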




