OpenStack-announce
April 2016
We are overjoyed to announce the release of:
neutron 8.0.0: OpenStack Networking
This release is part of the mitaka release series.
For more details, please see below.
8.0.0
^^^^^
The ML2 plug-in supports calculating the MTU for instances using
overlay networks by subtracting the overlay protocol overhead from the
value of 'path_mtu', ideally the physical (underlying) network MTU,
and providing the smaller value to instances via DHCP. Prior to
Mitaka, 'path_mtu' defaults to 0 which disables this feature. In
Mitaka, 'path_mtu' defaults to 1500, a typical MTU for physical
networks, to improve the "out of box" experience for typical
deployments.
The ML2 plug-in supports calculating the MTU for networks that are
realized as flat or VLAN networks, by consulting the 'segment_mtu'
option. Prior to Mitaka, 'segment_mtu' defaults to 0 which disables
this feature. This creates slightly confusing API results when
querying Neutron networks, since the plugins that support the MTU API
extension would return networks with the MTU equal to zero. Networks
with an MTU of zero make little sense, since nothing could ever be
transmitted. In Mitaka, 'segment_mtu' now defaults to 1500, which is
the standard MTU for Ethernet networks, to improve the "out of
box" experience for typical deployments.
The LinuxBridge agent now supports QoS bandwidth limiting.
External networks can now be controlled using the RBAC framework that
was added in Liberty. This allows networks to be made available to
specific tenants (as opposed to all tenants) to be used as an external
gateway for routers and floating IPs.
DHCP and L3 Agent scheduling is availability zone aware.
The "get-me-a-network" feature simplifies the process for launching an
instance with basic network connectivity (via an externally connected
private tenant network).
Support for integration with an external DNS service.
Add popular IP protocols to the security group code. End users can
specify protocol names instead of protocol numbers in both the
RESTful API and the python-neutronclient CLI.
ML2: ports can now recover from the binding-failed state.
RBAC support for QoS policies.
Add a description field to security group rules, networks, ports,
routers, floating IPs, and subnet pools.
Add a tag mechanism for network resources.
Timestamp fields are now added to neutron core resources.
Announcement of tenant prefixes and host routes for floating IPs via
BGP is supported.
Allowed address pairs can now be cleared by passing None in addition
to an empty list. This makes it possible to use the --action=clear
option with the neutron client:
neutron port-update <uuid> --allowed-address-pairs action=clear
Core configuration files are automatically generated.
max_fixed_ips_per_port has been deprecated and will be removed in the
Newton or Ocata cycle, depending on when all identified use cases of
the option are satisfied via another quota system.
OFAgent is decomposed and deprecated in the Mitaka cycle.
Add new VNIC type for SR-IOV physical functions.
High Availability (HA) of SNAT service is supported for Distributed
Virtual Routers (DVRs).
An OVS agent configured to run in DVR mode will fail to start if it
cannot get proper DVR configuration values from the server on start-
up. The agent will no longer fallback to non-DVR mode, since it may
lead to inconsistency in the DVR-enabled cluster as the Neutron server
does not distinguish between DVR and non-DVR OVS agents.
Improve DVR's resiliency during Nova VM live migration events.
The Linuxbridge agent now supports l2 agent extensions.
Added a MacVtap ML2 driver and L2 agent as a new vswitch choice.
Support for MTU selection and advertisement.
Neutron now provides network IP availability information.
Neutron is integrated with Guru Meditation Reports library.
oslo.messaging.notify.drivers entry points are deprecated
New Features
************
* In Mitaka, the combination of 'path_mtu' defaulting to 1500 and
'advertise_mtu' defaulting to True provides a value of MTU
accounting for any overlay protocol overhead on the network to
instances using DHCP. For example, an instance attaching to a VXLAN
network receives a 1450 MTU from DHCP accounting for 50 bytes of
overhead from the VXLAN overlay protocol if using IPv4 endpoints.
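As a quick illustration of the arithmetic above (a sketch, not Neutron's actual implementation; the 50-byte figure for VXLAN over IPv4 endpoints is taken from this note):

```python
# Sketch of the MTU calculation described above; not Neutron's code.
# The 50-byte VXLAN-over-IPv4 overhead figure comes from this note.
VXLAN_IPV4_OVERHEAD = 50

def instance_mtu(path_mtu, overlay_overhead):
    """MTU advertised to instances: underlying network MTU minus overlay overhead."""
    return path_mtu - overlay_overhead

# With the Mitaka default path_mtu of 1500, instances on a VXLAN
# network with IPv4 endpoints receive 1450 from DHCP.
print(instance_mtu(1500, VXLAN_IPV4_OVERHEAD))  # prints 1450
```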
* In Mitaka, queries to the Networking API for network objects will
now return network objects that contain a sane MTU value.
* The LinuxBridge agent can now configure basic bandwidth-limiting
QoS rules for ports and networks. This introduces two new
configuration options for the LinuxBridge agent. The first,
'kernel_hz', is the value of the host kernel's HZ setting, which is
necessary for proper calculation of the minimum burst value in the
tbf qdisc settings. The second, 'tbf_latency', is the latency to be
configured in the tc-tbf settings. Details about this option can be
found in the tc-tbf manual (http://linux.die.net/man/8/tc-tbf)
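For example, the two options might be set in the LinuxBridge agent configuration file as follows (the section name and values are illustrative assumptions, not recommendations):

```ini
# Illustrative values only; set kernel_hz to match the host kernel's HZ.
[qos]
kernel_hz = 250
tbf_latency = 50
```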
* External networks can now be controlled using the RBAC framework
that was added in Liberty. This allows networks to be made available
to specific tenants (as opposed to all tenants) to be used as an
external gateway for routers and floating IPs. By default this
feature will also allow regular tenants to make their networks
available as external networks to other individual tenants (or even
themselves), but they are prevented from using the wildcard to share
to all tenants. This behavior can be adjusted via policy.json by the
operator if desired.
* A DHCP agent is assigned to an availability zone; the network will
be hosted by the DHCP agent with the availability zone specified by
the user.
* An L3 agent is assigned to an availability zone; the router will
be hosted by the L3 agent with the availability zone specified by the
user. This supports the use of availability zones with HA routers.
DVR is not yet supported because L3 HA and DVR integration is not
finished.
* Once Nova takes advantage of this feature, a user can launch an
instance without explicitly provisioning network resources.
* Floating IPs can have dns_name and dns_domain attributes
associated with them.
* Ports can have a dns_name attribute associated with them. The
network where a port is created can have a dns_domain associated
with it.
* Floating IPs and ports will be published in an external DNS
service if they have dns_name and dns_domain attributes associated
with them.
* The reference driver integrates neutron with designate.
* Drivers for other DNSaaS solutions can be implemented.
* The driver is configured in the default section of neutron.conf
using the parameter 'external_dns_driver'.
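For instance, selecting the reference designate driver in neutron.conf might look like this (verify the driver name against your deployment's documentation):

```ini
[DEFAULT]
# 'designate' is the reference external DNS driver named above.
external_dns_driver = designate
```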
* Ports that failed to bind when an L2 agent was offline can now
recover after the agent is back online.
* Neutron now supports sharing of QoS policies between a subset of
tenants.
* Security group rules, networks, ports, routers, floating IPs, and
subnet pools may now contain an optional description which allows
users to easily store details about entities.
* Users can set tags on their network resources.
* Networks can be filtered by tags. The supported filters are
'tags', 'tags-any', 'not-tags' and 'not-tags-any'.
* Added the timestamp fields 'created_at' and 'updated_at' to
neutron core resources such as network, subnet, port, and
subnetpool.
* These resources can also be queried by 'changed-since'; this
returns the resources changed after the specified time, given as a
string like YYYY-MM-DDTHH:MM:SS.
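To illustrate, the tag and timestamp filters above combine as ordinary query parameters on the v2.0 collection URL (a sketch; the tag names and timestamp are hypothetical):

```python
# Build a networks query using the tag and changed-since filters
# described above. The values are hypothetical examples.
from urllib.parse import urlencode

params = {
    "tags": "red,blue",                      # networks tagged red AND blue
    "changed-since": "2016-04-01T00:00:00",  # resources changed after this time
}
query = urlencode(params)
print("GET /v2.0/networks?" + query)
```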
* By default, the DHCP agent provides a network MTU value to
instances using the corresponding DHCP option if the core plugin
calculates the value. For the ML2 plugin, the calculation mechanism
is enabled by setting the [ml2] path_mtu option to a value greater
than zero.
* Allow non-admin users to define "external" extra-routes.
* Announcement of tenant subnets via BGP using centralized Neutron
router gateway port as the next-hop
* Announcement of floating IP host routes via BGP using the
centralized Neutron router gateway port as the next-hop
* Announcement of floating IP host routes via BGP using the floating
IP agent gateway as the next-hop when the floating IP is associated
through a distributed router
* Neutron no longer includes static example configuration files.
Instead, use tools/generate_config_file_samples.sh to generate them.
The files are generated with a .sample extension.
* Add derived attributes to the network to tell users which address
scopes the network is in.
* The subnet API now includes a new use_default_subnetpool
attribute. This attribute can be specified on creating a subnet in
lieu of a subnetpool_id. The two are mutually exclusive. If it is
specified as True, the default subnet pool for the requested
ip_version will be looked up and used. If no default exists, an
error will be returned.
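The mutual exclusion described above can be sketched as a small check (a hypothetical helper, not Neutron's actual validation code):

```python
# Hypothetical sketch of the use_default_subnetpool rule described above.
def validate_subnet_request(subnetpool_id=None, use_default_subnetpool=False):
    """Reject subnet-create requests that set both attributes."""
    if subnetpool_id is not None and use_default_subnetpool:
        raise ValueError(
            "subnetpool_id and use_default_subnetpool are mutually exclusive")
    return True

validate_subnet_request(use_default_subnetpool=True)  # OK: default pool lookup
```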
* Neutron now supports creation of ports for exposing physical
functions as network devices to guests.
* High Availability support for SNAT services on Distributed Virtual
Routers. Routers can now be created with the flags distributed=True
and ha=True. The created routers will provide Distributed Virtual
Routing as well as SNAT high availability on the l3 agents
configured for dvr_snat mode.
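The request body for such a router looks like the following (a sketch of the Networking v2.0 payload; the router name is hypothetical):

```python
# Sketch: body for POST /v2.0/routers creating a distributed, HA router,
# matching the distributed=True and ha=True flags named above.
def dvr_ha_router_body(name):
    """Build the request body for a router with DVR and SNAT HA enabled."""
    return {"router": {"name": name, "distributed": True, "ha": True}}

body = dvr_ha_router_body("router1")  # "router1" is a hypothetical name
```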
* Use the value of the network 'mtu' attribute for the MTU of
virtual network interfaces such as veth pairs, patch ports, and tap
devices involving a particular network.
* Enable end-to-end support for arbitrary MTUs including jumbo
frames between instances and provider networks by moving MTU
disparities between flat or VLAN networks and overlay networks from
layer-2 devices to layer-3 devices that support path MTU discovery
(PMTUD).
* The Linuxbridge agent can now be extended by third parties using a
pluggable mechanism.
* Libvirt qemu/kvm instances can now be attached via MacVtap in
bridge mode to a network. VLAN and FLAT attachments are supported.
Attachments other than compute are not supported.
* When advertise_mtu is set in the config, Neutron supports
advertising the LinkMTU using Router Advertisements.
* A new API endpoint, /v2.0/network-ip-availabilities, is available,
allowing an admin to quickly get counts of used_ips and total_ips
for networks. The new endpoint allows filtering by network_id,
network_name, tenant_id, and ip_version. The response returns
network and nested subnet data that includes used and total IPs.
* SriovNicSwitchMechanismDriver driver now exposes a new VIF type
'hostdev_physical' for ports with vnic type 'direct-physical' (used
for SR-IOV PF passthrough). This will enable Nova to provision PFs
as Neutron ports.
* The RPC and notification queues have been separated into different
queues. Specify the transport_url to be used for notifications
within the [oslo_messaging_notifications] section of the
configuration file. If no transport_url is specified in
[oslo_messaging_notifications], the transport_url used for RPC will
be used.
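For example, in neutron.conf (the broker URL below is a placeholder, not a recommended value):

```ini
[oslo_messaging_notifications]
# Placeholder URL; point this at the broker that should carry notifications.
transport_url = rabbit://openstack:secret@notify-host:5672/
```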
* Neutron services now respond to the SIGUSR2 signal by dumping
valuable debug information to standard error output.
* A new security groups firewall driver has been introduced. It is
based on OpenFlow using connection tracking.
* Neutron can interact with keystone v3.
Known Issues
************
* The combination of 'path_mtu' and 'advertise_mtu' only adjusts the
MTU for instances rather than all virtual network components between
instances and provider/public networks. In particular, setting
'path_mtu' to a value greater than 1500 can cause packet loss even
if the physical network supports it. Also, the calculation does not
consider additional overhead from IPv6 endpoints.
* When using DVR, if a floating IP is associated to a fixed IP,
direct access to the fixed IP is not possible when traffic is sent
from outside of a Neutron tenant network (north-south traffic).
Traffic sent between tenant networks (east-west traffic) is not
affected. When using a distributed router, the floating IP will mask
the fixed IP making it inaccessible, even though the tenant subnet
is being announced as accessible through the centralized SNAT
router. In such a case, traffic sent to the instance should be
directed to the floating IP. This is a limitation of the Neutron L3
agent when using DVR and will be addressed in a future release.
* Only creation of dvr/ha routers is currently supported. Upgrading
other types of routers to dvr/ha routers is not supported in this
release.
* More synchronization between Nova and Neutron is needed to
properly handle live migration failures on either side. For
instance, if live migration is reverted or canceled, some dangling
Neutron resources may be left on the destination host.
* To ensure any kind of migration works between all compute nodes,
make sure that the same physical_interface_mappings is configured on
each MacVtap compute node. Having different mappings could cause
live migration to fail (if the configured physical network interface
does not exist on the target host), or even worse, result in an
instance placed on the wrong physical network (if the physical
network interface exists on the target host, but is used by another
physical network or not used at all by OpenStack). Such an instance
does not have access to its configured networks anymore. It then has
layer 2 connectivity to either another OpenStack network, or one of
the host's networks.
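In other words, every MacVtap compute node should carry an identical mapping such as the following (the section, interface, and physnet names are illustrative assumptions):

```ini
[macvtap]
# Must be identical on all MacVtap compute nodes for migration to be safe.
physical_interface_mappings = physnet1:eth1
```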
* The OVS firewall driver doesn't work well with other features
using OpenFlow.
Upgrade Notes
*************
* Operators using the ML2 plug-in with 'path_mtu' defaulting to 0
may need to perform a database migration to update the MTU for
existing networks and possibly disable existing workarounds for MTU
problems such as increasing the physical network MTU to 1550.
* Operators using the ML2 plug-in with existing data may need to
perform a database migration to update the MTU for existing networks
* Add popular IP protocols to security group code.
* To disable, use [DEFAULT] advertise_mtu = False.
* The router_id option is deprecated and will be removed in the 'N'
cycle.
* Does not change MTU for existing virtual network interfaces.
* Actions that create virtual network interfaces on an existing
network with the 'mtu' attribute containing a value greater than
zero could cause issues for network traffic traversing existing and
new virtual network interfaces.
* The Hyper-V Neutron Agent has been fully decomposed from Neutron.
The *neutron.plugins.hyperv.agent.security_groups_driver.HyperVSecu
rityGroupsDriver* firewall driver has been deprecated and will be
removed in the 'O' cycle. Update the *neutron_hyperv_agent.conf*
files on the Hyper-V nodes to use
*hyperv.neutron.security_groups_driver.HyperVSecurityGroupsDriver*,
which is the networking_hyperv security groups driver.
* When using ML2 and the Linux Bridge agent, the default value for
the ARP Responder under L2Population has changed. The responder is
now disabled to improve compatibility with the allowed-address-pair
extension and to match the default behavior of the ML2 OVS agent.
The logical network will now utilize traditional flood and learn
through the overlay. When upgrading, existing vxlan devices will
retain their old setup and be unimpacted by changes to this flag. To
apply this to older devices created with the Liberty agent, the
vxlan device must be removed and then the Mitaka agent restarted.
The agent will recreate the vxlan devices with the current settings
upon restart. To maintain pre-Mitaka behavior, enable the
arp_responder in the Linux Bridge agent VXLAN config file prior to
starting the updated agent.
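To keep the pre-Mitaka behavior, the flag can be re-enabled in the Linux Bridge agent's VXLAN section before starting the updated agent:

```ini
[vxlan]
# Restore the pre-Mitaka default of an enabled local ARP responder.
arp_responder = True
```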
* Neutron depends on keystoneauth instead of keystoneclient.
Deprecation Notes
*****************
* The default_subnet_pools option is now deprecated and will be
removed in the Newton release. The same functionality is now
provided by setting is_default attribute on subnetpools to True
using the API or client.
* The 'force_gateway_on_subnet' option is deprecated and will be
removed in the 'Newton' cycle.
* The 'network_device_mtu' option is deprecated and will be removed
in the 'Newton' cycle. Please use the system-wide segment_mtu
setting which the agents will take into account when wiring VIFs.
* max_fixed_ips_per_port has been deprecated and will be removed in
the Newton or Ocata cycle, depending on when all identified use
cases of the option are satisfied via another quota system. If you
depend on this configuration option to stop tenants from consuming
IP addresses, please leave a comment on the bug report
(https://launchpad.net/bugs/1502356)
* The 'segment_mtu' option of the ML2 configuration has been
deprecated and replaced with the 'global_physnet_mtu' option in the
main Neutron configuration. This option is meant to be used by all
plugins for an operator to reference their physical network's MTU,
regardless of the backend plugin. Plugins should access this config
option via the 'get_deployment_physnet_mtu' method added to
neutron.plugins.common.utils to avoid being broken on any potential
renames in the future.
Bug Fixes
*********
* Prior to Mitaka, the settings that control the frequency of router
advertisements transmitted by the radvd daemon were not able to be
adjusted. Larger deployments may wish to decrease the frequency in
which radvd sends multicast traffic. The 'min_rtr_adv_interval' and
'max_rtr_adv_interval' settings in the L3 agent configuration file
map directly to the 'MinRtrAdvInterval' and 'MaxRtrAdvInterval' in
the generated radvd.conf file. Consult the manpage for radvd.conf
for more detailed information.
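For example, in the L3 agent configuration file (the interval values are illustrative; consult the radvd.conf manpage before tuning):

```ini
[DEFAULT]
# Seconds between unsolicited router advertisements (illustrative values).
min_rtr_adv_interval = 30
max_rtr_adv_interval = 100
```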
* Fixes bug 1537734
* Prior to Mitaka, name resolution in instances requires specifying
DNS resolvers via the 'dnsmasq_dns_servers' option in the DHCP agent
configuration file or via neutron subnet options. In this case, the
data plane must provide connectivity between instances and upstream
DNS resolvers. Omitting both of these methods causes the dnsmasq
service to offer the IP address on which it resides to instances for
name resolution. However, the static dnsmasq '--no-resolv' process
argument prevents name resolution via dnsmasq, leaving instances
without name resolution. Mitaka introduces the
'dnsmasq_local_resolv' option, default value False to preserve
backward-compatibility, that enables the dnsmasq service to provide
name resolution for instances via DNS resolvers on the host running
the DHCP agent. In this case, the data plane must provide
connectivity between the host and upstream DNS resolvers rather than
between the instances and upstream DNS resolvers. Specifying DNS
resolvers via the 'dnsmasq_dns_servers' option in the DHCP agent
configuration overrides the 'dnsmasq_local_resolv' option for all
subnets using the DHCP agent.
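Enabling the new option in the DHCP agent configuration looks like this:

```ini
[DEFAULT]
# Let dnsmasq forward instance queries to the DHCP host's resolvers.
dnsmasq_local_resolv = True
```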
* Before Mitaka, when a default subnetpool was defined in the
configuration, a request to create a subnet would fall back to using
it if no specific subnet pool was specified. This behavior broke
the semantics of subnet create calls in this scenario and is now
considered an API bug. This bug has been fixed so that there is no
automatic fallback with the presence of a default subnet pool.
Workflows which depended on the old fallback behavior will have to
be modified to set the new use_default_subnetpool attribute when
creating a subnet.
* Create DVR router namespaces pro-actively on the destination node
during live migration events. This helps minimize packet loss to
floating IP traffic.
* Explicitly configure MTU of virtual network interfaces rather than
using default values or incorrect values that do not account for
overlay protocol overhead.
* The server will fail to start if any of the declared required
extensions, as needed by core and service plugins, are not properly
configured.
* partially closes bug 1468803
* The Linuxbridge agent now supports the ability to toggle the local
ARP responder when L2Population is enabled. This ensures
compatibility with the allowed-address-pairs extension. closes bug
1445089
* Fixed the SR-IOV agent macvtap assigned-VF check on Linux kernels
older than 3.13.
* Loaded agent extensions of SR-IOV agent are now shown in agent
state API.
Other Notes
***********
* Please read the OpenStack Networking Guide
(http://docs.openstack.org/networking-
guide/adv_config_availability_zone.html).
* For overlay networks managed by ML2 core plugin, the calculation
algorithm subtracts the overlay protocol overhead from the value of
[ml2] path_mtu. The DHCP agent provides the resulting (smaller) MTU
to instances using overlay networks.
* The [DEFAULT] advertise_mtu option must contain a consistent value
on all hosts running the DHCP agent.
* Typical networks can use [ml2] path_mtu = 1500.
* The OpenFlow Agent (OFAgent) mechanism driver was completely
decomposed from the neutron tree in Mitaka. The OFAgent driver and
its agent are also deprecated in favor of the Open vSwitch mechanism
driver with the "native" of_interface in Mitaka, and will be removed
in the next release.
* For details please read Blueprint mtu-selection-and-advertisement
(https://specs.openstack.org/openstack/neutron-specs/specs/kilo/mtu-
selection-and-advertisement.html).
* The OVS firewall driver requires OVS version 2.5 or higher with
Linux kernel 4.3 or higher. More information is available on the OVS
GitHub page (https://github.com/openvswitch/ovs/blob/master/FAQ.md)
* The oslo.messaging.notify.drivers entry points that were left in
tree for backward compatibility with Icehouse are deprecated and
will be removed after liberty-eol. Configure notifications using the
oslo_messaging configuration options in neutron.conf.
Changes in neutron 8.0.0.0rc2..8.0.0
------------------------------------
3213eb1 Support Routes==2.3
4283a7e Constraint requirements using mitaka upper-constraints.txt file
fc69097 Imported Translations from Zanata
41be555 Imported Translations from Zanata
b435ec5 Imported Translations from Zanata
bec65f6 api tests: Check correct extensions
99915fa Fix setting peer to bridge interfaces
4b86f17 Skip fullstack L3 HA test
4504a74 conn_testers: Bump timeout for ICMPv6 echo tests
Diffstat (except docs and test files)
-------------------------------------
neutron/api/extensions.py | 10 +-
neutron/locale/es/LC_MESSAGES/neutron.po | 1701 +++++++++++++++++++-
neutron/locale/fr/LC_MESSAGES/neutron.po | 1286 ++++++++++++++-
neutron/locale/ja/LC_MESSAGES/neutron.po | 161 +-
.../locale/ko_KR/LC_MESSAGES/neutron-log-error.po | 1270 +++++++++++++++
.../locale/ko_KR/LC_MESSAGES/neutron-log-info.po | 862 ++++++++++
.../ko_KR/LC_MESSAGES/neutron-log-warning.po | 616 +++++++
neutron/locale/ko_KR/LC_MESSAGES/neutron.po | 1577 +++++++++++++++++-
.../drivers/openvswitch/agent/ovs_neutron_agent.py | 4 +-
.../api/admin/test_external_network_extension.py | 4 +-
.../api/admin/test_shared_network_extension.py | 4 +-
.../openvswitch/agent/test_ovs_neutron_agent.py | 16 +-
.../drivers/openvswitch/agent/test_ovs_tunnel.py | 4 +-
tox.ini | 2 +-
17 files changed, 7369 insertions(+), 171 deletions(-)
We are pleased to announce the release of:
nova 13.0.0: Cloud computing fabric controller
This release is part of the mitaka release series.
For more details, please see below.
13.0.0
^^^^^^
The Nova 13.0.0 release includes a lot of new features and bugfixes.
It is hard to mention all of the changes introduced during this
release, but please read at least the upgrade section, which
describes the modifications required to upgrade your cloud from
12.0.0 (Liberty) to 13.0.0 (Mitaka).
That said, a few major changes are worth noting here. This is not
an exhaustive list, rather just important things you need to know:
* Latest API microversion supported for Mitaka is v2.25
* Nova now requires a second database (called 'API DB').
* A new nova-manage script allows you to perform all online DB
migrations once you upgrade your cloud
* EC2 API support is fully removed.
New Features
************
* Enables NUMA topology reporting on the PowerPC architecture from
the libvirt driver in Nova, but with a caveat, as mentioned below.
The NUMA cell affinity and dedicated CPU pinning code assumes that
the host operating system is exposed to threads. PowerPC-based hosts
use core-based scheduling for processes. Due to this, the cores on
the PowerPC architecture are treated as threads. Since cores are
always less than or equal to the threads on a system, this leads to
non-optimal resource usage while pinning. This feature is supported
from libvirt version 1.2.19 for PowerPC.
* A new REST API to cancel an ongoing live migration has been added
in microversion 2.24. Initially this operation will only work with
the libvirt virt driver.
* It is possible to call attach and detach volume API operations for
instances which are in the shelved and shelved_offloaded states. For
an instance in shelved_offloaded state, Nova will set the
device_name field to None; the right value for that field will be
set once the instance is unshelved, as it will be managed by a
specific compute manager.
* It is possible to block live migrate instances with additional
cinder volumes attached. This requires libvirt version to be
>=1.2.17 and does not work when live_migration_tunnelled is set to
True.
* Project-id and user-id are now also returned in the return data of
the os-server-groups APIs. To use this new feature, users have to
include the request microversion v2.13 header in the API request.
* Add support for enabling uefi boot with libvirt.
* A new host_status attribute has been added for servers/detail and
servers/{server_id}. To use this new feature, users have to include
the request microversion v2.16 header in the API request. A new
policy, "os_compute_api:servers:show:host_status", was added to
enable the feature. By default, this is only exposed to cloud
administrators.
* A new server action trigger_crash_dump has been added to the REST
API in microversion 2.17.
* When RBD is used for ephemeral disks and image storage, make
snapshot use Ceph directly, and update Glance with the new location.
In case of failure, it will gracefully fallback to the "generic"
snapshot method. This requires changing the typical permissions for
the Nova Ceph user (if using cephx) to allow writing to the pool
where vm images are stored, and it also requires configuring Glance
to provide a v2 endpoint with direct_url support enabled (there are
security implications to doing this). See
http://docs.ceph.com/docs/master/rbd/rbd-openstack/ for more
information on configuring OpenStack with RBD.
* A new option, "live_migration_inbound_addr", has been added to the
configuration file, with None as the default value. If this option
is present in pre_migration_data, the IP address/hostname provided
will be used instead of the migration target compute node's hostname
as the URI for live migration; if it is None, the mechanism remains
as before.
* Added support for CPU thread policies, which can be used to
control how the libvirt virt driver places guests with respect to
CPU SMT "threads". These are provided as instance and image metadata
options, 'hw:cpu_thread_policy' and 'hw_cpu_thread_policy'
respectively, and provide an additional level of control over CPU
pinning policy, when compared to the existing CPU policy feature.
These changes were introduced in commits '83cd67c' and 'aaaba4a'.
* Add support for enabling discard support for block devices with
libvirt. This will be enabled for Cinder volume attachments that
specify support for the feature in their connection properties. This
requires support to be present in the version of libvirt (v1.0.6+)
and qemu (v1.6.0+) used along with the configured virtual drivers
for the instance. The virtio-blk driver does not support this
functionality.
* A new "auto" value for the configuration option
"upgrade_levels.compute" is accepted, that allows automatic
determination of the compute service version to use for RPC
communication. By default, we still use the newest version if not
set in the config, a specific version if asked, and only do this
automatic behavior if 'auto' is configured. When 'auto' is used,
sending a SIGHUP to the service will cause the value to be re-
calculated. Thus, after an upgrade is complete, sending SIGHUP to
all services will cause them to start sending messages compliant
with the newer RPC version.
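The corresponding nova.conf setting is simply:

```ini
[upgrade_levels]
# Auto-detect the compute service version to use for RPC compatibility.
compute = auto
```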
* Libvirt driver in Nova now supports Cinder DISCO volume driver.
* A disk space scheduling filter is now available, which prefers
compute nodes with the most available disk space. By default, free
disk space is given equal importance to available RAM. To increase
the priority of free disk space in scheduling, increase the
disk_weight_multiplier option.
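For example, to weight free disk space twice as heavily (the value is illustrative, not a recommendation):

```ini
[DEFAULT]
# Illustrative value; >1.0 raises the priority of free disk space.
disk_weight_multiplier = 2.0
```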
* A new REST API to force live migration to complete has been added
in microversion 2.22.
* The os-instance-actions methods now read actions from deleted
instances. This means that 'GET /v2.1/{tenant-id}/servers/{server-id
}/os-instance-actions' and 'GET /v2.1/{tenant-id}/servers/{server-id
}/os-instance-actions/{req-id}' will return instance-action items
even if the instance corresponding to '{server-id}' has been
deleted.
* When booting an instance, its sanitized 'hostname' attribute is
now used to populate the 'dns_name' attribute of the Neutron ports
the instance is attached to. This functionality enables the Neutron
internal DNS service to know the ports by the instance's hostname.
As a consequence, commands like 'hostname -f' will work as expected
when executed in the instance. When a port's network has a non-blank
'dns_domain' attribute, the port's 'dns_name' combined with the
network's 'dns_domain' will be published by Neutron in an external
DNS as a service like Designate. As a consequence, the instance's
hostname is published in the external DNS as a service. This
functionality is added to Nova when the 'DNS Integration' extension
is enabled in Neutron. The publication of 'dns_name' and
'dns_domain' combinations to an external DNS as a service
additionally requires the configuration of the appropriate driver in
Neutron. When the 'Port Binding' extension is also enabled in
Neutron, the publication of a 'dns_name' and 'dns_domain'
combination to the external DNS as a service will require one
additional update operation when Nova allocates the port during the
instance boot. This may have a noticeable impact on the performance
of the boot process.
* The libvirt driver now has a live_migration_tunnelled
configuration option which should be used where the
VIR_MIGRATE_TUNNELLED flag would previously have been set or unset
in the live_migration_flag and block_migration_flag configuration
options.
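For example, where VIR_MIGRATE_TUNNELLED was previously set by hand
in the flag options, the nova.conf equivalent is now simply:

```ini
[libvirt]
# Replaces hand-editing VIR_MIGRATE_TUNNELLED into the
# live_migration_flag / block_migration_flag options.
live_migration_tunnelled = True
```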
* For the libvirt driver, hardware properties are by default
retrieved from the Glance image, and if none have been provided, the
libosinfo database is used to obtain those values. If users want to
force a specific guest OS ID for the image, they can now use a new
glance image property "os_distro" (e.g. "--property
os_distro=fedora21"). In order to use the libosinfo database, you
need to separately install the related native package provided for
your operating system distribution.
* Add support for allowing Neutron to specify the bridge name for
the OVS, Linux Bridge, and vhost-user VIF types.
* Added a *nova-manage db online_data_migrations* command for
forcing online data migrations, which will run all registered
migrations for the release, instead of there being a separate
command for each logical data migration. Operators need to make sure
all data is migrated before upgrading to the next release, and the
new command provides a unified interface for doing it.
* Provides API 2.18, which makes the use of project_ids in API urls
optional.
* Libvirt with the Virtuozzo virtualisation type now supports
snapshot operations.
* The "onSharedStorage" parameter has been removed from the server
evacuate action in microversion 2.14. Nova will automatically detect
whether the instance is on shared storage. The adminPass field is
also removed from the response body, which makes the response body
empty. The user can get the password with the server's
os-server-password action.
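A minimal evacuate request under microversion 2.14 therefore carries
no onSharedStorage key; the target host name below is illustrative
and the field remains optional:

```json
{
    "evacuate": {
        "host": "target-compute-host"
    }
}
```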
* Added two new list/show APIs for server migrations. The list API
returns information about a server's in-progress live migrations.
The show API returns a specified in-progress live migration of a
server. This has been added in microversion 2.23.
* A new service.status versioned notification has been introduced.
When the status of the Service object is changed, nova will send a
new service.update notification with versioned payload according to
bp versioned-notification-api. The new notification is documented in
http://docs.openstack.org/developer/nova/notifications.html
* Two new policies, soft-affinity and soft-anti-affinity, have been
implemented for the server-group feature of Nova. This means that
POST /v2.1/{tenant_id}/os-server-groups API resource now accepts
'soft-affinity' and 'soft-anti-affinity' as value of the 'policies'
key of the request body.
* In Nova Compute API microversion 2.19, you can specify a
"description" attribute when creating, rebuilding, or updating a
server instance. This description can be retrieved by getting
server details, or list details for servers. Refer to the Nova
Compute API documentation for more information. Note that the
description attribute existed in prior Nova versions, but was set to
the server name by Nova and was not visible to the user. So, servers
created with microversions prior to 2.19 will return a description
equal to the server name when server details are requested with
microversion 2.19.
* As part of refactoring the notification interface of Nova, a new
config option 'notification_format' has been added to specify which
notification format shall be used by nova. The possible values are
'unversioned' (i.e. legacy), 'versioned', and 'both'. The default
value is 'both'. The new versioned notifications are documented in
http://docs.openstack.org/developer/nova/notifications.html
* For the VMware driver, the flavor extra specs for quotas have been
extended to support:
* quota:cpu_limit - The cpu of a virtual machine will not exceed
this limit, even if there are available resources. This is
typically used to ensure a consistent performance of virtual
machines independent of available resources. Units are MHz.
* quota:cpu_reservation - guaranteed minimum reservation (MHz)
* quota:cpu_shares_level - the allocation level. This can be
'custom', 'high', 'normal' or 'low'.
* quota:cpu_shares_share - in the event that 'custom' is used,
this is the number of shares.
* quota:memory_limit - The memory utilization of a virtual machine
will not exceed this limit, even if there are available resources.
This is typically used to ensure a consistent performance of
virtual machines independent of available resources. Units are MB.
* quota:memory_reservation - guaranteed minimum reservation (MB)
* quota:memory_shares_level - the allocation level. This can be
'custom', 'high', 'normal' or 'low'.
* quota:memory_shares_share - in the event that 'custom' is used,
this is the number of shares.
* quota:disk_io_limit - The I/O utilization of a virtual machine
will not exceed this limit. The unit is number of I/O per second.
* quota:disk_io_reservation - Reservation control is used to
provide guaranteed allocation in terms of IOPS
* quota:disk_io_shares_level - the allocation level. This can be
'custom', 'high', 'normal' or 'low'.
* quota:disk_io_shares_share - in the event that 'custom' is used,
this is the number of shares.
* quota:vif_limit - The bandwidth limit for the virtual network
adapter. The utilization of the virtual network adapter will not
exceed this limit, even if there are available resources. Units in
Mbits/sec.
* quota:vif_reservation - Amount of network bandwidth that is
guaranteed to the virtual network adapter. If utilization is less
than reservation, the resource can be used by other virtual
network adapters. Reservation is not allowed to exceed the value
of limit if limit is set. Units in Mbits/sec.
* quota:vif_shares_level - the allocation level. This can be
'custom', 'high', 'normal' or 'low'.
* quota:vif_shares_share - in the event that 'custom' is used,
this is the number of shares.
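As an illustration, a flavor capping CPU at 2000 MHz with custom CPU
shares might carry extra specs like the following (values are
examples only; extra specs are set with the usual flavor tooling):

```
quota:cpu_limit=2000
quota:cpu_shares_level=custom
quota:cpu_shares_share=1024
```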
Upgrade Notes
*************
* All noVNC proxy configuration options have been added to the 'vnc'
group. They should no longer be included in the 'DEFAULT' group.
* All VNC XVP configuration options have been added to the 'vnc'
group. They should no longer be included in the 'DEFAULT' group.
* Upon first startup of the scheduler service in Mitaka, all defined
aggregates will have UUIDs generated and saved back to the database.
If you have a significant number of aggregates, this may delay
scheduler start as that work is completed, but it should be minor
for most deployments.
* During an upgrade to Mitaka, operators must create and initialize
a database for the API service. Configure this in
[api_database]/connection, and then run "nova-manage api_db sync"
* Microversion 2.25 cannot be used for live migration during an
upgrade; nova-api will return a bad request error if there are still
old compute nodes in the cluster.
* The option "scheduler_driver" has been changed to use an
entrypoint instead of a full class path. Set one of the entrypoints
under the namespace 'nova.scheduler.driver' in 'setup.cfg'. Its
default value is 'filter_scheduler'. The full class path style is
still supported in the current release, but it is not recommended
because class paths can change; this support will be dropped in the
next major release.
* The option "scheduler_host_manager" has been changed to use an
entrypoint instead of a full class path. Set one of the entrypoints
under the namespace 'nova.scheduler.host_manager' in 'setup.cfg'.
Its default value is 'host_manager'. The full class path style is
still supported in the current release, but it is not recommended
because class paths can change; this support will be dropped in the
next major release.
* The local conductor mode is now deprecated and may be removed as
early as the 14.0.0 release. If you are using local conductor mode,
plan on deploying remote conductor by the time you upgrade to the
14.0.0 release.
* The Extensible Resource Tracker is deprecated and will be removed
in the 14.0.0 release. If you use this functionality and have custom
resources that are managed by the Extensible Resource Tracker,
please contact the Nova development team by posting to the
openstack-dev mailing list. There is no future planned support for
the tracking of custom resources.
* For Liberty compute nodes, the disk_allocation_ratio works as
before: you must set it on the scheduler if you want to change it.
For Mitaka compute nodes, the disk_allocation_ratio set on the
compute nodes will be used only if the configuration is not set on
the scheduler. This is to allow, for backwards compatibility, the
ability to still override the disk allocation ratio by setting the
configuration on the scheduler node. In Newton, we plan to remove
the ability to set the disk allocation ratio on the scheduler, at
which point the compute nodes will always define the disk allocation
ratio, and pass that up to the scheduler. None of this changes the
default disk allocation ratio of 1.0. This matches the behaviour of
the RAM and CPU allocation ratios.
* (Only if you do continuous deployment)
1337890ace918fa2555046c01c8624be014ce2d8 drops support for an
instance major version, which means that you must have deployed at
least commit 713d8cb0777afb9fe4f665b9a40cac894b04aacb before
deploying this one.
* nova now requires ebtables 2.0.10 or later
* nova recommends libvirt 1.2.11 or later
* The filters' internal interface has changed: it now uses the
RequestSpec NovaObject instead of the old filter_properties
dictionary. If you run out-of-tree filters, you need to modify the
host_passes() method to accept the new RequestSpec object and modify
the filter internals to use that new object. You can look at other
in-tree filters for the logic, or ask for help in the
#openstack-nova IRC channel.
* The "force_config_drive" configuration option provided an "always"
value which was deprecated in the previous release. That "always"
value is now no longer accepted and deployments using that value
have to change it to "True" before upgrading.
* Support for Windows / Hyper-V Server 2008 R2 has been deprecated
in Liberty (12.0.0) and it is no longer supported in Mitaka
(13.0.0). If you have compute nodes running that version, please
consider moving the running instances to other compute nodes before
upgrading those to Mitaka.
* The libvirt driver will now correct unsafe and invalid values for
the live_migration_flag and block_migration_flag configuration
options. The live_migration_flag must not contain
VIR_MIGRATE_SHARED_INC but block_migration_flag must contain it.
Both options must contain VIR_MIGRATE_PEER2PEER, except when using
the 'xen' virt type, where this flag is not supported. Both flags
must contain the VIR_MIGRATE_UNDEFINE_SOURCE flag and not contain
the VIR_MIGRATE_PERSIST_DEST flag.
* The libvirt driver has changed the default value of the
'live_migration_uri' flag, which now depends on the 'virt_type'.
The old default 'qemu+tcp://%s/system' is now adjusted for each of
the configured hypervisors. For Xen this will be
'xenmigr://%s/system', for kvm/qemu this will be
'qemu+tcp://%s/system'.
* The minimum required libvirt is now version 0.10.2. The minimum
libvirt for the N release has been set to 1.2.1.
* In order to make project_id optional in urls, we must constrain
the set of allowed values for project_id in our urls. This defaults
to a regex of "[0-9a-f\-]+", which will match hex uuids (with /
without dashes), and integers. This covers all known project_id
formats in the wild. If your site uses other values for project_id,
you can set a site-specific validation with the "project_id_regex"
config option.
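The default pattern can be exercised directly; the sketch below is a
hypothetical helper using the default regex quoted above, not Nova's
routing code:

```python
import re

# Default project_id pattern quoted in the note above.
PROJECT_ID_REGEX = r"[0-9a-f\-]+"

def is_valid_project_id(value, pattern=PROJECT_ID_REGEX):
    """Return True when the entire value matches the pattern
    (hypothetical helper for illustration)."""
    return re.fullmatch(pattern, value) is not None

# Hex UUIDs with or without dashes match, as do plain integers:
print(is_valid_project_id("6f706573746163-6b6f70656e737461"))  # True
print(is_valid_project_id("1234567890"))                       # True
# Identifiers with characters outside [0-9a-f-] do not:
print(is_valid_project_id("Project_X"))                        # False
```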
* The old neutron communication options that were slated for removal
in Mitaka are no longer available. This means that going forward
communication to neutron will need to be configured using auth
plugins.
* All code and tests for Nova's EC2 and ObjectStore API support
which was deprecated in Kilo
(https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Upgrade_Notes_2)
has been completely removed in Mitaka. This has been replaced by the
new ec2-api project
(http://git.openstack.org/cgit/openstack/ec2-api/)
* The commit with change-id
Idd4bbbe8eea68b9e538fa1567efd304e9115a02a requires that the nova_api
database is setup and Nova is configured to use it. Instructions on
doing that are provided below.
Nova now requires that two databases are available and configured.
The existing nova database needs no changes, but a new nova_api
database needs to be setup. It is configured and managed very
similarly to the nova database. A new connection string
configuration option is available in the api_database group. An
example:
[api_database]
connection = mysql+pymysql://user:secret@127.0.0.1/nova_api?charset=utf8
And a new nova-manage command has been added to manage db migrations
for this database. "nova-manage api_db sync" and "nova-manage
api_db version" are available and function like the parallel "nova-
manage db ..." version.
* A new "use_neutron" option is introduced which replaces the obtuse
"network_api_class" option. This defaults to 'False' to match
existing defaults, however if "network_api_class" is set to the
known Neutron value Neutron networking will still be used as before.
* The FilterScheduler now includes disabled hosts. Make sure you
include the ComputeFilter in the "scheduler_default_filters" config
option to avoid placing instances on disabled hosts.
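For example, a setting that retains ComputeFilter might look like
this (the surrounding filter list is illustrative; keep whatever
filters your deployment already uses):

```ini
[DEFAULT]
# ComputeFilter excludes disabled and inoperative hosts.
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter
```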
* Upgrade the rootwrap configuration for the compute service, so
that patches requiring new rootwrap configuration can be tested with
grenade.
* For backward-compatible support, the setting
"CONF.vmware.integration_bridge" needs to be set when using the
Neutron NSX|MH plugin. The default value has been set to "None".
* The XenServer hypervisor type has been changed from "xen" to
"XenServer". This could impact your aggregate metadata or your
flavor extra specs if you provide only the former.
* The glance xenserver plugin has been bumped to version 1.3, which
includes new interfaces for referencing glance servers by url. All
dom0 hosts will need to be upgraded with this plugin before
upgrading the nova code.
Deprecation Notes
*****************
* It is now deprecated to use [glance] api_servers without a
protocol scheme (http / https). This is required to support urls
throughout the system. Update any api_servers list with fully
qualified https / http urls.
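An updated [glance] section with scheme-qualified endpoints might
look like this (hostnames are examples):

```ini
[glance]
# Fully qualified URLs, including the http/https scheme.
api_servers = https://glance01.example.com:9292,https://glance02.example.com:9292
```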
* The conductor.manager configuration option is now deprecated and
will be removed.
* Deprecate "compute_stats_class" config option. This allowed
loading an alternate implementation for collecting statistics for
the local compute host. Deployments that felt the need to use this
facility are encouraged to propose additions upstream so we can
create a stable and supported interface here.
* Deprecate the "db_driver" config option. Previously this let you
replace our SQLAlchemy database layer with your own. This approach
is deprecated. Deployments that felt the need to use the facility
are encouraged to work with upstream Nova to address db driver
concerns in the main SQLAlchemy code paths.
* The host, port, and protocol options in the [glance] configuration
section are deprecated, and will be removed in the N release. The
api_servers value should be used instead.
* Deprecate the use of nova.hooks. This facility used to let
arbitrary out of tree code be executed around certain internal
actions, but is unsuitable for having a well maintained API. Anyone
using this facility should bring forward their use cases in the
Newton cycle as nova-specs.
* Nova used to support the concept that "service managers" were
replaceable components. There are many config options where you can
replace a manager by specifying a new class. This concept is
deprecated in Mitaka as are the following config options.
* [cells] manager
* metadata_manager
* compute_manager
* console_manager
* consoleauth_manager
* cert_manager
* scheduler_manager
Many of these will be removed in Newton. Users of these options are
encouraged to work with Nova upstream on any features missing in the
default implementations that are needed.
* Deprecate the "security_group_api" configuration option. The
current values are "nova" and "neutron". In the future, the correct
security_group_api option will be chosen based on the value of
"use_neutron", which provides a more coherent user experience.
* Deprecate the "vendordata_driver" config option. This allowed
creating a different class loader for defining vendordata metadata.
The default driver loads from a json file that can be arbitrarily
specified, so is still quite flexible. Deployments that felt the
need to use this facility are encouraged to propose additions
upstream so we can create a stable and supported interface here.
* The configuration option "api_version" in the "ironic" group was
marked as deprecated and will be removed in the future. The only
possible value for that configuration was "1" (because Ironic only
has 1 API version) and the Ironic team came to an agreement that
setting the API version via configuration option should not be
supported anymore. As the Ironic driver in Nova requests the Ironic
v1.8 API, that means that Nova 13.0.0 ("Mitaka") requires Ironic
4.0.0 ("Liberty") or newer if you want to use the Ironic driver.
* The libvirt live_migration_flag and block_migration_flag config
options are deprecated. These options gave too fine grained control
over the flags used and, in some cases, misconfigurations could have
dangerous side effects. Please note the availability of a new
live_migration_tunnelled configuration option.
* The "network_device_mtu" option in Nova is deprecated for removal
since network MTU should be specified when creating the network with
nova-network. With Neutron networks, the MTU value comes from the
"segment_mtu" configuration option in Neutron.
* The old top-level resource */os-migrations* is deprecated and
will not be extended anymore. A migration_type field has been added
for /os-migrations, along with a ref link to
/servers/{uuid}/migrations/{id} when the migration is an in-progress
live migration. This has been added in microversion 2.23.
* Deprecate "volume_api_class" and "network_api_class" config
options. We only have one sensible backend for either of these.
These options will be removed and turned into constants in Newton.
* The option "memcached_servers" is deprecated in Mitaka. Operators
should use the oslo.cache configuration instead. Specifically, the
"enabled" option in the [cache] section should be set to True, and
the url(s) for the memcached servers should be listed in the
[cache]/memcache_servers option.
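An equivalent oslo.cache configuration might look like this (server
addresses are examples):

```ini
[cache]
enabled = True
memcache_servers = 192.0.2.10:11211,192.0.2.11:11211
```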
* The Zookeeper Service Group driver has been removed.
The driver has no known users and is not actively maintained. A
warning log message about the driver's state was added for the Kilo
release. Also, the evzookeeper library that the driver depends on is
unmaintained and incompatible with recent eventlet releases.
A future release of Nova will use the Tooz library to track service
liveliness, and Tooz supports Zookeeper.
(https://bugs.launchpad.net/nova/+bug/1443910)
(http://specs.openstack.org/openstack/nova-
specs/specs/liberty/approved/service-group-using-tooz.html)
Security Issues
***************
* [OSSA 2016-001] Nova host data leak through snapshot
(CVE-2015-7548)
* Bug 1524274 (https://bugs.launchpad.net/nova/+bug/1524274)
* Announcement
[OSSA 2016-002] Xen connection password leak in logs via
StorageError (CVE-2015-8749)
* Bug 1516765 (https://bugs.launchpad.net/nova/+bug/1516765)
* Announcement
[OSSA 2016-007] Host data leak during resize/migrate for raw-backed
instances (CVE-2016-2140)
* Bug 1548450 (https://bugs.launchpad.net/nova/+bug/1548450)
* Announcement
Bug Fixes
*********
* Fixed a race condition: if the base image was deleted by
ImageCacheManager while imagebackend was copying the image to the
instance path, the instance went into an error state. When libvirt
changed the base file ownership to libvirt-qemu during the copy,
updating the file access time with os.utime failed with a
permission-denied error. The issue is fixed by updating the base
file access time with root user privileges using the 'touch'
command.
* When plugging virtual interfaces of type vhost-user, the MTU
value will not be applied to the interface by nova. vhost-user ports
exist only in userspace and are not backed by kernel netdevs; for
this reason it is not possible to set the MTU on a vhost-user
interface using standard tools such as ifconfig or ip link.
Other Notes
***********
* Conductor RPC API no longer supports v2.x.
* The service subcommand of nova-manage is deprecated. Use the nova
service-* commands from python-novaclient instead or the os-services
REST resource. The service subcommand will be removed in the 14.0
release.
* The Neutron network MTU value is now used when plugging virtual
interfaces in nova-compute. If the value is 0, which is the default
value for the "segment_mtu" configuration option in Neutron before
Mitaka, then the (deprecated) "network_device_mtu" configuration
option in Nova is used, which defaults to not setting an MTU value.
* The sample policy file shipped with Nova contained many policies
set to ""(allow all) which was not the proper default for many of
those checks. It was also a source of confusion as some people
thought "" meant to use the default rule. These empty policies have
been updated to be explicit in all cases. Many of them were changed
to match the default rule of "admin_or_owner" which is a more
restrictive policy check but does not change the restrictiveness of
the API calls overall because there are similar checks in the
database already. This does not affect any existing deployment, just
the sample file included for use by new deployments.
* Nova's EC2 API support which was deprecated in Kilo
(https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Upgrade_Notes_2)
is removed from Mitaka. This has been replaced by the new ec2-api
project (http://git.openstack.org/cgit/openstack/ec2-api/)
Changes in nova 13.0.0.0rc2..13.0.0
-----------------------------------
7105f88 Imported Translations from Zanata
5de98cb Imported Translations from Zanata
a9d5542 Fix detach SR-IOV when using LibvirtConfigGuestHostdevPCI
5b6ee70 Imported Translations from Zanata
29042e0 Imported Translations from Zanata
3e9819d Update cells blacklist regex for test_server_basic_ops
c71c4e0 Stop providing force_hosts to the scheduler for move ops
Diffstat (except docs and test files)
-------------------------------------
devstack/tempest-dsvm-cells-rc | 2 +-
nova/conductor/manager.py | 8 +
nova/conductor/tasks/live_migrate.py | 4 +
nova/locale/de/LC_MESSAGES/nova.po | 22 +-
nova/locale/fr/LC_MESSAGES/nova.po | 34 +-
nova/locale/ja/LC_MESSAGES/nova.po | 1315 ++++++++++++--
nova/locale/ko_KR/LC_MESSAGES/nova-log-warning.po | 1914 ++++++++++++++++++++
nova/objects/request_spec.py | 12 +
.../unit/conductor/tasks/test_live_migrate.py | 3 +
nova/virt/libvirt/driver.py | 13 +-
13 files changed, 3194 insertions(+), 194 deletions(-)
07 Apr '16
We are gleeful to announce the release of:
mistral-dashboard 2.0.0: Mistral dashboard
This release is part of the mitaka release series.
For more details, please see below.
Changes in mistral-dashboard 2.0.0.0rc1..2.0.0
----------------------------------------------
f49b147 Update .gitreview for stable/mitaka
Diffstat (except docs and test files)
-------------------------------------
.gitreview | 1 +
1 file changed, 1 insertion(+)
[release][designate] designate-dashboard 2.0.0 release (mitaka)
by no-reply@openstack.org 07 Apr '16
We are tickled pink to announce the release of:
designate-dashboard 2.0.0: Designate Horizon UI bits
This release is part of the mitaka release series.
For more details, please see below.
Changes in designate-dashboard 2.0.0.0rc1..2.0.0
------------------------------------------------
7fbe18a Imported Translations from Zanata
2fdca52 Fix unit tests under Django 1.9
72d45d6 Add ADD_INSTALLED_APPS to 'enabled' file
76df49e Extract strings from django templates
03813a8 Imported Translations from Zanata
202300b Imported Translations from Zanata
a829a1b Update .gitreview for stable/mitaka
Diffstat (except docs and test files)
-------------------------------------
.gitreview | 1 +
babel-django.cfg | 4 +
.../templates/dns_domains/domain_detail.html | 2 +-
.../dns_domains/templates/dns_domains/records.html | 2 +-
.../enabled/_1720_project_dns_panel.py | 2 +
designatedashboard/locale/django.pot | 238 +++++++++++++---
designatedashboard/locale/ja/LC_MESSAGES/django.po | 134 ++++++++-
.../locale/ko_KR/LC_MESSAGES/django.po | 227 +++++++++++++++
.../locale/pt_BR/LC_MESSAGES/django.po | 305 +++++++++++++++++++++
16 files changed, 939 insertions(+), 120 deletions(-)
We are jazzed to announce the release of:
sahara 4.0.0: Sahara project
This release is part of the mitaka release series.
With source available at:
http://git.openstack.org/cgit/openstack/sahara
For more details, please see below.
4.0.0
^^^^^
New Features
************
* Added the ability to schedule EDP jobs in sahara.
* Added support for running sahara-api as a WSGI application. Use
the 'sahara-wsgi-api' command to use this feature.
* CDH 5.5.0 is supported in CDH plugin.
* OpenStack Key Manager service can now be used by sahara to enable
storage of sensitive information in an external service such as
barbican.
Upgrade Notes
*************
* HDP plugin removed from default configuration list. End users who
are using HDP should ensure that their configuration files continue
to list "hdp".
* Moved notification options into the [oslo_messaging_notifications]
section.
Deprecation Notes
*****************
* The HDP 2.0.6 plugin is deprecated in the Mitaka release and will
be removed in the Newton release. Please use the Ambari 2.3 plugin
instead.
* Removed support for the Vanilla 2.6.0 plugin.
* Removed support for the Spark 1.0.0 plugin.
Bug Fixes
*********
* Fixed api_insecure handling in sessions. Closed bug 1539498.
* Add regular expression matching on search values for certain
string fields of sahara objects. This applies to list operations
through the REST API and therefore applies to the dashboard and
sahara client as well. Closes bug 1503345.
Changes in sahara 4.0.0.0rc1..4.0.0
-----------------------------------
65330b8 Set libext path for Oozie 4.0.1, 4.1.0
6daf4c0 Update .gitreview for stable/mitaka
Diffstat (except docs and test files)
-------------------------------------
.gitreview | 1 +
sahara/plugins/mapr/services/oozie/oozie.py | 12 +++++++-----
2 files changed, 8 insertions(+), 5 deletions(-)
07 Apr '16
We are eager to announce the release of:
freezer-api 2.0.0: OpenStack Backup and Restore API Service
This release is part of the mitaka release series.
For more details, please see below.
Changes in freezer-api 2.0.0.0rc1..2.0.0
----------------------------------------
ea70bbc This is incorrect url in example doc and conf
13f0b7b Add Freezer API Version Test
e38f650 Fix url from stackforge to openstack
Diffstat (except docs and test files)
-------------------------------------
.pylintrc | 2 +-
devstack/README.rst | 2 +-
devstack/gate_hook.sh | 1 +
devstack/local.conf.example | 2 +-
devstack/settings | 2 +-
.../services/freezer_api_client.py | 34 ++++++++++++++++++++++
14 files changed, 155 insertions(+), 12 deletions(-)
07 Apr '16
We are gleeful to announce the release of:
neutron-fwaas 8.0.0: OpenStack Networking FWaaS
This release is part of the mitaka release series.
For more details, please see below.
8.0.0
^^^^^
Generation of sample Neutron FWaaS configuration files.
Enable quotas for FWaaS.
New Features
************
* Neutron FWaaS no longer includes static example configuration
files. Instead, use tools/generate_config_file_samples.sh to
generate them. The files are generated with a .sample extension.
* The FWaaS extension will register quotas. The default values for
quota_firewall and quota_firewall_policy are set to 10. The default
value for quota_firewall_rule is set to 100. Quotas can be adjusted
in the conf files, including -1 values to allow unlimited use.
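For example, raising the firewall quotas and making rules unlimited
could look like this in the Neutron configuration (values are
illustrative; [quotas] is the standard Neutron quota section):

```ini
[quotas]
quota_firewall = 20
quota_firewall_policy = 20
quota_firewall_rule = -1
```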
Known Issues
************
* Tenants may receive a 409 Conflict error with a message body
containing a quota exceeded message during resource creation if
their quota is exceeded.
Other Notes
***********
* Operators that increase the default limit for quota_routers from
10 may want to bump FWaaS quotas as well, since with router
insertion a tenant can potentially have a unique policy and firewall
for each router.
Changes in neutron-fwaas 8.0.0.0rc2..8.0.0
------------------------------------------
ab56228 Constraint requirements using mitaka upper-constraints.txt file
Diffstat (except docs and test files)
-------------------------------------
tools/tox_install.sh | 2 +-
tox.ini | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
We are jazzed to announce the release of:
freezer 2.0.0: The OpenStack Backup Restore and Disaster Recovery as a
Service Platform
This release is part of the mitaka release series.
For more details, please see below.
Changes in freezer 2.0.0.0rc1..2.0.0
------------------------------------
367a225 Fix creation of jobs with stopped state
fd5d6ba Updated from global requirements
9efa12f Added --overwrite parameter for removing all files from restore directory. Default value --overwrite=False
0973cf7 Use correct type for freezer arguments
a6b771f Provides more details in the help of --restore-from-date option. Explains the behavior with an example.
7c0f5bc Rename mode default to fs
219fb2b Start to introduce tempest tests.
34d2168 enable output of metadata to a file
Diffstat (except docs and test files)
-------------------------------------
devstack/settings | 3 ++
freezer/common/config.py | 49 ++++++++++++++--------
freezer/engine/engine.py | 6 ++-
freezer/job.py | 2 +-
freezer/main.py | 11 ++++-
freezer/mode/default.py | 35 ----------------
freezer/mode/fs.py | 35 ++++++++++++++++
freezer/scheduler/scheduler_job.py | 6 ++-
.../freezer_tempest_plugin/services/__init__.py | 0
freezer/utils/utils.py | 4 ++
requirements.txt | 35 +++++++++-------
setup.cfg | 3 ++
setup.py | 36 +++++++++-------
test-requirements.txt | 26 +++++++-----
23 files changed, 263 insertions(+), 99 deletions(-)
Requirements updates
--------------------
diff --git a/requirements.txt b/requirements.txt
index adeefc3..741e429 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,3 +1,6 @@
-astroid<1.4.0 # breaks pylint 1.4.4
-setuptools>=16.0
-pbr>=1.6
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
+astroid<1.4.0 # LGPL # breaks pylint 1.4.4
+setuptools>=16.0 # PSF/ZPL
+pbr>=1.6 # Apache-2.0
@@ -5,9 +8,9 @@ python-swiftclient>=2.2.0 # Apache-2.0
-python-keystoneclient>=1.6.0,!=1.8.0
-python-cinderclient>=1.3.1
-python-glanceclient>=1.2.0 # Apache-2.0
-python-novaclient>=2.29.0,!=2.33.0 # Apache-2.0
-python-openstackclient>=2.0.0 # Apache-2.0
-oslo.utils>=3.2.0
-oslo.i18n>=1.5.0 # Apache-2.0
-oslo.log>=1.14.0
-oslo.config>=3.2.0 # Apache-2.0
+python-keystoneclient!=1.8.0,!=2.1.0,>=1.6.0 # Apache-2.0
+python-cinderclient>=1.3.1 # Apache-2.0
+python-glanceclient>=2.0.0 # Apache-2.0
+python-novaclient!=2.33.0,>=2.29.0 # Apache-2.0
+python-openstackclient>=2.1.0 # Apache-2.0
+oslo.utils>=3.5.0 # Apache-2.0
+oslo.i18n>=2.1.0 # Apache-2.0
+oslo.log>=1.14.0 # Apache-2.0
+oslo.config>=3.7.0 # Apache-2.0
@@ -15,3 +18,3 @@ oslo.config>=3.2.0 # Apache-2.0
-PyMySQL>=0.6.2 # MIT License
-pymongo>=3.0.2
-paramiko>=1.13.0
+PyMySQL>=0.6.2 # MIT License
+pymongo!=3.1,>=3.0.2 # Apache-2.0
+paramiko>=1.16.0 # LGPL
@@ -21 +24 @@ six>=1.9.0 # MIT
-apscheduler
+apscheduler # MIT License
diff --git a/test-requirements.txt b/test-requirements.txt
index b830d53..c7c96a9 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -1,5 +1,8 @@
-flake8>=2.2.4,<=2.4.1
-hacking>=0.10.2,<0.11
-coverage>=3.6
-discover
-mock>=1.2
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
+flake8<2.6.0,>2.4.1 # MIT
+hacking<0.11,>=0.10.2
+coverage>=3.6 # Apache-2.0
+discover # BSD
+mock>=1.2 # BSD
@@ -7,5 +10,8 @@ pylint==1.4.5 # GNU GPL v2
-python-subunit>=0.0.18
-sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 # BSD
-oslosphinx>=2.5.0,!=3.4.0 # Apache-2.0
-testrepository>=0.0.18
-testtools>=1.4.0
+python-subunit>=0.0.18 # Apache-2.0/BSD
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
+testrepository>=0.0.18 # Apache-2.0/BSD
+testtools>=1.4.0 # MIT
+
+# Tempest Plugin
+tempest-lib>=0.14.0 # Apache-2.0
We are satisfied to announce the release of:
keystone 9.0.0: OpenStack Identity
This release is part of the mitaka release series.
For more details, please see below.
9.0.0
^^^^^
New Features
************
* [blueprint domain-specific-roles
(https://blueprints.launchpad.net/keystone/+spec/domain-specific-
roles)] Roles can now be optionally defined as domain specific.
Domain specific roles are not referenced in policy files, rather
they can be used to allow a domain to build their own private
inference rules with implied roles. A domain specific role can be
assigned to a domain or project within its domain, and any subset of
global roles it implies will appear in a token scoped to the
respective domain or project. The domain specific role itself,
however, will not appear in the token.
* [blueprint bootstrap
(https://blueprints.launchpad.net/keystone/+spec/bootstrap)]
keystone-manage now supports the bootstrap command on the CLI so
that a keystone install can be initialized without the need of the
admin_token filter in the paste-ini.
* [blueprint domain-config-default
(https://blueprints.launchpad.net/keystone/+spec/domain-config-
default)] The Identity API now supports retrieving the default
values for the configuration options that can be overridden via the
domain specific configuration API.
* [blueprint url-safe-naming
(https://blueprints.launchpad.net/keystone/+spec/url-safe-naming)]
The names of projects and domains can optionally be required to be
URL-safe, to support the future ability to specify projects using
hierarchical naming.
* [bug 1490804 (https://bugs.launchpad.net/keystone/+bug/1490804)]
Audit IDs are included in the token revocation list.
* [bug 1519210 (https://bugs.launchpad.net/keystone/+bug/1519210)] A
user may now opt-out of notifications by specifying a list of event
types using the *notification_opt_out* option in *keystone.conf*.
These events are never sent to a messaging service.
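As an illustrative sketch (the event-type values here are examples,
not a complete list, and the option placement is an assumption),
opting out of selected notifications might look like this in
*keystone.conf*:

```ini
[DEFAULT]
# notification_opt_out may be given multiple times; each value is an
# event type that will never be sent to the messaging service.
notification_opt_out = identity.authenticate.success
notification_opt_out = identity.authenticate.pending
```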
* [bug 1542417 (https://bugs.launchpad.net/keystone/+bug/1542417)]
Added support for a *user_description_attribute* mapping to the LDAP
driver configuration.
* [bug 1526462 (https://bugs.launchpad.net/keystone/+bug/1526462)]
Support for posixGroups with OpenDirectory and UNIX when using the
LDAP identity driver.
* [bug 1489061 (https://bugs.launchpad.net/keystone/+bug/1489061)]
Caching has been added to catalog retrieval on a per user ID and
project ID basis. This affects both the v2 and v3 APIs. As a result
this should provide a performance benefit to fernet-based
deployments.
* Keystone supports "$(project_id)s" in the catalog. It works the
same as "$(tenant_id)s". Use of "$(tenant_id)s" is deprecated and
catalog endpoints should be updated to use "$(project_id)s".
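As a minimal sketch (not keystone's actual implementation; the URL
and project ID are made up), endpoint templates of this form can be
filled in with Python %-style substitution after rewriting "$(" to
"%(":

```python
# Hypothetical sketch of catalog endpoint substitution. Both
# "$(project_id)s" and the legacy "$(tenant_id)s" placeholders work
# the same way; only the key in the substitution dict differs.
def format_endpoint(url, values):
    return url.replace("$(", "%(") % values

url = "http://cloud.example.com:8776/v2/$(project_id)s"
print(format_endpoint(url, {"project_id": "abc123"}))
# -> http://cloud.example.com:8776/v2/abc123
```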
* [bug 1525317 (https://bugs.launchpad.net/keystone/+bug/1525317)]
Enable filtering of identity providers based on *id*, and *enabled*
attributes.
* [bug 1555830 (https://bugs.launchpad.net/keystone/+bug/1555830)]
Enable filtering of service providers based on *id*, and *enabled*
attributes.
* [blueprint federation-group-ids-mapped-without-domain-reference
(https://blueprints.launchpad.net/keystone/+spec/federation-group-
ids-mapped-without-domain-reference)] Enhanced the federation
mapping engine to allow for group IDs to be referenced without a
domain ID.
* [blueprint implied-roles
(https://blueprints.launchpad.net/keystone/+spec/implied-roles)]
Keystone now supports creating implied roles. Role inference rules
can now be added to indicate when the assignment of one role implies
the assignment of another. The rules are of the form *prior_role*
implies *implied_role*. At token generation time, user/group
assignments of roles that have implied roles will be expanded to
also include such roles in the token. The expansion of implied roles
is controlled by the *prohibited_implied_role* option in the
*[assignment]* section of *keystone.conf*.
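The expansion step can be sketched as a transitive closure over the
inference rules; this is an illustration, not keystone's code, and
the role names are hypothetical:

```python
# Each rule maps a prior_role to the roles it implies. Expansion
# keeps following rules until no new roles are added, so chains such
# as admin -> member -> reader are fully resolved.
def expand_roles(assigned, implication_rules):
    """Return the assigned roles plus all transitively implied roles."""
    roles = set(assigned)
    pending = list(assigned)
    while pending:
        role = pending.pop()
        for implied in implication_rules.get(role, ()):
            if implied not in roles:
                roles.add(implied)
                pending.append(implied)
    return roles

rules = {"admin": ["member"], "member": ["reader"]}
print(expand_roles(["admin"], rules))  # admin, member and reader
```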
* [bug 968696 (https://bugs.launchpad.net/keystone/+bug/968696)] A
pair of configuration options have been added to the "[resource]"
section to specify a special "admin" project:
"admin_project_domain_name" and "admin_project_name". If these are
defined, any scoped token issued for that project will have an
additional identifier "is_admin_project" added to the token. This
identifier can then be checked by the policy rules in the policy
files of the services when evaluating access control policy for an
API. Keystone does not yet support the ability for a project acting
as a domain to be the admin project. That will be added once the
rest of the code for projects acting as domains is merged.
* [bug 1515302 (https://bugs.launchpad.net/keystone/+bug/1515302)]
Two new configuration options have been added to the *[ldap]*
section. *user_enabled_emulation_use_group_config* and
*project_enabled_emulation_use_group_config*, which allow deployers
to choose if they want to override the default group LDAP schema
option.
* [bug 1501698 (https://bugs.launchpad.net/keystone/+bug/1501698)]
Support parameter *list_limit* when LDAP is used as identity
backend.
* [bug 1479569 (https://bugs.launchpad.net/keystone/+bug/1479569)]
Names have been added to the list role assignments API (GET
/role_assignments?include_names=True); rather than returning just
the internal IDs of the objects, their names are also returned.
* Domains are now represented as top level projects with the
attribute *is_domain* set to true. Such projects will appear as
parents for any previous top level projects. Projects acting as
domains can be created, read, updated, and deleted via either the
project API or the domain API (V3 only).
* [bug 1500222 (https://bugs.launchpad.net/keystone/+bug/1500222)]
Added information such as: user ID, project ID, and domain ID to log
entries. As a side effect of this change, both the user's domain ID
and project's domain ID are now included in the auth context.
* [bug 1473042 (https://bugs.launchpad.net/keystone/+bug/1473042)]
Keystone's S3 compatibility support can now authenticate using AWS
Signature Version 4.
* [blueprint totp-auth
(https://blueprints.launchpad.net/keystone/+spec/totp-auth)]
Keystone now supports authenticating via Time-based One-time
Password (TOTP). To enable this feature, add the "totp" auth plugin
to the *methods* option in the *[auth]* section of *keystone.conf*.
More information about using TOTP can be found in keystone's
developer documentation
(http://docs.openstack.org/developer/keystone/auth-totp.html)
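For illustration, RFC 6238 TOTP (the algorithm this plugin is based
on) can be computed with the standard library alone; this sketch is
not keystone's code, and the six-digit / 30-second / SHA-1
parameters are the RFC defaults, assumed here:

```python
import hmac
import struct
import time
from hashlib import sha1

# Illustrative RFC 6238 TOTP sketch -- not keystone's implementation.
def totp(secret, for_time=None, interval=30, digits=6):
    if for_time is None:
        for_time = time.time()
    # The moving factor is the number of elapsed time steps.
    counter = struct.pack(">Q", int(for_time // interval))
    digest = hmac.new(secret, counter, sha1).digest()
    # Dynamic truncation per RFC 4226 section 5.3.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at t=59s the expected six-digit code is 287082.
print(totp(b"12345678901234567890", for_time=59))  # -> 287082
```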
* [blueprint x509-ssl-client-cert-authn
(https://blueprints.launchpad.net/keystone/+spec/x509-ssl-client-
cert-authn)] Keystone now supports tokenless client SSL x.509
certificate authentication and authorization.
Upgrade Notes
*************
* [bug 1473553 (https://bugs.launchpad.net/keystone/+bug/1473553)]
The *keystone-paste.ini* must be updated to put the
"admin_token_auth" middleware before "build_auth_context". See the
sample *keystone-paste.ini* for the correct *pipeline* value.
Having "admin_token_auth" after "build_auth_context" is deprecated
and will not be supported in a future release.
* The LDAP driver now also maps the user description attribute after
user retrieval from LDAP. If this is undesired behavior for your
setup, please add *description* to the *user_attribute_ignore* LDAP
driver config setting. The default mapping of the description
attribute is set to *description*. Please adjust the LDAP driver
config setting *user_description_attribute* if your LDAP uses a
different attribute name (for instance to *displayName* in case of
an AD backed LDAP). If your *user_additional_attribute_mapping*
setting contains *description:description* you can remove this
mapping, since this is now the default behavior.
* The default setting for the *os_inherit* configuration option is
changed to True. If it is required to continue with this portion of
the API disabled, then override the default setting by explicitly
specifying the os_inherit option as False.
* The *keystone-paste.ini* file must be updated to remove extension
filters, and their use in "[pipeline:api_v3]". Remove the following
filters: "[filter:oauth1_extension]",
"[filter:federation_extension]",
"[filter:endpoint_filter_extension]", and
"[filter:revoke_extension]". See the sample keystone-paste.ini
(https://git.openstack.org/cgit/openstack/keystone/tree/etc
/keystone-paste.ini) file for guidance.
* The *keystone-paste.ini* file must be updated to remove extension
filters, and their use in "[pipeline:public_api]" and
"[pipeline:admin_api]" pipelines. Remove the following filters:
"[filter:user_crud_extension]", "[filter:crud_extension]". See the
sample keystone-paste.ini
(https://git.openstack.org/cgit/openstack/keystone/tree/etc
/keystone-paste.ini) file for guidance.
* A new config option, *insecure_debug*, is added to control whether
debug information is returned to clients. This used to be controlled
by the *debug* option. If you'd like to return extra information to
clients set the value to "true". This extra information may help an
attacker.
* The configuration options for LDAP connection pooling, *[ldap]
use_pool* and *[ldap] use_auth_pool*, are now both enabled by
default. Only deployments using LDAP drivers are affected.
Additional configuration options are available in the *[ldap]*
section to tune connection pool size, etc.
* [bug 1541092 (https://bugs.launchpad.net/keystone/+bug/1541092)]
Only database upgrades from Kilo and newer are supported.
* Keystone now uses oslo.cache. Update the *[cache]* section of
*keystone.conf* to point to oslo.cache backends:
"oslo_cache.memcache_pool" or "oslo_cache.mongo". Refer to the
sample configuration file for examples. See oslo.cache
(http://docs.openstack.org/developer/oslo.cache) for additional
documentation.
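A sketch of the corresponding *[cache]* section (the memcached
address is an example; see the sample configuration file for the
full set of options):

```ini
[cache]
enabled = true
backend = oslo_cache.memcache_pool
memcache_servers = 127.0.0.1:11211
```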
Deprecation Notes
*****************
* [blueprint deprecated-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-
mitaka)] The V8 Assignment driver interface is deprecated. Support
for the V8 Assignment driver interface is planned to be removed in
the 'O' release of OpenStack.
* [blueprint deprecated-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-
mitaka)] The V8 Role driver interface is deprecated. Support for the
V8 Role driver interface is planned to be removed in the 'O' release
of OpenStack.
* The V8 Resource driver interface is deprecated. Support for the V8
Resource driver interface is planned to be removed in the 'O'
release of OpenStack.
* [blueprint deprecated-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-
mitaka)] The "admin_token_auth" filter must now be placed before the
"build_auth_context" filter in *keystone-paste.ini*.
* Use of "$(tenant_id)s" in the catalog endpoints is deprecated in
favor of "$(project_id)s".
* [blueprint deprecated-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-
mitaka)] Deprecate the "enabled" option from "[endpoint_policy]", it
will be removed in the 'O' release, and the extension will always be
enabled.
* [blueprint deprecated-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-
mitaka)] The token memcache and memcache_pool persistence backends
have been deprecated in favor of using Fernet tokens (which require
no persistence).
* [blueprint deprecated-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-
mitaka)] Deprecated all v2.0 APIs. The keystone team recommends
using v3 APIs instead. Most v2.0 APIs will be removed in the 'Q'
release. However, the authentication APIs and EC2 APIs are
indefinitely deprecated and will not be removed in the 'Q' release.
* [blueprint deprecated-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-
mitaka)] As of the Mitaka release, the PKI and PKIz token formats
have been deprecated. They will be removed in the 'O' release. Due
to this change, the *hash_algorithm* option in the *[token]* section
of the configuration file has also been deprecated. Also due to this
change, the "keystone-manage pki_setup" command has been deprecated
as well.
* [blueprint deprecated-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-
mitaka)] As of the Mitaka release, write support for the LDAP driver
of the Identity backend has been deprecated. This includes the
following operations: create user, create group, delete user, delete
group, update user, update group, add user to group, and remove user
from group. These operations will be removed in the 'O' release.
* [blueprint deprecated-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-
mitaka)] As of the Mitaka release, the auth plugin
*keystone.auth.plugins.saml2.Saml2* has been deprecated. It is
recommended to use *keystone.auth.plugins.mapped.Mapped* instead.
The "saml2" plugin will be removed in the 'O' release.
* [blueprint deprecated-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-
mitaka)] As of the Mitaka release, the simple_cert_extension is
deprecated since it is only used in support of the PKI and PKIz
token formats. It will be removed in the 'O' release.
* The *os_inherit* configuration option is deprecated. In the
future, this option will be removed and this portion of the API will
always be enabled.
* [blueprint deprecated-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-
mitaka)] The file "httpd/keystone.py" has been deprecated in favor
of "keystone-wsgi-admin" and "keystone-wsgi-public" and may be
removed in the 'O' release.
* [blueprint deprecated-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-
mitaka)] "keystone.common.cache.backends.memcache_pool",
"keystone.common.cache.backends.mongo", and
"keystone.common.cache.backends.noop" are deprecated in favor of
oslo.cache backends. The keystone backends will be removed in the
'O' release.
* The V8 Federation driver interface is deprecated in favor of the
V9 Federation driver interface. Support for the V8 Federation driver
interface is planned to be removed in the 'O' release of OpenStack.
Security Issues
***************
* The use of admin_token filter is insecure compared to the use of a
proper username/password. Historically the admin_token filter has
been left enabled in Keystone after initialization due to the way
CMS systems work. Moving to an out-of-band initialization using
"keystone-manage bootstrap" will eliminate the security concerns
around a static shared string that conveys admin access to keystone
and therefore to the entire installation.
* The admin_token method of authentication was never intended to be
used for any purpose other than bootstrapping an install. However
many deployments had to leave the admin_token method enabled due to
restrictions on editing the paste file used to configure the web
pipelines. To minimize the risk from this mechanism, the
*admin_token* configuration value now defaults to a python *None*
value. In addition, if the value is set to *None*, either
explicitly or implicitly, the *admin_token* will not be enabled, and
an attempt to use it will lead to a failed authentication.
* [bug 1490804 (https://bugs.launchpad.net/keystone/+bug/1490804)]
[CVE-2015-7546 (http://cve.mitre.org/cgi-
bin/cvename.cgi?name=CVE-2015-7546)] A bug is fixed where an
attacker could avoid token revocation when the PKI or PKIZ token
provider is used. The complete remediation for this vulnerability
requires the corresponding fix in the keystonemiddleware project.
Bug Fixes
*********
* [bug 1535878 (https://bugs.launchpad.net/keystone/+bug/1535878)]
Originally, the provided policy files required a user to have at
least project admin level permission to perform GET
/projects/{project_id}. They have been updated to allow the call to
be performed by any user who has a role on the project.
* [bug 1516469 (https://bugs.launchpad.net/keystone/+bug/1516469)]
Endpoints filtered by endpoint_group project association will be
included in the service catalog when a project scoped token is
issued and "endpoint_filter.sql" is used for the catalog driver.
* Support has now been added to send notification events on
user/group membership. When a user is added or removed from a group
a notification will be sent including the identifiers of both the
user and the group.
* [bug 1527759 (https://bugs.launchpad.net/keystone/+bug/1527759)]
Reverted the change that eliminated the ability to get a V2 token
with a user or project that is not in the default domain. That
change broke real-world deployments that relied on authenticating
via the V2 API with a user or project not in the default domain.
Deployers are still expected to update their code to properly handle
V3 auth, but the fix broke expected and tested behavior.
* [bug 1480270 (https://bugs.launchpad.net/keystone/+bug/1480270)]
Endpoints created when using v3 of the keystone REST API will now be
included when listing endpoints via the v2.0 API.
Other Notes
***********
* The list_project_ids_for_user(), list_domain_ids_for_user(),
list_user_ids_for_project(), list_project_ids_for_groups(),
list_domain_ids_for_groups(), list_role_ids_for_groups_on_project()
and list_role_ids_for_groups_on_domain() methods have been removed
from the V9 version of the Assignment driver.
* [blueprint move-extensions
(https://blueprints.launchpad.net/keystone/+spec/move-extensions)]
If any extension migrations are run, for example: "keystone-manage
db_sync --extension endpoint_policy" an error will be returned. This
is working as designed. To run these migrations simply run:
"keystone-manage db_sync". The complete list of affected extensions
are: "oauth1", "federation", "endpoint_filter", "endpoint_policy",
and "revoke".
* [bug 1367113 (https://bugs.launchpad.net/keystone/+bug/1367113)]
The "get entity" and "list entities" functionality for the KVS
catalog backend has been reimplemented to use the data from the
catalog template. Previously this would only act on temporary data
that was created at runtime. The create, update and delete entity
functionality now raises an exception.
* "keystone-manage db_sync" will no longer create the Default
domain. This domain is used as the domain for any users created
using the legacy v2.0 API. A default domain is created by "keystone-
manage bootstrap" and when a user or project is created using the
legacy v2.0 API.
* The ability to validate a trust-scoped token against the v2.0 API
has been removed, in favor of using version 3 of the API.
* [blueprint removed-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/removed-as-of-
mitaka)] Removed "extras" from token responses. These fields should
not be necessary and a well-defined API makes this field redundant.
This was deprecated in the Kilo release.
* [blueprint removed-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/removed-as-of-
mitaka)] Removed "RequestBodySizeLimiter" from keystone middleware.
The keystone team suggests using
"oslo_middleware.sizelimit.RequestBodySizeLimiter" instead. This was
deprecated in the Kilo release.
* [blueprint removed-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/removed-as-of-
mitaka)] Notifications with event_type
"identity.created.role_assignment" and
"identity.deleted.role_assignment" have been removed. The keystone
team suggests listening for "identity.role_assignment.created" and
"identity.role_assignment.deleted" instead. This was deprecated in
the Kilo release.
* [blueprint removed-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/removed-as-of-
mitaka)] Removed "check_role_for_trust" from the trust controller,
ensure policy files do not refer to this target. This was deprecated
in the Kilo release.
* [blueprint removed-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/removed-as-of-
mitaka)] Removed the Catalog KVS backend
("keystone.catalog.backends.kvs.Catalog"). This was deprecated in
the Icehouse release.
* [blueprint removed-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/removed-as-of-
mitaka)] The LDAP backend for Assignment has been removed. This was
deprecated in the Kilo release.
* [blueprint removed-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/removed-as-of-
mitaka)] The LDAP backend for Resource has been removed. This was
deprecated in the Kilo release.
* [blueprint removed-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/removed-as-of-
mitaka)] The LDAP backend for Role has been removed. This was
deprecated in the Kilo release.
* [blueprint removed-as-of-mitaka
(https://blueprints.launchpad.net/keystone/+spec/removed-as-of-
mitaka)] Removed Revoke KVS backend
("keystone.revoke.backends.kvs.Revoke"). This was deprecated in the
Juno release.
Changes in keystone 9.0.0.0rc2..9.0.0
-------------------------------------
3e5fca0 Update federated user display name with shadow_users_api
Diffstat (except docs and test files)
-------------------------------------
keystone/identity/core.py | 4 ++--
2 files changed, 30 insertions(+), 2 deletions(-)
We are glad to announce the release of:
heat 6.0.0: OpenStack Orchestration
This release is part of the mitaka release series.
For more details, please see below.
6.0.0
^^^^^
New Features
************
* Added new functionality for showing and listing stack outputs
without resolving all outputs during stack initialisation.
* Added new API calls for showing and listing stack outputs
"/stack/outputs" and "/stack/outputs/output_key".
* Added support for the new API in python-heatclient for
"output_show" and "output_list". If the Heat API version is 1.19 or
above, the Heat client will use the "output_show" and "output_list"
API calls instead of parsing the stack get response. If the Heat API
version is lower than 1.19, outputs are resolved in the Heat client
as before.
* Add new "OS::Barbican::GenericContainer" resource for storing
arbitrary barbican secrets.
* Add new "OS::Barbican::RSAContainer" resource for storing RSA
public keys, private keys, and private key pass phrases.
* A new "OS::Barbican::CertificateContainer" resource for storing
the secrets that are relevant to certificates.
* The OS::Nova::HostAggregate resource plugin is added to support
host aggregates, which are provided by the nova "aggregates" API
extension.
* A nova.host constraint is added to validate the host attribute,
which is provided by the nova "host" API extension.
* OS::Neutron::QoSPolicy resource plugin is added to support QoS
policy, which is provided by neutron "qos" API extension.
* OS::Neutron::QoSBandwidthLimitRule resource plugin is added to
support neutron QoS bandwidth limit rule, which is provided by
neutron "qos" API extension.
* Resources "OS::Neutron::Port" and "OS::Neutron::Net" now support
"qos_policy" optional property, that will associate with QoS policy
to offer different service levels based on the policy rules.
* The OS::Neutron::RBACPolicy resource plugin is added to support
RBAC policy, which is used to manage RBAC policy in Neutron. This
resource creates and manages Neutron RBAC policies, allowing Neutron
networks to be shared with subsets of tenants.
* Added a new "event-sinks" element to the environment which allows
specifying a target where events from the stack are sent. It
supports the "zaqar-queue" element for now.
* Adds a new "immutable" boolean field to the parameters section in
a HOT template. This gives template authors the ability to mark
template parameters as immutable to restrict updating parameters
which have destructive effects on the application. A value of True
results in the engine rejecting stack-updates that include changes
to that parameter. When not specified in the template, "immutable"
defaults to False to ensure backwards compatibility with old
templates.
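As a sketch, marking a HOT template parameter immutable might look
like the following (the parameter name and default value are
hypothetical):

```yaml
heat_template_version: 2016-04-08

parameters:
  database_flavor:    # hypothetical parameter name
    type: string
    default: m1.small
    immutable: true   # stack-updates changing this parameter are rejected
```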
* A new "OS::Keystone::Region" resource that helps in managing the
lifecycle of keystone region.
* A new "OS::Neutron::AddressScope" resource that helps in managing
the lifecycle of a neutron address scope. Availability of this
resource depends on availability of the neutron "address-scope" API
extension. This resource can be associated with multiple subnet
pools in a one-to-many relationship. The subnet pools under an
address scope must not overlap.
* New resources for Neutron Load Balancer version 2. These are
unique for version 2 and do not support or mix with existing version
1 resources.
* New resource "OS::Neutron::LBaaS::LoadBalancer" is added to create
and manage Load Balancers which allow traffic to be directed between
servers.
* New resource "OS::Neutron::LBaaS::Listener" is added to create and
manage Listeners which represent a listening endpoint for the Load
Balancer.
* New resource "OS::Neutron::LBaaS::Pool" is added to create and
manage Pools which represent a group of nodes. Pools define the
subnet where nodes reside, the balancing algorithm, and the nodes
themselves.
* New resource "OS::Neutron::LBaaS::PoolMember" is added to create
and manage Pool members which represent a single backend node.
* New resource "OS::Neutron::LBaaS::HealthMonitor" is added to
create and manage Health Monitors which watch status of the Load
Balanced servers.
* A stack can be searched for resources based on their name, status,
type, action, id, and physical_resource_id. This feature is enabled
in both the REST API and the CLI. For more details, please refer to
the orchestration API documentation and the heat CLI user guide.
* Adds a new feature to restrict update or replace of a resource
when a stack is being updated. Template authors can set
"restricted_actions" in the "resources" section of
"resource_registry" in an environment file to restrict update or
replace.
* New resource "OS::Senlin::Cluster" is added to create a cluster in
senlin. A cluster is a group of homogeneous nodes.
* New resource "OS::Senlin::Node" is added to create a node in
senlin. Node represents a physical object exposed by other OpenStack
services.
* New resource "OS::Senlin::Receiver" is added to create a receiver
in senlin. Receiver can be used to hook the engine to some external
event/alarm sources.
* New resource "OS::Senlin::Profile" is added to create a profile in
senlin. Profile is a module used for creating nodes, it's the
definition of a node.
* New resource "OS::Senlin::Policy" is added to create a policy in
senlin. Policy is a set of rules that can be checked and/or enforced
when an Action is performed on a Cluster.
* The OS::Nova::Server now supports a new property
user_data_update_policy, which may be set to either 'REPLACE'
(default) or 'IGNORE' if you wish to allow user_data updates to be
ignored on stack update. This is useful when managing a group of
servers where changed user_data should apply to new servers without
replacing existing servers.
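A sketch of the property in use (the resource name and image/flavor
values are made up):

```yaml
resources:
  app_server:                          # hypothetical resource name
    type: OS::Nova::Server
    properties:
      image: ubuntu-14.04              # example values
      flavor: m1.small
      user_data_update_policy: IGNORE  # changed user_data is ignored on update
      user_data: |
        #!/bin/bash
        echo "configured on first boot"
```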
* Multiple environment files may be passed to the server in the
files dictionary along with an ordered list of the environment file
names. The server will generate the stack's environment from the
provided files rather than requiring the client to merge the
environments together. This is optional; the existing interface to
pass in the already resolved environment is still present.
* A new "OS::Neutron::SubnetPool" resource that helps in managing
the lifecycle of a neutron subnet pool. Availability of this
resource depends on availability of the neutron "subnet_allocation"
API extension.
* The "OS::Neutron::Subnet" resource now supports an optional
"subnetpool" property, which automates the allocation of a CIDR for
the subnet from the specified subnet pool.
* Template validation is improved to ignore a given set of error
codes. For example, heat will report a template as invalid if it
does not find a required OpenStack service in the cloud deployment;
when authoring a template, a user may want to ignore such errors so
that a valid template can be written without regard to the run-time
environment. Please refer to the validate-template API documentation
for more details.
Upgrade Notes
*************
* If upgrading with pre-icehouse stacks which contain resources that
create users (such as OS::Nova::Server,
OS::Heat::SoftwareDeployment, and OS::Heat::WaitConditionHandle), it
is possible that the users will not be removed upon stack deletion
due to the removal of a legacy fallback code path. In such a
situation, these users will require manual removal.
Changes in heat 6.0.0.0rc2..6.0.0
---------------------------------
bea576f Sync integration tests requirements
950505d Revert "Check RBAC policy for nested stacks"
184b09a Imported Translations from Zanata
0c407b8 Add translation rule to delete ssh auth key from Magnum baymodel
Diffstat (except docs and test files)
-------------------------------------
heat/common/policy.py | 8 +-
heat/engine/resources/openstack/magnum/baymodel.py | 11 +
heat/engine/stack.py | 2 -
heat/locale/de/LC_MESSAGES/heat.po | 21 +-
heat/locale/fr/LC_MESSAGES/heat.po | 9 +-
heat/locale/it/LC_MESSAGES/heat.po | 9 +-
heat/locale/ja/LC_MESSAGES/heat.po | 173 +++++++-------
heat/locale/ko_KR/LC_MESSAGES/heat-log-error.po | 251 +++++++++++++++++++++
heat/locale/ko_KR/LC_MESSAGES/heat-log-warning.po | 10 +-
.../functional/test_conditional_exposure.py | 21 --
11 files changed, 396 insertions(+), 147 deletions(-)