[openstack-dev] [ironic] Midcycle summary part 5/6
jim at jimrollenhagen.com
Thu Feb 18 20:15:42 UTC 2016
As our midcycle is virtual and split into 6 "sessions" for the sake of
timezones, we'll be sending a brief summary of each session so that
folks can catch up before the next one. All of this info should be on
the etherpad as well.
Session 5/6 was February 18, 1500-2000 UTC.
* Discussed live upgrades in general
* Agreed that we do not need to isolate the API from DB at this time,
nor use the @remotable decorator on object methods
* Currently updates are passed via a conductor RPC method, with only
the delta being passed. What would the perf impact be of passing
the whole object over RPC (in the @remotable case)?
* Things we need to do to get to live upgrade
* Get grenade gating, and grenade-partial running (even if broken)
* These should test upgrades both from last stable release and
last intermediate release
* Be able to pin RPC versions, probably via config to start
* Get better at reviewing for compatibility in the rpcapi
* grenade-partial will help here
* Maintain expand/contract migrations
* Several WIPs will need to keep this in mind when moving data
* Good deployer docs on upgrade process
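To make the RPC version pinning item a bit more concrete, here is a minimal sketch of how a config-driven pin might work. The names (`RELEASE_MAPPING`, `pin_release_version`) are hypothetical, not ironic's actual implementation, and a real deployment would read the pin from config rather than a function argument:

```python
# Hypothetical sketch of config-driven RPC version pinning.
# RELEASE_MAPPING and pin_release_version are illustrative names,
# not ironic's actual implementation.

# Map each release to the highest RPC API version it understands.
RELEASE_MAPPING = {
    'liberty': '1.30',
    'mitaka': '1.33',
}

LATEST_RPC_VERSION = '1.33'


def get_rpc_version(pin_release_version=None):
    """Return the RPC API version to send, honoring an operator pin.

    During a rolling upgrade, the operator pins services running new
    code to the old release's version, so not-yet-upgraded conductors
    can still decode the messages. Once everything is upgraded, the
    pin is removed and services send the latest version again.
    """
    if pin_release_version:
        return RELEASE_MAPPING[pin_release_version]
    return LATEST_RPC_VERSION
```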
* Reviewed the network provider patch
* Found a number of issues that will need a major refactor to solve
* Plan is to make this a NetworkInterface, dynamically loaded, similar
to our existing drivers
* May need an "enabled network providers" thing
* jroll to refactor this patch tomorrow; will likely turn out similar
to the driver composition proposal
* Need folks to review the driver comp proposal, to make sure that's
reasonable for this work
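As a rough sketch of the refactor direction (not the actual patch), a dynamically loaded NetworkInterface could look like the following. The method names, the registry dict, and the NoopNetwork class are all illustrative; ironic would load these through stevedore entry points, gated by an "enabled network interfaces" option, as with existing drivers:

```python
import abc


class NetworkInterface(abc.ABC):
    """Sketch of a pluggable network interface, structured like
    ironic's existing driver interfaces. Method names here are
    illustrative, not the interface from the patch under review."""

    @abc.abstractmethod
    def add_provisioning_network(self, task):
        """Connect the node's ports to the provisioning network."""

    @abc.abstractmethod
    def remove_provisioning_network(self, task):
        """Disconnect the node's ports from the provisioning network."""


# In ironic proper this lookup would go through stevedore entry
# points; a dict stands in for the "enabled network interfaces" list.
_ENABLED_NETWORK_INTERFACES = {}


def register(name, cls):
    _ENABLED_NETWORK_INTERFACES[name] = cls


def get_network_interface(name):
    """Instantiate an enabled network interface by name."""
    try:
        return _ENABLED_NETWORK_INTERFACES[name]()
    except KeyError:
        raise ValueError('network interface %r is not enabled' % name)


class NoopNetwork(NetworkInterface):
    """Placeholder provider for flat/no-op networking."""

    def add_provisioning_network(self, task):
        pass

    def remove_provisioning_network(self, task):
        pass


register('noop', NoopNetwork)
```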
* Talked about a "network state" sort of endpoint that other services can talk to
* This would initially move the (un)plug_vifs logic out of nova and into this endpoint
* will likely become a summit topic: "three service tug rope?"
* As a user, would I...
* ask the bare metal service to plug a physical device into a network?
* ask the network service to plug my instance into a network?
* ask the compute service to plug my instance into a network?
* Discussed VLAN aware baremetal spec
* tl;dr unbinds user-facing neutron networking from physical infra
* Seems mostly sane, just some details to work out
* Discussed how to do more complex port mapping
* i.e. "put the 1g port on net x and the 10g port on net y"
* Decided this is a completely separate piece of work; can be solved later
* Requires work on glean/cloud-init and the metadata to plumb the data through
* Primarily VLANs/bonding
* How to determine in the ML2 mechanism driver if a switchport is trunk or access
* How do we support instances that don't support VLANs?
* Current POC code munges the configdrive in the nova ironic driver to pass
the right metadata; need to work with the Nova team to figure out how to
surface this in the neutron API; mostly for the sake of the configdrive
* This is also likely to become a summit session
* Distinct action items for now:
* jroll to put this on summit hotlist
* jroll (or other rackspace folks) to dig up cloud-init patches
* sambetts to get the POC code on gerrit for testing/visibility
* sambetts and TheJulia to hack on glean
* all: review the spec :)
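To make the VLAN/bonding metadata question concrete, here is an illustrative network_data.json-style payload of the kind glean/cloud-init would need to consume for an instance with bonded ports and a tagged tenant VLAN. Field names follow the OpenStack network metadata format, but the exact shape ironic/nova would emit was still an open question at the midcycle, so treat the MACs, IDs, and addresses as placeholders:

```python
# Illustrative network metadata for the bonding + VLAN case above.
# All concrete values (MACs, VLAN ID, addresses) are placeholders.
import json

network_data = {
    "links": [
        {"id": "eth0", "type": "phy",
         "ethernet_mac_address": "52:54:00:aa:bb:01"},
        {"id": "eth1", "type": "phy",
         "ethernet_mac_address": "52:54:00:aa:bb:02"},
        # Bond the two physical ports...
        {"id": "bond0", "type": "bond",
         "bond_links": ["eth0", "eth1"],
         "bond_mode": "802.3ad"},
        # ...and carry the tenant network as a tagged VLAN on the bond.
        {"id": "bond0.100", "type": "vlan",
         "vlan_link": "bond0",
         "vlan_id": 100},
    ],
    "networks": [
        {"id": "tenant-net", "type": "ipv4", "link": "bond0.100",
         "ip_address": "10.0.0.5", "netmask": "255.255.255.0"},
    ],
}

print(json.dumps(network_data, indent=2))
```

Getting this data from neutron, through nova, and into the configdrive is exactly the plumbing the action items above are about.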
Thanks to all for coming to this session, it was very productive! Just
one more to go. See some of you at 0000. :D