[openstack-dev] [ironic] Midcycle summary part 5/6

Jim Rollenhagen jim at jimrollenhagen.com
Thu Feb 18 20:15:42 UTC 2016


Hi all,

As our midcycle is virtual and split into 6 "sessions" for the sake of
timezones, we'll be sending a brief summary of each session so that
folks can catch up before the next one. All of this info should be on
the etherpad as well.

Session 5/6 was February 18, 1500-2000 UTC.

* Discussed live upgrades in general
  * Agreed that we do not need to isolate the API from the DB at this
    time, nor use the @remotable decorator on object methods
    * Currently, updates are passed via a conductor RPC method, with
      only the delta being passed. What would the perf impact be of
      passing the whole object over RPC (in the @remotable case)?
  * Things we need to do to get to live upgrade
    * Get grenade gating, and grenade-partial running (even if broken)
      * These should test upgrades both from last stable release and
        last intermediate release
    * Be able to pin RPC versions, probably via config to start (see
      the sketch after this list)
    * Get better at reviewing for compatibility in the rpcapi
      * grenade-partial will help here
    * Maintain expand/contract migrations
      * Several WIPs will need to keep this in mind when moving data
        around
    * Good deployer docs on upgrade process
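
For those catching up, here's a minimal sketch of what config-based
RPC version pinning could look like, using oslo.config plus
oslo.messaging's version_cap. The option name (pin_rpc_version), the
version numbers, and the method shown are all illustrative, not
something we agreed on:

    from oslo_config import cfg
    import oslo_messaging as messaging

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.StrOpt('pin_rpc_version',
                   help='Cap outgoing conductor RPC messages at this '
                        'version during a rolling upgrade; unset means '
                        'send the latest version this tree supports.'),
    ])

    class ConductorAPI(object):
        """Client side of the conductor RPC API (illustrative)."""

        # Latest RPC API version this tree can speak.
        RPC_API_VERSION = '1.30'

        def __init__(self, topic='ironic.conductor_manager'):
            target = messaging.Target(topic=topic,
                                      version=self.RPC_API_VERSION)
            # During a rolling upgrade, deployers pin to the previous
            # release's version so upgraded services don't send
            # messages that old conductors can't handle.
            cap = CONF.pin_rpc_version or self.RPC_API_VERSION
            transport = messaging.get_transport(CONF)
            self.client = messaging.RPCClient(transport, target,
                                              version_cap=cap)

        def update_node(self, context, node_obj):
            # oslo.messaging raises RPCVersionCapError if a method
            # needs a newer version than the cap allows.
            cctxt = self.client.prepare(version='1.1')
            return cctxt.call(context, 'update_node', node_obj=node_obj)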

* Reviewed the network provider patch
  * https://review.openstack.org/#/c/139687/70
  * Found a number of issues that will need a major refactor to solve
  * Plan is to make this a NetworkInterface, dynamically loaded, similar
    to our existing drivers (see the sketch after this list)
  * May need an "enabled network providers" option, akin to the
    existing enabled_drivers config option
  * jroll to refactor this patch tomorrow; will likely turn out similar
    to the driver composition proposal
  * Need folks to review the driver comp proposal, to make sure that's
    reasonable for this work
    * https://review.openstack.org/#/c/188370/
  * Talked about a "network state" sort of endpoint that can talk to
    network providers
    * This would initially move the (un)plug_vifs logic out of nova and
      into ironic
    * Will likely become a summit topic: "three-service tug rope?"
      * As a user, would I...
        * ask the bare metal service to plug a physical device into a
          network?
        * ask the network service to plug my instance into a network?
        * ask the compute service to plug my instance into a network?
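
To make the NetworkInterface idea above a bit more concrete, here's a
minimal sketch following the stevedore entry-point pattern our other
driver interfaces use. The class, methods, and entry-point namespace
are placeholders, not a settled API:

    import abc

    import six
    from stevedore import driver

    @six.add_metaclass(abc.ABCMeta)
    class NetworkInterface(object):
        """Base class a network provider would implement."""

        @abc.abstractmethod
        def plug_vifs(self, task):
            """Attach the node's ports to its tenant network(s)."""

        @abc.abstractmethod
        def unplug_vifs(self, task):
            """Detach the node's ports from its tenant network(s)."""

    def load_network_interface(name):
        # An "enabled network providers" option would whitelist which
        # names may be loaded here, much like enabled_drivers does for
        # drivers today.
        mgr = driver.DriverManager(
            namespace='ironic.hardware.interfaces.network',
            name=name,
            invoke_on_load=True)
        return mgr.driver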

* Discussed the VLAN-aware baremetal spec
  * tl;dr: it unbinds user-facing neutron networking from the physical
    infra
  * https://review.openstack.org/#/c/277853/
  * Seems mostly sane, just some details to work out
  * Discussed how to do more complex port mappings
    * e.g. "put the 1G port on net x and the 10G port on net y"
    * Decided this is a completely separate piece of work; can be solved
      in parallel.
  * Requires work on glean/cloud-init and the metadata to plumb data
    through (see the sketch after this list)
    * Primarily VLANs/bonding
  * How do we determine in the ML2 mechanism driver whether a
    switchport is in trunk or access mode?
  * How do we support instances that don't support VLANs?
  * Current POC code munges the configdrive in the ironic driver to
    pass the right metadata; we need to work with the Nova team to
    figure out how to expose this in the neutron API, mostly for the
    sake of the configdrive
  * This is also likely to become a summit session
  * Distinct action items for now:
    * jroll to put this on summit hotlist
    * jroll (or other Rackspace folks) to dig up cloud-init patches
    * sambetts to get the POC code on gerrit for testing/visibility
    * sambetts and TheJulia to hack on glean
    * all: review the spec :)
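
For reference, here's roughly the shape of the network metadata that
glean/cloud-init would need to grow support for, written as a Python
dict following Nova's network_data.json format. The bond and VLAN link
types exist in that format today; the MACs, IDs, and addresses below
are made up:

    # Two physical NICs bonded together, with a tagged VLAN on the bond.
    network_data = {
        'links': [
            {'id': 'eth0', 'type': 'phy',
             'ethernet_mac_address': '52:54:00:00:00:01'},
            {'id': 'eth1', 'type': 'phy',
             'ethernet_mac_address': '52:54:00:00:00:02'},
            {'id': 'bond0', 'type': 'bond',
             'bond_links': ['eth0', 'eth1'],
             'bond_mode': '802.3ad'},
            {'id': 'vlan101', 'type': 'vlan',
             'vlan_link': 'bond0',
             'vlan_id': 101},
        ],
        'networks': [
            {'id': 'tenant-net', 'type': 'ipv4', 'link': 'vlan101',
             'ip_address': '10.0.0.5', 'netmask': '255.255.255.0',
             'routes': []},
        ],
        'services': [],
    }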

Thanks to all for coming to this session; it was very productive! Just
one more to go. See some of you at 0000 UTC. :D

// jim


