[openstack-dev] [magnum] Notes for Magnum design summit

Hongbin Lu hongbin.lu at huawei.com
Tue May 3 19:29:18 UTC 2016



> -----Original Message-----
> From: Cammann, Tom [mailto:tom.cammann at hpe.com]
> Sent: May-02-16 1:12 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit
> 
> Thanks for the write-up, Hongbin, and thanks to all those who
> contributed to the design summit. A few comments on the summaries
> below.
> 
> 6. Ironic Integration: https://etherpad.openstack.org/p/newton-magnum-
> ironic-integration
> - Start the implementation immediately
> - Prefer quick work-arounds for identified issues (Cinder volume
> attachment, variation in the number of ports, etc.)
> 
> We need to implement a bay template that can use a flat networking
> model, as this is the only networking model Ironic currently supports
> (multi-tenant networking is imminent). This should be done before work
> on an Ironic template starts.
> 
> 7. Magnum adoption challenges: https://etherpad.openstack.org/p/newton-
> magnum-adoption-challenges
> - The challenges are listed in the etherpad above
> 
> Ideally we need to turn this list into a set of actions which we can
> implement over the cycle, e.g. create a BP to remove the requirement
> for LBaaS.

I created a BP for that: https://blueprints.launchpad.net/magnum/+spec/decouple-lbaas

> 
> 9. Magnum Heat template version:
> https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
> - In each bay driver, version the template and template definition.
> - Bump template version for minor changes, and bump bay driver version
> for major changes.
> 
> We decided only bay driver versioning was required. The template and
> template definition do not need versioning, because we can get Heat to
> pass back the template it used to create the bay.

ACK. Thanks for pointing it out.
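
For reference, a minimal sketch of fetching that template with
python-heatclient; keystone_session and stack_id are placeholders, not
Magnum code:

    # Sketch: retrieve the exact template body Heat used to create a
    # bay's stack. Assumes an authenticated keystoneauth session.
    from heatclient import client as heat_client

    heat = heat_client.Client('1', session=keystone_session)

    # stack_id would be the bay's Heat stack UUID as recorded by Magnum.
    template = heat.stacks.template(stack_id)
    print(template)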

> 
> 10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-
> monitoring
> - Add support for sending notifications to Ceilometer
> - Revisit bay monitoring and self-healing later
> - Container monitoring should not be done by Magnum, but it can be done
> by cAdvisor, Heapster, etc.
> 
> We split this topic into three parts: bay telemetry, bay monitoring,
> and container monitoring.
> Bay telemetry covers actions such as bay/baymodel CRUD operations and
> is implemented using Ceilometer notifications.
> Bay monitoring covers the health of individual nodes in the bay
> cluster; we decided to postpone this work, as more investigation is
> required into what it should look like and what users actually need.
> Container monitoring focuses on what containers are running in the bay
> and general usage of the bay COE. We decided this will be handled by
> Magnum by baking in access to cAdvisor/Heapster by default.

ACK. Thanks for the clarification.
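
As an illustration of the bay telemetry piece, here is a minimal sketch
of emitting a notification with oslo.messaging (which is what Ceilometer
consumes); the event type and payload are hypothetical, the real schema
is up to Magnum:

    # Sketch: emit a bay CRUD notification onto the notifications topic.
    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(transport,
                                       publisher_id='magnum.conductor',
                                       driver='messagingv2',
                                       topics=['notifications'])

    # Hypothetical event type and payload.
    notifier.info({}, 'magnum.bay.create.end',
                  {'bay_uuid': '...', 'state': 'CREATE_COMPLETE'})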

> 
> - Manually manage bay nodes (instead of being managed by Heat
> ResourceGroup): It can address the use case of heterogeneous bay nodes
> (e.g. different availability zones or flavors), but the details need
> further elaboration.
> 
> The idea revolves around creating a heat stack for each node in the bay.
> This idea shows a lot of promise but needs more investigation and isn’t
> a current priority.

Yes, the idea needs a thorough discussion. I will start another ML thread to discuss it. I agree this doesn't have to be a priority in the Newton cycle, but I know of at least two requested features that would benefit from this idea (see the sketch after this list):
1. The ability to specify different availability zones for bay nodes: https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
2. The ability to specify different flavors for bay nodes: http://lists.openstack.org/pipermail/openstack-dev/2016-April/092838.html
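
To make this concrete, here is a hypothetical sketch (not an agreed
design) of the per-node idea: one small Heat stack per node, so flavor
and availability zone can vary per node. NODE_TEMPLATE, heat (a
python-heatclient client), and the node specs below are placeholders.

    # Hypothetical "one Heat stack per node" sketch.
    node_specs = [
        {'name': 'node-0', 'flavor': 'm1.small', 'az': 'az-1'},
        {'name': 'node-1', 'flavor': 'm1.large', 'az': 'az-2'},
    ]

    for spec in node_specs:
        heat.stacks.create(
            stack_name='bay-xyz-%s' % spec['name'],
            template=NODE_TEMPLATE,  # a single-server Heat template
            parameters={'flavor': spec['flavor'],
                        'availability_zone': spec['az']})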

> 
> Tom
> 
> 
> From: Hongbin Lu <hongbin.lu at huawei.com>
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" <openstack-dev at lists.openstack.org>
> Date: Saturday, 30 April 2016 at 05:05
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [magnum] Notes for Magnum design summit
> 
> Hi team,
> 
> For reference, below is a summary of the discussions/decisions at the
> Austin design summit. Please feel free to point out if anything is
> incorrect or incomplete. Thanks.
> 
> 1. Bay driver: https://etherpad.openstack.org/p/newton-magnum-bay-
> driver
> - Refactor existing code into bay drivers
> - Each bay driver will be versioned
> - Individual bay drivers can have API extensions, and the Magnum CLI
> could load the extensions dynamically
> - Work incrementally and support the same API before and after the
> driver change (a driver interface sketch follows below)
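
As an illustration only (this interface is hypothetical, not a merged
Magnum design), a versioned bay driver might look something like:

    # Illustrative sketch only -- not the actual Magnum driver interface.
    import abc

    class BayDriver(abc.ABC):
        """Hypothetical base class for a versioned bay driver."""

        # Bumped for major (incompatible) changes; minor template
        # tweaks stay within a driver version.
        version = '1.0.0'

        @property
        @abc.abstractmethod
        def provides(self):
            """(server_type, os, coe) tuples this driver supports."""

        @abc.abstractmethod
        def create_stack(self, context, bay):
            """Create the Heat stack that backs this bay."""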
> 
> 2. Bay lifecycle operations: https://etherpad.openstack.org/p/newton-
> magnum-bays-lifecycle-operations
> - Support the following operations: reset the bay, rebuild the bay,
> rotate TLS certificates in the bay, adjust storage of the bay, scale
> the bay.
> 
> 3. Scalability: https://etherpad.openstack.org/p/newton-magnum-
> scalability
> - Implement Magnum plugin for Rally
> - Implement the spec to address the scalability of deploying multiple
> bays concurrently: https://review.openstack.org/#/c/275003/
> 
> 4. Container storage: https://etherpad.openstack.org/p/newton-magnum-
> container-storage
> - Allow choice of storage driver
> - Allow choice of data volume driver
> - Work with Kuryr/Fuxi team to have data volume driver available in
> COEs upstream
> 
> 5. Container network: https://etherpad.openstack.org/p/newton-magnum-
> container-network
> - Discuss how to scope/pass/store OpenStack credentials in bays (needed
> by Kuryr to communicate with Neutron).
> - Several options were explored. No perfect solution was identified.
> 
> 6. Ironic Integration: https://etherpad.openstack.org/p/newton-magnum-
> ironic-integration
> - Start the implementation immediately
> - Prefer quick work-arounds for identified issues (Cinder volume
> attachment, variation in the number of ports, etc.)
> 
> 7. Magnum adoption challenges: https://etherpad.openstack.org/p/newton-
> magnum-adoption-challenges
> - The challenges are listed in the etherpad above
> 
> 8. Unified abstraction for COEs:
> https://etherpad.openstack.org/p/newton-magnum-unified-abstraction
> - Create a new project for this effort
> - Alter Magnum mission statement to clarify its goal (Magnum is not a
> container service; it is more of a COE management service)
> 
> 9. Magnum Heat template version:
> https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
> - In each bay driver, version the template and template definition.
> - Bump template version for minor changes, and bump bay driver version
> for major changes.
> 
> 10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-
> monitoring
> - Add support for sending notifications to Ceilometer
> - Revisit bay monitoring and self-healing later
> - Container monitoring should not be done by Magnum, but it can be done
> by cAdvisor, Heapster, etc.
> 
> 11. Others: https://etherpad.openstack.org/p/newton-magnum-meetup
> - Clear Container support: Clear Containers needs to integrate with
> COEs first. After the integration is done, the Magnum team will
> revisit bringing the Clear Container COE to Magnum.
> - Enhance mesos bay to DCOS bay: This needs to be done step by step:
> first, create a new DCOS bay type; then deprecate and delete the mesos
> bay type.
> - Start enforcing API deprecation policy:
> https://governance.openstack.org/reference/tags/assert_follows-
> standard-deprecation.html
> - Freeze API v1 after some patches are merged.
> - Multi-tenancy within a bay: not the priority in Newton cycle
> - Manually manage bay nodes (instead of being managed by Heat
> ResourceGroup): It can address the use case of heterogeneous bay nodes
> (e.g. different availability zones or flavors), but the details need
> further elaboration.

