[openstack-dev] [magnum] Notes for Magnum design summit

Hongbin Lu hongbin.lu at huawei.com
Mon Jun 13 14:58:29 UTC 2016


Gary,

It is hard to tell whether your change fits into Magnum upstream without further details. I encourage you to upload your changes to Gerrit so that we can review and discuss them inline. Also, keep in mind that the change might be rejected if it doesn't fit upstream objectives or duplicates other existing work, but I hope that won't discourage your contribution. If your change is related to Ironic, we might ask you to coordinate your work with Spyros and/or others who are working on Ironic integration.

Best regards,
Hongbin

From: Spyros Trigazis [mailto:strigazi at gmail.com]
Sent: June-13-16 3:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Hi Gary.

On 13 June 2016 at 09:06, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) <li-gong.duan at hpe.com<mailto:li-gong.duan at hpe.com>> wrote:
Hi Tom/All,

>6. Ironic Integration: https://etherpad.openstack.org/p/newton-magnum-ironic-integration
>- Start the implementation immediately
>- Prefer quick work-around for identified issues (cinder volume attachment, variation of number of ports, etc.)

>We need to implement a bay template that can use a flat networking model as this is the only networking model Ironic currently supports. Multi-tenant networking is imminent. This should be done before work on an Ironic template starts.

We have already implemented a bay template that uses a flat networking model, along with other Python code (enabling Magnum to find the correct Heat template), which we use in our own project.
What do you think of this feature? If you think it is necessary for Magnum, I can contribute this code to Magnum upstream.
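Purely to illustrate the kind of dispatch being described ("making Magnum find the correct Heat template"), a minimal sketch might look like the following. All names and file paths here are hypothetical; Magnum's real mechanism is its TemplateDefinition classes, not this:

```python
# Hypothetical sketch: pick a Heat template file based on the
# networking model a bay requires. Ironic currently supports only
# flat networking, so a flat-network template is a prerequisite
# for an Ironic bay template. Paths are illustrative only.

TEMPLATES = {
    "flat": "kubecluster-flat.yaml",
    "neutron": "kubecluster.yaml",
}


def select_template(network_model):
    """Return the Heat template for the requested networking model."""
    try:
        return TEMPLATES[network_model]
    except KeyError:
        raise ValueError("unsupported networking model: %s" % network_model)
```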

This feature is useful to Magnum, and there is a blueprint for it:
https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips
You can add some notes on the whiteboard about your proposed change.

As for the Ironic integration, we should modify the existing templates; there
is work in progress on that: https://review.openstack.org/#/c/320968/

By the way, did you add new YAML files, or did you modify the existing kubemaster,
minion and cluster ones?

Cheers,
Spyros


Regards,
Gary Duan


-----Original Message-----
From: Cammann, Tom
Sent: Tuesday, May 03, 2016 1:12 AM
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Thanks for the write up Hongbin and thanks to all those who contributed to the design summit. A few comments on the summaries below.

6. Ironic Integration: https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, variation of number of ports, etc.)

We need to implement a bay template that can use a flat networking model as this is the only networking model Ironic currently supports. Multi-tenant networking is imminent. This should be done before work on an Ironic template starts.

7. Magnum adoption challenges: https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges are listed in the etherpad above

Ideally we need to turn this list into a set of actions which we can implement over the cycle, e.g. create a BP to remove the requirement for LBaaS.

9. Magnum Heat template version: https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
- In each bay driver, version the template and template definition.
- Bump template version for minor changes, and bump bay driver version for major changes.

We decided only bay driver versioning was required. The template and template definition do not need versioning, because we can get Heat to pass back the template it used to create the bay.
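To illustrate the reasoning here with a toy stand-in (this is not Heat code; python-heatclient exposes the real behaviour via the stacks template API): Heat keeps the template each stack was created with, so the exact template used can always be recovered, even after the bay driver ships a newer one.

```python
# Toy in-memory stand-in mimicking why templates need no version of
# their own: the orchestrator stores the template alongside each
# stack it creates, so the original can be fetched back later.

class FakeHeat:
    def __init__(self):
        self._stacks = {}

    def create_stack(self, stack_id, template):
        # The template used at creation time is stored with the stack.
        self._stacks[stack_id] = template

    def get_template(self, stack_id):
        # Retrieve the exact template a stack was built from, even if
        # the bay driver has since moved to a newer template.
        return self._stacks[stack_id]
```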

10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
- Add support for sending notifications to Ceilometer
- Revisit bay monitoring and self-healing later
- Container monitoring should not be done by Magnum, but it can be done by cAdvisor, Heapster, etc.

We split this topic into 3 parts: bay telemetry, bay monitoring, and container monitoring.
Bay telemetry covers actions such as bay/baymodel CRUD operations. This is implemented using Ceilometer notifications.
Bay monitoring is about the health of individual nodes in the bay cluster; we decided to postpone this work, as more investigation is required into what it should look like and what users actually need.
Container monitoring focuses on what containers are running in the bay and general usage of the bay COE. We decided this will be enabled by Magnum by baking in access to cAdvisor/Heapster by default.
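As a rough sketch of the kind of container-level data a consumer might derive once cAdvisor access is baked in: cAdvisor reports cumulative CPU usage counters per sample, from which a usage rate can be computed. The sample shape below only loosely follows cAdvisor's counters; treat the structure as illustrative, not a precise API contract:

```python
# Derive an average CPU usage rate (in cores) from two cumulative
# CPU samples, in the style of cAdvisor's per-container counters.
# Each sample is (timestamp_ns, cumulative_cpu_ns).

def cpu_usage_rate(samples):
    """Return average CPU cores used between first and last sample."""
    (t0, c0), (t1, c1) = samples[0], samples[-1]
    if t1 <= t0:
        raise ValueError("need samples spanning a positive interval")
    # Nanoseconds of CPU consumed per nanosecond of wall clock.
    return (c1 - c0) / float(t1 - t0)
```

For example, a container that consumed 1 second of CPU time over a 2-second window averages half a core.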

- Manually manage bay nodes (instead of having them managed by a Heat ResourceGroup): this can address the use case of heterogeneous bay nodes (e.g. different availability zones or flavors), but the details need further elaboration.

The idea revolves around creating a heat stack for each node in the bay. This idea shows a lot of promise but needs more investigation and isn’t a current priority.
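A minimal sketch of the per-node-stack idea, to make the heterogeneity angle concrete. Everything here is invented for illustration (stack naming, field names, the "nova" default zone); it is not Magnum code:

```python
# Sketch of "one Heat stack per bay node": instead of one
# ResourceGroup of identical nodes, each node gets its own stack,
# so nodes can differ in flavor or availability zone.

def plan_node_stacks(bay_name, nodes):
    """Build a per-node stack plan from heterogeneous node specs."""
    plans = []
    for i, spec in enumerate(nodes):
        plans.append({
            # Hypothetical naming scheme for the per-node stacks.
            "stack_name": "%s-node-%d" % (bay_name, i),
            "flavor": spec["flavor"],
            # "nova" default zone is an assumption for this sketch.
            "availability_zone": spec.get("availability_zone", "nova"),
        })
    return plans
```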

Tom


From: Hongbin Lu <hongbin.lu at huawei.com<mailto:hongbin.lu at huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Saturday, 30 April 2016 at 05:05
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [magnum] Notes for Magnum design summit

Hi team,

For reference, below is a summary of the discussions/decisions at the Austin design summit. Please feel free to point out if anything is incorrect or incomplete. Thanks.

1. Bay driver: https://etherpad.openstack.org/p/newton-magnum-bay-driver
- Refactor existing code into bay drivers
- Each bay driver will be versioned
- Individual bay drivers can have API extensions, and the Magnum CLI could load the extensions dynamically
- Work incrementally and support same API before and after the driver change
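To sketch what "each bay driver will be versioned" could mean in practice (a hypothetical registry, not Magnum's actual driver-loading mechanism): drivers declare a COE and a version, callers get the newest driver for a COE, and the API stays the same across driver changes.

```python
# Illustrative sketch of versioned bay drivers. All names are
# hypothetical; Magnum's real driver loading works differently.

DRIVERS = {}


def register(coe, version, driver_cls):
    """Register a driver class under a (coe, version) pair."""
    DRIVERS.setdefault(coe, []).append((version, driver_cls))


def latest_driver(coe):
    """Return the highest-versioned driver registered for a COE."""
    return max(DRIVERS[coe], key=lambda entry: entry[0])[1]


class K8sDriverV1(object):
    pass


class K8sDriverV2(object):
    pass


register("kubernetes", (1, 0), K8sDriverV1)
register("kubernetes", (2, 0), K8sDriverV2)
```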

2. Bay lifecycle operations: https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-operations
- Support the following operations: reset the bay, rebuild the bay, rotate TLS certificates in the bay, adjust storage of the bay, scale the bay.

3. Scalability: https://etherpad.openstack.org/p/newton-magnum-scalability
- Implement Magnum plugin for Rally
- Implement the spec to address the scalability of deploying multiple bays concurrently: https://review.openstack.org/#/c/275003/

4. Container storage: https://etherpad.openstack.org/p/newton-magnum-container-storage
- Allow choice of storage driver
- Allow choice of data volume driver
- Work with Kuryr/Fuxi team to have data volume driver available in COEs upstream

5. Container network: https://etherpad.openstack.org/p/newton-magnum-container-network
- Discuss how to scope/pass/store OpenStack credentials in bays (needed by Kuryr to communicate with Neutron).
- Several options were explored. No perfect solution was identified.

6. Ironic Integration: https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, variation of number of ports, etc.)

7. Magnum adoption challenges: https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges are listed in the etherpad above

8. Unified abstraction for COEs: https://etherpad.openstack.org/p/newton-magnum-unified-abstraction
- Create a new project for this effort
- Alter the Magnum mission statement to clarify its goal (Magnum is not a container service; it is more of a COE management service)

9. Magnum Heat template version: https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
- In each bay driver, version the template and template definition.
- Bump template version for minor changes, and bump bay driver version for major changes.

10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
- Add support for sending notifications to Ceilometer
- Revisit bay monitoring and self-healing later
- Container monitoring should not be done by Magnum, but it can be done by cAdvisor, Heapster, etc.

11. Others: https://etherpad.openstack.org/p/newton-magnum-meetup
- Clear Container support: Clear Containers need to integrate with COEs first. After the integration is done, the Magnum team will revisit bringing a Clear Container COE to Magnum.
- Enhance mesos bay to DCOS bay: this needs to be done step by step: first, create a new DCOS bay type; then, deprecate and delete the mesos bay type.
- Start enforcing API deprecation policy: https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
- Freeze API v1 after some patches are merged.
- Multi-tenancy within a bay: not the priority in Newton cycle
- Manually manage bay nodes (instead of having them managed by a Heat ResourceGroup): this can address the use case of heterogeneous bay nodes (e.g. different availability zones or flavors), but the details need further elaboration.
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
