<div dir="ltr">Hi Gary.<br><div class="gmail_extra"><br><div class="gmail_quote">On 13 June 2016 at 09:06, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) <span dir="ltr"><<a href="mailto:li-gong.duan@hpe.com" target="_blank">li-gong.duan@hpe.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">Hi Tom/All,<br>
<span class=""><br>
>6. Ironic Integration: <a href="https://etherpad.openstack.org/p/newton-magnum-ironic-integration" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-ironic-integration</a><br>
>- Start the implementation immediately<br>
>- Prefer quick work-arounds for identified issues (Cinder volume attachment, variation in the number of ports, etc.)<br>
<br>
>We need to implement a bay template that can use a flat networking model, as this is the only networking model Ironic currently supports. Multi-tenant networking is imminent. The flat-network template should be done before work on an Ironic template starts.<br>
<br>
</span>We have already implemented a bay template that uses a flat networking model and other python code(making magnum to find the correct heat template) which is used in our own project.<br>
What do you think of this feature? If you think it is necessary for Magnum, I can contribute this code to Magnum upstream.<br></blockquote><div><br></div><div>This feature is useful to Magnum, and there is a blueprint for it:</div><div><a href="https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips">https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips</a><br></div><div>You can add some notes on the whiteboard about your proposed change.</div><div><br></div><div>As for the Ironic integration, we should modify the existing templates; there is work in progress on that: <a href="https://review.openstack.org/#/c/320968/">https://review.openstack.org/#/c/320968/</a></div><div><br></div><div>By the way, did you add new YAML files, or did you modify the existing kubemaster, minion and cluster ones?</div><div><br></div><div>Cheers,</div><div>Spyros</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
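<br>
For illustration, a minimal sketch of what such a flat-network node could look like, written as a HOT template inside a Python dict and created through python-heatclient. The network name "flat-provider-net", the parameter names and the resource layout are assumptions made for this example, not the templates discussed above.<br>
<pre>
# Illustrative sketch only, not the actual Magnum template: a HOT template,
# expressed as a Python dict, that attaches a single bay node directly to an
# existing flat provider network. No private network, router or floating IP
# resources are created. The network name "flat-provider-net" and the
# parameter names are assumptions for this example.
from heatclient import client as heat_client

FLAT_NODE_TEMPLATE = {
    'heat_template_version': '2014-10-16',
    'parameters': {
        'server_image': {'type': 'string'},
        'server_flavor': {'type': 'string'},
        'fixed_network': {'type': 'string', 'default': 'flat-provider-net'},
    },
    'resources': {
        'kube_node': {
            'type': 'OS::Nova::Server',
            'properties': {
                'image': {'get_param': 'server_image'},
                'flavor': {'get_param': 'server_flavor'},
                # Plug the node straight into the flat network.
                'networks': [{'network': {'get_param': 'fixed_network'}}],
            },
        },
    },
    'outputs': {
        'node_ip': {'value': {'get_attr': ['kube_node', 'first_address']}},
    },
}

def create_flat_bay_stack(sess, stack_name, image, flavor):
    """Create a Heat stack for one flat-network node.

    'sess' is a keystoneauth1 session; the stack naming is arbitrary here.
    """
    heat = heat_client.Client('1', session=sess, service_type='orchestration')
    return heat.stacks.create(stack_name=stack_name,
                              template=FLAT_NODE_TEMPLATE,
                              parameters={'server_image': image,
                                          'server_flavor': flavor})
</pre>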
<br>
Regards,<br>
Gary Duan<br>
<span class="im"><br>
<br>
-----Original Message-----<br>
From: Cammann, Tom<br>
</span><span class="im">Sent: Tuesday, May 03, 2016 1:12 AM<br>
To: OpenStack Development Mailing List (not for usage questions) <<a href="mailto:openstack-dev@lists.openstack.org">openstack-dev@lists.openstack.org</a>><br>
</span><div class=""><div class="h5">Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit<br>
<br>
Thanks for the write-up, Hongbin, and thanks to all those who contributed to the design summit. A few comments on the summaries below.<br>
<br>
6. Ironic Integration: <a href="https://etherpad.openstack.org/p/newton-magnum-ironic-integration" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-ironic-integration</a><br>
- Start the implementation immediately<br>
- Prefer quick work-arounds for identified issues (Cinder volume attachment, variation in the number of ports, etc.)<br>
<br>
We need to implement a bay template that can use a flat networking model, as this is the only networking model Ironic currently supports. Multi-tenant networking is imminent. The flat-network template should be done before work on an Ironic template starts.<br>
<br>
7. Magnum adoption challenges: <a href="https://etherpad.openstack.org/p/newton-magnum-adoption-challenges" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-adoption-challenges</a><br>
- The challenges are listed in the etherpad above<br>
<br>
Ideally we need to turn this list into a set of actions which we can implement over the cycle, e.g. create a blueprint to remove the requirement for LBaaS.<br>
<br>
9. Magnum Heat template version: <a href="https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning</a><br>
- In each bay driver, version the template and template definition.<br>
- Bump template version for minor changes, and bump bay driver version for major changes.<br>
<br>
We decided that only bay driver versioning is required. The template and template definition do not need versioning, because we can get Heat to pass back the template it used to create the bay.<br>
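<br>
A minimal sketch of that mechanism, assuming a keystoneauth1 session and a known stack id; python-heatclient exposes the template a stack was created from directly:<br>
<pre>
# Sketch only: retrieve the exact template Heat used to create a bay's stack,
# which is why the templates themselves do not need separate versioning.
# Building the keystoneauth1 session (auth URL, credentials) is out of scope here.
from heatclient import client as heat_client

def get_bay_template(sess, stack_id):
    """Return, as a dict, the template Heat used for this stack."""
    heat = heat_client.Client('1', session=sess, service_type='orchestration')
    # Maps to Heat's GET /stacks/{stack_id}/template call.
    return heat.stacks.template(stack_id)
</pre>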
<br>
10. Monitoring: <a href="https://etherpad.openstack.org/p/newton-magnum-monitoring" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-monitoring</a><br>
- Add support for sending notifications to Ceilometer<br>
- Revisit bay monitoring and self-healing later<br>
- Container monitoring should not be done by Magnum, but it can be done by cAdvisor, Heapster, etc.<br>
<br>
We split this topic into three parts: bay telemetry, bay monitoring, and container monitoring.<br>
Bay telemetry covers actions such as bay/baymodel CRUD operations. This is implemented using Ceilometer notifications (a minimal sketch follows this paragraph).<br>
Bay monitoring is about monitoring the health of the individual nodes in the bay cluster; we decided to postpone this work, as more investigation is required into what it should look like and what users actually need.<br>
Container monitoring focuses on what containers are running in the bay and on general usage of the bay COE. We decided this will be completed by Magnum adding access to cAdvisor/Heapster, with access to cAdvisor baked in by default.<br>
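<br>
For the telemetry part, a minimal sketch using oslo.messaging notifications (which Ceilometer consumes); the event type and payload fields here are assumptions, not a final schema:<br>
<pre>
# Sketch of emitting a Ceilometer-consumable notification for a bay CRUD
# action with oslo.messaging. The event type 'magnum.bay.create.end' and the
# payload fields are illustrative assumptions, not an agreed schema.
from oslo_config import cfg
import oslo_messaging

def notify_bay_create_end(bay_uuid, baymodel_id):
    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(
        transport,
        publisher_id='magnum.conductor',
        driver='messagingv2',        # the driver Ceilometer's agent listens to
        topics=['notifications'],
    )
    # Ceilometer's notification agent can turn this into an event/sample.
    notifier.info({},                # request context, empty for the sketch
                  'magnum.bay.create.end',
                  {'bay_uuid': bay_uuid, 'baymodel_id': baymodel_id})
</pre>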
<br>
- Manually manage bay nodes (instead of having them managed by a Heat ResourceGroup): it can address the use case of heterogeneous bay nodes (e.g. different availability zones or flavors), but the details need to be elaborated further.<br>
<br>
The idea revolves around creating a Heat stack for each node in the bay. This idea shows a lot of promise but needs more investigation and isn’t a current priority.<br>
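<br>
A rough sketch of what a stack per node could look like with python-heatclient; the naming scheme and parameters are assumptions for illustration:<br>
<pre>
# Illustrative sketch only: create one small Heat stack per bay node instead of
# a single ResourceGroup, so each node can use its own flavor (and, with an
# extra template parameter, its own availability zone). 'node_template' is
# assumed to be a per-node template such as the flat-network sketch earlier
# in this thread.
from heatclient import client as heat_client

def create_node_stacks(sess, bay_name, node_specs, node_template):
    """node_specs is a list of dicts like {'image': ..., 'flavor': ...}."""
    heat = heat_client.Client('1', session=sess, service_type='orchestration')
    stacks = []
    for index, spec in enumerate(node_specs):
        stacks.append(heat.stacks.create(
            stack_name='%s-node-%d' % (bay_name, index),
            template=node_template,
            parameters={'server_image': spec['image'],
                        'server_flavor': spec['flavor']}))
    return stacks
</pre>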
<br>
Tom<br>
<br>
<br>
From: Hongbin Lu <<a href="mailto:hongbin.lu@huawei.com">hongbin.lu@huawei.com</a>><br>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <<a href="mailto:openstack-dev@lists.openstack.org">openstack-dev@lists.openstack.org</a>><br>
Date: Saturday, 30 April 2016 at 05:05<br>
To: "OpenStack Development Mailing List (not for usage questions)" <<a href="mailto:openstack-dev@lists.openstack.org">openstack-dev@lists.openstack.org</a>><br>
Subject: [openstack-dev] [magnum] Notes for Magnum design summit<br>
<br>
Hi team,<br>
<br>
For reference, below is a summary of the discussions/decisions at the Austin design summit. Please feel free to point out if anything is incorrect or incomplete. Thanks.<br>
<br>
1. Bay driver: <a href="https://etherpad.openstack.org/p/newton-magnum-bay-driver" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-bay-driver</a><br>
- Refactor existing code into bay drivers<br>
- Each bay driver will be versioned<br>
- Individual bay drivers can have API extensions, and the Magnum CLI could load the extensions dynamically (see the sketch after this list)<br>
- Work incrementally and support same API before and after the driver change<br>
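<br>
As a sketch of how the dynamic extension loading could work, stevedore (the plugin library used across OpenStack) can load everything registered under an entry-point namespace; the namespace and interface used here are hypothetical:<br>
<pre>
# Hypothetical sketch: discover bay-driver CLI extensions registered under a
# setuptools entry-point namespace using stevedore. The namespace name
# 'magnum.cli.extensions' and the extension interface are assumptions; only
# the stevedore usage itself is standard.
from stevedore import extension

def load_cli_extensions():
    mgr = extension.ExtensionManager(
        namespace='magnum.cli.extensions',  # hypothetical entry-point group
        invoke_on_load=True,                # instantiate each plugin on load
    )
    # Each loaded extension object could contribute its own CLI sub-commands.
    return {ext.name: ext.obj for ext in mgr}
</pre>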
<br>
2. Bay lifecycle operations: <a href="https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-operations" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-operations</a><br>
- Support the following operations: reset the bay, rebuild the bay, rotate TLS certificates in the bay, adjust storage of the bay, scale the bay.<br>
<br>
3. Scalability: <a href="https://etherpad.openstack.org/p/newton-magnum-scalability" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-scalability</a><br>
- Implement Magnum plugin for Rally<br>
- Implement the spec to address the scalability of deploying multiple bays concurrently: <a href="https://review.openstack.org/#/c/275003/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/275003/</a><br>
<br>
4. Container storage: <a href="https://etherpad.openstack.org/p/newton-magnum-container-storage" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-container-storage</a><br>
- Allow choice of storage driver<br>
- Allow choice of data volume driver<br>
- Work with the Kuryr/Fuxi team to have the data volume driver available in the COEs upstream<br>
<br>
5. Container network: <a href="https://etherpad.openstack.org/p/newton-magnum-container-network" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-container-network</a><br>
- Discuss how to scope/pass/store OpenStack credentials in bays (needed by Kuryr to communicate with Neutron).<br>
- Several options were explored. No perfect solution was identified.<br>
<br>
6. Ironic Integration: <a href="https://etherpad.openstack.org/p/newton-magnum-ironic-integration" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-ironic-integration</a><br>
- Start the implementation immediately<br>
- Prefer quick work-arounds for identified issues (Cinder volume attachment, variation in the number of ports, etc.)<br>
<br>
7. Magnum adoption challenges: <a href="https://etherpad.openstack.org/p/newton-magnum-adoption-challenges" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-adoption-challenges</a><br>
- The challenges are listed in the etherpad above<br>
<br>
8. Unified abstraction for COEs: <a href="https://etherpad.openstack.org/p/newton-magnum-unified-abstraction" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-unified-abstraction</a><br>
- Create a new project for this effort<br>
- Alter the Magnum mission statement to clarify its goal (Magnum is not a container service; it is more of a COE management service)<br>
<br>
9. Magnum Heat template version: <a href="https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning</a><br>
- In each bay driver, version the template and template definition.<br>
- Bump template version for minor changes, and bump bay driver version for major changes.<br>
<br>
10. Monitoring: <a href="https://etherpad.openstack.org/p/newton-magnum-monitoring" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-monitoring</a><br>
- Add support for sending notifications to Ceilometer<br>
- Revisit bay monitoring and self-healing later<br>
- Container monitoring should not be done by Magnum, but it can be done by cAdvisor, Heapster, etc.<br>
<br>
11. Others: <a href="https://etherpad.openstack.org/p/newton-magnum-meetup" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/newton-magnum-meetup</a><br>
- Clear Container support: Clear Containers need to integrate with the COEs first. After the integration is done, the Magnum team will revisit bringing the Clear Container COE to Magnum.<br>
- Enhance the mesos bay to a DCOS bay: this needs to be done step by step. First, create a new DCOS bay type; then deprecate and delete the mesos bay type.<br>
- Start enforcing API deprecation policy: <a href="https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html" rel="noreferrer" target="_blank">https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html</a><br>
- Freeze API v1 after some patches are merged.<br>
- Multi-tenancy within a bay: not a priority in the Newton cycle<br>
- Manually manage bay nodes (instead of having them managed by a Heat ResourceGroup): it can address the use case of heterogeneous bay nodes (e.g. different availability zones or flavors), but the details need to be elaborated further.<br>
__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br></div></div>