[openstack-dev] [Openstack-operators] [tc][nova][ironic][mogan] Evaluate Mogan project

Erik McCormick emccormick at cirrusseven.com
Wed Sep 27 00:53:40 UTC 2017


My main question here would be this: If you feel there are deficiencies in
Ironic, why not contribute to improving Ironic rather than spawning a whole
new project?

I am happy to take a look at it, and I'm by no means trying to contradict
your assumptions here. I just get concerned with the overhead and confusion
that come with competing projects.

Also, if you'd like to discuss this in detail with a room full of bodies, I
suggest proposing a session for the Forum in Sydney. If some of the
contributors are there, it would be a good opportunity for you to get
feedback.

Cheers,
Erik


On Sep 26, 2017 8:41 PM, "Matt Riedemann" <mriedemos at gmail.com> wrote:

> On 9/25/2017 6:27 AM, Zhenguo Niu wrote:
>
>> Hi folks,
>>
>> First of all, thanks to everyone who attended the Mogan project update in
>> the TC room during the Denver PTG. We would like to gather more suggestions
>> here before we apply for inclusion.
>>
>> Speaking only for myself, I find the current direction of one
>> API+scheduler for vm/baremetal/container unfortunate. After container
>> management moved out into a separate project, Zun, bare metal with Nova and
>> Ironic continues to be a pain point.
>>
>> #. API
>> Only a subset of the Nova APIs and parameters apply to baremetal
>> instances, and in order to stay interoperable with the virtual drivers,
>> bare-metal-specific APIs such as deploy-time RAID and advanced partitioning
>> cannot be added. It's true that Nova can support various compute drivers,
>> but the reality is that the support for each hypervisor is not equal,
>> especially for bare metal in a virtualization-centric world. I understand
>> the reasons for that, though, as Nova was designed to provide compute
>> resources (virtual machines) rather than bare metal.
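>>
>> As a rough illustration of the kind of information that has no place in
>> the Nova server-create API, a deploy-time RAID goal for an Ironic node
>> might look like the sketch below. This is only a sketch: the exact client
>> call, the auth parameters and the node name are assumptions, not a fixed
>> recipe.
>>
>>     # Sketch, assuming python-ironicclient is installed and the usual
>>     # OS_* credentials are exported; adjust to your own deployment.
>>     import os
>>     from ironicclient import client
>>
>>     ironic = client.get_client(
>>         1,
>>         os_auth_url=os.environ["OS_AUTH_URL"],
>>         os_username=os.environ["OS_USERNAME"],
>>         os_password=os.environ["OS_PASSWORD"],
>>         os_project_name=os.environ["OS_PROJECT_NAME"],
>>     )
>>
>>     # Two logical disks: a RAID-1 root volume and a RAID-5 data volume.
>>     target_raid_config = {
>>         "logical_disks": [
>>             {"size_gb": 100, "raid_level": "1", "is_root_volume": True},
>>             {"size_gb": 500, "raid_level": "5"},
>>         ]
>>     }
>>
>>     # The RAID goal is applied by Ironic during cleaning/deploy; Nova's
>>     # flavors and server-create parameters never see it.
>>     ironic.node.set_target_raid_config("example-node", target_raid_config)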
>>
>> #. Scheduler
>> Bare metal doesn't fit into the 1:1 nova-compute-to-resource model, as
>> nova-compute processes can't run on the inventory nodes themselves. That is
>> to say, host aggregates, availability zones, and other concepts built on the
>> compute service (host) can't be applied to bare metal resources. For
>> grouping such as anti-affinity, the granularity is also not the same as for
>> virtual machines: bare metal users may want their HA instances kept out of
>> the same failure domain, not just off the same node. In short, all we can
>> get for bare metal is a rigid, resource-class-only scheduling.
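>>
>> To make that concrete, here is a rough sketch of what resource-class-only
>> scheduling boils down to on the flavor side; the custom resource class name
>> is just an example, and the dict is only the data a deployer would set as
>> flavor extra specs, not a complete workflow.
>>
>>     # Sketch: flavor extra specs for a bare metal flavor under
>>     # resource-class-only scheduling.  The flavor asks Placement for one
>>     # whole node of a custom class and zeroes out the standard virtual
>>     # resource classes, so there is nothing here that could express
>>     # aggregates, availability zones or fine-grained affinity.
>>     baremetal_flavor_extra_specs = {
>>         "resources:CUSTOM_BAREMETAL_GOLD": "1",
>>         "resources:VCPU": "0",
>>         "resources:MEMORY_MB": "0",
>>         "resources:DISK_GB": "0",
>>     }
>>     print(baremetal_flavor_extra_specs)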
>>
>>
>> Also, most of the cloud providers in the market offer virtual machines
>> and bare metal as separate resources, but unfortunately it's hard to achieve
>> this with one compute service. I have heard of people deploying a separate
>> Nova for virtual machines and another for bare metal, with many downstream
>> hacks to the single-driver bare metal Nova, but as the changes to Nova would
>> be massive and potentially invasive for virtual machines, it does not seem
>> practical to land them upstream.
>>
>> So we created Mogan [1] about one year ago, which aims to offer bare
>> metal as a first-class resource to users, with a set of bare-metal-specific
>> APIs and a baremetal-centric scheduler (built on the Placement service). It
>> started as something of an experimental project, but the outcome makes us
>> believe it's the right way to go. Mogan will fully embrace Ironic for bare
>> metal provisioning, and with RSD servers [2] introduced to OpenStack it will
>> be a new world for bare metal, as we will then be able to compose hardware
>> resources on the fly.
>>
>> I would also like to clarify the overlap between Mogan and Nova. I am
>> sure there are users who want a single API for compute resource management
>> because they don't care whether they get a virtual machine or a bare metal
>> server; the baremetal (Ironic) driver with Nova is still the right choice
>> for such users to get raw-performance compute resources. Mogan, on the other
>> hand, is for users who specifically want bare metal and for cloud providers
>> who want to offer bare metal as a separate resource.
>>
>> Thank you for your time!
>>
>>
>> [1] https://wiki.openstack.org/wiki/Mogan
>> [2] https://www.intel.com/content/www/us/en/architecture-and-technology/rack-scale-design-overview.html
>>
>> --
>> Best Regards,
>> Zhenguo Niu
>>
> Cross-posting to the operators list since they are the community that
> you'll likely need to convince the most about Mogan and whether or not they
> want to start experimenting with it.
>
> --
>
> Thanks,
>
> Matt
>