[openstack-dev] [fuel] [vmware] Two hypervisors in one cloud
blak111 at gmail.com
Sun Jan 25 10:31:17 UTC 2015
Yes, you are correct that the suggestion is to have the integration layer
in Neutron backed by 3rd-party testing. However, there is nothing technical
preventing you from specifying an arbitrary Python path to load as a
driver/plugin. If there is a lot of demand for vDS support, it might be
worth pursuing upstream inclusion later.
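For example, once an out-of-tree mechanism driver package is installed,
enabling it is just a matter of naming it in ml2_conf.ini (the 'dvs' driver
name below is an assumption, not an existing upstream driver):

    [ml2]
    mechanism_drivers = openvswitch,dvs

ML2 resolves these names through stevedore entry points in the
neutron.ml2.mechanism_drivers namespace, so any installed package that
registers one can be loaded without touching the Neutron tree.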
On Sat, Jan 24, 2015 at 3:25 AM, Andrey Danin <adanin at mirantis.com> wrote:
> I agree. But, as far as I know, there should be some kind of ML2
> integration layer for each plugin, and it should be in the Neutron code base
> (see the thin Mellanox ML2 driver [1] for example). There is no vDS ML2
> driver in Neutron at all, and FF will come soon. So it seems we cannot
> manage to adjust the blueprint spec [2], get it approved, refactor the
> driver code, and provide 3rd-party CI for it in such a short period before FF.
> [1] https://review.openstack.org/#/c/148614/
> [2] https://blueprints.launchpad.net/neutron/+spec/ml2-dvs-mech-driver
> On Sat, Jan 24, 2015 at 12:45 AM, Kevin Benton <blak111 at gmail.com> wrote:
>> It's worth noting that all Neutron ML2 drivers are required to move to
>> their own repos starting in Kilo, so installing an extra Python package to
>> use a driver will become part of the standard Neutron installation
>> workflow. So I would suggest creating a stackforge project for the vDS
>> driver and packaging it up (a packaging sketch follows below).
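>> As a rough sketch of what the packaging would take (the project and module
>> names here are made up for illustration): with pbr, the stackforge project
>> only needs a setup.cfg entry point to make the driver loadable by ML2:
>>
>>     [entry_points]
>>     neutron.ml2.mechanism_drivers =
>>         dvs = networking_dvs.ml2.mech_dvs:DVSMechanismDriver
>>
>> After a 'pip install' of that package, the 'dvs' name becomes available in
>> the mechanism_drivers option on any node.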
>> On Fri, Jan 23, 2015 at 11:39 AM, Andrey Danin <adanin at mirantis.com> wrote:
>>> Hi, all,
>>> As you may know, Fuel 6.0 can deploy either a KVM-oriented
>>> environment or a VMware vCenter-oriented environment. We want to go
>>> further and mix them together: a user should be able to run both
>>> hypervisors in one OpenStack environment. We want to get this into Fuel 6.1.
>>> Here is how we are going to do it.
>>> * When vCenter is used as a hypervisor, the only way to use volumes with
>>> it is the Cinder VMDK backend. And vice versa: KVM cannot operate with
>>> volumes provided by the Cinder VMDK backend. All that means we should
>>> have two separate infrastructures (a hypervisor + a volume service) for each
>>> HV present in the environment. To do that, we decided to place the corresponding
>>> nova-compute and cinder-volume instances into different Availability Zones.
>>> We also want to disable the 'cross_az_attach' option in nova.conf to prevent a
>>> user from attaching a volume to an instance whose hypervisor doesn't support
>>> that volume type (a config sketch follows below).
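>>> A minimal sketch of that wiring (the AZ and aggregate names are assumptions;
>>> on Juno the option is 'cinder_cross_az_attach' in [DEFAULT], later renamed to
>>> 'cross_az_attach' in the [cinder] group):
>>>
>>>     # nova.conf
>>>     [DEFAULT]
>>>     cinder_cross_az_attach = False
>>>
>>>     # map the vCenter proxy host into its own AZ via an aggregate
>>>     nova aggregate-create vcenter-aggr vcenter-zone
>>>     nova aggregate-add-host vcenter-aggr controller-1
>>>
>>> With that in place, a volume from the vCenter AZ can only be attached to
>>> instances scheduled into the same zone.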
>>> * When used with VMDK, a cinder-volume service is just a proxy between the
>>> vCenter Datastore and Glance. That means the service itself doesn't need a
>>> local hard drive but can sometimes consume significant network bandwidth. That's
>>> why it's not always a good idea to put it on a Controller node. So we want
>>> to add a new role called 'cinder-vmdk'. A user will be able to put this
>>> role on whatever node he wants: a separate node, or combined with other
>>> roles. HA will be achieved by placing the role on two or more nodes.
>>> The cinder-volume services on each node will be configured identically,
>>> including the 'host' stanza. We use the same approach now for Cinder+Ceph
>>> (see the cinder.conf sketch below).
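>>> A cinder.conf sketch for such a node (the backend name and the vCenter
>>> credentials are placeholders):
>>>
>>>     [DEFAULT]
>>>     host = cinder-vmdk            # identical on every cinder-vmdk node
>>>     enabled_backends = vmdk
>>>
>>>     [vmdk]
>>>     volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
>>>     vmware_host_ip = <vcenter-ip>
>>>     vmware_host_username = <user>
>>>     vmware_host_password = <password>
>>>
>>> Because every node reports the same 'host', any of them can serve requests
>>> for the same volumes, which is what gives us HA.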
>>> * The nova-compute services for vCenter are kept running on Controller
>>> nodes. They are managed by Corosync (a config sketch follows below).
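>>> For reference, the vCenter-facing nova-compute is an ordinary service
>>> pointed at the VC driver (values are placeholders):
>>>
>>>     # nova.conf on the Controller running the vCenter nova-compute
>>>     [DEFAULT]
>>>     compute_driver = vmwareapi.VMwareVCDriver
>>>
>>>     [vmware]
>>>     host_ip = <vcenter-ip>
>>>     host_username = <user>
>>>     host_password = <password>
>>>     cluster_name = <vsphere-cluster>
>>>
>>> Corosync then only has to keep one copy of this service running somewhere
>>> among the Controllers.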
>>> * There are two options for the network backend: the good old
>>> Nova-network and a modern Neutron with an ML2 DVS driver enabled. The problem
>>> with Nova-network is that we have to run it in 'singlehost' mode, meaning
>>> that only one nova-network service will be running for the whole
>>> environment. That makes the service a single point of failure, prevents a
>>> user from using Security Groups, and increases network consumption on the node
>>> where the service is running. The problem with Neutron is that there is no
>>> ML2 DVS driver in upstream Neutron for Juno or even Kilo. There is an
>>> unmerged patch [3] with almost no chance to get into Kilo. The good news is
>>> that we managed to run a PoC lab with this driver and both HVs enabled. So
>>> we can build the driver as a package, but it'll be a little ugly. That's why
>>> we picked the Nova-network approach as a basis. In the cluster creation wizard
>>> there will be an option to choose whether you want vCenter in a cluster or not.
>>> Depending on that, the nova-network service will run in 'singlehost' or
>>> 'multihost' mode (see the sketch below). Maybe, if we have enough resources,
>>> we'll also implement Neutron + vDS support.
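>>> The mode switch boils down to one flag (a sketch; in multihost mode every
>>> compute node runs its own nova-network service):
>>>
>>>     # nova.conf
>>>     [DEFAULT]
>>>     network_manager = nova.network.manager.FlatDHCPManager
>>>     multi_host = True    # False = a single nova-network for the whole cloud
>>>
>>> With vCenter enabled in the wizard we set multi_host = False, which is
>>> exactly the SPOF trade-off described above.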
>>> * We are going to move all VMware-specific settings to a separate UI
>>> tab. On the Settings tab we will keep the Glance backend switch (Swift, Ceph,
>>> VMware) and the libvirt_type switch (KVM, QEMU). In the cluster creation
>>> wizard there will be a checkbox called 'Add VMware vCenter support to
>>> your cloud'. When it's enabled, a user can choose nova-network only.
>>> * The OSTF test suite will be extended with separate sets of tests for
>>> each HV (a sketch of the idea follows below).
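>>> A minimal Python sketch of a per-HV smoke test (the zone names, credentials,
>>> and IDs are placeholders, not existing OSTF code):
>>>
>>>     from novaclient import client as nova_client
>>>
>>>     # Placeholder credentials -- adjust for the environment under test.
>>>     nova = nova_client.Client('2', 'admin', 'secret', 'admin',
>>>                               'http://controller:5000/v2.0')
>>>
>>>     def boot_smoke_instance(zone, image, flavor):
>>>         # One boot per availability zone exercises each hypervisor.
>>>         return nova.servers.create(name='ostf-smoke-%s' % zone,
>>>                                    image=image, flavor=flavor,
>>>                                    availability_zone=zone)
>>>
>>>     for zone in ('kvm-zone', 'vcenter-zone'):
>>>         boot_smoke_instance(zone, '<image-id>', '<flavor-id>')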
>>> [3] Neutron ML2 vDS driver https://review.openstack.org/#/c/111227/
>>> Links to blueprints:
>>> I would appreciate seeing your thoughts on all of this.
>>> Andrey Danin
>>> adanin at mirantis.com
>>> skype: gcon.monolake
>> Kevin Benton
> Andrey Danin
> adanin at mirantis.com
> skype: gcon.monolake