[openstack-dev] [fuel] [vmware] Two hypervisors in one cloud

Andrey Danin adanin at mirantis.com
Fri Jan 23 19:39:13 UTC 2015


Hi, all,

As you may know, Fuel 6.0 can deploy either a KVM-oriented environment or a
VMware vCenter-oriented environment. We want to go further and mix them
together: a user should be able to run both hypervisors in one OpenStack
environment. We want to get this into Fuel 6.1. Here is how we plan to do it.

* When vCenter is used as a hypervisor, the only way to use volumes with it
is the Cinder VMDK backend. And vice versa: KVM cannot operate with volumes
provided by the Cinder VMDK backend. This means we need two separate
infrastructures (a hypervisor + a volume service), one per hypervisor present
in the environment. To achieve that, we decided to place the corresponding
nova-compute and cinder-volume instances into different Availability Zones.
We also want to disable the 'cross_az_attach' option in nova.conf to prevent
a user from attaching a volume to an instance whose hypervisor doesn't
support that volume type (see the sketch below).
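
A minimal sketch of the relevant settings, assuming Juno-era option names
and purely illustrative AZ names ('kvm' and 'vcenter'):

    # nova.conf (Juno still keeps this option in [DEFAULT];
    # later releases move it to [cinder]/cross_az_attach)
    [DEFAULT]
    cinder_cross_az_attach = False

    # cinder.conf on KVM-side cinder-volume nodes
    [DEFAULT]
    storage_availability_zone = kvm

    # cinder.conf on vCenter-side cinder-volume nodes
    [DEFAULT]
    storage_availability_zone = vcenter

On the Nova side, the compute services would be grouped into matching AZs
via host aggregates (nova aggregate-create <name> <az>).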

* When used with VMDK, a cinder-volume service is just a proxy between a
vCenter Datastore and Glance. This means the service itself doesn't need a
local hard drive, but it can sometimes consume significant network bandwidth.
That's why it's not always a good idea to put it on a Controller node. So we
want to add a new role called 'cinder-vmdk'. A user will be able to assign
this role to any node, either dedicated or combined with other roles. HA will
be achieved by placing the role on two or more nodes. The cinder-volume
services on each node will be configured identically, including the 'host'
option, just as we do now for Cinder+Ceph (see the sketch below).
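
A hedged sketch of what the identical cinder.conf could look like; the
driver path is the real VMDK driver shipped with Cinder, the other values
are placeholders:

    # cinder.conf, the same on every node with the cinder-vmdk role
    [DEFAULT]
    # A shared service name makes all copies register as one Cinder
    # host, so any of them can pick up requests (same trick as
    # Cinder+Ceph)
    host = cinder-vmdk
    storage_availability_zone = vcenter
    volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
    vmware_host_ip = <vcenter-ip>
    vmware_host_username = <vcenter-user>
    vmware_host_password = <vcenter-password>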

* Nova-compute services for vCenter are kept running on Controller nodes and
are managed by Corosync (see the sketch below).
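
Purely as an illustration (not the actual Fuel manifests), such a service
could be put under Pacemaker/Corosync control roughly like this, assuming
an LSB init script named 'nova-compute':

    # define a cluster-managed resource for the vCenter nova-compute
    crm configure primitive p_nova_compute_vcenter lsb:nova-compute \
        op monitor interval=30s timeout=60s \
        op start timeout=60s op stop timeout=60s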

* There are two options for the network backend: the good old Nova-network
and the modern Neutron with the ML2 DVS driver enabled. The problem with
Nova-network is that we have to run it in 'singlehost' mode, meaning only
one nova-network service will be running for the whole environment. That
makes the service a single point of failure, prevents a user from using
Security Groups, and increases the network load on the node where the
service runs. The problem with Neutron is that there is no ML2 DVS driver in
upstream Neutron for Juno, or even for Kilo. There is an unmerged patch [1]
with almost no chance of getting into Kilo. The good news is that we managed
to run a PoC lab with this driver and both HVs enabled, so we can build the
driver as a package, but it'll be a little ugly. That's why we picked the
Nova-network approach as a basis. The cluster creation wizard will contain
an option to choose whether you want vCenter in a cluster or not; depending
on it, the nova-network service will run in 'singlehost' or 'multihost' mode
(see the sketch below). If we have enough resources, we may also implement
Neutron + vDS support.
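
To make the mode switch concrete, here is an illustrative nova.conf
fragment; 'multi_host' is the real Nova-network option, the comments
describe our intent:

    # nova.conf when vCenter support is enabled in the wizard:
    # one shared nova-network service for the whole environment
    [DEFAULT]
    multi_host = False

    # nova.conf in a KVM-only environment: every compute node runs
    # its own nova-network service
    [DEFAULT]
    multi_host = True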

* We are going to move all VMware-specific settings to a separate UI tab. On
the Settings tab we will keep a Glance backend switch (Swift, Ceph, VMware)
and a libvirt_type switch (KVM, QEMU). The cluster creation wizard will get
a checkbox called 'add VMware vCenter support to your cloud'. When it's
enabled, a user can choose Nova-network only (see the sketch below).
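
For reference, the libvirt_type switch simply maps to the hypervisor driver
setting in nova.conf on KVM compute nodes (Juno keeps it in the [libvirt]
section as virt_type):

    # nova.conf on compute nodes
    [libvirt]
    virt_type = kvm    # or 'qemu' when hardware virt is unavailable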

* The OSTF test suite will be extended to support separate sets of tests for
each HV.

[1] Neutron ML2 vDS driver https://review.openstack.org/#/c/111227/

Links to blueprints:
https://blueprints.launchpad.net/fuel/+spec/vmware-ui-settings
https://blueprints.launchpad.net/fuel/+spec/cinder-vmdk-role
https://blueprints.launchpad.net/fuel/+spec/vmware-dual-hypervisor


I would appreciate your thoughts on all of this.



-- 
Andrey Danin
adanin at mirantis.com
skype: gcon.monolake