<div dir="ltr">It's worth noting that all Neutron ML2 drivers are required to move to their own repos starting in Kilo so installing an extra python package to use a driver will become part of the standard Neutron installation workflow. So I would suggest creating a stackforge project for the vDS driver and packaging it up.</div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jan 23, 2015 at 11:39 AM, Andrey Danin <span dir="ltr"><<a href="mailto:adanin@mirantis.com" target="_blank">adanin@mirantis.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><span>Hi, all,</span></div><div><br></div><div><span>As
you may know, Fuel 6.0 has an ability to deploy either a KVM-oriented
environment or a VMware vCenter-oriented environment. We want to go
further and mix them together. A user should be able to run both
hypervisors in one OpenStack environment. We want to get it in Fuel 6.1.
Here is how we are going to do it.</span></div><div><br></div><div><span>*
When vCenter is used as a hypervisor, the only way to use volumes with
it is through the Cinder VMDK backend. And vice versa: KVM cannot operate
with volumes provided by the Cinder VMDK backend. All this means that we
should have two separate infrastructures (a hypervisor + a volume
service) for each hypervisor present in the environment. To do that we decided to
place the corresponding nova-compute and cinder-volume instances into
different Availability Zones. We also want to disable the 'cross_az_attach'
option in nova.conf to prevent a user from attaching a volume to an instance
which doesn't support that volume type.</span></div><div><br></div><div><span>*
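The AZ split described above could look roughly like this (a sketch, not the exact Fuel-generated config; the AZ name 'vcenter' is illustrative, and in Juno the Nova flag lives in [DEFAULT] as cinder_cross_az_attach rather than in a [cinder] section):

```ini
# cinder.conf on the VMDK-backed cinder-volume nodes
[DEFAULT]
# put the vCenter volume service into its own AZ (name is illustrative)
storage_availability_zone = vcenter

# nova.conf
[cinder]
# forbid attaching a volume to an instance in a different AZ
cross_az_attach = False
```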
A cinder-volume service is just a proxy between the vCenter Datastore and
Glance when used with VMDK. This means that the service itself doesn't
need a local hard drive but can sometimes consume significant network bandwidth.
That's why it's not a good idea to always put it on a Controller node.
So, we want to add a new role called 'cinder-vmdk'. A user will be able
to assign this role to any node: a separate node, or combined
with other roles. HA will be achieved by placing the role on two or more
nodes. The cinder-volume services on each node will be configured
identically, including the 'host' stanza. We use the same approach today for
Cinder+Ceph.</span></div><div><br></div><div><span>* Nova-compute services for vCenter are kept running on Controller nodes. They are managed by Corosync.</span></div><div><br></div><div><span>*
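For the cinder-vmdk role, the per-node configuration might look roughly like this (a sketch assuming the stock upstream VMDK driver; the 'host' value and the vCenter credentials are placeholders):

```ini
# cinder.conf, identical on every node carrying the cinder-vmdk role
[DEFAULT]
# a shared 'host' value makes all cinder-volume instances appear as one
# logical service -- the same trick used today for Cinder+Ceph
host = cinder-vmdk
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = <vcenter address>
vmware_host_username = <vcenter user>
vmware_host_password = <vcenter password>
```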
There are two options for the network backend: the good old
Nova-network, and a modern Neutron with the ML2 DVS driver enabled. The
problem with Nova-network is that we have to run it in 'singlehost'
mode. This means that only one nova-network service will be running for
the whole environment. It makes the service a single point of failure,
prevents a user from using Security Groups, and increases network
load on the node where the service is running. The problem with
Neutron is that there is no ML2 DVS driver in upstream Neutron for
Juno, or even Kilo. There is an unmerged patch [1] with almost no chance
of getting into Kilo. The good news is that we managed to run a PoC lab with this
driver and both HVs enabled. So, we can build the driver as a package,
but it'll be a little ugly. That's why we picked the Nova-network
approach as a basis. The cluster creation wizard will let you choose
whether you want to use vCenter in a cluster or not. Depending on that,
the nova-network service will be run in 'singlehost' or 'multihost'
mode. Maybe, if we have enough resources, we'll also implement Neutron +
vDS support.</span></div><div><br></div><div><span>*
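For reference, the singlehost/multihost choice boils down to nova-network's multi_host flag (a sketch; the network manager shown is one common choice, and in practice multi_host is also set per network at creation time):

```ini
# nova.conf on compute nodes
[DEFAULT]
network_manager = nova.network.manager.FlatDHCPManager
# True  -> a nova-network instance on every compute node ('multihost')
# False -> one nova-network service for the whole environment ('singlehost')
multi_host = True
```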
We are going to move all VMware-specific settings to a separate UI tab.
On the Settings tab we will keep a Glance backend switch (Swift, Ceph,
VMware) and a libvirt_type switch (KVM, QEMU). In the cluster creation
wizard there will be a checkbox called 'add VMware vCenter support
to your cloud'. When it's enabled, a user can choose nova-network only.</span></div><div><br></div><div><span>* The OSTF test suite will be extended to support separate sets of tests for each HV.</span></div><div><br></div><div><span>[1] Neutron ML2 vDS driver </span><span><a href="https://review.openstack.org/#/c/111227/" target="_blank">https://review.openstack.org/#/c/111227/</a></span></div><div><br></div><div><span>Links to blueprints:</span></div><div><span><a href="https://blueprints.launchpad.net/fuel/+spec/vmware-ui-settings" target="_blank">https://blueprints.launchpad.net/fuel/+spec/vmware-ui-settings</a></span></div><div><span><a href="https://blueprints.launchpad.net/fuel/+spec/cinder-vmdk-role" target="_blank">https://blueprints.launchpad.net/fuel/+spec/cinder-vmdk-role</a></span></div><div><span><a href="https://blueprints.launchpad.net/fuel/+spec/vmware-dual-hypervisor" target="_blank">https://blueprints.launchpad.net/fuel/+spec/vmware-dual-hypervisor</a></span></div><div><br></div><div><br></div><div><span>I would appreciate your thoughts on all of this.</span></div><span class="HOEnZb"><font color="#888888"><div><br></div><br clear="all"><br>-- <br><div>Andrey Danin<br><a href="mailto:adanin@mirantis.com" target="_blank">adanin@mirantis.com</a><br>skype: gcon.monolake<br></div>
</font></span></div>
<br>__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
<br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature"><div>Kevin Benton</div></div>
</div>