os.vbvs at gmail.com
Tue Mar 25 18:09:41 UTC 2014
Thanks for the confirmation, and for the link to the services-in-service-VMs
blueprint. I went through it, and while using service VMs is one way to go
(CloudStack uses that approach, for example), I am not inclined to take it,
for several reasons. Shipping a service image essentially forces
customers/deployers either to commit to a specific OS version or to maintain
multiple image formats (qcow2/OVA/VHD etc.) for different hypervisors. It
would mean adding packaging code to OpenStack to build these VMs with every
release. Upgrading from one OpenStack release to the next will eventually
force an upgrade of the service VMs as well, so we would need to track
service VM versions in Glance or elsewhere and prevent deployment across
mismatched versions if required. Since we're talking about VMs and not just
modules, downtime during upgrades also increases. Handling all these
scenarios adds a whole level of complexity, and we would need to do it in
OpenStack code. I am very averse to that. As an aside, when service VMs come
into the picture, deployment can take quite a long time (depending on the
hypervisor), and scalability issues can be hit. Take VMware, for example: we
can create full or linked clones, and full clone creation is quite costly.
Storage migration is another problem area. These issues add up over time.
From a developer's point of view, debugging can also get tricky with service
VMs.
The issues raised in the blueprint are definitely valid and must be
addressed, but we'll need to have a discussion on what the optimal and
cleanest approach would be.
There is another interesting discussion going on regarding LBaaS APIs for
Libra and HAProxy between Susanne Balle, Eugene and others - I'll chip in
with my 2 cents on that.
There is another interesting discussion going on regarding LBaaS APIs for
Libra and HAProxy between Susanne Balle/Eugene and others - I'll chip in
with my 2 cents on that..
Thanks a lot again!
On Tue, Mar 25, 2014 at 1:02 AM, Oleg Bondarev <obondarev at mirantis.com> wrote:
> Hi Vijay,
> Currently Neutron LBaaS supports only a namespace-based implementation for
> HAProxy. You can, however, run the LBaaS agent on a host other than the
> network controller node - in that case HAProxy processes will be running on
> that host, but still inside namespaces.
> Also, there is an effort in Neutron to add support for advanced services
> in VMs.
> After it is completed, I hope it will be possible to adopt it in LBaaS and
> run HAProxy in such a service VM.
>  https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
> On Tue, Mar 25, 2014 at 1:39 AM, Vijay B <os.vbvs at gmail.com> wrote:
>> Hi Eugene,
>> Thanks for the reply! How/where is the agent configuration done for
>> HAProxy? If I don't want to go with a network namespace based HAProxy
>> process, but want to deploy my own HAProxy instance on a host outside of
>> the network controller node, and make neutron deploy pools/VIPs on that
>> HAProxy instance, does neutron currently support this scenario? If so, what
>> are the configuration steps I will need to carry out to deploy HAProxy on a
>> separate host (for example, where do I specify the ip address of the
>> haproxy host, etc)?
>> On Mon, Mar 24, 2014 at 2:04 PM, Eugene Nikanorov <
>> enikanorov at mirantis.com> wrote:
>>> The HAProxy driver has not been removed from the trunk; instead it became
>>> the base for the agent-based driver, so the only haproxy-specific thing in
>>> the plugin driver is the device driver name. The namespace driver is a
>>> device driver on the agent side and has been there from the beginning.
>>> The reason for the change is mere refactoring: it seemed that solutions
>>> that employ agents could share the same code, with only the device driver
>>> being different.
>>> So, everything is in place, and HAProxy continues to be the default
>>> implementation of Neutron LBaaS service. It supports spawning haproxy
>>> processes on any host that runs lbaas agent.
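(For anyone wanting to try this, here is a minimal sketch of the agent-side
configuration that implies, based on my reading of the Havana-era tree - the
option names below are the ones I believe the haproxy device driver uses, so
please double-check against your release; paths and values are illustrative.)

```ini
# /etc/neutron/lbaas_agent.ini on the separate host running the LBaaS agent
[DEFAULT]
# Interface driver used to wire VIF ports on this host;
# pick the one matching your L2 agent (OVS shown here)
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Device driver that spawns haproxy processes (in namespaces) on this host
device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
```

As I understand it, you do not specify the haproxy host's IP address
anywhere: the agent registers itself with the Neutron server over the
message queue, and pools are scheduled to whichever host runs the agent.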
>>> On Tue, Mar 25, 2014 at 12:33 AM, Vijay B <os.vbvs at gmail.com> wrote:
>>>> I'm looking at HAProxy support in Neutron, and I observe that the
>>>> drivers/haproxy/plugin_driver.py file in the stable/havana release has been
>>>> effectively removed from trunk (master), in that the plugin driver in the
>>>> master simply points to the namespace driver. What was the reason to do
>>>> this? Was the plugin driver in havana tested and documented? I can't seem
>>>> to get hold of any relevant documentation that describes how to configure
>>>> HAProxy LBs installed on separate boxes (and not brought up in network
>>>> namespaces) - can anyone please point me to the same?
>>>> Also, are there any plans to bring back the HAProxy plugin driver to
>>>> talk to remote HAProxy instances?
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev at lists.openstack.org