[openstack-dev] [charms]Running two haproxy-using units on same machine?

James Page james.page at ubuntu.com
Wed Sep 14 09:14:58 UTC 2016


I can't find the provider/colocation document I wrote a while back (it's
disappeared from the Canonical wiki).

I'll re-write it in the charm-guide soon.
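In the meantime, the container placement described below can be sketched roughly as follows, using Juju 2.x syntax (machine numbers are illustrative; Juju 1.x used the lxc: placement directive rather than lxd:):

```shell
# Deploy each haproxy-using API service into its own LXD container
# on the same physical machine (machine 0 here), so each charm has
# exclusive control of its own filesystem and its own haproxy.cfg.
juju deploy nova-cloud-controller --to lxd:0
juju deploy cinder --to lxd:0

# Additional units can be containerised the same way on other machines:
juju add-unit cinder --to lxd:1
```

These commands need a live Juju controller to run against; they are a sketch of the pattern, not a tested deployment.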

On Wed, 14 Sep 2016 at 10:03 Neil Jerram <neil at tigera.io> wrote:

> Thanks James for this quick and clear answer!
>
>     Neil
>
>
> On Tue, Sep 13, 2016 at 8:46 PM, James Page <james.page at ubuntu.com> wrote:
>
>> Hi Neil
>>
>> On Tue, 13 Sep 2016 at 20:43 Neil Jerram <neil at tigera.io> wrote:
>>
>>> Should it be possible to run two OpenStack charm units, that both use
>>> haproxy to load balance their APIs, on the same machine?  Or is there some
>>> doc somewhere that says that a case like that should use separate machines?
>>>
>>> (I'm asking in connection with the bug report at
>>> https://bugs.launchpad.net/openstack-charm-testing/+bug/1622697.)
>>>
>>
>> No - that's not currently possible.  For example, if you try to place
>> both nova-cloud-controller and cinder units on the same machine, they both
>> assume sole control over haproxy.cfg and will happily trample each
>> other's changes.
>>
>> There is a doc somewhere - I'll dig it out and add to the charm-guide on
>> docs.openstack.org.
>>
>> Solution: use an LXC or LXD container for each service, ensuring sole
>> control of the filesystem for each charm and avoiding said conflict.
>>
>> Cheers
>>
>> James
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>