[Openstack] nova-compute and cinder-scheduler HA

Сергей Мотовиловец motovilovets.sergey at gmail.com
Sat May 17 18:27:31 UTC 2014


Hello, Jay!

Thanks for the answer. You pretty much gave me the answer just by asking
those questions about Rabbit: they made me take a closer look at it as the
source of these problems. What I had missed was defining an HA policy
explicitly (queue mirroring is not enabled by default since v3.0, which is
practically forever ago). At any rate, I can't reproduce these problems
anymore.
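
For the archives, the fix was essentially a one-liner. A minimal sketch,
assuming the default vhost and that mirroring every non-amq.* queue is
acceptable (the policy name "ha-all" and the pattern are just examples):

```shell
# Mirror all queues (except the auto-generated amq.* ones) across every
# node in the RabbitMQ cluster. Run on any one cluster node.
rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode":"all"}'

# Confirm the policy is in place
rabbitmqctl list_policies
```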

While we're at it, I'd like to ask a few more questions regarding
architecture for small-sized clusters. Say we separate the servers
logically into 3 groups - controller / storage / compute - all pretty much
equal in terms of resources, and we want to add some HA magic here.
The first thing that comes to my mind: we can run HAProxy on each and
every node and configure everything so that a node reaches any external
service it relies on through its local HAProxy; then adding a server of
any kind is just a matter of re-configuring and reloading HAProxy. Is
there anything wrong with this approach (apart from the increased
latency)? And are there any OpenStack services that (for some reason)
should not be scaled by simply spawning more instances of them?
Another thing that bothers me: how should neutron-related services be
distributed if there's no dedicated networking node?
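
To make the idea concrete, here is a sketch of a couple of local-HAProxy
listeners (all names, addresses and ports are invented; for Galera the
"backup" lines keep a single active writer, which sidesteps the
multi-writer certification conflicts discussed further down the thread):

```
# /etc/haproxy/haproxy.cfg fragment -- illustrative only
listen galera
    bind 127.0.0.1:3306
    mode tcp
    option tcpka
    # single active writer; the second node is a hot standby
    server ctrl1 10.0.0.11:3306 check
    server ctrl2 10.0.0.12:3306 check backup

listen rabbitmq
    bind 127.0.0.1:5672
    mode tcp
    balance roundrobin
    option tcpka
    server ctrl1 10.0.0.11:5672 check
    server ctrl2 10.0.0.12:5672 check
```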

I'd really appreciate it if you could find a few minutes to answer these
questions, as it's not easy to find real-life, production-ready examples
for small deployments.
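
By the way, the reset-state workaround for instances stuck in ERROR
(mentioned in my previous mail) looks roughly like this - the UUID is a
placeholder:

```shell
# Force the instance back to ACTIVE in the database only (no hypervisor
# action), so the API will accept a delete request again.
nova reset-state --active <instance-uuid>

# Now the instance can be terminated normally
nova delete <instance-uuid>
```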

P.S. My name is Sergey :) And I should definitely add a signature


2014-05-16 16:33 GMT+03:00 Jay Pipes <jaypipes at gmail.com>:

> On 05/14/2014 02:49 PM, Сергей Мотовиловец wrote:
>
>> Hello everyone!
>>
>
> Hi Motovilovets :) Comments and questions for you inline...
>
>  I'm facing some troubles with nova and cinder here.
>>
>> I have 2 control nodes (active/active) in my testing environment with
>> Percona XtraDB cluster (Galera+xtrabackup) + garbd on a separate node
>> (to avoid split-brain) + OpenStack Icehouse, latest from the Ubuntu 14.04
>> main repo.
>>
>> The problem is horizontal scalability of the nova-conductor and
>> cinder-scheduler services: it seems like all active instances of these
>> services are trying to execute the same MySQL queries they get from
>> Rabbit, which leads to numerous deadlocks in a set-up with Galera.
>>
>
> Are you using RabbitMQ in clustered mode? Also, how are you doing your
> load balancing? Do you use HAProxy or some appliance? Do you have sticky
> sessions enabled for your load balancing?
>
>
>  When multiple nova-conductor services are running (each using the
>> MySQL instance on its corresponding control node), it shows up as "Deadlock
>> found when trying to get lock; try restarting transaction" in the log.
>> With cinder-scheduler it leads to "InvalidBDM: Block Device Mapping is
>> Invalid."
>>
>
> So, it's not actually a deadlock that is occurring... unless I'm mistaken
> (I've asked a Percona engineer to take a look at this thread to
> double-check me), the error about "Deadlock found..." is actually *not* a
> deadlock. It's just that Galera uses the same InnoDB error code as a normal
> deadlock to indicate that the WSREP certification process has timed out
> between the cluster nodes. Would you mind pastebin'ing your wsrep.cnf and
> my.cnf files for us to take a look at? I presume that you do not have much
> latency between the cluster nodes (i.e. they are not over a WAN)... let me
> know if that is not the case.
>
> It would also be helpful to see your rabbit and load balancer configs if
> you can pastebin those, too.
>
>  Is there any possible way to make multiple instances of these services
>> running simultaneously and not duplicating queries?
>>
>
> Yes, it most certainly is. At AT&T, we ran Galera clusters of much bigger
> size with absolutely no problems due to this cert timeout problem that
> manifests itself as a deadlock, so I know it's definitely possible to have
> a clean, performant, multi-writer Galera solution for OpenStack. :)
>
> Best,
> -jay
>
>  (I don't really like the idea of handling this with Heartbeat+Pacemaker
>> or other similar stuff, mostly because I'm thinking about equal load
>> distribution across control nodes, but in this case it seems like it has
>> an opposite effect, multiplying load on MySQL)
>>
>> Another thing that is extremely annoying: if an instance gets stuck in
>> ERROR state because of a deadlock during its termination, it becomes
>> impossible to terminate it from Horizon; it can only be done via
>> nova-api with reset-state. How can this be handled?
>>
>> I'd really appreciate any help/advice/thoughts regarding these problems.
>>
>>
>> Best regards,
>> Motovilovets Sergey
>> Software Operation Engineer
>>
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
>> openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
>> openstack
>>
>>
>