[Openstack] [keystone] How to make keystone highly available?

Alexandr Porunov alexandr.porunov at gmail.com
Wed Sep 21 06:29:13 UTC 2016


> That is more like a hot-standby (only one server is used at any one time)
> and I guess that's an option as well. But because I have limited resources,
> I'd prefer, if possible, to use all of them all the time.

Not necessarily; it depends on how you configure your cluster. You can
point half of your machines at keystone1 and the other half at keystone2.
To do that, configure keepalived to manage two virtual IP addresses with
different priorities on each node. Example:
keystone1: 192.168.0.61, priority 101
keystone1: 192.168.0.62, priority 100
keystone2: 192.168.0.61, priority 100
keystone2: 192.168.0.62, priority 101

Your 8 machines (nova, swift, glance, and so on) will use 192.168.0.61;
your other 8 machines will use 192.168.0.62.

If keystone1 dies, keepalived will assign both IPs (192.168.0.61 and
192.168.0.62) to keystone2, so all of your machines will automatically
use keystone2. Once keystone1 comes back to life, keepalived will move
192.168.0.61 back to it, because keystone1 advertises a higher priority
for that address than keystone2 does.
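
Here is a minimal keepalived.conf sketch for keystone1 (the interface
name, virtual_router_id values and the password are assumptions; adjust
them to your network). keystone2 gets the mirror image: state BACKUP /
priority 100 for VI_1 and state MASTER / priority 101 for VI_2:

    ! /etc/keepalived/keepalived.conf on keystone1 (sketch)
    vrrp_instance VI_1 {
        state MASTER              ! keystone1 owns .61 by default
        interface eth0            ! assumption: use your actual NIC
        virtual_router_id 51
        priority 101              ! higher than keystone2's 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass changeme    ! assumption: pick your own secret
        }
        virtual_ipaddress {
            192.168.0.61
        }
    }

    vrrp_instance VI_2 {
        state BACKUP              ! keystone1 is standby for .62
        interface eth0
        virtual_router_id 52
        priority 100              ! lower than keystone2's 101
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass changeme
        }
        virtual_ipaddress {
            192.168.0.62
        }
    }

This way each virtual IP has a preferred owner, both nodes serve traffic
while they are healthy, and either node takes over both addresses when
the other one fails.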

Sincerely,
Alexandr

On Wed, Sep 21, 2016 at 12:44 AM, Turbo Fredriksson <turbo at bayour.com>
wrote:

> On Sep 20, 2016, at 10:06 PM, Alexandr Porunov wrote:
>
> > If you care about high availability (as I do) then you need to have
> > additional keystone instance which will prevent your cluster from SPOF.
>
> That was the idea. One node is already dedicated for that, but I haven't
> installed it yet, because I'm not sure what the right way to do this is.
>
> I've been reading the high availability docs for the last couple of weeks
> now, but they don't cover the most basic things :(. For once, they get
> right down to it (the very thing I've complained that other docs don't
> do :D).
>
>
> IF (!) I understand things correctly, services report themselves into
> the catalog (which is basically the *SQL server). This catalog is Keystone.
>
> So my first Keystone server has registered itself as:
>
> bladeA01:~# openstack endpoint list | grep keystone
> | 26855d6e55284651a0fcaa5cf25b3d90 | europe-london | keystone | identity | True | internal | http://10.0.4.1:5000/v2.0  |
> | 72eb76cdb6cf4db3813eac4a683e4e34 | europe-london | keystone | identity | True | public   | http://10.0.4.1:5000/v2.0  |
> | c95ce66a4efa46b4855185d088279824 | europe-london | keystone | identity | True | admin    | http://10.0.4.1:35357/v2.0 |
>
> Now, I'm assuming that the second one will do the same (on ITS IP of
> course). So "anyone" needing to contact Keystone, will, I assume,
> consult this catalog and "pick one".
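>
> (For illustration: registering the second keystone's endpoints by hand
> would presumably look something like this, assuming the openstack
> client's v3-style "endpoint create" syntax and the second node on
> 10.0.4.9:
>
>     openstack endpoint create --region europe-london \
>         identity internal http://10.0.4.9:5000/v2.0
>     openstack endpoint create --region europe-london \
>         identity public http://10.0.4.9:5000/v2.0
>     openstack endpoint create --region europe-london \
>         identity admin http://10.0.4.9:35357/v2.0
>
> With the keepalived approach above, you would register the virtual IP
> instead of the per-node addresses.)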
>
>
> So if I use a DNS round-robin for the two keystones:
>
>     openstack.domain.tld.    1 IN A 10.0.4.1
>     openstack.domain.tld.    1 IN A 10.0.4.9
>
> the entry for 'openstack.domain.tld.' will be invalidated every second,
> practically guaranteeing a fresh lookup for every request. The downside
> is that if the .1 keystone is down, the service will have to wait for a
> timeout before it can try the next one.
>
> Round-robin is the "poor man's load balancer", and it has many flaws,
> but it will at least make some use of all available resources at any
> one time.
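>
> (A quick way to sanity-check the rotation, assuming dig and the zone
> records above:
>
>     $ dig +short openstack.domain.tld
>     10.0.4.1
>     10.0.4.9
>
> Most resolvers rotate the order of the two A records between queries,
> though exactly how is resolver-dependent.)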
>
> > For that I use the same virtual IP address for both keystone instances,
> > managed by keepalived.
>
> That is more like a hot-standby (only one server is used at any one time)
> and I guess that's an option as well. But because I have limited resources,
> I'd prefer, if possible, to use all of them all the time.
>
> Balancing the load is less of an issue than high availability is, but
> if possible, I'd like to solve both of them :).
>
> > Also you can use Pacemaker and other tools to achieve high availability
>
> Yes, but I'm guessing those need _another_ machine in front of the ones
> I want to load balance. And if that goes down, EVERYTHING stops working.
> Unless they are clustered, which requires _even more_ machines!
>
> Which I don't have. I don't want to dedicate a dual-CPU, eight-core
> Intel Xeon E5530 @ 2.40GHz just to shuffle traffic around! That's a huge
> waste of precious resources. And if I add smaller machines outside of
> the blade center, then ALL traffic has to go out into the rack and then
> back in, which will hurt performance (which is already kinda bad,
> because it's an older setup with only Gbps links).
>
> I had to dedicate one whole switch (there are two Cisco 3020s in the
> blade center) just for the trunked link down to the storage. That's
> another thing I need to solve eventually - it is currently the biggest,
> baddest SPOF :( :(.
> --
> Michael Jackson is not going to buried or cremated
> but recycled into shopping bags so he can remain white,
> plastic and dangerous for kids to play with.