Questions about High Availability setup

Sean Mooney smooney at redhat.com
Mon Aug 29 10:45:22 UTC 2022


On Mon, 2022-08-29 at 09:53 +0000, Eugen Block wrote:
> Hi,
> 
> > Currently cinder-volume does support Active-Active HA, though not all
> > drivers support this configuration.  Besides using a driver that
> > supports A/A, a DLM is also required and needs to be configured in
> > Cinder; finally, the "host" and "backend_host" options must not be
> > set and the "cluster" option should be configured instead.
> 
> This is interesting information, I was not aware of that. Does the rbd
> driver support A/A? I'm still dealing with some older cloud
> deployments and didn't have the time to look into newer features yet,
> but it would be great! We do currently use the "host" option to let
> haproxy redirect the cinder-volume requests. I'm definitely gonna need
> to look into that. Thanks for pointing that out!
You can find the list of drivers that support it here:
https://docs.openstack.org/cinder/latest/reference/support-matrix.html#operation_active_active_ha
Ceph works in both RBD and iSCSI mode.
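
For reference, the cinder.conf changes for A/A roughly look like the sketch
below. This is only an illustrative example with assumed names: the cluster
name, the coordination backend URL and the backend section are placeholders
you would adapt to your own deployment.

    [DEFAULT]
    # do not set "host" or "backend_host"; all cinder-volume services that
    # should form one Active-Active cluster share the same cluster name
    cluster = mycluster
    enabled_backends = ceph

    [coordination]
    # the DLM is configured through tooz, e.g. etcd or zookeeper
    backend_url = etcd3+http://controller-vip:2379

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf

The important part is that every cinder-volume service that should belong to
the same cluster uses the same "cluster" value, and that "host" and
"backend_host" are left unset.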

> 
> Thanks,
> Eugen
> 
> Zitat von Gorka Eguileor <geguileo at redhat.com>:
> 
> > On 26/08, Eugen Block wrote:
> > > Hi,
> > > 
> > > Just in addition to the previous response, cinder-volume is a stateful
> > > service and there should be only one instance running. We configured it to
> > > be bound to the virtual IP controlled by Pacemaker; Pacemaker also controls
> > > all stateless services in our environment, although that wouldn't be necessary.
> > > But that way we have all resources in one place and don't need to
> > > distinguish.
> > > 
> > 
> > Hi,
> > 
> > Small clarification, cinder-volume is not actually stateful.
> > 
> > It is true that historically the cinder-volume service only supported
> > High Availability in Active-Passive mode and required setting the
> > "host" or "backend_host" configuration option to the same value on
> > all the controller nodes, so on failover the newly started service
> > would consider existing resources as its own.
> > 
> > Currently cinder-volume does support Active-Active HA, though not all
> > drivers support this configuration.  Besides using a driver that
> > supports A/A, a DLM is also required and needs to be configured in
> > Cinder; finally, the "host" and "backend_host" options must not be
> > set and the "cluster" option should be configured instead.
> > 
> > Cheers,
> > Gorka.
> > 
> > 
> > > 
> > > Zitat von Satish Patel <satish.txt at gmail.com>:
> > > 
> > > > Hi,
> > > > 
> > > > The 3-node requirement comes from MySQL Galera and RabbitMQ
> > > > clustering because of quorum requirements (it should be an odd
> > > > number: 1, 3, 5, etc.). The rest of the components work without
> > > > clustering and they live behind the HAProxy LB for load sharing
> > > > and redundancy.
> > > > 
> > > > Someone else can add more details here if I missed something.
> > > > 
> > > > On Thu, Aug 25, 2022 at 4:05 AM 박경원 <park0kyung0won at dgist.ac.kr> wrote:
> > > > 
> > > > > Hello
> > > > > 
> > > > > 
> > > > > I have two questions about deploying openstack in high available setup
> > > > > 
> > > > > Specifically, HA setup for controller nodes
> > > > > 
> > > > > 
> > > > > 1. Are openstack services (being deployed on controller nodes)
> > > > > stateless?
> > > > > 
> > > > > 
> > > > > Aside from non-openstack packages (galera/mysql, zeromq, ...) for
> > > > > infrastructure, are openstack services stateless?
> > > > > 
> > > > > For example, can I achieve high availability by deploying two nova-api
> > > > > services to two separate controller nodes
> > > > > by load balancing API calls to them through HAProxy?
> > > > > 
> > > > > Is this (load balancer) the way openstack achieves high availability?
> > > > > 
> > > > > 
> > > > > 
> > > > > 2. Why minimum 3 controller nodes for HA?
> > > > > 
> > > > > 
> > > > > Is this solely due to etcd?
> > > > > 
> > > > > 
> > > > > Thanks!
> > > > > 
> > > 
> > > 
> > > 
> > > 
> 
> 
> 
> 
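
On the original questions further down the thread: yes, the API services
(nova-api, cinder-api, glance-api, ...) are stateless and are normally load
balanced across the controllers with HAProxy. Purely as an illustrative
sketch (the addresses and server names are placeholders, not a recommended
production config):

    listen nova_api
        bind 192.168.0.100:8774
        balance roundrobin
        server controller1 192.168.0.11:8774 check
        server controller2 192.168.0.12:8774 check
        server controller3 192.168.0.13:8774 check

The 3-controller minimum is not about etcd specifically; it comes from the
quorum-based components (Galera, RabbitMQ, and etcd if you use it). Quorum
needs more than half of the members, so a 2-node cluster cannot survive the
loss of either node, while 3 nodes can tolerate one failure.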



