Thanks!!! It's helpful. We are using the openstack-ansible tool for deployment.

On Thu, Apr 6, 2023 at 12:44 PM Can Özyurt <acozyurt@gmail.com> wrote:
Hi Satish,

We have separate RMQ/DB nodes: I think 5 and 7 nodes respectively. Yet
all these nodes are listed under [control] as well, so they still get
other controller components deployed on them. However, I think the big
difference is that they are high-end bare-metal servers: the apps have
plenty of resources to run on. Big machines, easy life.

As to slowness: I have never noticed any slowness when working through
the APIs or Horizon. Maybe I got used to it and forgot how fast it used
to be. If you have strict response-time policies, it may not be ideal.
But since it's a private cloud, we consider it a simple trade-off.
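
If you want to put a number on it, a crude probe with openstacksdk is
enough. A minimal sketch; the "mycloud" entry in clouds.yaml is a
placeholder:

    import time

    import openstack

    # Placeholder cloud name from clouds.yaml.
    conn = openstack.connect(cloud="mycloud")

    # Time a couple of common control-plane calls and compare the numbers
    # between small and large deployments.
    for name, call in [
        ("list servers", lambda: list(conn.compute.servers(limit=50))),
        ("list networks", lambda: list(conn.network.networks())),
    ]:
        start = time.perf_counter()
        call()
        print(f"{name}: {time.perf_counter() - start:.2f}s")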

I don't want to jinx it for myself, but we haven't had an RMQ issue for
a while now. I have heard people taking care of small clusters complain
about RMQ too, so at this point I am not sure whether RMQ has a scaling
problem or it's just the luck of the draw.
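
One cheap thing that helps us: watching queue backlogs through the
RabbitMQ management API. A rough sketch; it assumes the management
plugin is enabled, and the host and credentials are placeholders:

    import requests

    # Placeholder host and credentials; 15672 is the management plugin's
    # default port.
    resp = requests.get(
        "http://rmq-host:15672/api/queues",
        auth=("monitoring", "secret"),
        timeout=10,
    )
    resp.raise_for_status()

    # A steadily growing backlog on reply_* or notification queues is an
    # early warning sign.
    for queue in resp.json():
        if queue.get("messages", 0) > 1000:
            print(queue["name"], queue["messages"])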

As I said above, it comes down to choice. We chose this way because we
can tolerate its shortcomings.

On Thu, 6 Apr 2023 at 18:37, Satish Patel <satish.txt@gmail.com> wrote:
>
> Hi Can,
>
> Thank you for sharing the information. I am surprised that you are running that many nodes with a single control plane. Could you share how many controllers you have? Do you have separate RabbitMQ/DB nodes, etc.?
>
> I have 350 compute nodes with 3x controller nodes and it works fine, but the problem is slowness: it feels a little slower compared to smaller environments. I also googled around, and most people suggested around 100 nodes per deployment for easy management.
>
> In my deployment all tenants are VLAN-based, so my network nodes are kind of useless and only there for DHCP. Mostly I am worried about RabbitMQ.
>
>
> On Thu, Apr 6, 2023 at 2:54 AM Can Özyurt <acozyurt@gmail.com> wrote:
>>
>> Hi,
>>
>> Technically yes. We have had more than a thousand nodes in a single
>> cell in production for years. But we are considering adding more cells
>> as we grow bigger and bigger.
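>>
>> If anyone wants to check their own layout, a quick sketch in Python,
>> run on a controller where nova-manage is available:
>>
>>     import subprocess
>>
>>     # List the Nova cell mappings; for us this still shows cell0 plus
>>     # one production cell holding every compute node.
>>     result = subprocess.run(
>>         ["nova-manage", "cell_v2", "list_cells"],
>>         capture_output=True, text=True, check=True,
>>     )
>>     print(result.stdout)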
>>
>> On Thu, 6 Apr 2023 at 03:25, Satish Patel <satish.txt@gmail.com> wrote:
>> >
>> > Hi Can,
>> >
> > You are saying a single control plane can handle 600 compute nodes?
>> >
>> > On Wed, Apr 5, 2023 at 5:36 PM Can Özyurt <acozyurt@gmail.com> wrote:
>> >>
>> >> It's totally okay to use regions as AZs if that's what you need,
>> >> unless you are planning to deploy in another city later: by then you
>> >> will have already exhausted the term region and will be left with
>> >> much more to deal with. Ultimately it comes down to needs and future
>> >> plans.
>> >>
>> >> Secondly, I think you should be fine with a single cell anyway, as
>> >> we have much more in one cell. However, experience may vary depending
>> >> on the volume and characteristics of the daily workload and the
>> >> design of the control plane, so you may want to seek guidance from
>> >> someone with more in-depth knowledge.
>> >>
>> >> Cheers.
>> >>
>> >> On Wed, 5 Apr 2023 at 18:47, Satish Patel <satish.txt@gmail.com> wrote:
>> >> >
>> >> > Folks,
>> >> >
>> >> > This is a very broad question and could have many answers, but I still wanted to ask to see what people are thinking and doing.
>> >> >
>> >> > We have a DC with multiple racks and around 600 nodes, so we are planning to create 3 private OpenStack clouds with 200 nodes each. In other data centers we have multiple 200-node OpenStack clusters and life is good. In the new datacenter I am thinking of deploying OpenStack the same way, but with Keystone shared between the environments so there is a single place for identity, and I am going to call them 3 regions even though they are in a single DC. I hope a shared Keystone makes Terraform work easier, e.g. selecting a region per environment.
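>> >> >
>> >> > Roughly what I have in mind with openstacksdk; a sketch only, and the "dc1" cloud entry and region names are placeholders:
>> >> >
>> >> >     import openstack
>> >> >
>> >> >     # Same clouds.yaml entry, i.e. the same shared Keystone; only
>> >> >     # region_name changes which catalog endpoints the client uses.
>> >> >     for region in ("region-1", "region-2", "region-3"):
>> >> >         conn = openstack.connect(cloud="dc1", region_name=region)
>> >> >         print(region, sum(1 for _ in conn.compute.hypervisors()))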
>> >> >
>> >> > The other option I was thinking about is to deploy cells v2 and put all 600 nodes in a single cloud, but I am worried from an operational point of view. I am not sure how many people are running cells v2 successfully without pain (except CERN).
>> >> >
>> >> > If you are running a multi-region OpenStack cloud, please share some thoughts and ideas to make life easier. I am not looking for the best solution, just clues on how to deploy and manage multiple OpenStack clouds.
>> >> >
>> >> > Thanks in advance.
>> >> > S
>> >> >
>> >> >