Hi, I have tried the following in globals.yml and it didn't create any network. I'm assuming your patch is needed for that, right? Or am I missing something?

# Trove (DBaaS)
enable_trove: "yes"
enable_cinder_backup: "no"
enable_cinder_backend_lvm: "no"
docker_disable_ip_forward: false
trove_mgmt_network:
  name: trove-mgmt-net
  provider_network_type: flat
  provider_physical_network: physnet1
  external: True
  shared: True
  subnet:
    name: trove-mgmt-subnet
    cidr: "192.168.100.0/24"
    gateway_ip: 192.168.100.1
    allocation_pool_start: 192.168.100.150
    allocation_pool_end: 192.168.100.199
    enable_dhcp: yes

On Sun, Jan 7, 2024 at 11:33 AM Satish Patel <satish.txt@gmail.com> wrote:

Thank you for the information.

Let me make sure I understand. You are saying I should create a trove-mgmt network that is attached to all controller services, and that same network is attached to the trove instances (guest agents), right? (Just like Octavia's lb-mgmt-net.)

Where do I tell trove to use the trove-mgmt network when spinning up an instance? (In which config file?)

Do I need your patch for that, or is it possible without it?

Second, I was wondering: why not expose rabbitmq/keystone etc. on a public IP using nginx, so the trove VMs can reach those public endpoints? What is the problem with that approach?

On Sun, Jan 7, 2024 at 5:16 AM W Ch <wchy1001@gmail.com> wrote:

Hi, please refer to this patch in kolla-ansible: https://review.opendev.org/c/openstack/kolla-ansible/+/863521/43/tests/templates/globals-default.j2

In production, we recommend creating a provider network (VLAN or flat) to be used as the trove management network. This network is responsible for access to rabbitmq, swift, keystone, and the docker registry.

For more information about the trove management network, please read this document: https://docs.openstack.org/trove/latest/admin/run_trove_in_production.html#management-network

Thanks.

On Jan 6, 2024, at 22:55, Satish Patel <satish.txt@gmail.com> wrote:

Folks,

I am trying to find kolla-ansible documentation on how to deploy trove in production following best practices, especially for how the guest agent (running inside the VM) talks to the RPC service.

How are other folks doing this in their environments? In most cases RPC is an internal service and not visible to any guest VMs. Looking for some guidance.
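
For anyone following along: the linked production guide describes telling Trove itself (not just Neutron) about the management network via the `management_networks` option in `trove.conf`, which lists Neutron network IDs to attach to every database instance. A minimal sketch of that fragment; the UUID is a placeholder, and the exact option placement should be checked against your Trove release:

```
# /etc/trove/trove.conf (fragment)
# Assumes the trove-mgmt-net provider network already exists in Neutron.
[DEFAULT]
# Neutron network ID(s) attached to every instance, used for
# guest-agent <-> RPC (rabbitmq) traffic. Placeholder value:
management_networks = <trove-mgmt-net-uuid>
```

With kolla-ansible, the patch referenced above is meant to template this for you; setting it by hand is only needed for a manual or pre-patch deployment.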