[openstack-ansible] [yoga] utility_container failure

Father Vlasie fv at spots.edu
Wed Aug 17 21:16:26 UTC 2022


Hello, 

I am very appreciative of your help!

I think my interface setup might be questionable.

I did not realise that the nodes need to talk to each other on the external IP. I thought that was only for communication with entities external to the cluster.

My bond0 is associated with br-vlan so I put the external IP there and set br-vlan as the external interface in user_variables.

The nodes can now ping each other on the external network.

This is how I have user_variables configured:

———

haproxy_keepalived_external_vip_cidr: "192.168.2.9/26"
haproxy_keepalived_internal_vip_cidr: "192.168.3.9/32"
haproxy_keepalived_external_interface: br-vlan
haproxy_keepalived_internal_interface: br-mgmt
haproxy_bind_external_lb_vip_address: 192.168.2.9
haproxy_bind_internal_lb_vip_address: 192.168.3.9

———
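
(As a sanity check once keepalived is up, I expect the VIPs to appear on those interfaces on whichever node is active; a quick check I plan to use, assuming the addresses above:

    ip -4 addr show br-vlan | grep 192.168.2.9
    ip -4 addr show br-mgmt | grep 192.168.3.9
)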

My IP addresses are configured thusly (one sample from each node type):

———

infra1
    bond0->br-vlan 192.168.2.13
    br-mgmt 192.168.3.13
    br-vxlan 192.168.30.13
    br-storage

compute1 
    br-vlan 
    br-mgmt 192.168.3.16
    br-vxlan 192.168.30.16
    br-storage 192.168.20.16

log1 
    br-vlan 
    br-mgmt 192.168.3.19
    br-vxlan 
    br-storage 

———

I have destroyed all of my containers and I am running setup-hosts again. 
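
(For reference, the sequence I am following from /opt/openstack-ansible/playbooks, as I understand the docs:

    openstack-ansible setup-hosts.yml
    openstack-ansible setup-infrastructure.yml
    openstack-ansible setup-openstack.yml
)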

Here's hoping it all turns out this time!

Very gratefully,

FV

> On Aug 16, 2022, at 7:31 PM, James Denton <james.denton at rackspace.com> wrote:
> 
> Hello,
>  
> >> If I am using bonding on the infra nodes, should the haproxy_keepalived_external_interface be the device name (enp1s0) or bond0?
> 
> This will likely be the bond0 interface and not the individual bond member. However, the interface defined here will ultimately depend on the networking of that host, and should be an externally facing one (i.e. the interface with the default gateway).
>  
> In many environments, you’ll have something like this (or using 2 bonds, but same idea):
>  
> 	• bond0 (192.168.100.5/24, gw 192.168.100.1)
> 		• em49
> 		• em50
> 	• br-mgmt (172.29.236.5/22)
> 		• bond0.236
> 	• br-vxlan (172.29.240.5/22)
> 		• bond0.240
> 	• br-storage (172.29.244.5/22)
> 		• bond0.244
>  
> In this example, bond0 has the management IP 192.168.100.5 and br-mgmt is the “container” bridge with an IP configured from the ‘container’ network (see cidr_networks in openstack_user_config.yml). FYI: LXC containers will automatically be assigned IPs from the ‘container’ network outside of the ‘used_ips’ range(s). The infra host will communicate with the containers via this br-mgmt interface.
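>  
> For illustration, the corresponding openstack_user_config.yml pieces for a layout like that might look roughly like this (a sketch only; the ranges are examples, not your actual values):
>  
> cidr_networks:
>   container: 172.29.236.0/22
>   tunnel: 172.29.240.0/22
>   storage: 172.29.244.0/22
> used_ips:
>   - "172.29.236.1,172.29.236.50"
>   - "172.29.240.1,172.29.240.50"
>   - "172.29.244.1,172.29.244.50"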
>  
> I’m using FQDNs for the VIPs, which are specified in openstack_user_config.yml here:
>  
> global_overrides:
>   internal_lb_vip_address: internalapi.openstack.rackspace.lab
>   external_lb_vip_address: publicapi.openstack.rackspace.lab
>  
> To avoid DNS resolution issues internally (or rather, to ensure the IP is configured in the config files and not the domain name) I’ll override with the IP and hard set the preferred interface(s):
>  
> haproxy_keepalived_external_vip_cidr: "192.168.100.10/32"
> haproxy_keepalived_internal_vip_cidr: "172.29.236.10/32"
> haproxy_keepalived_external_interface: bond0
> haproxy_keepalived_internal_interface: br-mgmt
> haproxy_bind_external_lb_vip_address: 192.168.100.10
> haproxy_bind_internal_lb_vip_address: 172.29.236.10
>  
> With the above configuration, keepalived will manage two VIPs - one external and one internal, and endpoints will have the FQDN rather than IP.
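>  
> (If you go the FQDN route, the names do still need to resolve from the deploy host and the containers; a minimal /etc/hosts sketch using the example names and VIPs above, or the equivalent DNS records:
>  
> 192.168.100.10  publicapi.openstack.rackspace.lab
> 172.29.236.10   internalapi.openstack.rackspace.lab
> )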
>  
> >> Curl shows "503 Service Unavailable No server is available to handle this request"
> 
> Hard to say without seeing logs why this is happening, but I will assume that keepalived is having issues binding the IP to the interface. You might find the reason in syslog or ‘journalctl -xe -f -u keepalived’.
>  
> >> Running "systemctl status var-www-repo.mount" gives an output of "Unit var-www-repo.mount could not be found."
> 
> You might try running ‘umount /var/www/repo’ and re-run the repo-install.yml playbook (or setup-infrastructure.yml).
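>  
> In other words, roughly this (the umount may simply report that nothing is mounted there, which is fine):
>  
> umount /var/www/repo
> openstack-ansible repo-install.yml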
>  
> Hope that helps!
>  
> James Denton
> Rackspace Private Cloud
>  
> From: Father Vlasie <fv at spots.edu>
> Date: Tuesday, August 16, 2022 at 4:31 PM
> To: James Denton <james.denton at rackspace.com>
> Cc: openstack-discuss at lists.openstack.org <openstack-discuss at lists.openstack.org>
> Subject: Re: [openstack-ansible] [yoga] utility_container failure
> 
> 
> 
> Hello,
> 
> Thank you very much for the reply!
> 
> haproxy and keepalived both show status active on infra1 (my primary node).
> 
> Curl shows "503 Service Unavailable No server is available to handle this request"
> 
> (Also the URL is http not https...)
> 
> If I am using bonding on the infra nodes, should the haproxy_keepalived_external_interface be the device name (enp1s0) or bond0?
> 
> Earlier in the output I find the following error (showing for all 3 infra nodes):
> 
> ------------
> 
> TASK [systemd_mount : Set the state of the mount] *****************************************************************************************************************************************
> fatal: [infra3_repo_container-7ca5db88]: FAILED! => {"changed": false, "cmd": "systemctl reload-or-restart $(systemd-escape -p --suffix=\"mount\" \"/var/www/repo\")", "delta": "0:00:00.022275", "end": "2022-08-16 14:16:34.926861", "msg": "non-zero return code", "rc": 1, "start": "2022-08-16 14:16:34.904586", "stderr": "Job for var-www-repo.mount failed.\nSee \"systemctl status var-www-repo.mount\" and \"journalctl -xe\" for details.", "stderr_lines": ["Job for var-www-repo.mount failed.", "See \"systemctl status var-www-repo.mount\" and \"journalctl -xe\" for details."], "stdout": "", "stdout_lines": []}
> 
> ——————
> 
> Running "systemctl status var-www-repo.mount" gives an output of "Unit var-www-repo.mount could not be found."
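> 
> (Possibly relevant: the failed task above ran inside infra3_repo_container-7ca5db88, so maybe the unit only exists in that container and the status has to be checked from there, e.g.:
> 
>     lxc-attach -n infra3_repo_container-7ca5db88 -- systemctl status var-www-repo.mount
> )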
> 
> Thank you again!
> 
> Father Vlasie
> 
> > On Aug 16, 2022, at 6:32 AM, James Denton <james.denton at rackspace.com> wrote:
> >
> > Hello,
> >
> > That error means the repo server at 192.168.3.9:8181 is unavailable. The repo server sits behind haproxy, which should be listening on 192.168.3.9 port 8181 on the active (primary) node. You can verify this by issuing a 'curl -v https://192.168.3.9:8181/'. You might check the haproxy service status and/or keepalived status to ensure they are operating properly. If the IP cannot be bound to the correct interface, keepalived may not start.
> >
> > James Denton
> > Rackspace Private Cloud
> >
> > From: Father Vlasie <fv at spots.edu>
> > Date: Tuesday, August 16, 2022 at 7:38 AM
> > To: openstack-discuss at lists.openstack.org <openstack-discuss at lists.openstack.org>
> > Subject: [openstack-ansible] [yoga] utility_container failure
> >
> >
> >
> > Hello everyone,
> >
> > I have happily progressed to the second step of running the playbooks, namely "openstack-ansible setup-infrastructure.yml"
> >
> > Everything looks good except for just one error which is mystifying me:
> >
> > ----------------
> >
> > TASK [Get list of repo packages] **********************************************************************************************************************************************************
> > fatal: [infra1_utility_container-5ec32cb5]: FAILED! => {"changed": false, "content": "", "elapsed": 30, "msg": "Status code was -1 and not [200]: Request failed: <urlopen error timed out>", "redirected": false, "status": -1, "url": "http://192.168.3.9:8181/constraints/upper_constraints_cached.txt"}
> >
> > ----------------
> >
> > 192.168.3.9 is the IP listed in user_variables.yml under haproxy_keepalived_internal_vip_cidr
> >
> > Any help or pointers would be very much appreciated!
> >
> > Thank you,
> >
> > Father Vlasie
> >
> 



