[Openstack] Unable to access guests in DevStack on OpenStack environment

Robert Collins robertc at robertcollins.net
Thu Mar 27 06:44:59 UTC 2014


A few things to consider (see the command sketch below):
 - you'll be running qemu, I presume, not nested KVM? That's very slow -
perhaps the VM hasn't booted. "nova console-log" will let you see
that, and also potentially other failures at boot etc.
 - you should see DHCP requests and responses in syslog on the control
node; if you don't, the VM definitely hasn't come up.
 - you should be able to ping the VM from the netns of the DHCP agent
(if DHCP is working, then this is a good place to start)
 - if DHCP isn't working, try using tcpdump on the compute nodes and
debug your overlay network (there are many blog posts on this)
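
Roughly, those checks look like this (a sketch - the instance name is
the one from your "nova list" below, the rest are placeholders to fill
in from your own environment; qdhcp-<net-id> is the namespace naming
convention):

        nova console-log foobar            # did the guest actually boot?
        grep -i dhcp /var/log/syslog       # DHCP traffic on the control node
        sudo ip netns exec qdhcp-<net-id> ping <guest-ip>
        sudo tcpdump -n -i eth0 ip proto 47   # GRE is IP protocol 47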

That all said - it's very odd that the VM got an IP on the *public*
network. I would have expected it to get an IP on the private network
and be exposed on the public one via a floating IP. The public network
probably has DHCP off (see neutron net-show and neutron subnet-show).
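
For example, with the IDs from your output below (the subnet ID is the
public one from your net-list; look at the enable_dhcp field):

        neutron net-show public
        neutron subnet-show d206940f-5daf-464d-bf28-4dac527aba06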

At a guess - you booted the VM as admin, rather than as the normal user
account, and so you got the public network as the network for the VM.
Try using the demo user account, or make your own user.
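
Something like this (a sketch; DevStack's openrc helper takes a user
and a project, you may need to re-create your keypair as the demo
user, and <image-id> and <floating-ip> are placeholders):

        source openrc demo demo
        nova boot --image <image-id> --flavor 1 --key-name mykey foobar
        neutron floatingip-create public
        nova add-floating-ip foobar <floating-ip>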

-Rob


On 27 March 2014 18:25, Juergen Brendel <juergen at brendel.com> wrote:
>
> Hello!
>
> I would be very grateful if someone could please help me to
> troubleshoot a connectivity issue: I cannot ping or SSH into guests I
> have created on top of DevStack.
>
> What might be a complicating factor is that I have the DevStack cluster
> running on guests that are themselves running on top of a base
> OpenStack setup, but most likely I'm just making a really silly mistake
> somewhere in my setup. I just need to find it. Or maybe there's just
> something odd about my setup that triggers some known issue?
>
> Here's what I'm working with:
>
>      1. A base OpenStack install (Havana), using VLANs and a couple of
>         servers. One controller host and two compute hosts. This is my
>         base-cluster. It is not based on DevStack, but just an ordinary
>         OpenStack install.
>      2. I spin up three guest machines ("nova boot..."). From the
>         controller host of the base-cluster I can log into those guests
>         without problem ("ip netns exec .... ssh ..."). These three
>         guests are my midlevel-cluster. The controller has private IP
>         address 10.5.5.2, the compute hosts have 10.5.5.5 and 10.5.5.6.
>      3. I log into the midlevel-cluster hosts and download and install
>         DevStack ("stable/havana") on them. The stack.sh script runs
>         without error or problem.
>      4. After stack.sh has run, I can see that a private and a public
>         network have been created for that DevStack installation. On
>         the controller I also set some security group rules to allow
>         ICMP and SSH to any guest instances created within DevStack.
>      5. I now create a guest on that DevStack cluster ("nova boot... "
>         again). This is my toplevel-guest. I can see ("nova list...")
>         that the guest has booted and that an IP address has been
>         assigned, but no matter what I do, I cannot ping or log in to
>         those toplevel-guests. I use the qrouter namespace to attempt
>         this ("ip netns exec qrouter-...."), but no luck.
>
> Some information which might be useful for troubleshooting:
>
> On the midlevel-cluster hosts, I use these localrc files to install
> DevStack (I'm just looking for simple GRE networking). It's a multi-node
> install, so first, here is the localrc for the controller:
>
>         # Passwords and tokens
>         ADMIN_PASSWORD=password
>         MYSQL_PASSWORD=password
>         RABBIT_PASSWORD=password
>         SERVICE_PASSWORD=password
>         SERVICE_TOKEN=tokentoken
>
>         # Logging, screen, devstack behavior
>         API_RATE_LIMIT=False
>         VERBOSE=True
>         DEBUG=True
>         LOGFILE=/home/ubuntu/tempest_run/workspace/stack.sh.log
>         USE_SCREEN=True
>         SCREEN_LOGDIR=/home/ubuntu/tempest_run/workspace
>         RECLONE=Yes
>         OFFLINE=False
>         LIBVIRT_TYPE=kvm
>
>         # Services
>         disable_service n-net
>         disable_service n-cpu
>         enable_service q-svc
>         enable_service q-agt
>         enable_service q-l3
>         enable_service q-meta
>         enable_service q-lbaas
>         enable_service q-dhcp
>         enable_service tempest
>         enable_service neutron
>
>         # Networking
>         ENABLE_TENANT_TUNNELS=True
>         Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
>         Q_AGENT_EXTRA_OVS_OPTS=(tenant_network_type=gre)
>         Q_SRV_EXTRA_OPTS=(tenant_network_type=gre)
>         Q_USE_NAMESPACE=True
>         Q_USE_SECGROUP=True
>
> Here is the localrc for the compute hosts:
>
>         # Passwords and tokens
>         ADMIN_PASSWORD=password
>         MYSQL_PASSWORD=password
>         RABBIT_PASSWORD=password
>         SERVICE_PASSWORD=password
>         SERVICE_TOKEN=tokentoken
>
>         # Logging, screen, devstack behavior
>         VERBOSE=True
>         DEBUG=True
>         LOGFILE=/home/ubuntu/tempest_run/workspace/stack.sh.log
>         USE_SCREEN=True
>         SCREEN_LOGDIR=/home/ubuntu/tempest_run/workspace
>         RECLONE=Yes
>         OFFLINE=False
>         LIBVIRT_TYPE=kvm
>
>         # Controller connection
>         HOST_IP=10.5.5.5
>         SERVICE_HOST=10.5.5.2
>         MYSQL_HOST=10.5.5.2
>         RABBIT_HOST=10.5.5.2
>         Q_HOST=10.5.5.2
>         GLANCE_HOSTPORT=10.5.5.2:9292
>
>         # Services
>         ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
>
>         # Networking
>         ENABLE_TENANT_TUNNELS=True
>         Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
>         Q_AGENT_EXTRA_OVS_OPTS=(tenant_network_type=gre)
>         Q_USE_NAMESPACE=True
>         Q_USE_SECGROUP=True
>
>
> After stack.sh has run to completion, I get the following on the
> DevStack controller:
>
>         $ ip netns
>         qrouter-293c2395-3a05-4ad7-99e5-e5b1ebb80a35
>
>         $ neutron net-list
>         +--------------------------------------+---------+------------------------------------------------------+
>         | id                                   | name    | subnets                                              |
>         +--------------------------------------+---------+------------------------------------------------------+
>         | 789cb9ce-9f91-4d0b-9069-6eb5b808bdfc | public  | d206940f-5daf-464d-bf28-4dac527aba06 172.24.4.224/28 |
>         | c8f179ba-6675-49f3-92ba-6e58b38f59c1 | private | b2bc098b-2b87-46a5-bb5e-7a75d9520c17 10.0.0.0/24     |
>         +--------------------------------------+---------+------------------------------------------------------+
>
> I create a guest instance (toplevel-guest), like so:
>
>         $ nova boot --image e09876f9-c755-48a8-ada7-c658f8736a9e --flavor 1 --key-name mykey foobar
>
> With "nova list" I see:
>
>         $ nova list
>         +--------------------------------------+--------+--------+------------+-------------+---------------------+
>         | ID                                   | Name   | Status | Task State | Power State | Networks            |
>         +--------------------------------------+--------+--------+------------+-------------+---------------------+
>         | 8794d3fd-8596-4ddb-bf20-d823b9804f0d | foobar | ACTIVE | -          | Running     | public=172.24.4.227 |
>         +--------------------------------------+--------+--------+------------+-------------+---------------------+
>
> The following routers are known to neutron:
>
>         $ neutron router-list
>         +--------------------------------------+---------+-----------------------------------------------------------------------------+
>         | id                                   | name    | external_gateway_info                                                       |
>         +--------------------------------------+---------+-----------------------------------------------------------------------------+
>         | 293c2395-3a05-4ad7-99e5-e5b1ebb80a35 | router1 | {"network_id": "789cb9ce-9f91-4d0b-9069-6eb5b808bdfc", "enable_snat": true} |
>         +--------------------------------------+---------+-----------------------------------------------------------------------------+
>
> The following ports are known to neutron:
>
>         $ neutron port-list
>         +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
>         | id                                   | name | mac_address       | fixed_ips                                                                           |
>         +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
>         | 22126185-ae99-42d0-8876-bef9a96ff5a1 |      | fa:16:3e:9d:be:fb | {"subnet_id": "d206940f-5daf-464d-bf28-4dac527aba06", "ip_address": "172.24.4.227"} |
>         | 8914b69b-b9f7-4555-bbcb-5af8ae0d340c |      | fa:16:3e:0d:fd:d2 | {"subnet_id": "b2bc098b-2b87-46a5-bb5e-7a75d9520c17", "ip_address": "10.0.0.1"}     |
>         | d5f45982-e7fa-49a7-a8b3-b3e7b93a227c |      | fa:16:3e:2a:6f:ff | {"subnet_id": "d206940f-5daf-464d-bf28-4dac527aba06", "ip_address": "172.24.4.226"} |
>         +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
>
> I can ping the address 172.24.4.226 (presumably the router), but not
> the address of the guest (172.24.4.227):
>
>         $ ping 172.24.4.227
>         PING 172.24.4.227 (172.24.4.227) 56(84) bytes of data.
>         From 172.24.4.225 icmp_seq=1 Destination Host Unreachable
>         From 172.24.4.225 icmp_seq=2 Destination Host Unreachable
>         From 172.24.4.225 icmp_seq=3 Destination Host Unreachable
>
> Even if I use the qrouter namespace, it still doesn't work:
>
>         $ sudo ip netns exec qrouter-293c2395-3a05-4ad7-99e5-e5b1ebb80a35 ping 172.24.4.227
>         PING 172.24.4.227 (172.24.4.227) 56(84) bytes of data.
>         From 172.24.4.226 icmp_seq=1 Destination Host Unreachable
>         From 172.24.4.226 icmp_seq=2 Destination Host Unreachable
>         From 172.24.4.226 icmp_seq=3 Destination Host Unreachable
>
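> (The "Destination Host Unreachable" replies come from the router's own
> address, 172.24.4.226, which means its ARP requests for 172.24.4.227
> go unanswered; a tcpdump inside the namespace would confirm that,
> e.g.:
>
>         $ sudo ip netns exec qrouter-293c2395-3a05-4ad7-99e5-e5b1ebb80a35 \
>                 tcpdump -n -i qg-d5f45982-e7 arp
> )
>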
> In my desperation, I tried security groups in nova as well as in
> neutron. The nova rules were added like this:
>
>         $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
>         $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
>
> The neutron rules were added like this:
>
>         $ neutron security-group-rule-create --protocol tcp --port-range-min 22 \
>                  --port-range-max 22 --direction ingress default
>         $ neutron security-group-rule-create --protocol icmp \
>                  --direction ingress foogroup
>
> But in both cases, no luck.
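>
> (For reference, the existing groups and rules can be listed with the
> standard client commands, to check which group the rules actually
> landed on:
>
>         $ neutron security-group-list
>         $ neutron security-group-rule-list
> )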
>
> I also tried bringing up the guest NIC on the private network. Still no
> luck.
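>
> (That attempt was roughly a boot with an explicit --nic, using the
> private network ID from the net-list above:
>
>         $ nova boot --image e09876f9-c755-48a8-ada7-c658f8736a9e --flavor 1 \
>                 --key-name mykey --nic net-id=c8f179ba-6675-49f3-92ba-6e58b38f59c1 foobar2
> )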
>
>
> On the DevStack controller host, the OVS config looks like this:
>
>         $ sudo ovs-vsctl show
>         2a2472f2-eebc-4214-8a57-bfc05a21ae26
>             Bridge br-int
>                 Port br-int
>                     Interface br-int
>                         type: internal
>                 Port "qr-8914b69b-b9"
>                     tag: 1
>                     Interface "qr-8914b69b-b9"
>                         type: internal
>                 Port patch-tun
>                     Interface patch-tun
>                         type: patch
>                         options: {peer=patch-int}
>             Bridge br-tun
>                 Port br-tun
>                     Interface br-tun
>                         type: internal
>                 Port "gre-10.5.5.5"
>                     Interface "gre-10.5.5.5"
>                         type: gre
>                         options: {in_key=flow, local_ip="10.5.5.2", out_key=flow, remote_ip="10.5.5.5"}
>                 Port patch-int
>                     Interface patch-int
>                         type: patch
>                         options: {peer=patch-tun}
>             Bridge br-ex
>                 Port "qg-d5f45982-e7"
>                     Interface "qg-d5f45982-e7"
>                         type: internal
>                 Port br-ex
>                     Interface br-ex
>                         type: internal
>             ovs_version: "1.4.6"
>
> On the DevStack compute hosts, it's like this:
>
>         $ sudo ovs-vsctl show
>         9b280f62-130c-4c12-89ea-0bc2fa22156e
>             Bridge br-int
>                 Port patch-tun
>                     Interface patch-tun
>                         type: patch
>                         options: {peer=patch-int}
>                 Port br-int
>                     Interface br-int
>                         type: internal
>             Bridge br-tun
>                 Port br-tun
>                     Interface br-tun
>                         type: internal
>                 Port "gre-10.5.5.2"
>                     Interface "gre-10.5.5.2"
>                         type: gre
>                         options: {in_key=flow, local_ip="10.5.5.5", out_key=flow, remote_ip="10.5.5.2"}
>                 Port patch-int
>                     Interface patch-int
>                         type: patch
>                         options: {peer=patch-tun}
>             ovs_version: "1.4.6"
>
>
> The network interfaces on the DevStack controller are like this:
>
>         $ ifconfig -a
>         br-ex     Link encap:Ethernet  HWaddr 96:bf:ae:d5:a0:44
>                   inet addr:172.24.4.225  Bcast:0.0.0.0  Mask:255.255.255.240
>                   inet6 addr: fe80::94bf:aeff:fed5:a044/64 Scope:Link
>                   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>                   RX packets:6 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:0
>                   RX bytes:468 (468.0 B)  TX bytes:468 (468.0 B)
>
>         br-int    Link encap:Ethernet  HWaddr 62:28:61:90:0b:47
>                   BROADCAST MULTICAST  MTU:1500  Metric:1
>                   RX packets:6 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:0
>                   RX bytes:468 (468.0 B)  TX bytes:0 (0.0 B)
>
>         br-tun    Link encap:Ethernet  HWaddr 0e:f3:a7:96:91:43
>                   BROADCAST MULTICAST  MTU:1500  Metric:1
>                   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:0
>                   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>
>         eth0      Link encap:Ethernet  HWaddr fa:16:3e:dc:7e:82
>                   inet addr:10.5.5.2  Bcast:10.5.5.255  Mask:255.255.255.0
>                   inet6 addr: fe80::f816:3eff:fedc:7e82/64 Scope:Link
>                   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>                   RX packets:438220 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:180410 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:1000
>                   RX bytes:580740049 (580.7 MB)  TX bytes:14956005 (14.9 MB)
>
>         lo        Link encap:Local Loopback
>                   inet addr:127.0.0.1  Mask:255.0.0.0
>                   inet6 addr: ::1/128 Scope:Host
>                   UP LOOPBACK RUNNING  MTU:16436  Metric:1
>                   RX packets:84427 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:84427 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:0
>                   RX bytes:51372553 (51.3 MB)  TX bytes:51372553 (51.3 MB)
>
> On the compute hosts, they are like this:
>
>         $ ifconfig -a
>         br-int    Link encap:Ethernet  HWaddr c2:25:18:4b:8b:49
>                   BROADCAST MULTICAST  MTU:1500  Metric:1
>                   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:0
>                   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>
>         br-tun    Link encap:Ethernet  HWaddr 1e:27:74:ee:a1:45
>                   BROADCAST MULTICAST  MTU:1500  Metric:1
>                   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:0
>                   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>
>         eth0      Link encap:Ethernet  HWaddr fa:16:3e:07:fd:48
>                   inet addr:10.5.5.5  Bcast:10.5.5.255  Mask:255.255.255.0
>                   inet6 addr: fe80::f816:3eff:fe07:fd48/64 Scope:Link
>                   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>                   RX packets:309885 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:134774 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:1000
>                   RX bytes:422666082 (422.6 MB)  TX bytes:10489797 (10.4 MB)
>
>         lo        Link encap:Local Loopback
>                   inet addr:127.0.0.1  Mask:255.0.0.0
>                   inet6 addr: ::1/128 Scope:Host
>                   UP LOOPBACK RUNNING  MTU:16436  Metric:1
>                   RX packets:159 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:159 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:0
>                   RX bytes:10580 (10.5 KB)  TX bytes:10580 (10.5 KB)
>
>         virbr0    Link encap:Ethernet  HWaddr 9a:21:30:f2:8a:3b
>                   inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
>                   UP BROADCAST MULTICAST  MTU:1500  Metric:1
>                   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:0
>                   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>
>
> If anyone has any idea what I could possibly be doing wrong, I would be
> very grateful for any explanation.
>
> Thank you very much...
>
> Juergen
>
>
>



-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud



