[Openstack] Unable to access guests in DevStack on OpenStack environment
Juergen Brendel
juergen at brendel.com
Thu Mar 27 09:49:23 UTC 2014
Hello!
Thank you for your reply.
On Thu, 2014-03-27 at 19:44 +1300, Robert Collins wrote:
> A few things to consider
> - you'll be running qemu I presume, not nested kvm? Thats very slow -
> perhaps the VM hasn't booted. nova console-log will let you see that
> and also potentially other failures at boot etc.
Good idea with the console-log. I do run nested KVM, but I have now
confirmed that the instance finishes booting, so that's not the issue.
However, I noticed this output in the console. Note that no IP address
is listed for eth0 in the 'ifconfig' output:
...
failed 15/20: up 213.60. request failed
failed 16/20: up 215.66. request failed
failed 17/20: up 217.72. request failed
failed 18/20: up 219.78. request failed
failed 19/20: up 221.83. request failed
failed 20/20: up 223.88. request failed
failed to read iid from metadata. tried 20
no results found for mode=net. up 225.94. searched: nocloud configdrive ec2
failed to get instance-id of datasource
Starting dropbear sshd: generating rsa key... generating dsa key... OK
=== network info ===
if-info: lo,up,127.0.0.1,8,::1
if-info: eth0,up,,8,fe80::f816:3eff:feea:465c
=== datasource: None None ===
=== cirros: current=0.3.1 uptime=227.17 ===
route: fscanf
=== pinging gateway failed, debugging connection ===
############ debug start ##############
### /etc/init.d/sshd start
Starting dropbear sshd: OK
route: fscanf
### ifconfig -a
eth0 Link encap:Ethernet HWaddr FA:16:3E:EA:46:5C
inet6 addr: fe80::f816:3eff:feea:465c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:15 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1094 (1.0 KiB) TX bytes:1520 (1.4 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:12 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1020 (1020.0 B) TX bytes:1020 (1020.0 B)
### route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
route: fscanf
### cat /etc/resolv.conf
cat: can't open '/etc/resolv.conf': No such file or directory
### gateway not found
/sbin/cirros-status: line 1: can't open /etc/resolv.conf: no such file
### pinging nameservers
### uname -a
Linux cirros 3.2.0-37-virtual #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 2013 x86_64 GNU/Linux
### lsmod
Module Size Used by Not tainted
nls_iso8859_1 12713 0
nls_cp437 16991 0
vfat 17585 0
fat 61512 1 vfat
isofs 40257 0
ip_tables 27473 0
x_tables 29891 1 ip_tables
pcnet32 42119 0
8139cp 27409 0
ne2k_pci 13691 0
8390 18856 1 ne2k_pci
e1000 108589 0
acpiphp 24231 0
### dmesg | tail
[ 1.885580] 8139cp: 8139cp: 10/100 PCI Ethernet driver v1.3 (Mar 22, 2004)
[ 1.899859] pcnet32: pcnet32.c:v1.35 21.Apr.2008 tsbogend at alpha.franken.de
[ 1.913241] ip_tables: (C) 2000-2006 Netfilter Core Team
[ 1.984750] kjournald starting. Commit interval 5 seconds
[ 1.984761] EXT3-fs (vda): mounted filesystem with ordered data mode
[ 2.067740] kjournald starting. Commit interval 5 seconds
[ 2.177773] EXT3-fs (vda): using internal journal
[ 2.177779] EXT3-fs (vda): mounted filesystem with ordered data mode
[ 3.569157] EXT3-fs (vda): using internal journal
[ 15.280373] eth0: no IPv6 routers present
### tail -n 25 /var/log/messages
Mar 27 03:25:14 cirros kern.debug kernel: [ 1.073181] pnp 00:00: [io 0x0d00-0xffff window]
Mar 27 03:25:14 cirros kern.debug kernel: [ 1.073183] pnp 00:00: [mem 0x000a0000-0x000bffff window]
Mar 27 03:25:14 cirros kern.debug kernel: [ 1.073184] pnp 00:00: [mem 0xe0000000-0xfebfffff window]
Mar 27 03:25:14 cirros kern.debug kernel: [ 1.073219] pnp 00:00: Plug and Play ACPI device, IDs PNP0a03 (active)
...
...
Mar 27 03:25:14 cirros kern.debug kernel: [ 1.074031] pnp 00:05: Plug and Play ACPI device, IDs PNP0501 (active)
Mar 27 03:25:14 cirros kern.debug kernel: [ 1.074116] pnp 00:06: [io 0x02f8-0x02ff]
Mar 27 03:25:26 cirros kern.debug kernel: [ 15.280373] eth0: no IPv6 routers present
Mar 27 03:28:57 cirros authpriv.info dropbear[291]: Running in background
############ debug end ##############
  ____               ____  ____
 / __/ __ ____ ____ / __ \/ __/
/ /__ / // __// __// /_/ /\ \
\___//_//_/ /_/  \____/___/
http://cirros-cloud.net
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login:
So, while the instance boots, it doesn't believe it has any IPv4 address
at all (eth0 only carries a link-local IPv6 address). That is of course
a problem. What could cause something like this?
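For what it's worth, one thing I can still try from the cirros console is to re-run the DHCP client by hand and watch whether any reply ever comes back. A rough sketch (udhcpc is the busybox DHCP client that the cirros image ships; the exact path may differ):

```shell
# From the cirros console (login: cirros / cubswin:)), retry DHCP by hand.
# -i selects the interface, -n makes udhcpc exit if no lease is obtained,
# -q makes it quit after obtaining one.
sudo udhcpc -i eth0 -n -q
# If this times out, the DHCPDISCOVER is not reaching dnsmasq (or the
# reply is not making it back), which points at the overlay network
# rather than at the guest itself.
```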
> - you should see DHCP requests and responses in syslog on the control
> node, if you don't the VM definitely hasn't come up.
Well, I do see some DHCP requests; however, those are just the log
entries for the controller node renewing its own address via DHCP. Here
is what I saw in the controller node's syslog:
Mar 27 09:25:45 jaxkia-controller ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- --if-exists del-port tap4696f888-5e -- add-port br-int tap4696f888-5e -- set Interface tap4696f888-5e type=internal -- set Interface tap4696f888-5e external-ids:iface-id=4696f888-5e7f-4a47-b171-44b9dcf7a0e3 -- set Interface tap4696f888-5e external-ids:iface-status=active -- set Interface tap4696f888-5e external-ids:attached-mac=fa:16:3e:1a:50:b1
Mar 27 09:25:45 jaxkia-controller kernel: [ 6273.435010] device tap4696f888-5e entered promiscuous mode
Mar 27 09:25:47 jaxkia-controller ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2 set Port tap4696f888-5e tag=1
Mar 27 09:25:47 jaxkia-controller dnsmasq[13621]: started, version 2.59 cachesize 150
Mar 27 09:25:47 jaxkia-controller dnsmasq[13621]: compile time options: IPv6 GNU-getopt DBus i18n DHCP TFTP conntrack IDN
Mar 27 09:25:47 jaxkia-controller dnsmasq[13621]: warning: no upstream servers configured
Mar 27 09:25:47 jaxkia-controller dnsmasq-dhcp[13621]: DHCP, static leases only on 10.0.0.0, lease time 1d
Mar 27 09:25:47 jaxkia-controller dnsmasq[13621]: cleared cache
Mar 27 09:25:47 jaxkia-controller dnsmasq-dhcp[13621]: read /opt/stack/data/neutron/dhcp/e88962a5-e60e-48ef-b74d-4ea656a4f246/host
Mar 27 09:25:47 jaxkia-controller dnsmasq-dhcp[13621]: read /opt/stack/data/neutron/dhcp/e88962a5-e60e-48ef-b74d-4ea656a4f246/opts
Mar 27 09:25:47 jaxkia-controller dnsmasq[13621]: cleared cache
Mar 27 09:25:47 jaxkia-controller dnsmasq-dhcp[13621]: read /opt/stack/data/neutron/dhcp/e88962a5-e60e-48ef-b74d-4ea656a4f246/host
Mar 27 09:25:47 jaxkia-controller dnsmasq-dhcp[13621]: read /opt/stack/data/neutron/dhcp/e88962a5-e60e-48ef-b74d-4ea656a4f246/opts
Mar 27 09:25:48 jaxkia-controller ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2 set Port tap4696f888-5e tag=1
Mar 27 09:25:49 jaxkia-controller ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2 set Port tap4696f888-5e tag=1
Mar 27 09:25:57 jaxkia-controller kernel: [ 6285.104023] tap4696f888-5e: no IPv6 routers present
Mar 27 09:26:10 jaxkia-controller dhclient: DHCPREQUEST of 10.5.5.6 on eth0 to 10.5.5.3 port 67
Mar 27 09:26:10 jaxkia-controller dhclient: DHCPACK of 10.5.5.6 from 10.5.5.3
Mar 27 09:26:10 jaxkia-controller dhclient: bound to 10.5.5.6 -- renewal in 53 seconds.
...
The 10.5.5.6 address belongs to the controller node itself, so that's
something else. However, the first line in the excerpt above is from
ovs-vsctl: it plugs a tap port into br-int with the MAC address
"fa:16:3e:1a:50:b1", which is exactly the address of the cirros
toplevel-guest that never gets an IP address. So, Neutron clearly wants
to do something with that port, but something doesn't work out.
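To double-check that the tap port really got wired into br-int with a sane local VLAN tag, something along these lines on the controller should help (the port name is taken from the log excerpt above; this is just a sketch):

```shell
# Confirm the tap port is attached to br-int and carries a VLAN tag.
sudo ovs-vsctl show | grep -A 3 tap4696f888-5e
# Inspect the tunnel bridge's flow table; with GRE tenant tunnels there
# should be flows translating between the local VLAN tag and the tunnel key.
sudo ovs-ofctl dump-flows br-tun
```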
> - you should be able to ping the VM from the netns of the dhcp agent
> (if dhcp is working then this is a good place to start)
The DHCP namespace exists, but pinging isn't possible. No surprise,
considering that the guest never gets an IP address.
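For completeness, this is roughly how I test from the DHCP agent's namespace (the network ID is the one dnsmasq logged above; the guest address is a placeholder for whatever neutron assigned):

```shell
# List the namespaces; there should be a qdhcp-<network-id> entry.
ip netns | grep qdhcp
# Ping the guest's fixed IP from inside the DHCP namespace.
# (10.0.0.3 is illustrative; substitute the address from "nova list".)
sudo ip netns exec qdhcp-e88962a5-e60e-48ef-b74d-4ea656a4f246 ping -c 3 10.0.0.3
```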
> - if dhcp isn't working, try using tcp dump on the compute nodes and
> debug your overlay network (there are many blog posts on this)
I guess that would be the next step.
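In case it helps anyone following along, this is the kind of thing I would run (a sketch; the tap device name below is illustrative, the real one is derived from the neutron port ID):

```shell
# On the compute host: watch for the guest's DHCP broadcasts on its tap device.
sudo tcpdump -n -e -i tapXXXXXXXX-XX port 67 or port 68
# On either host: check whether the DHCP request makes it into the GRE tunnel.
sudo tcpdump -n -i eth0 ip proto gre
```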
> That all said - its very odd that the vm got an ip on the *public*
> network. I would have expected it to get an ip on the private network
> and be exposed on the public one via a floating ip. the public network
> probably has dhcp off (see neutron net-show and neutron subnet-show).
>
> At a guess - you booted the vm as admin, rather than the user account,
> and you got the public network as the network for the VM. Try using
> the demo user account, or make your own user.
Yes, you are right. I changed my setup so that I now create the VM as
the demo user, and it promptly gets an IP address assigned from the
private network. Well, nova and neutron think the address is assigned,
but as we saw from the console-log, the guest never actually received
it...
Maybe the above additional information is useful in determining where to
look next?
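In the meantime I'll also verify that DHCP is actually enabled on the private subnet, along these lines (subnet ID from the net-list output below; the --device_owner filter is how I believe the Havana-era neutron client exposes field filters):

```shell
# Check that the private subnet has DHCP enabled; the output should
# show "enable_dhcp | True".
neutron subnet-show b2bc098b-2b87-46a5-bb5e-7a75d9520c17
# Also confirm that a DHCP port exists on the network.
neutron port-list --device_owner network:dhcp
```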
At any rate, I really appreciate you taking the time trying to help me.
Sincerely,
Juergen
> -Rob
>
>
> On 27 March 2014 18:25, Juergen Brendel <juergen at brendel.com> wrote:
> >
> > Hello!
> >
> > I would be very grateful if someone could please help me to
> > troubleshoot a connectivity issue: I cannot ping or SSH into guests I
> > have created on top of DevStack.
> >
> > What might be a complicating factor is that I have the DevStack cluster
> > running on guests that are running on top of a base OpenStack setup, but
> > most likely, I assume, I'm just making a really silly mistake in my
> > setup somewhere. I just need to find it. Or maybe, there's just
> > something odd about my setup that triggers some known issues?
> >
> > Here's what I'm working with:
> >
> > 1. A base OpenStack install (Havana), using VLANs and a couple of
> > servers. One controller host and two compute hosts. This is my
> > base-cluster. It is not based on DevStack, but just an ordinary
> > OpenStack install.
> > 2. I spin up three guest machines ("nova boot..."). From the
> > controller host of the base-cluster I can log into those guests
> > without problem ("ip netns exec .... ssh ..."). These three
> > guests are my midlevel-cluster. The controller has private IP
> > address 10.5.5.2, the compute hosts have 10.5.5.5 and 10.5.5.6.
> > 3. I log into the midlevel-cluster hosts and download and install
> > DevStack ("stable/havana") on them. The stack.sh script runs
> > without error or problem.
> > 4. After stack.sh has run, I can see that a private and public
> > network have been created for that DevStack installation. On the
> > controller I also set some security group rules, to allow ICMP
> > and SSH to any guest instances that should be created within
> > DevStack.
> > 5. I now create a guest on that DevStack cluster ("nova boot... "
> > again). This is my toplevel-guest. I can see ("nova list...")
> > that the guest has booted and that an IP address has been
> > assigned, but no matter what I do, I cannot ping or login to
> > those toplevel-guests. I use the qrouter namespace to attempt
> > this ("ip netns exec qrouter-...."), but no luck.
> >
> > Some information which might be useful for troubleshooting:
> >
> > On the midlevel-cluster hosts, I use these localrc files to install
> > DevStack (I'm just looking for simple GRE networking). It's a multi-node
> > install, so first, here is the localrc for the controller:
> >
> > # Passwords and tokens
> > ADMIN_PASSWORD=password
> > MYSQL_PASSWORD=password
> > RABBIT_PASSWORD=password
> > SERVICE_PASSWORD=password
> > SERVICE_TOKEN=tokentoken
> >
> > # Logging, screen, devstack behavior
> > API_RATE_LIMIT=False
> > VERBOSE=True
> > DEBUG=True
> > LOGFILE=/home/ubuntu/tempest_run/workspace/stack.sh.log
> > USE_SCREEN=True
> > SCREEN_LOGDIR=/home/ubuntu/tempest_run/workspace
> > RECLONE=Yes
> > OFFLINE=False
> > LIBVIRT_TYPE=kvm
> >
> > # Services
> > disable_service n-net
> > disable_service n-cpu
> > enable_service q-svc
> > enable_service q-agt
> > enable_service q-l3
> > enable_service q-meta
> > enable_service q-lbaas
> > enable_service q-dhcp
> > enable_service tempest
> > enable_service neutron
> >
> > # Networking
> > ENABLE_TENANT_TUNNELS=True
> > Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
> > Q_AGENT_EXTRA_OVS_OPTS=(tenant_network_type=gre)
> > Q_SRV_EXTRA_OPTS=(tenant_network_type=gre)
> > Q_USE_NAMESPACE=True
> > Q_USE_SECGROUP=True
> >
> > Here is the localrc for the compute hosts:
> >
> > # Passwords and tokens
> > ADMIN_PASSWORD=password
> > MYSQL_PASSWORD=password
> > RABBIT_PASSWORD=password
> > SERVICE_PASSWORD=password
> > SERVICE_TOKEN=tokentoken
> >
> > # Logging, screen, devstack behavior
> > VERBOSE=True
> > DEBUG=True
> > LOGFILE=/home/ubuntu/tempest_run/workspace/stack.sh.log
> > USE_SCREEN=True
> > SCREEN_LOGDIR=/home/ubuntu/tempest_run/workspace
> > RECLONE=Yes
> > OFFLINE=False
> > LIBVIRT_TYPE=kvm
> >
> > # Controller connection
> > HOST_IP=10.5.5.5
> > SERVICE_HOST=10.5.5.2
> > MYSQL_HOST=10.5.5.2
> > RABBIT_HOST=10.5.5.2
> > Q_HOST=10.5.5.2
> > GLANCE_HOSTPORT=10.5.5.2:9292
> >
> > # Services
> > ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
> >
> > # Networking
> > ENABLE_TENANT_TUNNELS=True
> > Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
> > Q_AGENT_EXTRA_OVS_OPTS=(tenant_network_type=gre)
> > Q_USE_NAMESPACE=True
> > Q_USE_SECGROUP=True
> >
> >
> > After stack.sh has run to completion, I get the following on the
> > DevStack controller:
> >
> > $ ip netns
> > qrouter-293c2395-3a05-4ad7-99e5-e5b1ebb80a35
> >
> > $ neutron net-list
> > +--------------------------------------+---------+------------------------------------------------------+
> > | id | name | subnets |
> > +--------------------------------------+---------+------------------------------------------------------+
> > | 789cb9ce-9f91-4d0b-9069-6eb5b808bdfc | public | d206940f-5daf-464d-bf28-4dac527aba06 172.24.4.224/28 |
> > | c8f179ba-6675-49f3-92ba-6e58b38f59c1 | private | b2bc098b-2b87-46a5-bb5e-7a75d9520c17 10.0.0.0/24 |
> > +--------------------------------------+---------+------------------------------------------------------+
> >
> > I create a guest instance (toplevel-guest), like so:
> >
> > $ nova boot --image e09876f9-c755-48a8-ada7-c658f8736a9e --flavor 1 --key-name mykey foobar
> >
> > With "nova list" I see:
> >
> > $ nova list
> > +--------------------------------------+--------+--------+------------+-------------+---------------------+
> > | ID | Name | Status | Task State | Power State | Networks |
> > +--------------------------------------+--------+--------+------------+-------------+---------------------+
> > | 8794d3fd-8596-4ddb-bf20-d823b9804f0d | foobar | ACTIVE | - | Running | public=172.24.4.227 |
> > +--------------------------------------+--------+--------+------------+-------------+---------------------+
> >
> > The following routers are known to neutron:
> >
> > $ neutron router-list
> > +--------------------------------------+---------+-----------------------------------------------------------------------------+
> > | id | name | external_gateway_info |
> > +--------------------------------------+---------+-----------------------------------------------------------------------------+
> > | 293c2395-3a05-4ad7-99e5-e5b1ebb80a35 | router1 | {"network_id": "789cb9ce-9f91-4d0b-9069-6eb5b808bdfc", "enable_snat": true} |
> > +--------------------------------------+---------+-----------------------------------------------------------------------------+
> >
> > The following ports are known to neutron:
> >
> > $ neutron port-list
> > +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
> > | id | name | mac_address | fixed_ips |
> > +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
> > | 22126185-ae99-42d0-8876-bef9a96ff5a1 | | fa:16:3e:9d:be:fb | {"subnet_id": "d206940f-5daf-464d-bf28-4dac527aba06", "ip_address": "172.24.4.227"} |
> > | 8914b69b-b9f7-4555-bbcb-5af8ae0d340c | | fa:16:3e:0d:fd:d2 | {"subnet_id": "b2bc098b-2b87-46a5-bb5e-7a75d9520c17", "ip_address": "10.0.0.1"} |
> > | d5f45982-e7fa-49a7-a8b3-b3e7b93a227c | | fa:16:3e:2a:6f:ff | {"subnet_id": "d206940f-5daf-464d-bf28-4dac527aba06", "ip_address": "172.24.4.226"} |
> > +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
> >
> > I can ping the address 172.24.4.226 (presumably the router), but not the address of the guest (172.24.4.227).
> >
> > $ ping 172.24.4.227
> > PING 172.24.4.227 (172.24.4.227) 56(84) bytes of data.
> > From 172.24.4.225 icmp_seq=1 Destination Host Unreachable
> > From 172.24.4.225 icmp_seq=2 Destination Host Unreachable
> > From 172.24.4.225 icmp_seq=3 Destination Host Unreachable
> >
> > Even if I use the qrouter namespace, it still doesn't work:
> >
> > $ sudo ip netns exec qrouter-293c2395-3a05-4ad7-99e5-e5b1ebb80a35 ping 172.24.4.227
> > PING 172.24.4.227 (172.24.4.227) 56(84) bytes of data.
> > From 172.24.4.226 icmp_seq=1 Destination Host Unreachable
> > From 172.24.4.226 icmp_seq=2 Destination Host Unreachable
> > From 172.24.4.226 icmp_seq=3 Destination Host Unreachable
> >
> > In my desperation, I tried security groups in nova as well as in neutron.
> > The nova rules were added like this:
> >
> > $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
> > $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
> >
> > The neutron rules were added like this:
> >
> > $ neutron security-group-rule-create --protocol tcp --port-range-min 22 \
> > --port-range-max 22 --direction ingress default
> > $ neutron security-group-rule-create --protocol icmp \
> > --direction ingress foogroup
> >
> > But in both cases, no luck.
> >
> > I also tried bringing up the guest NIC on the private network. Still no
> > luck.
> >
> >
> > On the DevStack controller host, the OVS config looks like this:
> >
> > $ sudo ovs-vsctl show
> > 2a2472f2-eebc-4214-8a57-bfc05a21ae26
> > Bridge br-int
> > Port br-int
> > Interface br-int
> > type: internal
> > Port "qr-8914b69b-b9"
> > tag: 1
> > Interface "qr-8914b69b-b9"
> > type: internal
> > Port patch-tun
> > Interface patch-tun
> > type: patch
> > options: {peer=patch-int}
> > Bridge br-tun
> > Port br-tun
> > Interface br-tun
> > type: internal
> > Port "gre-10.5.5.5"
> > Interface "gre-10.5.5.5"
> > type: gre
> > options: {in_key=flow, local_ip="10.5.5.2", out_key=flow, remote_ip="10.5.5.5"}
> > Port patch-int
> > Interface patch-int
> > type: patch
> > options: {peer=patch-tun}
> > Bridge br-ex
> > Port "qg-d5f45982-e7"
> > Interface "qg-d5f45982-e7"
> > type: internal
> > Port br-ex
> > Interface br-ex
> > type: internal
> > ovs_version: "1.4.6"
> >
> > On the DevStack compute hosts, it's like this:
> >
> > $ sudo ovs-vsctl show
> > 9b280f62-130c-4c12-89ea-0bc2fa22156e
> > Bridge br-int
> > Port patch-tun
> > Interface patch-tun
> > type: patch
> > options: {peer=patch-int}
> > Port br-int
> > Interface br-int
> > type: internal
> > Bridge br-tun
> > Port br-tun
> > Interface br-tun
> > type: internal
> > Port "gre-10.5.5.2"
> > Interface "gre-10.5.5.2"
> > type: gre
> > options: {in_key=flow, local_ip="10.5.5.5", out_key=flow, remote_ip="10.5.5.2"}
> > Port patch-int
> > Interface patch-int
> > type: patch
> > options: {peer=patch-tun}
> > ovs_version: "1.4.6"
> >
> >
> > The network interfaces on the DevStack controller are like this:
> >
> > $ ifconfig -a
> > br-ex Link encap:Ethernet HWaddr 96:bf:ae:d5:a0:44
> > inet addr:172.24.4.225 Bcast:0.0.0.0 Mask:255.255.255.240
> > inet6 addr: fe80::94bf:aeff:fed5:a044/64 Scope:Link
> > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> > RX packets:6 errors:0 dropped:0 overruns:0 frame:0
> > TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:0
> > RX bytes:468 (468.0 B) TX bytes:468 (468.0 B)
> >
> > br-int Link encap:Ethernet HWaddr 62:28:61:90:0b:47
> > BROADCAST MULTICAST MTU:1500 Metric:1
> > RX packets:6 errors:0 dropped:0 overruns:0 frame:0
> > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:0
> > RX bytes:468 (468.0 B) TX bytes:0 (0.0 B)
> >
> > br-tun Link encap:Ethernet HWaddr 0e:f3:a7:96:91:43
> > BROADCAST MULTICAST MTU:1500 Metric:1
> > RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:0
> > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
> >
> > eth0 Link encap:Ethernet HWaddr fa:16:3e:dc:7e:82
> > inet addr:10.5.5.2 Bcast:10.5.5.255 Mask:255.255.255.0
> > inet6 addr: fe80::f816:3eff:fedc:7e82/64 Scope:Link
> > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> > RX packets:438220 errors:0 dropped:0 overruns:0 frame:0
> > TX packets:180410 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:1000
> > RX bytes:580740049 (580.7 MB) TX bytes:14956005 (14.9 MB)
> >
> > lo Link encap:Local Loopback
> > inet addr:127.0.0.1 Mask:255.0.0.0
> > inet6 addr: ::1/128 Scope:Host
> > UP LOOPBACK RUNNING MTU:16436 Metric:1
> > RX packets:84427 errors:0 dropped:0 overruns:0 frame:0
> > TX packets:84427 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:0
> > RX bytes:51372553 (51.3 MB) TX bytes:51372553 (51.3 MB)
> >
> > On the compute hosts, they are like this:
> >
> > $ ifconfig -a
> > br-int Link encap:Ethernet HWaddr c2:25:18:4b:8b:49
> > BROADCAST MULTICAST MTU:1500 Metric:1
> > RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:0
> > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
> >
> > br-tun Link encap:Ethernet HWaddr 1e:27:74:ee:a1:45
> > BROADCAST MULTICAST MTU:1500 Metric:1
> > RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:0
> > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
> >
> > eth0 Link encap:Ethernet HWaddr fa:16:3e:07:fd:48
> > inet addr:10.5.5.5 Bcast:10.5.5.255 Mask:255.255.255.0
> > inet6 addr: fe80::f816:3eff:fe07:fd48/64 Scope:Link
> > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> > RX packets:309885 errors:0 dropped:0 overruns:0 frame:0
> > TX packets:134774 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:1000
> > RX bytes:422666082 (422.6 MB) TX bytes:10489797 (10.4 MB)
> >
> > lo Link encap:Local Loopback
> > inet addr:127.0.0.1 Mask:255.0.0.0
> > inet6 addr: ::1/128 Scope:Host
> > UP LOOPBACK RUNNING MTU:16436 Metric:1
> > RX packets:159 errors:0 dropped:0 overruns:0 frame:0
> > TX packets:159 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:0
> > RX bytes:10580 (10.5 KB) TX bytes:10580 (10.5 KB)
> >
> > virbr0 Link encap:Ethernet HWaddr 9a:21:30:f2:8a:3b
> > inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
> > UP BROADCAST MULTICAST MTU:1500 Metric:1
> > RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:0
> > RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
> >
> >
> > If anyone has any idea what I could possibly be doing wrong, I would be
> > very grateful for any explanation.
> >
> > Thank you very much...
> >
> > Juergen
> >
> >
> >
> > _______________________________________________
> > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > Post to : openstack at lists.openstack.org
> > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>