[Openstack-operators] Instances not getting IP addresses when using FlatDHCPManager.

Schwartz, Philip Marc (RIS-BCT) Philip.Schwartz at lexisnexis.com
Wed Aug 15 15:05:15 UTC 2012


The controller is a compute node.

I have only set up the single node so far and have attempted to run instances from it. The following services are running on it (a quick status check is sketched after the list):

nova-api
nova-objectstore
nova-compute
nova-network
nova-scheduler
nova-cert
glance-registry
glance-api
keystone
horizon (as WSGI under Apache)
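
A quick way to confirm those services on CentOS 6 might be the loop below (the openstack-* init script names are assumed from the EPEL packaging; adjust if your packages install different scripts):

# Assumed EPEL init script names -- verify against what the packages actually installed.
for svc in openstack-nova-api openstack-nova-objectstore openstack-nova-compute \
           openstack-nova-network openstack-nova-scheduler openstack-nova-cert \
           openstack-glance-registry openstack-glance-api openstack-keystone; do
    service $svc status
done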



Thank You,
Philip Schwartz
Senior Software Engineer
LexisNexis RS
O - 561 999 4472
C - 954 290 4024

From: Jānis Ģeņģeris <janis.gengeris at gmail.com>
Sent: Wednesday, August 15, 2012 10:52 AM
To: Schwartz, Philip Marc (RIS-BCT)
Cc: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Instances not getting IP addresses when using FlatDHCPManager.

Hello Philip,

Which nova services are you running on your compute nodes? If your instances are not getting network information, one possible cause is that the VM can't reach the metadata service.
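
One quick test, run from an instance's console once it is up, is whether the metadata service answers at its standard link-local address:

# From inside the instance; 169.254.169.254 is the standard metadata address.
# Use wget -qO- instead if the image has no curl.
curl -s http://169.254.169.254/latest/meta-data/
# A hang or connection failure means the VM has no working path to the metadata service.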

I also think your network config is missing something. Is the config snippet you copied from the controller or from a compute node?

--janis
On Wed, Aug 15, 2012 at 3:21 PM, Schwartz, Philip Marc (RIS-BCT) <Philip.Schwartz at lexisnexis.com> wrote:
Hi All,

I am having an issue with a new cluster (the first with this configuration, not my first OpenStack cluster). The cluster is made up of the following:

12 physical nodes (192 CPU cores, 288 GB of RAM, 96 TB of storage)
2 NICs per node (only 1 NIC cabled due to current switch capacity; this won't change for a while due to budget).

I am using .1 as the cloud controller (and also a compute node); .2-.12 will be compute nodes only. All nodes run CentOS 6.2 with the OpenStack packages from EPEL.

The following is my network config in /etc/nova/nova.conf (a note on one possibly missing flag follows the snippet).

# NETWORK Config
network_manager=nova.network.manager.FlatDHCPManager
public_interface=eth0
flat_network_bridge=br16
fixed_range=192.168.16.0/24
network_size=256
force_dhcp_release=True
root_helper=sudo nova-rootwrap
flat_injected=False
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
multi_host=True
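
For comparison, FlatDHCPManager configs usually also name the physical interface that nova should enslave into the flat bridge via flat_interface; without it, the bridge may end up carrying only the instances' vnet devices. A minimal sketch (the flag is standard nova-network; the choice of eth1 is an assumption):

# Sketch only -- which NIC to use (eth1 here) is an assumption:
flat_interface=eth1    # physical NIC nova enslaves into br16
# Reminder: with multi_host=True, nova-network (and its dnsmasq) must run on every compute node.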


This is the line I am using to create the private network.

nova-manage network create private 192.168.16.0/24 1 256
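
For reference, the positional arguments are label, CIDR, number of networks, and network size. The result can be checked with:

# Confirm the fixed range nova recorded for the network:
nova-manage network list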


I have not added a public network yet, but it will be on the 10.173.16.0/24 block, where these nodes already occupy the first 12 addresses.

I can currently start instances without issue. The problem comes after startup, when the instances don't appear to get any network information. I am running Ubuntu cloud images, and cloud-init goes into a blocking 120-second loop reporting "cloud-init-nonet waiting 120 seconds for a network device."

The compute and network logs do not show any issues or errors.
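
Even with clean logs, it can be worth confirming that nova-network actually spawned a dnsmasq for the bridge (the hosts-file path below is the conventional default under /var/lib/nova and is an assumption):

# Is a dnsmasq bound to br16?
ps -ef | grep dnsmasq
# Hosts file nova-network generates for dnsmasq (path is an assumption):
cat /var/lib/nova/networks/nova-br16.conf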

ip a:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 169.254.169.254/32 scope link lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:25:90:48:f3:7a brd ff:ff:ff:ff:ff:ff
    inet 10.173.16.1/24 brd 10.173.16.255 scope global eth0
    inet6 fe80::225:90ff:fe48:f37a/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:25:90:48:f3:7b brd ff:ff:ff:ff:ff:ff
4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:3a:15:e7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500
    link/ether 52:54:00:3a:15:e7 brd ff:ff:ff:ff:ff:ff
7: br16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.16.3/24 brd 192.168.16.255 scope global br16
    inet6 fe80::94e9:76ff:fe91:70d8/64 scope link
       valid_lft forever preferred_lft forever

ip r:
10.173.16.0/24 dev eth0  proto kernel  scope link  src 10.173.16.1
192.168.16.0/24 dev br16  proto kernel  scope link  src 192.168.16.3
192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1
169.254.0.0/16 dev eth0  scope link  metric 1002
default via 10.173.16.254 dev eth0

brctl show:
[root at cloud016001 osscripts]# brctl show
bridge name    bridge id            STP enabled    interfaces
br16           8000.fe163e1111d6    no             vnet0
virbr0         8000.5254003a15e7    yes            virbr0-nic
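
Note that br16 shows only the instance's vnet0 enslaved and no physical interface, which would be consistent with flat_interface not being set. Watching the bridge while an instance boots would show whether its DHCP requests ever reach dnsmasq:

# Watch DHCP traffic on the bridge during instance boot:
tcpdump -n -i br16 port 67 or port 68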


Thank You,
Philip Schwartz
Senior Software Engineer
LexisNexis RS
O - 561 999 4472
C - 954 290 4024


_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

