[Openstack-operators] Network Configuration - Centos 6.2

Ronald J. Yacketta yacketrj at potsdam.edu
Fri Jun 1 19:33:43 UTC 2012


On 6/1/2012 2:32 PM, Lorin Hochstein wrote:
>
>
>
> On Jun 1, 2012, at 2:27 PM, Ronald J. Yacketta wrote:
>
>> On 6/1/2012 2:16 PM, Lorin Hochstein wrote:
>>> On Jun 1, 2012, at 2:07 PM, Ronald J. Yacketta wrote:
>>>
>>>> Hello all!
>>>>
>>>> Trying to set up a simple (lol, simple...) OpenStack single-node
>>>> configuration for testing. I've been up and down and all around the
>>>> networking, and nothing seems to make sense or work.
>>>>
>>>> rpm -qa | grep openstack
>>>> openstack-swift-object-1.4.8-2.el6.noarch
>>>> openstack-swift-doc-1.4.8-2.el6.noarch
>>>> openstack-glance-2012.1-5.el6.noarch
>>>> openstack-dashboard-2012.1-4.el6.noarch
>>>> openstack-swift-container-1.4.8-2.el6.noarch
>>>> openstack-swift-proxy-1.4.8-2.el6.noarch
>>>> openstack-keystone-2012.1-3.el6.noarch
>>>> openstack-nova-2012.1-4.el6.noarch
>>>> openstack-swift-1.4.8-2.el6.noarch
>>>> openstack-swift-account-1.4.8-2.el6.noarch
>>>> openstack-quantum-2012.1-4.el6.noarch
>>>> openstack-utils-2012.1-1.el6.noarch
>>>>
>>>> Current attempt is to mirror the configuration shown here 
>>>> http://unchainyourbrain.com/openstack/13-networking-in-nova.
>>>>
>>>> network configuration
>>>>
>>>> eth0 [public]:
>>>> DEVICE="eth0"
>>>> BOOTPROTO="static"
>>>> NM_CONTROLLED="no"
>>>> ONBOOT="yes"
>>>> IPADDR=137.143.102.116
>>>> GATEWAY=137.143.110.254
>>>> NETMASK=255.255.240.0
>>>>
>>>> eth1 [private]: (tried with and without assigning an IP)
>>>> DEVICE="eth1"
>>>> MTU="1500"
>>>> ONBOOT=yes
>>>> NM_CONTROLLED="no"
>>>> IPADDR=192.168.0.1
>>>> NETMASK=255.255.255.0
>>>>
>>>> nova.conf
>>>> network_manager = nova.network.manager.FlatDHCPManager
>>>> fixed_range=10.200.0.0/24
>>>> flat_network_dhcp_start=10.200.0.2
>>>> flat_network_bridge=br100
>>>> flat_interface=eth1
>>>> flat_injected=False
>>>> public_interface=eth0
>>>>
>>>> brctl:
>>>> bridge name     bridge id               STP enabled     interfaces
>>>> br100           8000.000000000000       no
>>>> virbr0          8000.525400e396a2       yes             virbr0-nic
>>>>
>>>> created nova network via:
>>>> nova-manage network create demonet 10.200.0.0/24 1 256 --bridge=br100
>>>>
>>>> It is my understanding that nova-network _should_ configure br100
>>>> and all the other network bits, correct? If so, then something is
>>>> just not right in my config, seeing that nova-network does not
>>>> configure anything on br100 or eth1.
>>>>
>>>> Anyone see where I went wrong?
>>>>
>>>
>>> Ron:
>>>
>>> A lot of the Linux networking stuff done by nova-network doesn't
>>> happen until you launch your first instance. (I'm surprised you
>>> already have a br100 bridge; did you create that manually?)
>>> Have you tried to launch one yet?
>>>
>>> Take care,
>>>
>>> Lorin
>>> --
>>> Lorin Hochstein
>>> Lead Architect - Cloud Services
>>> Nimbis Services, Inc.
>>> www.nimbisservices.com <https://www.nimbisservices.com/>
>>>
>>>
>>>
>>>
>> I have tried several times to launch an instance; in each and
>> every case it is left in an ERROR state. I feel a lot of my issue is a
>> lack of concise documentation, as well as being very, very green with
>> OpenStack.
>>
>> Looking through nova/network.log, I see the following error for an
>> instance I just tried to launch:
>>
>> failed to bind listening socket for 10.0.0.1: Address already in use
>>
>> 10.0.0.1 was assigned to br100 by nova, even more confused now ;)
>>
>>
>
> I recommend you try tearing down that bridge and trying again.
>
> Take care,
>
> Lorin
> --
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com <https://www.nimbisservices.com/>
>
>

br100 was destroyed

ifdown eth0
ip link set br100 down
brctl delif br100 eth0
brctl delbr br100

reboot (for good measure)

ifconfig shows eth0 up with no IP:
eth0      Link encap:Ethernet  HWaddr 18:A9:05:76:95:78
           inet6 addr: fe80::1aa9:5ff:fe76:9578/64 Scope:Link
           UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
           RX packets:5310 errors:0 dropped:0 overruns:0 frame:0
           TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:819054 (799.8 KiB)  TX bytes:492 (492.0 b)
           Interrupt:31 Memory:f8000000-f8012800
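
If eth0 does not pick its static address back up after the reboot, re-running
its init script should restore it. A minimal sketch, assuming
/etc/sysconfig/network-scripts/ifcfg-eth0 still holds the static settings
shown above:

ifdown eth0; ifup eth0
# or, to restart networking entirely:
service network restart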

brctl show does not list br100
brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.525400e396a2       yes             virbr0-nic
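
Before booting again, it can also help to confirm that the fixed network was
actually created; a quick check, assuming the Essex nova-manage CLI:

nova-manage network list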


Launch an instance:
nova boot myserver --flavor 2 --key_name mykey \
  --image $(glance index | grep f16-jeos | awk '{print $1}')
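
When the instance lands in ERROR, the reason is usually visible from the
client or the compute/network logs; a quick check, assuming the Essex nova
CLI and the default /var/log/nova log locations:

nova list
nova show myserver
grep -i error /var/log/nova/compute.log /var/log/nova/network.log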

Now I am left with:

2012-06-01 15:27:14 INFO nova.rpc.impl_qpid [req-60eedbb4-b96e-4596-88e5-284358d2569b None None] Connected to AMQP server on localhost:5672
2012-06-01 15:27:27 ERROR nova.rpc.impl_qpid [req-ee6a2912-50e7-475c-9013-c0afbe659aa1 8a43429ec5df4e4097d1e91cf6e9ba3a 5331fd10509546ccb4c16232a0c012f1] Timed out waiting for RPC response: None
2012-06-01 15:27:27 TRACE nova.rpc.impl_qpid Traceback (most recent call last):
2012-06-01 15:27:27 TRACE nova.rpc.impl_qpid   File "/usr/lib/python2.6/site-packages/nova/rpc/impl_qpid.py", line 359, in ensure
2012-06-01 15:27:27 TRACE nova.rpc.impl_qpid     return method(*args, **kwargs)
2012-06-01 15:27:27 TRACE nova.rpc.impl_qpid   File "/usr/lib/python2.6/site-packages/nova/rpc/impl_qpid.py", line 408, in _consume
2012-06-01 15:27:27 TRACE nova.rpc.impl_qpid     nxt_receiver = self.session.next_receiver(timeout=timeout)
2012-06-01 15:27:27 TRACE nova.rpc.impl_qpid   File "<string>", line 6, in next_receiver
2012-06-01 15:27:27 TRACE nova.rpc.impl_qpid   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 651, in next_receiver
2012-06-01 15:27:27 TRACE nova.rpc.impl_qpid     raise Empty
2012-06-01 15:27:27 TRACE nova.rpc.impl_qpid Empty: None
2012-06-01 15:27:27 TRACE nova.rpc.impl_qpid
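
That qpid timeout generally means nothing on the other side consumed the RPC
call, i.e. nova-network or nova-compute is down or not connected to the same
qpid broker. A quick sanity check, assuming the Essex tooling and the EPEL
service names:

nova-manage service list   # every service should show ':-)' with a recent timestamp
for s in api scheduler network compute; do service openstack-nova-$s status; done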

Enough banging my head against this brick wall for this week ;) Time to
call it quits and enjoy the weekend.

-Ron






