[Openstack] [Neutron] How to set up Neutron with a provider network and get it working?

Alex Yang alex890714 at gmail.com
Wed Mar 26 15:18:20 UTC 2014


Hi All,

I have been trying to set up an environment with a Neutron provider network,
following the instructions below.

http://trickycloud.wordpress.com/2013/11/12/setting-up-a-flat-network-with-neutron
http://developer.rackspace.com/blog/neutron-networking-simple-flat-network.html

There are three nodes in my environment: cloud-t1 is the controller node,
and cloud-t2/cloud-t3 are the compute nodes. I use the ML2 plugin with OVS
and VXLAN, and the L3 agent runs in the multi-host model.

There are three networks.
10.22.129.0/24 --> external
10.22.203.0/24 --> management
192.168.129.0/24 --> internal

cloud-t1 (10.22.129.21/10.22.203.21/192.168.129.21)
cloud-t2 (10.22.129.22/10.22.203.22/192.168.129.22)
cloud-t3 (10.22.129.23/10.22.203.23/192.168.129.23)

Here is my configuration file:
https://gist.github.com/AlexYangYu/9782496
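
For context, here is a minimal sketch of the ML2/OVS options that I understand a
flat provider network needs (the real values are in the gist above; the file path
and the phy-129:br-ex mapping below just reflect how my install is laid out, so
please treat this as an outline rather than the exact file):

# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch only, path may differ per distro)
[ml2]
type_drivers = flat,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_flat]
# physical network names that may be used for flat provider networks
flat_networks = phy-129

[ovs]
# read by the OVS agent on every node: map the physical network to an OVS bridge
bridge_mappings = phy-129:br-ex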


But I failed to get it working. I need your help.

*1. Two errors were logged when I tried to create the network.*

The commands:

neutron net-create ext-net --shared --provider:network_type=flat \
    --provider:physical_network=phy-129

neutron subnet-create ext-net 10.22.129.0/24 --name=ext-129 \
    --gateway=10.22.129.1 --enable_dhcp=True \
    --allocation-pool start=10.22.129.101,end=10.22.129.200


The error log (neutron-server.log):

2014-03-26 20:12:33.914 17360 ERROR
neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api [-] No DHCP agents are
associated with network 'f1ae9157-a9e6-4c96-9dd3-2da0bb188e8c'. Unable to
send notification for 'network_create_end' with payload: {'network':
{'status': 'ACTIVE', 'subnets': [], 'name': u'ext-net',
'provider:physical_network': u'phy-129', 'admin_state_up': True,
'tenant_id': u'a6d3748a7a474e3d96d98bfbea6e8273', 'provider:network_type':
u'flat', 'shared': True, 'id': 'f1ae9157-a9e6-4c96-9dd3-2da0bb188e8c',
'provider:segmentation_id': None}}
2014-03-26 20:12:34.279 17360 ERROR
neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api [-] No DHCP agents are
associated with network 'f1ae9157-a9e6-4c96-9dd3-2da0bb188e8c'. Unable to
send notification for 'subnet_create_end' with payload: {'subnet': {'name':
u'ext-129', 'enable_dhcp': True, 'network_id':
u'f1ae9157-a9e6-4c96-9dd3-2da0bb188e8c', 'tenant_id':
u'a6d3748a7a474e3d96d98bfbea6e8273', 'dns_nameservers': [],
'allocation_pools': [{'start': u'10.22.129.101', 'end': u'10.22.129.200'}],
'host_routes': [], 'ip_version': 4, 'gateway_ip': u'10.22.129.1', 'cidr': u'
10.22.129.0/24', 'id': '7f82c313-22a0-46da-8ecf-9e7c9c3cf60b


I checked the agent list, and the status of the agents seems all right.

root@cloud-t1:~/alex_scripts# neutron agent-list
+--------------------------------------+--------------------+--------------+-------+----------------+
| id                                   | agent_type         | host         | alive | admin_state_up |
+--------------------------------------+--------------------+--------------+-------+----------------+
| 2884d489-9e6c-446c-a680-877a17e14101 | DHCP agent         | 10.22.203.22 | :-)   | True           |
| 8aa2dab5-ece2-4722-a3e4-cd3691f712f6 | L3 agent           | 10.22.203.23 | :-)   | True           |
| 8ef1fdc8-ef45-447e-85d5-45a601f02c89 | Open vSwitch agent | 10.22.203.23 | :-)   | True           |
| ba2992ea-9184-4309-9d8b-02c98e5386ac | L3 agent           | 10.22.203.22 | :-)   | True           |
| e420aec4-5b47-470a-8d73-a82f65dc2c3d | Open vSwitch agent | 10.22.203.22 | :-)   | True           |
| ecf28e47-6569-4c84-ac51-c772bd0c06fe | DHCP agent         | 10.22.203.23 | :-)   | True           |
+--------------------------------------+--------------------+--------------+-------+----------------+
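
Since the agents look alive, the next thing I plan to check is whether a DHCP
agent was actually scheduled to this network (a sketch, assuming the standard
neutron CLI; the agent id below is the cloud-t2 DHCP agent from the list above):

# show which DHCP agents host ext-net; an empty list would explain the notification error
neutron dhcp-agent-list-hosting-net ext-net

# if none is listed, schedule one manually
neutron dhcp-agent-network-add 2884d489-9e6c-446c-a680-877a17e14101 ext-net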


*2. After I created an instance and attached it to ext-net, the instance
could not get an IP address from DHCP or reach the gateway. An error was
also logged.*

Error Log:

2014-03-26 20:15:00.476 17360 WARNING neutron.plugins.ml2.managers [-]
Failed to bind port da0b40c4-7251-4e97-a59a-7f5d524c7221 on host cloud-t2
2014-03-26 20:15:02.314 17360 WARNING neutron.plugins.ml2.rpc [-] Device
da0b40c4-7251-4e97-a59a-7f5d524c7221 requested by agent ovsaac14b4ea64f on
network f1ae9157-a9e6-4c96-9dd3-2da0bb188e8c not bound, vif_type:
binding_failed
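
My understanding is that "Failed to bind port" on cloud-t2 usually means the
OVS agent on that host does not report a bridge mapping for phy-129, so I
intend to double-check it roughly like this (the config path is a guess for my
install; only the bridge_mappings option matters here):

# on cloud-t2: look for the bridge_mappings the OVS agent actually loads
grep -r bridge_mappings /etc/neutron/plugins/

# make sure the bridge named in the mapping really exists in OVS
ovs-vsctl br-exists br-ex && echo "br-ex exists"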


I found that the VM port is attached to br-int with local VLAN tag 4095 (the
dead VLAN used for failed bindings), while the dnsmasq port has local VLAN tag 1.

root@cloud-t2:~# ovs-vsctl show
ff82998c-08bc-4753-a301-afa110c0c4d2
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap7259b843-79"
            tag: 1
            Interface "tap7259b843-79"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qvoda0b40c4-72"
            tag: 4095
            Interface "qvoda0b40c4-72"
        Port int-br-ex
            Interface int-br-ex
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
        Port "eth0"
            Interface "eth0"
    ovs_version: "1.10.2"

root@cloud-t2:~# ip netns
qdhcp-f1ae9157-a9e6-4c96-9dd3-2da0bb188e8c

root@cloud-t2:~# ip netns exec qdhcp-f1ae9157-a9e6-4c96-9dd3-2da0bb188e8c ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
59: tap7259b843-79: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UNKNOWN
    link/ether fa:16:3e:96:15:f4 brd ff:ff:ff:ff:ff:ff
    inet 10.22.129.102/24 brd 10.22.129.255 scope global tap7259b843-79
    inet 169.254.169.254/16 brd 169.254.255.255 scope global tap7259b843-79
    inet6 fe80::f816:3eff:fe96:15f4/64 scope link
       valid_lft forever preferred_lft forever
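
For completeness, a quick reachability test I can run from the DHCP namespace
on cloud-t2 (just a sketch; 10.22.129.1 is the physical gateway given to
subnet-create above):

ip netns exec qdhcp-f1ae9157-a9e6-4c96-9dd3-2da0bb188e8c ping -c 3 10.22.129.1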



How can I deal with these problems?

Best Regards,

-- 
  杨雨
  Email:       alex890714 at gmail.com
GitHub:       https://github.com/AlexYangYu
 Weibo:       http://www.weibo.com/alexyangyu