[Openstack] [ovsdb-dev] No IP assigned to Instances

Silvia Fichera fichera.sil at gmail.com
Fri Nov 27 17:23:52 UTC 2015


I have captured traffic on all interfaces of my compute node (and on all the
other nodes connected to it) while it was sending the DHCP request.
As expected, the DHCP request starts from the tap interface related to the
instance, but then I don't know why it disappears.
In the capture on the compute node I can see 2 DHCP packets: the first is
frame 261 and the second is frame 263 (but they look exactly the same).

I have attached the capture; you can open it with Wireshark.
I have no problems with the instances launched on the controller node: the
VXLAN tunnel is created and the DHCP agent is running.
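
In case it helps reading it, this is roughly how the DHCP frames can be picked
out of the attached file, and how the request can be watched live on the
instance's port (the tap interface name below is just a placeholder; "bootp" is
the classic Wireshark/tshark display filter for DHCP):

tshark -r cap -Y bootp                                            # should list the two requests, frames 261 and 263
sudo tcpdump -n -e -i tapXXXXXXXX-XX udp port 67 or udp port 68   # live view on the instance's tap port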

Any suggestion?

2015-11-19 16:05 GMT+01:00 Silvia Fichera <fichera.sil at gmail.com>:

> Hi Sam,
> I'm still trying to assign addresses to the VMs. I was reading your slides
> carefully and I noticed, if I'm not wrong, that br-int should have a tap
> interface for the connection to the DHCP agent. Is that right?
> In my configuration I noticed the following:
>
> - Without instances in the controller node I have:
>
> neutron port-list
>
> +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
> | id                                   | name | mac_address       | fixed_ips                                                                           |
> +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
> | 927db2be-5106-451d-95d1-5c1459d1a271 |      | fa:16:3e:40:7f:a0 | {"subnet_id": "547881c1-c4aa-4144-819a-64bc211f181b", "ip_address": "10.10.10.2"}  |
> +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
>
> That, I suppose, is the port related to the DHCP agent. But
>
> sudo ovs-vsctl show
> a450ea16-a6c0-4f1b-ac49-5167b8c105ac
>     Manager "tcp:10.30.3.234:6640"
>         is_connected: true
>     Bridge br-ex
>         Controller "tcp:10.30.3.234:6653"
>             is_connected: true
>         fail_mode: secure
>         Port br-ex
>             Interface br-ex
>                 type: internal
>         Port "eth1"
>             Interface "eth1"
>         Port patch-int
>             Interface patch-int
>                 type: patch
>                 options: {peer=patch-ext}
>     Bridge br-int
>         Controller "tcp:10.30.3.234:6653"
>             is_connected: true
>         fail_mode: secure
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port patch-ext
>             Interface patch-ext
>                 type: patch
>                 options: {peer=patch-int}
>         Port "vxlan-10.0.0.2"
>             Interface "vxlan-10.0.0.2"
>                 type: vxlan
>                 options: {key=flow, local_ip="10.0.0.1",
> remote_ip="10.0.0.2"}
>     ovs_version: "2.3.2"
>
> so there is no such port attached to br-int. Is that right?
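>
> For what it's worth, this is roughly how I would check on the controller
> whether the DHCP namespace and its port actually exist (the network ID below
> is just a placeholder):
>
> ip netns list                                    # should show a qdhcp-<network-id> namespace
> sudo ip netns exec qdhcp-<network-id> ip addr    # its tap device should carry 10.10.10.2
> sudo ovs-vsctl list-ports br-int                 # the same tap device should be attached to br-int
>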
> I also tried to follow the DHCP request when I run udhcpc from an instance.
> I captured on the tap port related to the VM and I can see that the DHCP
> request is sent as broadcast and then lost.
>
> What is the next step?
>
> Thank you.
> Silvia
>
> 2015-11-17 23:27 GMT+01:00 Sam Hague <shague at redhat.com>:
>
>> Silvia,
>>
>> Yeah, you only need odl-ovsdb-openstack. Make sure to delete the data and
>> snapshot dirs when you restart ODL, otherwise those other features will be
>> loaded again. Also, somewhere in there l2switch was loaded, because I can
>> see those flows in your output. You can't run l2switch with ovsdb. You
>> should completely clean your setup in between runs.
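>>
>> As a rough sketch, assuming the usual karaf distribution layout (the journal
>> dir may not exist in every version), cleaning in between runs looks like:
>>
>> ./bin/stop                           # stop the karaf process
>> rm -rf data/ snapshots/ journal/     # wipe the persisted feature and datastore state
>> ./bin/start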
>>
>> The port being down is not a problem. The LOCAL port is normally down and
>> only brought up if you actually add an IP to it, but that is rarely needed.
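>>
>> If you ever did need it, bringing the LOCAL port up would just be something
>> like the following (the address is purely an example):
>>
>> sudo ip addr add 10.10.10.100/24 dev br-int   # example address only, normally not needed
>> sudo ip link set br-int up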
>>
>> Today's capture with the port output is different from the flows dumped
>> before. In the flows you can see that tunnel ports were created; in today's
>> capture the tunnel ports are not there. It might be easiest to go to a
>> single compute+control node and see that work, then add the compute-only
>> node. Also, the demo VM I mentioned is an easy way to see everything
>> working.
>>
>> Thanks, Sam
>>
>> On Tue, Nov 17, 2015 at 2:57 PM, Silvia Fichera <fichera.sil at gmail.com>
>> wrote:
>>
>>> Hi Sam,
>>> the ODL features that I have installed are:
>>>
>>> odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound
>>> odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core
>>>
>>> Are those correct?
>>>
>>> I haven't downloaded your demo yet; maybe tomorrow I'll talk to whoever is
>>> providing me the VMs so I can download it, set up the environment, check
>>> it, and compare it with mine.
>>>
>>> I'll let you know! :)
>>>
>>> Thank you
>>>
>>> 2015-11-17 15:28 GMT+01:00 Sam Hague <shague at redhat.com>:
>>>
>>>> Silvia,
>>>>
>>>> It looks like you also have the l2switch feature loaded. l2switch should
>>>> not be used with the OpenStack integration since both apps want to own the
>>>> OpenFlow tables.
>>>>
>>>> I would disable l3 for now, until you get basic l2 working.
>>>>
>>>> I have added some comments below. Look for [sh] below.
>>>>
>>>> Normally when DHCP fails it is because some path to the neutron DHCP
>>>> namespace failed. Typically a tunnel wasn't built back to the control
>>>> node, the flows to get there were not added, or the packets failed to get
>>>> back to the DHCP node. In the flows below, it looks like the tunnels are
>>>> there and the DHCP requests are sent from the VMs, but the DHCP requests
>>>> are not making it into the control node's tunnel port. Is the local_ip
>>>> address the right address to reach the nodes via the tunnels? I would run
>>>> tcpdump on the tunnel ports to see why the packets are not going from
>>>> node to node.
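>>>>
>>>> For example, something along these lines on each node (assuming eth1
>>>> carries the 10.0.0.x tunnel addresses and the default VXLAN UDP port of
>>>> 4789; the namespace and tap names below are placeholders):
>>>>
>>>> sudo tcpdump -n -i eth1 udp port 4789                 # VXLAN-encapsulated traffic between nodes
>>>> sudo ip netns exec qdhcp-<network-id> tcpdump -n -i <tap-device> port 67 or port 68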
>>>>
>>>> These slides walk through how the flows and ports should look. You
>>>> should be able to match them to what you have in your setup to see where
>>>> the failure is.
>>>>
>>>>
>>>> https://docs.google.com/presentation/d/1KIuNDuUJGGEV37Zk9yzx9OSnWExt4iD2Z7afycFLf_I/edit#slide=id.p95
>>>>
>>>> Also, I'm curious: did you download the all-in-one setup off the main
>>>> ovsdb wiki? You should be able to simply fire that up and see a working
>>>> setup. It goes with the slides to help troubleshoot, so you can see all
>>>> the flows and config. From there you can expand to a larger setup like
>>>> the one you have.
>>>>
>>>> Thanks, Sam
>>>>
>>>> On Tue, Nov 17, 2015 at 3:38 AM, Silvia Fichera <fichera.sil at gmail.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>> here are my outputs:
>>>>> neutron agent-list
>>>>>
>>>>> +--------------------------------------+----------------+-----------+-------+----------------+------------------------+
>>>>> | id                                   | agent_type     | host      | alive | admin_state_up | binary                 |
>>>>> +--------------------------------------+----------------+-----------+-------+----------------+------------------------+
>>>>> | df34892b-4e00-4504-a9c2-6d73b89e7be7 | Metadata agent | devstack1 | :-)   | True           | neutron-metadata-agent |
>>>>> | e6497bd3-d34d-413d-8d99-c34df2d4de7e | DHCP agent     | devstack1 | :-)   | True           | neutron-dhcp-agent     |
>>>>> +--------------------------------------+----------------+-----------+-------+----------------+------------------------+
>>>>>
>>>>> br-int Controller/Compute node:
>>>>>
>>>>> sudo ovs-ofctl dump-flows br-int -O Openflow13
>>>>> OFPST_FLOW reply (OF1.3) (xid=0x2):
>>>>>  cookie=0x0, duration=64916.193s, table=0, n_packets=9, n_bytes=1458,
>>>>> in_port=7,dl_src=fa:16:3e:cf:fd:30
>>>>> actions=set_field:0x40e->tun_id,load:0x1->NXM_NX_REG0[],goto_table:20
>>>>>
>>>> [sh] guessing this is the dhcp port
>>>>
>>>>>  cookie=0x0, duration=66408.764s, table=0, n_packets=3, n_bytes=180,
>>>>> priority=0 actions=goto_table:20
>>>>>  cookie=0x0, duration=64916.188s, table=0, n_packets=0, n_bytes=0,
>>>>> priority=8192,in_port=7 actions=drop
>>>>>
>>>>  cookie=0x2b00000000000047, duration=64906.395s, table=0, n_packets=0,
>>>>> n_bytes=0, priority=2,in_port=7 actions=output:5,output:6,CONTROLLER:65535
>>>>>
>>>>  cookie=0x2b0000000000003e, duration=64996.333s, table=0, n_packets=0,
>>>>> n_bytes=0, priority=2,in_port=3
>>>>> actions=output:4,output:5,output:2,output:1,output:6,CONTROLLER:65535
>>>>>  cookie=0x2b00000000000046, duration=64906.395s, table=0, n_packets=0,
>>>>> n_bytes=0, priority=2,in_port=6 actions=output:5,output:7,CONTROLLER:65535
>>>>>  cookie=0x2b00000000000045, duration=64906.400s, table=0,
>>>>> n_packets=3517, n_bytes=211377, priority=2,in_port=5
>>>>> actions=output:6,output:7
>>>>> [sh] Anything with this cookie=0x2b0000... is an l2switch flow. In this
>>>>> one it is grabbing packets; possibly they are the DHCP ones. For anything
>>>>> in the netvirt pipeline to work, the packets have to go through the whole
>>>>> pipeline. Packets are marked in table 0 and go all the way to table 110
>>>>> before they are l2-forwarded. Some l3 will exit earlier, but the point is
>>>>> that the packets have to get to table 110 to be forwarded correctly.
>>>>>  cookie=0x2b0000000000003f, duration=64996.332s, table=0, n_packets=0,
>>>>> n_bytes=0, priority=2,in_port=1
>>>>> actions=output:4,output:5,output:2,output:3,output:6,CONTROLLER:65535
>>>>>  cookie=0x2b0000000000003b, duration=64996.335s, table=0, n_packets=0,
>>>>> n_bytes=0, priority=2,in_port=4
>>>>> actions=output:5,output:2,output:3,output:1,output:6,CONTROLLER:65535
>>>>>  cookie=0x2b0000000000003d, duration=64996.333s, table=0, n_packets=0,
>>>>> n_bytes=0, priority=2,in_port=2
>>>>> actions=output:4,output:5,output:3,output:1,output:6,CONTROLLER:65535
>>>>>  cookie=0x0, duration=64916.143s, table=0, n_packets=0, n_bytes=0,
>>>>> tun_id=0x40e,in_port=6 actions=load:0x2->NXM_NX_REG0[],goto_table:20
>>>>>
>>>> [sh] This should be the tunnel into the node, so that is good. What isn't
>>>> good is that I don't see any packets hitting it. So if DHCP is on this
>>>> node, then DHCP requests from the other nodes should be hitting this flow
>>>> as they try to reach the DHCP server.
>>>>
>>>>>  cookie=0x0, duration=66411.533s, table=0, n_packets=1189,
>>>>> n_bytes=124845, dl_type=0x88cc actions=CONTROLLER:65535
>>>>>  cookie=0x2b00000000000007, duration=66411.622s, table=0, n_packets=0,
>>>>> n_bytes=0, priority=100,dl_type=0x88cc actions=CONTROLLER:65535
>>>>>
>>>>
>>>>  cookie=0x0, duration=66408.756s, table=20, n_packets=48, n_bytes=7470,
>>>>> priority=0 actions=goto_table:30
>>>>>
>>>> [sh] A little weird: I only count 12 packets in earlier flows that could
>>>> have made it here, so I'm surprised to see 48.
>>>>
>>>>>  cookie=0x0, duration=66408.749s, table=30, n_packets=48,
>>>>> n_bytes=7470, priority=0 actions=goto_table:40
>>>>>  cookie=0x0, duration=64916.203s, table=40, n_packets=0, n_bytes=0,
>>>>> priority=36001,ip,in_port=7,dl_src=fa:16:3e:cf:fd:30,nw_src=10.10.10.7
>>>>> actions=goto_table:50
>>>>>  cookie=0x0, duration=66408.743s, table=40, n_packets=33,
>>>>> n_bytes=2520, priority=0 actions=goto_table:50
>>>>>  cookie=0x0, duration=66086.470s, table=40, n_packets=15,
>>>>> n_bytes=4950, priority=61012,udp,tp_src=68,tp_dst=67 actions=goto_table:50
>>>>>  cookie=0x0, duration=64916.203s, table=40, n_packets=0, n_bytes=0,
>>>>> priority=61011,udp,in_port=7,tp_src=67,tp_dst=68 actions=drop
>>>>>  cookie=0x0, duration=66408.737s, table=50, n_packets=48,
>>>>> n_bytes=7470, priority=0 actions=goto_table:60
>>>>>  cookie=0x0, duration=66408.732s, table=60, n_packets=48,
>>>>> n_bytes=7470, priority=0 actions=goto_table:70
>>>>>  cookie=0x0, duration=66408.727s, table=70, n_packets=48,
>>>>> n_bytes=7470, priority=0 actions=goto_table:80
>>>>>  cookie=0x0, duration=66408.722s, table=80, n_packets=48,
>>>>> n_bytes=7470, priority=0 actions=goto_table:90
>>>>>  cookie=0x0, duration=66408.717s, table=90, n_packets=48,
>>>>> n_bytes=7470, priority=0 actions=goto_table:100
>>>>>  cookie=0x0, duration=64916.217s, table=90, n_packets=0, n_bytes=0,
>>>>> priority=61006,udp,dl_src=fa:16:3e:48:27:5a,tp_src=67,tp_dst=68
>>>>> actions=goto_table:100
>>>>>  cookie=0x0, duration=66408.713s, table=100, n_packets=48,
>>>>> n_bytes=7470, priority=0 actions=goto_table:110
>>>>>  cookie=0x0, duration=64916.151s, table=110, n_packets=0, n_bytes=0,
>>>>> priority=8192,tun_id=0x40e actions=drop
>>>>>  cookie=0x0, duration=66408.687s, table=110, n_packets=3, n_bytes=180,
>>>>> priority=0 actions=drop
>>>>>
>>>> [sh] Matches the 3 from table 0 that are not from the DHCP port or the
>>>> tunnel port, so they are dropped.
>>>>
>>>>>  cookie=0x0, duration=64916.178s, table=110, n_packets=0, n_bytes=0,
>>>>> priority=16384,reg0=0x2,tun_id=0x40e,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
>>>>> actions=output:7
>>>>>  cookie=0x0, duration=64916.164s, table=110, n_packets=9,
>>>>> n_bytes=1458,
>>>>> priority=16383,reg0=0x1,tun_id=0x40e,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
>>>>> actions=output:7,output:6
>>>>>
>>>> [sh] Flood packets. Matches the 9 from the DHCP port.
>>>>
>>>>>  cookie=0x0, duration=64916.183s, table=110, n_packets=0, n_bytes=0,
>>>>> tun_id=0x40e,dl_dst=fa:16:3e:cf:fd:30 actions=output:7
>>>>>  cookie=0x0, duration=64914.605s, table=110, n_packets=0, n_bytes=0,
>>>>> tun_id=0x40e,dl_dst=fa:16:3e:7e:0c:43 actions=output:6
>>>>>  cookie=0x0, duration=64914.763s, table=110, n_packets=0, n_bytes=0,
>>>>> tun_id=0x40e,dl_dst=fa:16:3e:ef:2a:29 actions=output:6
>>>>>
>>>>>
>>>>> br-int Compute node:
>>>>>
>>>>> sudo ovs-ofctl dump-flows br-int -O Openflow13
>>>>> OFPST_FLOW reply (OF1.3) (xid=0x2):
>>>>>  cookie=0x0, duration=64953.73s, table=0, n_packets=0, n_bytes=0,
>>>>> tun_id=0x40e,in_port=1 actions=load:0x2->NXM_NX_REG0[],goto_table:20
>>>>>
>>>> [sh] Tunnel port. This is good, but no packets are hitting it, so either
>>>> no DHCP replies are coming back or the DHCP requests were never sent.
>>>>
>>>>>  cookie=0x0, duration=64953.782s, table=0, n_packets=9, n_bytes=1458,
>>>>> in_port=2,dl_src=fa:16:3e:ef:2a:29
>>>>> actions=set_field:0x40e->tun_id,load:0x1->NXM_NX_REG0[],goto_table:20
>>>>>  cookie=0x0, duration=64953.633s, table=0, n_packets=9, n_bytes=1458,
>>>>> in_port=4,dl_src=fa:16:3e:7e:0c:43
>>>>> actions=set_field:0x40e->tun_id,load:0x1->NXM_NX_REG0[],goto_table:20
>>>>>
>>>> [sh] I guess you have two VMs spawned. The two flows above will filter
>>>> traffic coming from the VMs.
>>>>
>>>>>  cookie=0x2b00000000000048, duration=64945.347s, table=0,
>>>>> n_packets=2859, n_bytes=171540, priority=2,in_port=3
>>>>> actions=output:2,output:4,output:1
>>>>>  cookie=0x2b0000000000004b, duration=64945.346s, table=0, n_packets=0,
>>>>> n_bytes=0, priority=2,in_port=1
>>>>> actions=output:3,output:2,output:4,CONTROLLER:65535
>>>>>  cookie=0x0, duration=64953.777s, table=0, n_packets=0, n_bytes=0,
>>>>> priority=8192,in_port=2 actions=drop
>>>>>  cookie=0x2b00000000000049, duration=64945.347s, table=0, n_packets=0,
>>>>> n_bytes=0, priority=2,in_port=2
>>>>> actions=output:3,output:4,output:1,CONTROLLER:65535
>>>>>  cookie=0x0, duration=64953.617s, table=0, n_packets=0, n_bytes=0,
>>>>> priority=8192,in_port=4 actions=drop
>>>>>  cookie=0x2b0000000000004a, duration=64945.347s, table=0, n_packets=0,
>>>>> n_bytes=0, priority=2,in_port=4
>>>>> actions=output:3,output:2,output:1,CONTROLLER:65535
>>>>>  cookie=0x0, duration=65039.282s, table=0, n_packets=954,
>>>>> n_bytes=100170, dl_type=0x88cc actions=CONTROLLER:65535
>>>>>  cookie=0x2b00000000000009, duration=65039.291s, table=0, n_packets=0,
>>>>> n_bytes=0, priority=100,dl_type=0x88cc actions=CONTROLLER:65535
>>>>>  cookie=0x0, duration=65035.376s, table=0, n_packets=12, n_bytes=858,
>>>>> priority=0 actions=goto_table:20
>>>>>  cookie=0x0, duration=65035.37s, table=20, n_packets=25, n_bytes=3356,
>>>>> priority=0 actions=goto_table:30
>>>>>  cookie=0x0, duration=65035.367s, table=30, n_packets=25,
>>>>> n_bytes=3356, priority=0 actions=goto_table:40
>>>>>  cookie=0x0, duration=64953.798s, table=40, n_packets=0, n_bytes=0,
>>>>> priority=36001,ip,in_port=2,dl_src=fa:16:3e:ef:2a:29,nw_src=10.10.10.8
>>>>> actions=goto_table:50
>>>>>  cookie=0x0, duration=64953.639s, table=40, n_packets=0, n_bytes=0,
>>>>> priority=36001,ip,in_port=4,dl_src=fa:16:3e:7e:0c:43,nw_src=10.10.10.9
>>>>> actions=goto_table:50
>>>>>  cookie=0x0, duration=65035.361s, table=40, n_packets=25,
>>>>> n_bytes=3356, priority=0 actions=goto_table:50
>>>>>  cookie=0x0, duration=64953.808s, table=40, n_packets=0, n_bytes=0,
>>>>> priority=61011,udp,in_port=2,tp_src=67,tp_dst=68 actions=drop
>>>>>  cookie=0x0, duration=64953.656s, table=40, n_packets=0, n_bytes=0,
>>>>> priority=61011,udp,in_port=4,tp_src=67,tp_dst=68 actions=drop
>>>>>  cookie=0x0, duration=65035.356s, table=50, n_packets=25,
>>>>> n_bytes=3356, priority=0 actions=goto_table:60
>>>>>  cookie=0x0, duration=65035.352s, table=60, n_packets=25,
>>>>> n_bytes=3356, priority=0 actions=goto_table:70
>>>>>  cookie=0x0, duration=65035.347s, table=70, n_packets=25,
>>>>> n_bytes=3356, priority=0 actions=goto_table:80
>>>>>  cookie=0x0, duration=65035.343s, table=80, n_packets=25,
>>>>> n_bytes=3356, priority=0 actions=goto_table:90
>>>>>  cookie=0x0, duration=65035.339s, table=90, n_packets=25,
>>>>> n_bytes=3356, priority=0 actions=goto_table:100
>>>>>  cookie=0x0, duration=64953.813s, table=90, n_packets=0, n_bytes=0,
>>>>> priority=61006,udp,dl_src=fa:16:3e:48:27:5a,tp_src=67,tp_dst=68
>>>>> actions=goto_table:100
>>>>>  cookie=0x0, duration=65035.335s, table=100, n_packets=25,
>>>>> n_bytes=3356, priority=0 actions=goto_table:110
>>>>>  cookie=0x0, duration=64953.738s, table=110, n_packets=0, n_bytes=0,
>>>>> priority=8192,tun_id=0x40e actions=drop
>>>>>  cookie=0x0, duration=64953.611s, table=110, n_packets=0, n_bytes=0,
>>>>> tun_id=0x40e,dl_dst=fa:16:3e:7e:0c:43 actions=output:4
>>>>>  cookie=0x0, duration=64953.768s, table=110, n_packets=0, n_bytes=0,
>>>>> tun_id=0x40e,dl_dst=fa:16:3e:ef:2a:29 actions=output:2
>>>>>  cookie=0x0, duration=64953.762s, table=110, n_packets=0, n_bytes=0,
>>>>> priority=16384,reg0=0x2,tun_id=0x40e,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
>>>>> actions=output:2,output:4
>>>>>  cookie=0x0, duration=64953.756s, table=110, n_packets=18,
>>>>> n_bytes=2916,
>>>>> priority=16383,reg0=0x1,tun_id=0x40e,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
>>>>> actions=output:2,output:1,output:4
>>>>>
>>>> [sh] This is good, since it says the broadcast packets are flooded. The
>>>> tunnel port should be in this output list as well. 18 packets match the 9
>>>> from each VM; maybe these were the DHCP requests going out.
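>>>>
>>>> A quick way to check which OpenFlow port number the tunnel got, and so
>>>> whether it should appear in that output list, is something like the
>>>> following (the interface name is whatever ovs-vsctl show reports for the
>>>> vxlan port on this node):
>>>>
>>>> sudo ovs-ofctl show br-int -O OpenFlow13 | grep vxlan
>>>> sudo ovs-vsctl get Interface vxlan-10.0.0.1 ofport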
>>>>
>>>>>  cookie=0x0, duration=65035.33s, table=110, n_packets=7, n_bytes=440,
>>>>> priority=0 actions=drop
>>>>>
>>>>> Thanks
>>>>>
>>>>> 2015-11-17 8:42 GMT+01:00 Janki Chhatbar <jankihchhatbar at gmail.com>:
>>>>>
>>>>>> Hi
>>>>>>
>>>>>> Check the flows in both bridges, and check whether the DHCP agent is
>>>>>> running. I faced the same issue, but I was using GRE tunnels and a
>>>>>> manual installation.
>>>>>>
>>>>>> Thanks
>>>>>> Janki
>>>>>>
>>>>>> Sent from my BlackBerry 10 smartphone.
>>>>>> From: Silvia Fichera
>>>>>> Sent: Tuesday, 17 November 2015 13:09
>>>>>> To: Janki Chhatbar
>>>>>> Cc: <ovsdb-dev at lists.opendaylight.org>; openstack at lists.openstack.org
>>>>>> Subject: Re: [Openstack] No IP assigned to Instances
>>>>>>
>>>>>> Hi Janki,
>>>>>> I'm using VXLAN and I have both br-int and br-ex (but for now I'm not
>>>>>> worrying about external connectivity; I think that if it can't ping
>>>>>> internally it won't be able to reach the internet).
>>>>>>
>>>>>> Thanks,
>>>>>> Silvia
>>>>>>
>>>>>> 2015-11-16 16:27 GMT+01:00 Janki Chhatbar <jankihchhatbar at gmail.com>:
>>>>>>
>>>>>>> Hi Silvia
>>>>>>>
>>>>>>> Could you check the flows in the OVS bridges? If you are using GRE
>>>>>>> tunnels, there should be only one bridge, br-int, on each compute and
>>>>>>> Neutron node. You will need to configure the IP of the tunnel endpoint.
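>>>>>>>
>>>>>>> With the ODL OVSDB integration I believe the tunnel endpoint is read
>>>>>>> from the local_ip key in the OVS database, so on each node it would be
>>>>>>> set with something like this (using this thread's addresses as an
>>>>>>> example):
>>>>>>>
>>>>>>> sudo ovs-vsctl set Open_vSwitch . other_config:local_ip=10.0.0.2
>>>>>>> sudo ovs-vsctl get Open_vSwitch . other_config    # verify the value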
>>>>>>>
>>>>>>> Thanks
>>>>>>>  Janki
>>>>>>>
>>>>>>> Sent from my BlackBerry 10 smartphone.
>>>>>>> *From: *Silvia Fichera
>>>>>>> *Sent: *Monday, 16 November 2015 20:16
>>>>>>> *To: *<ovsdb-dev at lists.opendaylight.org>;
>>>>>>> openstack at lists.openstack.org
>>>>>>> *Subject: *[Openstack] No IP assigned to Instances
>>>>>>>
>>>>>>> Hi all,
>>>>>>> I'm integrating OpenDaylight with OpenStack via DevStack.
>>>>>>> On a server I have 5 VMs:
>>>>>>>
>>>>>>> VM1: eth0 10.30.3.231
>>>>>>>          eth1 10.0.0.1
>>>>>>>          This is my OpenStack Controller+compute.
>>>>>>>
>>>>>>> VM2: eth0 10.30.3.232
>>>>>>>          eth1 10.0.0.2
>>>>>>>          Compute node
>>>>>>>
>>>>>>> VM3: eth0 10.30.3.233
>>>>>>>          eth1 10.0.0.3
>>>>>>>          Compute node
>>>>>>>
>>>>>>> VM4: eth0 10.30.3.234
>>>>>>>          ODL Controller
>>>>>>>
>>>>>>> VM7: eth0 10.30.3.237
>>>>>>>          eth1
>>>>>>>          eth2
>>>>>>>          eth3
>>>>>>>          This is my OpenVSwitch
>>>>>>>
>>>>>>> I was able to build the setup, but when I create instances no IP
>>>>>>> address is assigned, so I can't ping them.
>>>>>>>
>>>>>>> It seems that DHCP fails:
>>>>>>>
>>>>>>> Starting network...
>>>>>>> udhcpc (v1.20.1) started
>>>>>>> Sending discover...
>>>>>>> Sending discover...
>>>>>>> Sending discover...
>>>>>>> Usage: /sbin/cirros-dhcpc <up|down>
>>>>>>> No lease, failing
>>>>>>>
>>>>>>> I've checked neutron agent-list and both the metadata agent and the
>>>>>>> DHCP agent are there.
>>>>>>>
>>>>>>> In my local.conf I have disabled the L3 agent:
>>>>>>>
>>>>>>> disable_service q-l3
>>>>>>> Q_L3_ENABLED=True
>>>>>>> ODL_L3=True
>>>>>>>
>>>>>>>
>>>>>>> Could you please help me?
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Silvia Fichera
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Silvia Fichera
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Silvia Fichera
>>>>>
>>>>> _______________________________________________
>>>>> ovsdb-dev mailing list
>>>>> ovsdb-dev at lists.opendaylight.org
>>>>> https://lists.opendaylight.org/mailman/listinfo/ovsdb-dev
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Silvia Fichera
>>>
>>
>>
>
>
> --
> Silvia Fichera
>



-- 
Silvia Fichera
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cap
Type: application/octet-stream
Size: 206393 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack/attachments/20151127/7dd1ed88/attachment.obj>

