[openstack-dev] [Quantum] security groups now enforced w/devstack

Yoshihiro Kaneko ykaneko0929 at gmail.com
Mon Sep 24 07:44:04 UTC 2012


Hi Dan,

2012/9/21 Dan Wendlandt <dan at nicira.com>:
> On Fri, Sep 21, 2012 at 12:49 AM, Yoshihiro Kaneko
> <ykaneko0929 at gmail.com> wrote:
>> Hi Akihiro,
>>
>> Does Security Groups function?
>> The VM is now able to obtain an IP address by DHCP, but Security
>> Groups still does not work. There are nova-compute chains for the
>> instance, but they do not match any packets.
>
> It certainly works in my setup, so the trick is figuring out the
> difference, I guess.  How are you pinging the VM?  Directly via a
> fixed IP (e.g., pinging from within the namespace?) or via a floating
> IP?  One possible issue is that the latter requires Open vSwitch to be
> version 1.4.3, due to a bug in OVS.  We're still waiting on this code
> to work its way through the Precise updates (I believe), but you can
> access it directly via the main Folsom testing PPA:
> https://launchpad.net/~openstack-ubuntu-testing/+archive/folsom-trunk-testing

Sorry, my mistake. Security Groups does work.
Last time, I pinged a floating IP from the host itself; it seems that
those packets do not match the iptables rules.
When I pinged from a remote host, it worked correctly.
Pinging from within the namespace was accepted by the following rule of
the nova-compute-inst-N chain:
    0     0 ACCEPT     all  --  *      *       10.0.0.0/24          0.0.0.0/0
I tried OVS versions 1.4.0-1ubuntu1.2 and 1.4.0-1ubuntu1.3~ppa; both
worked.

>
>>
>> Security Groups creates iptables rules in the root namespace, and
>> those rules use the fixed IP address of the target instance as the
>> destination. On the other hand, the Quantum L3 agent creates SNAT rules
>> in the qrouter namespace, so I think the fixed IP address is not visible
>> in the root namespace.
>
> I don't believe this is the cause of the issue, as the vifs that
> represent the VM are actually in the root namespace, as are the
> iptables rules that filter traffic to/from them.
>
>> In addition, I think that Security Groups does not work correctly when
>> there is more than one VM with the same fixed IP address. If so, we
>> have to choose between Security Groups and overlapping IP ranges.
>
> That is correct. We're already doing some brainstorming to see if
> there's any non-invasive change we might be able to make here to allow
> both to co-exist.

I see.
Is that expected to land in Grizzly?
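To illustrate the conflict discussed above, here is a toy sketch in Python (not nova code; all names here are made up for illustration): the security group rules match on an instance's fixed IP alone, so two instances on different networks with overlapping ranges that share a fixed IP become indistinguishable to the filter, while a key that also includes the network or port would keep them distinct.

```python
# Toy model: security group rules keyed on the fixed IP alone cannot
# distinguish two instances that share that IP on overlapping networks.
ports = [
    {"instance": "vm1", "network": "net-a", "fixed_ip": "10.0.0.3"},
    {"instance": "vm2", "network": "net-b", "fixed_ip": "10.0.0.3"},
]

def ip_rule_key(port):
    # how the iptables rules in the root namespace match traffic today
    return port["fixed_ip"]

def port_rule_key(port):
    # a per-network/per-port key (one conceivable direction) stays distinct
    return (port["network"], port["fixed_ip"])

print(ip_rule_key(ports[0]) == ip_rule_key(ports[1]))      # True: ambiguous
print(port_rule_key(ports[0]) == port_rule_key(ports[1]))  # False: distinct
```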

Thanks,
Kaneko

>
> Dan
>
>
>>
>> Thanks,
>> Kaneko
>>
>> 2012/9/20 Yoshihiro Kaneko <ykaneko0929 at gmail.com>:
>>> Hi Akihiro,
>>>
>>> I submitted bug report.
>>>   https://bugs.launchpad.net/nova/+bug/1053312
>>> Please add comment for the details.
>>>
>>> Thanks,
>>> Kaneko
>>>
>>> 2012/9/20 Yoshihiro Kaneko <ykaneko0929 at gmail.com>:
>>>> Hi Akihiro,
>>>>
>>>> 2012/9/20 Akihiro MOTOKI <amotoki at gmail.com>:
>>>>> Hi Yoshihiro,
>>>>>
>>>>> Thanks for testing.
>>>>> Have you already filed a bug for this problem? If not yet, could you
>>>>> file it as a nova bug?
>>>>> It is a "folsom-rc-potential" bug and needs to be fixed I think.
>>>>
>>>> Sure, will do.
>>>>
>>>>>
>>>>> Thanks,
>>>>>
>>>>> 2012/9/20 Yoshihiro Kaneko <ykaneko0929 at gmail.com>:
>>>>>> 2012/9/20 Akihiro MOTOKI <motoki at da.jp.nec.com>:
>>>>>>> Hi Dan, Yoshihiro,
>>>>>>>
>>>>>>> I investigated the problem Yoshihiro reported more.
>>>>>>>
>>>>>>> When nova-compute launches an instance on KVM, the vif driver's plug() is
>>>>>>> called twice, according to the nova-compute log [1].
>>>>>>>
>>>>>>> LibvirtHybridOVSBridgeDriver's plug() calls
>>>>>>> nova.network.linux_net._create_veth_pair()
>>>>>>> to create the veth pair. _create_veth_pair() checks whether the specified
>>>>>>> device exists, and if it does, it first deletes the device and then
>>>>>>> (re)creates it.
>>>>>>> This may be the cause of the problem.
>>>>>>>
>>>>>>> After the following change [2], this problem seems to be addressed.
>>>>>>> If this direction is OK, I will post a patch to gerrit.
>>>>>>>
>>>>>>> Yoshihiro,
>>>>>>> Could you test my patch on your environment?
>>>>>>
>>>>>> Great!
>>>>>> The qvo device is now listed in "brctl show" and "ovs-ofctl show", and
>>>>>> the VM obtained an IP address by DHCP.
>>>>>>
>>>>>> $ sudo ovs-vsctl show
>>>>>> 6d7929f6-8b09-41bc-b775-1ef50915eb7b
>>>>>>     Bridge br-int
>>>>>>         Port "tap94dddc19-c7"
>>>>>>             tag: 1
>>>>>>             Interface "tap94dddc19-c7"
>>>>>>                 type: internal
>>>>>>         Port "qvo4307567c-49"
>>>>>>             tag: 1
>>>>>>             Interface "qvo4307567c-49"
>>>>>>         Port "qr-56b7fd42-93"
>>>>>>             tag: 1
>>>>>>             Interface "qr-56b7fd42-93"
>>>>>>                 type: internal
>>>>>>         Port br-int
>>>>>>             Interface br-int
>>>>>>                 type: internal
>>>>>>     Bridge br-ex
>>>>>>         Port br-ex
>>>>>>             Interface br-ex
>>>>>>                 type: internal
>>>>>>         Port "eth1"
>>>>>>             Interface "eth1"
>>>>>>         Port "qg-e399dae7-66"
>>>>>>             Interface "qg-e399dae7-66"
>>>>>>                 type: internal
>>>>>>     ovs_version: "1.4.0+build0"
>>>>>> $ brctl show
>>>>>> bridge name     bridge id               STP enabled     interfaces
>>>>>> br-ex           0000.ae454d673043       no              qg-e399dae7-66
>>>>>> br-int          0000.a6c5c5495143       no              qr-56b7fd42-93
>>>>>>                                                         qvo4307567c-49
>>>>>>                                                         tap94dddc19-c7
>>>>>> qbr4307567c-49          8000.3ab31e81960e       no              qvb4307567c-49
>>>>>>                                                         vnet0
>>>>>> virbr0          8000.000000000000       yes
>>>>>> virbr1          8000.525400d8a792       yes             virbr1-nic
>>>>>> $ sudo ovs-ofctl show br-int
>>>>>> OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:0000a6c5c5495143
>>>>>> n_tables:255, n_buffers:256
>>>>>> features: capabilities:0xc7, actions:0xfff
>>>>>>  1(tap94dddc19-c7): addr:78:00:00:00:00:00
>>>>>>      config:     PORT_DOWN
>>>>>>      state:      LINK_DOWN
>>>>>>  2(qr-56b7fd42-93): addr:78:00:00:00:00:00
>>>>>>      config:     PORT_DOWN
>>>>>>      state:      LINK_DOWN
>>>>>>  3(qvo4307567c-49): addr:fe:9f:c2:68:9b:7a
>>>>>>      config:     0
>>>>>>      state:      0
>>>>>>      current:    10GB-FD COPPER
>>>>>>  LOCAL(br-int): addr:a6:c5:c5:49:51:43
>>>>>>      config:     PORT_DOWN
>>>>>>      state:      LINK_DOWN
>>>>>> OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0
>>>>>>
>>>>>> Thanks,
>>>>>> Kaneko
>>>>>>
>>>>>>>
>>>>>>> [1]
>>>>>>> $ grep "qvo612b2502-6e" ~/logs/screen-n-cpu.log | grep ^Running
>>>>>>> Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf ip link
>>>>>>> show dev qvo612b2502-6e
>>>>>>> Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf ip link
>>>>>>> add qvb612b2502-6e type veth peer name qvo612b2502-6e
>>>>>>> Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf ip link
>>>>>>> set qvo612b2502-6e up
>>>>>>> Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf ip link
>>>>>>> set qvo612b2502-6e promisc on
>>>>>>> Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf
>>>>>>> ovs-vsctl -- --may-exist add-port br-int qvo612b2502-6e -- set Interface
>>>>>>> qvo612b2502-6e external-ids:iface-id=612b2502-6e8c-4a78-aedd-c5cb08738564
>>>>>>> external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:97:e4:aa
>>>>>>> external-ids:vm-uuid=f0321c00-8ef0-4a18-9921-8452236877f2
>>>>>>> Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf ip link
>>>>>>> show dev qvo612b2502-6e
>>>>>>> Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf ip link
>>>>>>> add qvb612b2502-6e type veth peer name qvo612b2502-6e
>>>>>>> Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf ip link
>>>>>>> set qvo612b2502-6e up
>>>>>>> Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf ip link
>>>>>>> set qvo612b2502-6e promisc on
>>>>>>> Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf
>>>>>>> ovs-vsctl -- --may-exist add-port br-int qvo612b2502-6e -- set Interface
>>>>>>> qvo612b2502-6e external-ids:iface-id=612b2502-6e8c-4a78-aedd-c5cb08738564
>>>>>>> external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:97:e4:aa
>>>>>>> external-ids:vm-uuid=f0321c00-8ef0-4a18-9921-8452236877f2
>>>>>>>
>>>>>>> [2]
>>>>>>> diff --git a/nova/virt/libvirt/vif.py b/nova/virt/libvirt/vif.py
>>>>>>> index 1a64765..ea0834d 100644
>>>>>>> --- a/nova/virt/libvirt/vif.py
>>>>>>> +++ b/nova/virt/libvirt/vif.py
>>>>>>> @@ -212,15 +212,15 @@ class
>>>>>>> LibvirtHybridOVSBridgeDriver(LibvirtBridgeDriver,
>>>>>>>          br_name = self.get_br_name(iface_id)
>>>>>>>          v1_name, v2_name = self.get_veth_pair_names(iface_id)
>>>>>>>
>>>>>>> -        linux_net._create_veth_pair(v1_name, v2_name)
>>>>>>> -
>>>>>>>          if not linux_net._device_exists(br_name):
>>>>>>>              utils.execute('brctl', 'addbr', br_name, run_as_root=True)
>>>>>>>
>>>>>>> -        utils.execute('ip', 'link', 'set', br_name, 'up', run_as_root=True)
>>>>>>> -        utils.execute('brctl', 'addif', br_name, v1_name, run_as_root=True)
>>>>>>> -        self.create_ovs_vif_port(v2_name, iface_id, mapping['mac'],
>>>>>>> -                                 instance['uuid'])
>>>>>>> +        if not linux_net._device_exists(v2_name):
>>>>>>> +            linux_net._create_veth_pair(v1_name, v2_name)
>>>>>>> +            utils.execute('ip', 'link', 'set', br_name, 'up',
>>>>>>> run_as_root=True)
>>>>>>> +            utils.execute('brctl', 'addif', br_name, v1_name,
>>>>>>> run_as_root=True)
>>>>>>> +            self.create_ovs_vif_port(v2_name, iface_id, mapping['mac'],
>>>>>>> +                                     instance['uuid'])
>>>>>>>
>>>>>>>          network['bridge'] = br_name
>>>>>>>          return self._get_configurations(instance, network, mapping)
>>>>>>>
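A standalone sketch of why the patch above helps (in-memory stand-ins for ip/brctl/ovs-vsctl; none of these names are real nova APIs): plug() runs twice per VIF, _create_veth_pair() deletes and recreates an existing veth, which detaches it from br-int, and the second "ovs-vsctl -- --may-exist add-port" is a no-op because the port record already exists in the OVS database, so the recreated device is never re-attached. Guarding on device existence makes plug() idempotent.

```python
devices = set()    # kernel netdevs that currently exist
ovs_db = set()     # port names recorded in the OVS database
attached = set()   # netdevs actually wired to br-int

def create_veth_pair(v1, v2):
    # linux_net._create_veth_pair deletes an existing device before
    # recreating it; deleting the netdev detaches it from the bridge.
    for dev in (v1, v2):
        devices.discard(dev)
        attached.discard(dev)
        devices.add(dev)

def add_port_may_exist(name):
    # 'ovs-vsctl -- --may-exist add-port' is a no-op when the DB record
    # exists, even though the freshly recreated netdev is not attached.
    if name not in ovs_db:
        ovs_db.add(name)
        attached.add(name)

def plug(v1, v2, check_device_exists):
    if check_device_exists and v2 in devices:
        return  # the patched behavior: leave an existing veth alone
    create_veth_pair(v1, v2)
    add_port_may_exist(v2)

# Unpatched: the second plug() recreates the veth and qvo falls off br-int.
plug("qvb1", "qvo1", check_device_exists=False)
plug("qvb1", "qvo1", check_device_exists=False)
print("qvo1" in attached)  # False

# Patched: the second plug() is a no-op and qvo stays attached.
devices.clear(); ovs_db.clear(); attached.clear()
plug("qvb1", "qvo1", check_device_exists=True)
plug("qvb1", "qvo1", check_device_exists=True)
print("qvo1" in attached)  # True
```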
>>>>>>>
>>>>>>> (2012/09/20 13:13), Yoshihiro Kaneko wrote:
>>>>>>>>
>>>>>>>> Has nobody else encountered this problem? Only me?
>>>>>>>> I will try this in another fresh environment.
>>>>>>>>
>>>>>>>> Kaneko
>>>>>>>>
>>>>>>>> 2012/9/19 Yoshihiro Kaneko <ykaneko0929 at gmail.com>:
>>>>>>>>>
>>>>>>>>> I found that qvo's interface type was not "internal", so I set
>>>>>>>>> "type=internal" on qvo manually.
>>>>>>>>> As a result, qvo appeared in the output of "brctl show" and the VM
>>>>>>>>> was able to obtain an IP address by DHCP.
>>>>>>>>> Is this correct?
>>>>>>>>>
>>>>>>>>> Before set interface type
>>>>>>>>> ----------
>>>>>>>>> $ sudo ovs-vsctl show
>>>>>>>>> 6d7929f6-8b09-41bc-b775-1ef50915eb7b
>>>>>>>>>      Bridge br-int
>>>>>>>>>          Port "tap8c462a80-f6"
>>>>>>>>>              tag: 1
>>>>>>>>>              Interface "tap8c462a80-f6"
>>>>>>>>>                  type: internal
>>>>>>>>>          Port "qvo076911a5-59"
>>>>>>>>>              tag: 1
>>>>>>>>>              Interface "qvo076911a5-59"
>>>>>>>>>          Port br-int
>>>>>>>>>              Interface br-int
>>>>>>>>>                  type: internal
>>>>>>>>>          Port "qr-6dd16ab9-d6"
>>>>>>>>>              tag: 1
>>>>>>>>>              Interface "qr-6dd16ab9-d6"
>>>>>>>>>                  type: internal
>>>>>>>>>      Bridge br-ex
>>>>>>>>>          Port br-ex
>>>>>>>>>              Interface br-ex
>>>>>>>>>                  type: internal
>>>>>>>>>          Port "qg-de1db666-3d"
>>>>>>>>>              Interface "qg-de1db666-3d"
>>>>>>>>>                  type: internal
>>>>>>>>>          Port "eth1"
>>>>>>>>>              Interface "eth1"
>>>>>>>>>      ovs_version: "1.4.0+build0"
>>>>>>>>> $ brctl show
>>>>>>>>> bridge name     bridge id               STP enabled     interfaces
>>>>>>>>> br-ex           0000.ae454d673043       no              qg-de1db666-3d
>>>>>>>>> br-int          0000.c6d8e9064848       no              qr-6dd16ab9-d6
>>>>>>>>>                                                          tap8c462a80-f6
>>>>>>>>> qbr076911a5-59          8000.26397c9d0d15       no
>>>>>>>>> qvb076911a5-59
>>>>>>>>>                                                          vnet0
>>>>>>>>> virbr0          8000.000000000000       yes
>>>>>>>>> virbr1          8000.525400d8a792       yes             virbr1-nic
>>>>>>>>> ----------
>>>>>>>>>
>>>>>>>>> Set interface type to qvo
>>>>>>>>> ----------
>>>>>>>>> $ sudo ovs-vsctl set Interface qvo076911a5-59 type=internal
>>>>>>>>> $ sudo ovs-vsctl show
>>>>>>>>> 6d7929f6-8b09-41bc-b775-1ef50915eb7b
>>>>>>>>>      Bridge br-int
>>>>>>>>>          Port "tap8c462a80-f6"
>>>>>>>>>              tag: 1
>>>>>>>>>              Interface "tap8c462a80-f6"
>>>>>>>>>                  type: internal
>>>>>>>>>          Port "qvo076911a5-59"
>>>>>>>>>              tag: 1
>>>>>>>>>              Interface "qvo076911a5-59"
>>>>>>>>>                  type: internal
>>>>>>>>>          Port br-int
>>>>>>>>>              Interface br-int
>>>>>>>>>                  type: internal
>>>>>>>>>          Port "qr-6dd16ab9-d6"
>>>>>>>>>              tag: 1
>>>>>>>>>              Interface "qr-6dd16ab9-d6"
>>>>>>>>>                  type: internal
>>>>>>>>>      Bridge br-ex
>>>>>>>>>          Port br-ex
>>>>>>>>>              Interface br-ex
>>>>>>>>>                  type: internal
>>>>>>>>>          Port "qg-de1db666-3d"
>>>>>>>>>              Interface "qg-de1db666-3d"
>>>>>>>>>                  type: internal
>>>>>>>>>          Port "eth1"
>>>>>>>>>              Interface "eth1"
>>>>>>>>>      ovs_version: "1.4.0+build0"
>>>>>>>>> $ brctl show
>>>>>>>>> bridge name     bridge id               STP enabled     interfaces
>>>>>>>>> br-ex           0000.ae454d673043       no              qg-de1db666-3d
>>>>>>>>> br-int          0000.c6d8e9064848       no              qr-6dd16ab9-d6
>>>>>>>>>                                                          qvo076911a5-59
>>>>>>>>>                                                          tap8c462a80-f6
>>>>>>>>> qbr076911a5-59          8000.26397c9d0d15       no
>>>>>>>>> qvb076911a5-59
>>>>>>>>>                                                          vnet0
>>>>>>>>> virbr0          8000.000000000000       yes
>>>>>>>>> virbr1          8000.525400d8a792       yes             virbr1-nic
>>>>>>>>> ----------
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>>
>>>>>>>>> Kaneko
>>>>>>>>>
>>>>>>>>> 2012/9/19 Yoshihiro Kaneko <ykaneko0929 at gmail.com>:
>>>>>>>>>>
>>>>>>>>>> I re-added qvo to br-int manually, and it then appeared in the output of
>>>>>>>>>> "brctl show", and the VM was able to obtain an IP address by DHCP.
>>>>>>>>>> What I did is as follows:
>>>>>>>>>> sudo ovs-vsctl del-port qvoXXXX
>>>>>>>>>> sudo ovs-vsctl add-port br-int qvoXXXX
>>>>>>>>>> sudo ovs-vsctl set port qvoXXXX tag=1
>>>>>>>>>>
>>>>>>>>>> When I restarted devstack without re-adding qvo, the problem was
>>>>>>>>>> reproduced: the VM could not obtain an IP address from the DHCP server.
>>>>>>>>>>
>>>>>>>>>> Then I re-added qvo manually again, this time running the same commands
>>>>>>>>>> the vif driver recorded in syslog, and the VM obtained an IP address by
>>>>>>>>>> DHCP again.
>>>>>>>>>>
>>>>>>>>>> sudo ovs-vsctl del-port qvoXXXX
>>>>>>>>>> sudo ovs-vsctl -- --may-exist add-port br-int qvoXXX -- set Interface
>>>>>>>>>> qvoXXXX
>>>>>>>>>> external-ids:iface-id=XXXX external-ids:iface-status=active
>>>>>>>>>> external-ids:attached-mac=XXXX external-ids:vm-uuid=XXXX
>>>>>>>>>> (VLAN tag was added automatically)
>>>>>>>>>>
>>>>>>>>>> What should I do next?
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>>
>>>>>>>>>> Kaneko
>>>>>>>>>>
>>>>>>>>>> 2012/9/18 Yoshihiro Kaneko <ykaneko0929 at gmail.com>:
>>>>>>>>>>>
>>>>>>>>>>> 2012/9/18 Dan Wendlandt <dan at nicira.com>:
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Sep 17, 2012 at 11:12 PM, Yoshihiro Kaneko
>>>>>>>>>>>> <ykaneko0929 at gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Dan,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks for the reply.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This is the result of "brctl show".
>>>>>>>>>>>>> ----------
>>>>>>>>>>>>> $ brctl show
>>>>>>>>>>>>> bridge name     bridge id               STP enabled     interfaces
>>>>>>>>>>>>> br-ex           0000.6e47647aee44       no
>>>>>>>>>>>>> qg-8c3a652b-f1
>>>>>>>>>>>>> br-int          0000.4af0ded34a40       no
>>>>>>>>>>>>> qr-f9a49be5-c1
>>>>>>>>>>>>>
>>>>>>>>>>>>> tap5a418ee0-53
>>>>>>>>>>>>> qbr99eea189-fc          8000.5e15f4c7d4f2       no
>>>>>>>>>>>>> qvb99eea189-fc
>>>>>>>>>>>>>                                                          vnet0
>>>>>>>>>>>>> virbr0          8000.000000000000       yes
>>>>>>>>>>>>> virbr1          8000.525400d8a792       yes             virbr1-nic
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> can you send the output of
>>>>>>>>>>>>
>>>>>>>>>>>> ovs-vsctl list-ports br-int (or ovs-vsctl show)?
>>>>>>>>>>>>
>>>>>>>>>>>> It's odd that the above output does not include qvo99eea189-fc, which
>>>>>>>>>>>> should be attached to br-int.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> This is the output. I restarted devstack, so the interface names are
>>>>>>>>>>> different from the above.
>>>>>>>>>>> There is no "qvo" in the output of "brctl show", but it exists in
>>>>>>>>>>> "ovs-vsctl list-ports br-int" (and "ovs-vsctl show"), and it is not
>>>>>>>>>>> found in "ovs-dpctl show br-int".
>>>>>>>>>>>
>>>>>>>>>>> ----------
>>>>>>>>>>> $ sudo brctl show
>>>>>>>>>>> bridge name     bridge id               STP enabled     interfaces
>>>>>>>>>>> br-ex           0000.2a93351db24b       no              qg-669c94f9-41
>>>>>>>>>>> br-int          0000.363636554a4e       no              qr-185216c4-b7
>>>>>>>>>>>                                                          tap4061a128-21
>>>>>>>>>>> qbra5838945-25          8000.d639643f11ef       no
>>>>>>>>>>> qvba5838945-25
>>>>>>>>>>>                                                          vnet0
>>>>>>>>>>> virbr0          8000.000000000000       yes
>>>>>>>>>>> virbr1          8000.525400d8a792       yes             virbr1-nic
>>>>>>>>>>> $
>>>>>>>>>>>
>>>>>>>>>>> $ sudo ovs-vsctl list-ports br-int
>>>>>>>>>>> qr-185216c4-b7
>>>>>>>>>>> qvoa5838945-25
>>>>>>>>>>> tap4061a128-21
>>>>>>>>>>> $
>>>>>>>>>>> $ sudo ovs-dpctl show br-int
>>>>>>>>>>> system at br-int:
>>>>>>>>>>>          lookups: hit:7 missed:14 lost:0
>>>>>>>>>>>          flows: 0
>>>>>>>>>>>          port 0: br-int (internal)
>>>>>>>>>>> Sep 18
>>>>>>>>>>> 16:35:45|00001|netdev_linux|WARN|/sys/class/net/tap4061a128-21/carrier:
>>>>>>>>>>> open failed: No such file or directory
>>>>>>>>>>>          port 1: tap4061a128-21 (internal)
>>>>>>>>>>> Sep 18
>>>>>>>>>>> 16:35:45|00002|netdev_linux|WARN|/sys/class/net/qr-185216c4-b7/carrier:
>>>>>>>>>>> open failed: No such file or directory
>>>>>>>>>>>          port 2: qr-185216c4-b7 (internal)
>>>>>>>>>>> ----------
>>>>>>>>>>>
>>>>>>>>>>> Kaneko
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> dan
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> ----------
>>>>>>>>>>>>>
>>>>>>>>>>>>> Following is the result that was running tcpdump on each terminal
>>>>>>>>>>>>> concurrently.
>>>>>>>>>>>>> I don't know why each DHCP packet was received twice on vnet0, and
>>>>>>>>>>>>> the tap device used by dnsmasq did not receive any UDP packets in
>>>>>>>>>>>>> the qdhcp- namespace.
>>>>>>>>>>>>> ----------
>>>>>>>>>>>>> # tcpdump -ni vnet0 udp
>>>>>>>>>>>>> tcpdump: WARNING: vnet0: no IPv4 address assigned
>>>>>>>>>>>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol
>>>>>>>>>>>>> decode
>>>>>>>>>>>>> listening on vnet0, link-type EN10MB (Ethernet), capture size 65535
>>>>>>>>>>>>> bytes
>>>>>>>>>>>>> 12:00:56.619340 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP,
>>>>>>>>>>>>> Request from fa:16:3e:48:aa:85, length 280
>>>>>>>>>>>>> 12:00:56.619423 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP,
>>>>>>>>>>>>> Request from fa:16:3e:48:aa:85, length 280
>>>>>>>>>>>>> 12:00:59.624796 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP,
>>>>>>>>>>>>> Request from fa:16:3e:48:aa:85, length 280
>>>>>>>>>>>>> 12:00:59.624858 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP,
>>>>>>>>>>>>> Request from fa:16:3e:48:aa:85, length 280
>>>>>>>>>>>>> 12:01:02.630474 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP,
>>>>>>>>>>>>> Request from fa:16:3e:48:aa:85, length 280
>>>>>>>>>>>>> 12:01:02.630564 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP,
>>>>>>>>>>>>> Request from fa:16:3e:48:aa:85, length 280
>>>>>>>>>>>>>
>>>>>>>>>>>>> # tcpdump -ni qvb99eea189-fc udp
>>>>>>>>>>>>> tcpdump: WARNING: qvb99eea189-fc: no IPv4 address assigned
>>>>>>>>>>>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol
>>>>>>>>>>>>> decode
>>>>>>>>>>>>> listening on qvb99eea189-fc, link-type EN10MB (Ethernet), capture
>>>>>>>>>>>>> size
>>>>>>>>>>>>> 65535 bytes
>>>>>>>>>>>>> 12:00:56.619436 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP,
>>>>>>>>>>>>> Request from fa:16:3e:48:aa:85, length 280
>>>>>>>>>>>>> 12:00:59.624872 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP,
>>>>>>>>>>>>> Request from fa:16:3e:48:aa:85, length 280
>>>>>>>>>>>>> 12:01:02.630586 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP,
>>>>>>>>>>>>> Request from fa:16:3e:48:aa:85, length 280
>>>>>>>>>>>>>
>>>>>>>>>>>>> # tcpdump -ni qvo99eea189-fc udp
>>>>>>>>>>>>> tcpdump: WARNING: qvo99eea189-fc: no IPv4 address assigned
>>>>>>>>>>>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol
>>>>>>>>>>>>> decode
>>>>>>>>>>>>> listening on qvo99eea189-fc, link-type EN10MB (Ethernet), capture
>>>>>>>>>>>>> size
>>>>>>>>>>>>> 65535 bytes
>>>>>>>>>>>>> 12:00:56.619467 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP,
>>>>>>>>>>>>> Request from fa:16:3e:48:aa:85, length 280
>>>>>>>>>>>>> 12:00:59.624904 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP,
>>>>>>>>>>>>> Request from fa:16:3e:48:aa:85, length 280
>>>>>>>>>>>>> 12:01:02.630633 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP,
>>>>>>>>>>>>> Request from fa:16:3e:48:aa:85, length 280
>>>>>>>>>>>>>
>>>>>>>>>>>>> # tcpdump -ni tap5a418ee0-53 udp
>>>>>>>>>>>>> tcpdump: tap5a418ee0-53: No such device exists
>>>>>>>>>>>>> (SIOCGIFHWADDR: No such device)
>>>>>>>>>>>>> # ip netns exec qdhcp-c95edcb8-3df6-4e92-a367-4fce5bce6e63 tcpdump
>>>>>>>>>>>>> -ni
>>>>>>>>>>>>> tap5a418ee0-53 udp
>>>>>>>>>>>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol
>>>>>>>>>>>>> decode
>>>>>>>>>>>>> listening on tap5a418ee0-53, link-type EN10MB (Ethernet), capture
>>>>>>>>>>>>> size
>>>>>>>>>>>>> 65535 bytes
>>>>>>>>>>>>> ----------
>>>>>>>>>>>>>
>>>>>>>>>>>>> When using
>>>>>>>>>>>>> "libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver",
>>>>>>>>>>>>> the VM can obtain an IP address by DHCP.
>>>>>>>>>>>>> ----------
>>>>>>>>>>>>> $ grep libvirt_vif_driver /etc/nova/nova.conf
>>>>>>>>>>>>> libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
>>>>>>>>>>>>> $ nova console-log vm1
>>>>>>>>>>>>> <snip>
>>>>>>>>>>>>> Starting network...
>>>>>>>>>>>>> udhcpc (v1.18.5) started
>>>>>>>>>>>>> Sending discover...
>>>>>>>>>>>>> Sending select for 10.0.0.3...
>>>>>>>>>>>>> Lease of 10.0.0.3 obtained, lease time 120
>>>>>>>>>>>>> deleting routers
>>>>>>>>>>>>> route: SIOCDELRT: No such process
>>>>>>>>>>>>> adding dns 10.0.0.2
>>>>>>>>>>>>> <snip>
>>>>>>>>>>>>> ----------
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Kaneko
>>>>>>>>>>>>>
>>>>>>>>>>>>> 2012/9/17 Dan Wendlandt <dan at nicira.com>:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Kaneko,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks for the detailed report.  At a glance, things look correct.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I'd suggest running tcpdump on various interfaces to figure out how
>>>>>>>>>>>>>> far the DHCP requests/responses are getting.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> tcpdump -ni vnetX udp (replace vnetX with correct device... can't
>>>>>>>>>>>>>> tell
>>>>>>>>>>>>>> from your output.  try running brctl show).
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> tcpdump -ni qvb99eea189-fc udp   (veth device on VIF-specific
>>>>>>>>>>>>>> bridge)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> tcpdump -ni qvo99eea189-fc udp   (veth device on main OVS bridge)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> tcpdump -ni tap5a418ee0-53 udp (tap device used by dnsmasq)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Another interesting data point would be whether you only see these
>>>>>>>>>>>>>> issues when using
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> "libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver",
>>>>>>>>>>>>>> or whether you also see it with
>>>>>>>>>>>>>> "libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver".
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Dan
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, Sep 14, 2012 at 3:13 AM, Yoshihiro Kaneko
>>>>>>>>>>>>>> <ykaneko0929 at gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi Dan,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I tried it on Ubuntu 12.04, but the VM could not obtain an IP
>>>>>>>>>>>>>>> address from the DHCP server.
>>>>>>>>>>>>>>> Any advice?
>>>>>>>>>>>>>>> If more information is needed, please let me know.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> $ git clone git://github.com/openstack-dev/devstack
>>>>>>>>>>>>>>> Cloning into 'devstack'...
>>>>>>>>>>>>>>> remote: Counting objects: 6415, done.
>>>>>>>>>>>>>>> remote: Compressing objects: 100% (2186/2186), done.
>>>>>>>>>>>>>>> remote: Total 6415 (delta 4352), reused 6145 (delta 4151)
>>>>>>>>>>>>>>> Receiving objects: 100% (6415/6415), 1.10 MiB | 470 KiB/s, done.
>>>>>>>>>>>>>>> Resolving deltas: 100% (4352/4352), done.
>>>>>>>>>>>>>>> $ cd devstack/
>>>>>>>>>>>>>>> $ git fetch https://review.openstack.org/openstack-dev/devstack
>>>>>>>>>>>>>>> refs/changes/50/11650/8 && git checkout FETCH_HEAD
>>>>>>>>>>>>>>> remote: Counting objects: 5, done
>>>>>>>>>>>>>>> remote: Finding sources: 100% (3/3)
>>>>>>>>>>>>>>> remote: Total 3 (delta 2), reused 3 (delta 2)
>>>>>>>>>>>>>>> Unpacking objects: 100% (3/3), done.
>>>>>>>>>>>>>>>  From https://review.openstack.org/openstack-dev/devstack
>>>>>>>>>>>>>>>   * branch            refs/changes/50/11650/8 -> FETCH_HEAD
>>>>>>>>>>>>>>> Note: checking out 'FETCH_HEAD'.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> You are in 'detached HEAD' state. You can look around, make
>>>>>>>>>>>>>>> experimental
>>>>>>>>>>>>>>> changes and commit them, and you can discard any commits you make
>>>>>>>>>>>>>>> in this
>>>>>>>>>>>>>>> state without impacting any branches by performing another
>>>>>>>>>>>>>>> checkout.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> If you want to create a new branch to retain commits you create,
>>>>>>>>>>>>>>> you may
>>>>>>>>>>>>>>> do so (now or later) by using -b with the checkout command again.
>>>>>>>>>>>>>>> Example:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>    git checkout -b new_branch_name
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> HEAD is now at cea6c51... Quantum enhancements
>>>>>>>>>>>>>>> $ vi localrc
>>>>>>>>>>>>>>> $ cat localrc
>>>>>>>>>>>>>>> disable_service n-net
>>>>>>>>>>>>>>> enable_service q-svc q-agt q-dhcp q-l3 quantum
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> $ cp samples/local.sh .
>>>>>>>>>>>>>>> $ ./stack.sh
>>>>>>>>>>>>>>> <snip>
>>>>>>>>>>>>>>> $ . ./openrc demo demo
>>>>>>>>>>>>>>> $ nova image-list
>>>>>>>>>>>>>>> <snip>
>>>>>>>>>>>>>>> $ quantum net-list
>>>>>>>>>>>>>>> <snip>
>>>>>>>>>>>>>>> $ nova boot --flavor 6 --image 7777d0ed-294c-4634-9304-769f64a52c81
>>>>>>>>>>>>>>> --nic net-id=c95edcb8-3df6-4e92-a367-4fce5bce6e63 vm1
>>>>>>>>>>>>>>> <snip>
>>>>>>>>>>>>>>> $ nova list
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> +--------------------------------------+------+--------+---------------+
>>>>>>>>>>>>>>> | ID                                   | Name | Status | Networks
>>>>>>>>>>>>>>> |
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> +--------------------------------------+------+--------+---------------+
>>>>>>>>>>>>>>> | 723d76c4-4dea-4fe4-a4d8-6ca8d08bb936 | vm1  | ACTIVE |
>>>>>>>>>>>>>>> net1=10.0.0.3 |
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> +--------------------------------------+------+--------+---------------+
>>>>>>>>>>>>>>> $ nova console-log vm1
>>>>>>>>>>>>>>> <snip>
>>>>>>>>>>>>>>> Starting network...
>>>>>>>>>>>>>>> udhcpc (v1.18.5) started
>>>>>>>>>>>>>>> Sending discover...
>>>>>>>>>>>>>>> Sending discover...
>>>>>>>>>>>>>>> Sending discover...
>>>>>>>>>>>>>>> No lease, failing
>>>>>>>>>>>>>>> WARN: /etc/rc3.d/S40-network failed
>>>>>>>>>>>>>>> <snip>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> $ sudo ovs-vsctl show
>>>>>>>>>>>>>>> 6d7929f6-8b09-41bc-b775-1ef50915eb7b
>>>>>>>>>>>>>>>      Bridge br-ex
>>>>>>>>>>>>>>>          Port "qg-8c3a652b-f1"
>>>>>>>>>>>>>>>              Interface "qg-8c3a652b-f1"
>>>>>>>>>>>>>>>                  type: internal
>>>>>>>>>>>>>>>          Port br-ex
>>>>>>>>>>>>>>>              Interface br-ex
>>>>>>>>>>>>>>>                  type: internal
>>>>>>>>>>>>>>>      Bridge br-int
>>>>>>>>>>>>>>>          Port "tap5a418ee0-53"
>>>>>>>>>>>>>>>              tag: 1
>>>>>>>>>>>>>>>              Interface "tap5a418ee0-53"
>>>>>>>>>>>>>>>                  type: internal
>>>>>>>>>>>>>>>          Port "qr-f9a49be5-c1"
>>>>>>>>>>>>>>>              tag: 1
>>>>>>>>>>>>>>>              Interface "qr-f9a49be5-c1"
>>>>>>>>>>>>>>>                  type: internal
>>>>>>>>>>>>>>>          Port "qvo99eea189-fc"
>>>>>>>>>>>>>>>              tag: 1
>>>>>>>>>>>>>>>              Interface "qvo99eea189-fc"
>>>>>>>>>>>>>>>          Port br-int
>>>>>>>>>>>>>>>              Interface br-int
>>>>>>>>>>>>>>>                  type: internal
>>>>>>>>>>>>>>>      ovs_version: "1.4.0+build0"
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> $ ip netns
>>>>>>>>>>>>>>> qrouter-aed60aac-288b-41ee-bc4a-9ed5a356a16d
>>>>>>>>>>>>>>> qdhcp-c95edcb8-3df6-4e92-a367-4fce5bce6e63
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> $ ps ax|grep dnsmasq
>>>>>>>>>>>>>>>   6107 ?        S      0:00 dnsmasq --no-hosts --no-resolv
>>>>>>>>>>>>>>> --strict-order --bind-interfaces --interface=tap5a418ee0-53
>>>>>>>>>>>>>>> --except-interface=lo --domain=openstacklocal
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --pid-file=/opt/stack/data/dhcp/c95edcb8-3df6-4e92-a367-4fce5bce6e63/pid
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --dhcp-hostsfile=/opt/stack/data/dhcp/c95edcb8-3df6-4e92-a367-4fce5bce6e63/host
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --dhcp-optsfile=/opt/stack/data/dhcp/c95edcb8-3df6-4e92-a367-4fce5bce6e63/opts
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --dhcp-script=/opt/stack/quantum/bin/quantum-dhcp-agent-dnsmasq-lease-update
>>>>>>>>>>>>>>> --leasefile-ro --dhcp-range=set:tag0,10.0.0.0,static,120s
>>>>>>>>>>>>>>>   6108 ?        S      0:00 dnsmasq --no-hosts --no-resolv
>>>>>>>>>>>>>>> --strict-order --bind-interfaces --interface=tap5a418ee0-53
>>>>>>>>>>>>>>> --except-interface=lo --domain=openstacklocal
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --pid-file=/opt/stack/data/dhcp/c95edcb8-3df6-4e92-a367-4fce5bce6e63/pid
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --dhcp-hostsfile=/opt/stack/data/dhcp/c95edcb8-3df6-4e92-a367-4fce5bce6e63/host
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --dhcp-optsfile=/opt/stack/data/dhcp/c95edcb8-3df6-4e92-a367-4fce5bce6e63/opts
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --dhcp-script=/opt/stack/quantum/bin/quantum-dhcp-agent-dnsmasq-lease-update
>>>>>>>>>>>>>>> --leasefile-ro --dhcp-range=set:tag0,10.0.0.0,static,120s
>>>>>>>>>>>>>>> 28913 pts/1    S+     0:00 grep --color=auto dnsmasq
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>> Kaneko
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 2012/9/14 Dan Wendlandt <dan at nicira.com>:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi quantum hackers,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> We're pushing a change to devstack to use a new vif-driver for
>>>>>>>>>>>>>>>> Quantum
>>>>>>>>>>>>>>>> with Open vSwitch (https://review.openstack.org/#/c/11650/).  The
>>>>>>>>>>>>>>>> benefit of this driver is that it is compatible with Nova's
>>>>>>>>>>>>>>>> security
>>>>>>>>>>>>>>>> group filtering.  This is "a good thing", since it more closely
>>>>>>>>>>>>>>>> maps
>>>>>>>>>>>>>>>> to how real users will deploy Quantum.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> However, this may catch by surprise developers who are
>>>>>>>>>>>>>>>> suddenly unable to ping or SSH to instances, because the
>>>>>>>>>>>>>>>> security groups drop traffic by default.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The preferred method of dealing with this is to add the
>>>>>>>>>>>>>>>> following lines to local.sh in your devstack directory, which
>>>>>>>>>>>>>>>> open up your VMs to ping and SSH for the 'demo' user:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
>>>>>>>>>>>>>>>> nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Another workaround is to disable nova security groups by adding
>>>>>>>>>>>>>>>> 'LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver'
>>>>>>>>>>>>>>>> to your localrc.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Dan
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>>>>>>>>>>>> Dan Wendlandt
>>>>>>>>>>>>>>>> Nicira, Inc: www.nicira.com
>>>>>>>>>>>>>>>> twitter: danwendlandt
>>>>>>>>>>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>>>> OpenStack-dev mailing list
>>>>>>>>>>>>>>>> OpenStack-dev at lists.openstack.org
>>>>>>>>>>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
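[Editor's note: for context, a sketch of where each of the two workarounds quoted above lives in a devstack tree, assuming the standard localrc/local.sh layout; use one approach or the other, not both.]

```shell
# local.sh -- devstack runs this after stack.sh completes; these two
# commands open ICMP (ping) and TCP port 22 (SSH) in the 'demo'
# tenant's 'default' security group
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

# localrc -- alternatively, set this before running stack.sh to
# disable nova security group filtering entirely:
# LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
```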
>>>>> --
>>>>> Akihiro MOTOKI <amotoki at gmail.com>