[openstack-dev] [Openstack] Instances running on VMware ESXi are unable to configure IP

Dan Wendlandt dan at nicira.com
Thu Sep 12 16:46:04 UTC 2013


Hi Rahul,

Thanks for the detailed description of your setup.

From my understanding of your diagram, you are trying to mix and match two
incompatible mechanisms: ESX networking and the OVS Neutron plugin with GRE
tunneling.

If you're just trying to get something simple working, you can use basic
nova-networking with ESX.  Otherwise, I'd suggest you check out an
ESX-compatible Neutron plugin (see the compatibility list here:
http://docs.openstack.org/trunk/openstack-network/admin/content/flexibility.html)
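
For reference, the nova-network route is mostly a nova.conf exercise. A
minimal sketch might look like the following; the option names are from the
Grizzly-era VMware driver docs and the values are placeholders, so verify
them against your release:

  # nova.conf (illustrative sketch, not a tested configuration)
  compute_driver = vmwareapi.VMwareESXDriver   # run nova-compute against ESX
  vmwareapi_host_ip = <esx-host-ip>            # placeholder
  vmwareapi_host_username = <user>             # placeholder
  vmwareapi_host_password = <password>         # placeholder
  network_manager = nova.network.manager.FlatDHCPManager
  flat_network_bridge = br100                  # maps to a port group on ESX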


Dan





On Thu, Sep 12, 2013 at 4:48 AM, Rahul Sharma <rahulsharmaait at gmail.com> wrote:

> Hi All,
>
> When we create a port-group “br-int” on ESX and launch an instance, the
> instance gets launched on ESX and is assigned the port-group br-int. Since
> this br-int is unable to communicate with the network node over GRE,
> communication fails. The “initial-setup” diagram below shows the
> connectivity of nova-compute placed on the ESX host and of the instances
> launched on that host:
>
> (diagram scrubbed from the list archive)
>
> To allow the VMs to communicate with the network node over GRE, we can
> assign one more NIC (eth2) to nova-compute, put br-int (ESX) in promiscuous
> mode, and add eth2 to “br-int” on nova-compute. A packet will then traverse
> VM -> br-int(esx) -> eth2(compute) -> br-int(compute) -> br-tun(compute) ->
> Network-Node (over the GRE tunnel). The diagram below illustrates this:
>
> (diagram scrubbed from the list archive)
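>
> As a rough sketch, the wiring on the nova-compute node could be done with
> standard OVS and iproute2 commands like the ones below (interface and
> bridge names as in the diagram; untested, for illustration only):
>
>   # attach the ESX-facing NIC to the integration bridge
>   ovs-vsctl add-port br-int eth2
>   # bring it up; it carries bridged traffic, so it needs no IP of its own
>   ip link set eth2 up
>
> (The promiscuous-mode setting for the ESX br-int is made in the vSphere
> port-group security policy, not on the compute node.)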
>
> *Still, this will not work, because the rules configured on the Open
> vSwitches (br-int and br-tun) will drop the packets!*
>
> The built-in Open vSwitch controller configures the vswitches to allow only
> the specific flows that match the rules installed on them. So even after
> adding eth2 to br-int, we also need to add generic rules to br-int and
> br-tun so that they pass packets received from eth2 to br-int, then to
> br-tun, and then on to the network node over the GRE tunnel. Here is a
> sample flow dump of br-int and br-tun on the compute node:
>
> *br-int flows:*
>
> NXST_FLOW reply (xid=0x4):
>  cookie=0x0, duration=96.138s, table=0, n_packets=0, n_bytes=0,
>  priority=1 actions=NORMAL
>
> *br-tun flows:*
>
> NXST_FLOW reply (xid=0x4):
>  cookie=0x0, duration=98.322s, table=0, n_packets=0, n_bytes=0,
>  priority=1 actions=drop
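>
> For illustration only (this is the bluntest possible override, and it gives
> up the isolation that the installed rules provide), a generic
> higher-priority rule could be pushed with ovs-ofctl like this:
>
>   # list the flows currently installed on br-tun
>   ovs-ofctl dump-flows br-tun
>   # priority=2 outranks the priority=1 drop; NORMAL means ordinary
>   # MAC-learning switch behaviour instead of dropping
>   ovs-ofctl add-flow br-tun "priority=2,actions=NORMAL"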
>
> Can someone help me identify what flows I should add so that I am not
> breaking any functionality of Quantum? Note that the above workaround will
> allow the VMs on ESX to communicate with one another, which should not be
> allowed if they belong to different tenants; apart from that, almost
> everything works fine.
>
> Any inputs or suggestions would be greatly appreciated.
>
> Thanks and Regards,
> Rahul Sharma
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~