[openstack-dev] [Openstack] Instances running on VMware ESXi are unable to configure IP
Arturo Ochoa
arturo.era at gmail.com
Fri Sep 13 16:26:02 UTC 2013
Rahul,
I've also been assigned to test OpenStack against an ESXi environment. Could
you point me to the guides or articles you found most useful?
Thanks in advance!
Ing Arturo Ochoa
about.me/arturoochoa
On Thu, Sep 12, 2013 at 12:58 PM, Rahul Sharma <rahulsharmaait at gmail.com> wrote:
> Hi Dan,
>
> Thanks for the reply. I agree with your point about using a supported
> Distributed Virtual Switch plugin for ESX rather than going with the
> standard ESX vSwitch.
>
> Currently we are using the Grizzly release with KVM and Open vSwitch. We
> also had the requirement of integrating ESX into this setup. Since Grizzly
> does not support running multiple Neutron plugins, we opted for a
> workaround to see if we could use Open vSwitch and obtain the same
> functionality.
>
> Today we were able to achieve end-to-end traffic flow by manually adding
> rules to the Open vSwitch bridges in the nova-compute VM. If support for
> configuring flows on the switches were added through APIs, maybe we could
> support Open vSwitch as well. Ideally, though, one should not use the
> standard vSwitch, since it has only minimal capabilities; the DVS is the
> better choice for ESX.
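> For reference, the manual rules we mean are ordinary OpenFlow entries
> pushed with ovs-ofctl. The sketch below is illustrative only; the in_port
> value is hypothetical and depends on the actual port numbering:
>
>     # On the nova-compute VM: let traffic arriving from the ESX-facing
>     # port (assumed here to be OpenFlow port 3 on br-int) be switched
>     # normally instead of being dropped.
>     ovs-ofctl add-flow br-int "priority=2,in_port=3,actions=NORMAL"
>
>     # Confirm the entry was installed.
>     ovs-ofctl dump-flows br-int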
>
> -Regards
> Rahul Sharma
>
>
> On Thu, Sep 12, 2013 at 10:16 PM, Dan Wendlandt <dan at nicira.com> wrote:
>
>> Hi Rahul,
>>
>> Thanks for the detailed description of your setup.
>>
>> From my understanding of your diagram, you are trying to mix and match
>> two incompatible mechanisms: ESX networking and the OVS Neutron plugin with
>> GRE tunneling.
>>
>> If you're just trying to get something simple working with ESX, you can
>> use basic nova-network. Otherwise, I'd suggest you check out a compatible
>> Neutron plugin (see the compatibility list here:
>> http://docs.openstack.org/trunk/openstack-network/admin/content/flexibility.html
>> )
>>
>>
>> Dan
>>
>> On Thu, Sep 12, 2013 at 4:48 AM, Rahul Sharma <rahulsharmaait at gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> When we create a port-group “br-int” on ESX and launch an instance, the
>>> instance gets launched on the ESX host and is assigned the port-group
>>> br-int. Since this br-int is unable to communicate with the network node
>>> over GRE, communication fails. The “initial-setup” diagram below shows
>>> the connectivity of the nova-compute VM placed on the ESX host and of
>>> the instances launched on that host:
>>>
>>> [inline image: initial-setup diagram]
>>>
>>> To allow VMs to communicate with the network node over GRE, we can
>>> assign one more NIC (eth2) to nova-compute, put br-int (ESX) in
>>> promiscuous mode, and add eth2 to “br-int” on nova-compute. A packet
>>> will then traverse as VM -> br-int (ESX) -> eth2 (compute) -> br-int
>>> (compute) -> br-tun (compute) -> network node (over the GRE tunnel).
>>> The diagram below illustrates this, followed by a sketch of the
>>> commands involved:
>>>
>>> [inline image: modified-setup diagram]
>>>
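>>> A minimal sketch of the wiring on the nova-compute VM (interface name
>>> eth2 as above; enabling promiscuous mode on the ESX port-group itself is
>>> done through vSphere and is not shown):
>>>
>>>     # Attach the ESX-facing NIC to the integration bridge.
>>>     ovs-vsctl add-port br-int eth2
>>>     # Bring it up without an IP address; it acts as a plain L2 uplink.
>>>     ip link set eth2 up
>>>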
>>> *Still, this will not work, because the rules configured on the Open
>>> vSwitch bridges (br-int and br-tun) will drop the packets!*
>>>
>>> The built-in Open vSwitch controller configures the vswitches to allow
>>> only those flows that match the rules installed on them. Even if we add
>>> eth2 to br-int, we will also need to add generic rules to br-int and
>>> br-tun so that packets received on eth2 are passed to br-int, then to
>>> br-tun, and then on to the network node over the GRE tunnel. Here is
>>> sample output of the flow dumps of br-int and br-tun on the compute
>>> node:
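>>>
>>> These NXST_FLOW listings are what the standard OVS flow-dump command
>>> prints, i.e. they were gathered with:
>>>
>>>     ovs-ofctl dump-flows br-int
>>>     ovs-ofctl dump-flows br-tun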
>>>
>>> *br-int flows:*
>>>
>>> NXST_FLOW reply (xid=0x4):
>>>  cookie=0x0, duration=96.138s, table=0, n_packets=0, n_bytes=0, priority=1 actions=NORMAL
>>>
>>> *br-tun flows:*
>>>
>>> NXST_FLOW reply (xid=0x4):
>>>  cookie=0x0, duration=98.322s, table=0, n_packets=0, n_bytes=0, priority=1 actions=drop
>>>
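>>> For illustration, a blanket entry like the following (priority value
>>> arbitrary) would make br-tun forward instead of drop, but it is exactly
>>> the kind of over-broad rule we want to avoid:
>>>
>>>     # Naive workaround: switch everything on br-tun normally.
>>>     ovs-ofctl add-flow br-tun "priority=2,actions=NORMAL"
>>>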
>>> Can someone help me identify what flows I should add such that I am not
>>> breaking any functionality of Quantum? The above workaround will allow
>>> VMs on ESX to communicate with one another even when they belong to
>>> different tenants, which should not be allowed; apart from that, almost
>>> everything works fine.
>>>
>>> Any inputs or suggestions would be greatly appreciated.
>>>
>>> Thanks and Regards,
>>> Rahul Sharma
>>>
>>
>>
>> --
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> Dan Wendlandt
>> Nicira, Inc: www.nicira.com
>> twitter: danwendlandt
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~