<div dir="ltr"><div><div><div><div><div><div><div>Hi Andreas,<br><br></div>Thanks a lot. <br><br></div>Now my both installation seems working.<br><br></div>1) 1 node complete installation<br></div>2) 2 node complete installation<br><br></div>I was stuck in these steps due to some confussion in comcept for some days. <br><br></div>Your mail help me to fix this.<br><br></div>Thanks again for your great support.<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Nov 26, 2014 at 5:01 PM, Andreas Scheuring <span dir="ltr"><<a href="mailto:scheuran@linux.vnet.ibm.com" target="_blank">scheuran@linux.vnet.ibm.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">The Neutron-Openvswitch-Agent is doing it for you.<br>
<span class="HOEnZb"><font color="#888888"><br>
<br>
--<br>
Andreas<br>
(irc: scheuran)<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
On Wed, 2014-11-26 at 16:34 +0530, Geo Varghese wrote:<br>
> Thanks again Andreas.
>
>
> One more question about creating - Port "gre-c0a801ca"
>
>
> Will it be created automatically, or do we need to create it?
>
>
> Thanks for your time.
>
>
> On Wed, Nov 26, 2014 at 4:08 PM, Andreas Scheuring
> <scheuran@linux.vnet.ibm.com> wrote:
>         You're right. The .202 in my case is the controller.
>         One interface for your compute node is sufficient.
>
>         In your compute's ml2_conf.ini, just point to the IP address
>         of the compute's eth interface. This should work. The
>         controller's IP is specified somewhere in the neutron.conf
>         file. And as mentioned before: br-tun does not require an IP
>         address, so I would recommend removing it.
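>
>         (A sketch of that removal, assuming the br-tun address and
>         /24 mask shown earlier in this thread; adjust if yours
>         differs:
>
>         # on the compute node: drop the address that was put on br-tun
>         ip addr del 192.168.123.179/24 dev br-tun
>
>         br-tun keeps working as the tunnel bridge; it just no longer
>         carries an IP.)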
>
>
>         Hope this helps.
>
>
>         Yes, in my scenario I have a couple of network interfaces.
>         You could also configure this with your KVM setup. Just
>         create an additional network in libvirt (e.g. using virsh
>         net-define).
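>
>         (For illustration, a minimal libvirt network definition could
>         look like this; the network name, bridge name, and IP range
>         here are made up:
>
>         <network>
>           <name>tunnelnet</name>
>           <bridge name="virbr1"/>
>           <ip address="192.168.50.1" netmask="255.255.255.0"/>
>         </network>
>
>         virsh net-define tunnelnet.xml
>         virsh net-start tunnelnet
>
>         Then attach a second NIC of the compute guest to that
>         network.)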
>
>
>         --
>         Andreas
>         (irc: scheuran)
>
>
>         On Wed, 2014-11-26 at 13:54 +0530, Geo Varghese wrote:
>         > Hi Andreas,
>         >
>         >
>         > Thanks a lot for the reply.
>         >
>         > options: {in_key=flow, local_ip="192.168.1.190",
>         > out_key=flow, remote_ip="192.168.1.202"}
>         >
>         >
>         > 192.168.1.202 => that is the IP of the controller node,
>         > right?
>         >
>         >
>         > In this case, do you have two network interfaces?
>         >
>         >
>         > I am using virtual machines (KVM), and currently I have
>         > only one interface, eth0, on the compute node.
>         >
>         >
>         > How can I configure this?
>         >
>         >
>         > Thanks for your time.
>         >
>         >
>         > On Wed, Nov 26, 2014 at 1:04 PM, Andreas Scheuring
>         > <scheuran@linux.vnet.ibm.com> wrote:
>         >         Hi Geo,
>         >         your packet will be sent out of br-tun via the
>         >         gre-xxxxx port "pointing" to your compute node. So
>         >         what you have to do is put the IP address on your
>         >         eth interface connected to the wire. The same, of
>         >         course, on the controller node. Putting the IP
>         >         address on br-tun does not make sense, as that
>         >         bridge just hangs around without any physical port
>         >         plugged in.
>         >
>         >         This is what my ovs-vsctl show output looks like on
>         >         the compute node.
>         >
>         >         ovs-vsctl show
>         >         9253a3f8-3c28-4953-ae85-ca94aa0d4fb1
>         >             Bridge br-tun
>         >                 Port br-tun
>         >                     Interface br-tun
>         >                         type: internal
>         >                 Port "gre-c0a801ca"
>         >                     Interface "gre-c0a801ca"
>         >                         type: gre
>         >                         options: {in_key=flow, local_ip="192.168.1.190", out_key=flow, remote_ip="192.168.1.202"}
>         >                 Port patch-int
>         >                     Interface patch-int
>         >                         type: patch
>         >                         options: {peer=patch-tun}
>         >             Bridge br-int
>         >                 Port br-int
>         >                     Interface br-int
>         >                         type: internal
>         >                 Port patch-tun
>         >                     Interface patch-tun
>         >                         type: patch
>         >                         options: {peer=patch-int}
>         >                 Port "qvo1c352474-0c"
>         >                     tag: 1
>         >                     Interface "qvo1c352474-0c"
>         >                 Port "qvo5782980b-be"
>         >                     tag: 4095
>         >                     Interface "qvo5782980b-be"
>         >             ovs_version: "2.1.2"
>         >
>         >
>         >         And this is the IP configuration of the net device
>         >         used:
>         >
>         >         22: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
>         >             link/ether 02:00:00:39:13:9a brd ff:ff:ff:ff:ff:ff
>         >             inet 192.168.1.190/24 brd 192.168.1.255 scope global eth0
>         >                valid_lft forever preferred_lft forever
>         >
>         >
>         >
>         >         And this is my ml2.conf on the compute node:
>         >
>         >         [ovs]
>         >         local_ip = 192.168.1.190
>         >         tunnel_type = gre
>         >         enable_tunneling = True
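>         >
>         >         (After changing ml2_conf.ini, the openvswitch agent
>         >         needs a restart to pick up the new value. The
>         >         service name depends on the distribution; on Ubuntu
>         >         it is typically something like
>         >
>         >         service neutron-plugin-openvswitch-agent restart
>         >
>         >         while other distros may call it
>         >         neutron-openvswitch-agent.)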
>         >
>         >
>         >         Regards
>         >
>         >
>         >         --
>         >         Andreas
>         >         (irc: scheuran)
>         >
>         >
>         >         On Wed, 2014-11-26 at 01:39 +0530, Geo Varghese
>         >         wrote:
>         >         > Hi Team,
>         >         >
>         >         >
>         >         > I need some help setting up the instance tunnel.
>         >         >
>         >         >
>         >         > I tried a 2-node installation today and
>         >         > successfully added a new compute1 node.
>         >         >
>         >         > Later I created a new VM on the new compute1
>         >         > node. It spawned successfully, but the issue is
>         >         > that it couldn't get a DHCP IP.
>         >         >
>         >         > Because of this, eth0 is not assigned an IP
>         >         > address. I think the instance tunnel interface is
>         >         > not correct.
>         >         >
>         >         > Currently I set the IP on the automatically
>         >         > created interface br-tun. Do I need to create a
>         >         > new instance tunnel?
>         >         >
>         >         > =========================================
>         >         > br-tun    Link encap:Ethernet  HWaddr 6a:65:2d:ed:90:44
>         >         >           inet addr:192.168.123.179  Bcast:192.168.123.255  Mask:255.255.255.0
>         >         >           inet6 addr: fe80::98c7:a1ff:fef6:bcd7/64 Scope:Link
>         >         >           UP BROADCAST RUNNING  MTU:1500  Metric:1
>         >         >           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>         >         >           TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
>         >         >           collisions:0 txqueuelen:0
>         >         >           RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)
>         >         > =========================================
>         >         >
>         >         > And I have given this IP address (192.168.123.179)
>         >         > in the [ovs] configuration
>         >         > (/etc/neutron/plugins/ml2/ml2_conf.ini):
>         >         >
>         >         > [ovs]
>         >         > local_ip = 192.168.123.179
>         >         > tunnel_type = gre
>         >         > enable_tunneling = True
>         >         >
>         >         > Is it correct? Please help me.
>         >         >
>         >         >
>         >         > --
>         >         > Regards,
>         >         > Geo Varghese
>         >
>         >
>         >
>         >
>         > --
>         > --
>         > Regards,
>         > Geo Varghese
>
>
>
>
> --
> --
> Regards,
> Geo Varghese

--
--
Regards,
Geo Varghese