<p dir="ltr">Good idea! Thank you</p>
<p dir="ltr">Can you please share the changes to the nova and neutron configs that go with this?</p>
<p dir="ltr">I have a Havana cluster with nova-network and am trying to migrate my dev cloud to neutron with a flat physical network. </p>
<p dir="ltr">Regards<br>
Amit<br>
</p>
<div class="gmail_quote">On Apr 25, 2014 11:54 AM, "Matej" <<a href="mailto:matej@tam.si">matej@tam.si</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div><div><div><div><div><div><div><div><div><div><div>Hello Amit, I am replying to the group as well; perhaps someone will find this useful one day :-)<br><br></div>I have two physical networks, let's say they are <a href="http://192.168.22.0/24" target="_blank">192.168.22.0/24</a> and <a href="http://102.203.103.80/29" target="_blank">102.203.103.80/29</a>. I have a HW router that is the gateway for both networks, and every node (compute, with network/controller combined in my case) has 2 NICs, each connected to the corresponding port on the router.<br>

<br></div><br></div></div></div>OVS configuration (/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini)<br>[ovs]<br>debug = False<br>tenant_network_type = gre<br>tunnel_id_ranges = 1:1000<br>enable_tunneling = True<br>local_ip = 192.168.22.10<br>integration_bridge = br-int<br>

tunnel_bridge = br-tun<br>network_vlan_ranges = physnet1,physnet2<br>bridge_mappings = physnet1:br-em1,physnet2:br-em2<br><br>[agent]<br>polling_interval = 2<br><br>[securitygroup]<br>firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver<br>

<br></div>br-em1 is the bridge for the em1 interface, br-em2 for em2.<br><br></div></div><br></div>Networks are created normally via neutron, for example the public net:<br><br>net-create --provider:physical_network=physnet1 --provider:network_type=flat --shared public_net<br>

<br>subnet-create public_net <a href="http://102.203.103.80/29" target="_blank">102.203.103.80/29</a> --name public_subnet --no-gateway --host-route destination=0.0.0.0/0,nexthop=102.203.103.81 --allocation-pool start=102.203.103.83,end=102.203.103.86 --dns-nameservers list=true 8.8.8.8<br>
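<br>Regarding the nova side: on Havana these are roughly the settings that point nova at neutron (written from memory as a sketch, so please verify the option names, the endpoint, and the password placeholder against your own deployment):<br><br><pre># /etc/nova/nova.conf (controller and compute)
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://192.168.22.10:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = NEUTRON_PASS
security_group_api = neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver

# /etc/neutron/neutron.conf
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2</pre>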

<br></div><br><div><div><div><br></div></div></div>That's just the basics; if you need any other information, I will be happy to help where I can.<br><br></div>Best regards,<br>Matej<br><br></div><div class="gmail_extra">

<br><br><div class="gmail_quote">On Fri, Apr 25, 2014 at 11:58 AM, amit gupta <span dir="ltr"><<a href="mailto:sameidea@gmail.com" target="_blank">sameidea@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">


  
    
  
  <div bgcolor="#FFFFFF" text="#000000">
    <div><br>
      Hi Matej,<br>
      <br>
      Great! Glad to hear that.<br>
      <br>
      I have been trying to do this as well, so could you please
      summarize how you did it and also post some configurations?<br>
      <br>
      Regards,<br>
      Amit<div><div><br>
      <br>
      On 4/25/2014 1:48 AM, Matej wrote:<br>
    </div></div></div><div><div>
    <blockquote type="cite">
      
      <div dir="ltr">
        <div>
          <div>
            <div>
              <div>
                <div>
                  <div>Hello Zuo,<br>
                    <br>
                  </div>
                  thank you for the information. You are right, br-int
                  cannot be used in bridge_mappings, and that was one of
                  my mistakes. <br>
                </div>
                I was able to solve my issue entirely with the following
                set-up:<br>
              </div>
              two physical interfaces on each network and compute node:
              one is used for private (<a href="http://192.168.22.0/24" target="_blank">192.168.22.0/24</a>)
              traffic, the other for public networks.<br>
              <br>
            </div>
            And things work just as intended! <br>
            <br>
          </div>
          Thank you very much for all the information provided; this
          list is a very helpful resource.<br>
          <br>
        </div>
        Matej<br>
      </div>
      <div class="gmail_extra"><br>
        <br>
        <div class="gmail_quote">On Fri, Apr 25, 2014 at 4:11 AM, Zuo
          Changqian <span dir="ltr"><<a href="mailto:dummyhacker85@gmail.com" target="_blank">dummyhacker85@gmail.com</a>></span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div dir="ltr">Hi, Matej. About
              <div><br>
                <div><br>
                    network_vlan_ranges = physnet1<br>
                    bridge_mappings = physnet1:br-int<br>
                  <br>
                </div>
              </div>
              <div>I think br-int cannot be used here.<br>
                <br>
                You may need another physical interface (or something
                that can function like one) on all compute nodes, let's
                say ethX, and create a new bridge like:<br>
                <br>
              </div>
              <div>  ovs-vsctl add-br flatnet-br<br>
              </div>
              <div>  ovs-vsctl add-port flatnet-br ethX<br>
                <br>
              </div>
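              <div>You can sanity-check the result with standard OVS
                tooling (ethX is whatever interface you picked; it must
                be up and should not carry an IP address itself):<br>
                <br>
              </div>
              <div>  ovs-vsctl show<br>
              </div>
              <div>  ip link set ethX up<br>
                <br>
              </div>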
              <div>This must be done on all your compute nodes. On the
                network node, I think just adding flatnet-br is enough,
                since no VMs run there.<br>
                <br>
              </div>
              <div>Then change all your ovs_neutron_plugin.ini like:<br>
                <br>
              </div>
              <div>  network_vlan_ranges = flatnet<br>
              </div>
              <div>  bridge_mappings = flatnet:flatnet-br<br>
                <br>
              </div>
              <div>Now you can use flatnet as your provider network, and
                VMs should connect through it directly to the outside
                physical network. This is based on our VLAN + flat
                testing environment (we disabled the L3 agent and NAT
                entirely); hope this helps.<br>
              </div>
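              <div>For example, creating a flat provider network on it
                would look something like this (the network name here is
                just an example):<br>
                <br>
              </div>
              <div>  neutron net-create flat-net --provider:network_type flat --provider:physical_network flatnet --shared<br>
                <br>
              </div>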
            </div>
            <div class="gmail_extra"><br>
              <br>
              <div class="gmail_quote">2014-04-24 0:29 GMT+08:00 Matej <span dir="ltr"><<a href="mailto:matej@tam.si" target="_blank">matej@tam.si</a>></span>:<br>
                <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                  <div>
                    <div>
                      <div dir="ltr">
                        <div>Hello,<br>
                          <br>
                          <pre>To hopefully move in the right direction (first phase using a flat network with private IPs, then moving on to public IPs), I have removed all previous routers and networks; my plan now is to use only the hardware router (IP 192.168.22.1) with a flat network type.</pre>
                          <br>
                          <br>
                          I have added the following two lines to
                          /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
                          on Controller and Compute:<br>
                          <br>
                          network_vlan_ranges = physnet1<br>
                          bridge_mappings = physnet1:br-int<br>
                          <br>
                        </div>
                        <div>My current ovs_neutron_plugin.ini on
                          Controller:<br>
                          <br>
                        </div>
                        <div>
                          <div>[ovs]<br>
                            tenant_network_type = gre<br>
                            tunnel_id_ranges = 1:1000<br>
                            enable_tunneling = True<br>
                            local_ip = 192.168.22.10<br>
                            integration_bridge = br-int<br>
                            tunnel_bridge = br-tun<br>
                            tunnel_types=gre<br>
                          </div>
                          network_vlan_ranges = physnet1<br>
                          bridge_mappings = physnet1:br-int
                          <div><br>
                            <br>
                            [agent]<br>
                            polling_interval = 2<br>
                            <br>
                            [securitygroup]<br>
                            firewall_driver =
                            neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver<br>
                            <br>
                          </div>
                        </div>
                        <div>My current ovs_neutron_plugin.ini on
                          Compute:<br>
                          <br>
                        </div>
                        <div>
                          <div>[ovs]<br>
                            tenant_network_type = gre<br>
                            tunnel_id_ranges = 1:1000<br>
                            enable_tunneling = True<br>
                          </div>
                          local_ip = 192.168.22.11<br>
                          tunnel_bridge = br-tun<br>
                          integration_bridge = br-int<br>
                          tunnel_types = gre<br>
                          network_vlan_ranges = physnet1<br>
                          bridge_mappings = physnet1:br-int
                          <div>
                            <br>
                            <br>
                            [agent]<br>
                            polling_interval = 2<br>
                            <br>
                            [securitygroup]<br>
                            firewall_driver =
                            neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver<br>
                            <br>
                          </div>
                        </div>
                        <div>My first goal is to get VMs having IP
                          addresses from the subnet <a href="http://192.168.22.0/24" target="_blank">192.168.22.0/24</a>, namely
                          from the allocation pool 192.168.22.201-192.168.22.254
                          shown in the subnet below.<br>
                        </div>
                        <div>
                          <pre>Now I am able to create a net:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 43796de1-ea43-4cbe-809a-0554ed4de55f |
| name                      | privat                               |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | db596734-3f9a-4699-abe5-7887a2a15b88 |
| tenant_id                 | a0edd2a531bb41e6b17e0fd644bfd494     |
+---------------------------+--------------------------------------+
</pre>
                          <pre>And a subnet:
</pre>
                          <pre>+------------------+---------------------------------------------------------+
| Field            | Value                                                   |
+------------------+---------------------------------------------------------+
| allocation_pools | {"start": "192.168.22.201", "end": "192.168.22.254"}    |
| cidr             | 192.168.22.0/24                                         |
| dns_nameservers  |                                                         |
| enable_dhcp      | False                                                   |
| gateway_ip       |                                                         |
| host_routes      | {"destination": "0.0.0.0/0", "nexthop": "192.168.22.1"} |
| id               | db596734-3f9a-4699-abe5-7887a2a15b88                    |
| ip_version       | 4                                                       |
| name             | privat-subnet                                           |
| network_id       | 43796de1-ea43-4cbe-809a-0554ed4de55f                    |
| tenant_id        | a0edd2a531bb41e6b17e0fd644bfd494                        |
+------------------+---------------------------------------------------------+
</pre>
                          <pre>I am not using DHCP, and then I start a CirrOS instance:
+--------------------------------------+------+--------+------------+-------------+-----------------------+
| ID                                   | Name | Status | Task State | Power State | Networks              |
+--------------------------------------+------+--------+------------+-------------+-----------------------+
| 10925a36-fbcb-4348-b569-a3fcd5b242a2 | c1   | ACTIVE | -          | Running     | privat=192.168.22.203 |
+--------------------------------------+------+--------+------------+-------------+-----------------------+
</pre>
                          <pre>Then I log in to the CirrOS instance via the console and set IP 192.168.22.203 (sudo ifconfig eth0 inet 192.168.22.203 netmask 255.255.255.0), but no traffic goes through.</pre>
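                          <pre>(Note that with enable_dhcp = False the subnet's host_routes are never delivered to the instance, so any default route, e.g. sudo route add default gw 192.168.22.1, has to be set by hand inside the VM as well.)</pre>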
                          <pre>I have also tried updating the network's router:external attribute to True, but with no success.</pre>
                          <pre>What am I doing wrong here? I am in the phase of building a new infrastructure and can *afford* changes, but after spending so much time on these networking issues I really hope I will be able to move forward.</pre>
                          <pre>Thank you for all the ideas in advance.<span><font color="#888888">
Matej
</font></span></pre>
                        </div>
                        <div><br>
                        </div>
                      </div>
                      <div>
                        <div>
                          <div class="gmail_extra">
                            <br>
                            <br>
                            <div class="gmail_quote">On Wed, Apr 23,
                              2014 at 10:47 AM, Robert van Leeuwen <span dir="ltr"><<a href="mailto:Robert.vanLeeuwen@spilgames.com" target="_blank">Robert.vanLeeuwen@spilgames.com</a>></span>
                              wrote:<br>
                              <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                                <div>> neutron net-create public
                                  --tenant_id
                                  a0edd2a531bb41e6b17e0fd644bfd494
                                   --provider:network_type flat
                                  --provider:physical_network default
                                  --shared True<br>
                                  > Invalid input for
                                  provider:physical_network. Reason:
                                  '[u'default', u'True']' is not a valid
                                  string.<br>
                                  ><br>
                                  > For being able to use
                                  --provider:physical_network I need
                                  bridge_mappings in configuration,
                                  right? When I add it, my existing GRE
                                  network stops working.<br>
                                  > It seems I am lost here ...<br>
                                  <br>
                                </div>
                                You should be able to run bridge-mapped
                                networks and GRE tunnels at the same
                                time.<br>
                                Adding the bridge map config should not
                                break GRE. (always do this in a test
                                setup first ;)<br>
                                We used to do this up to Folsom (maybe
                                even Grizzly; I do not remember the
                                exact timelines).<br>
                                <br>
                                We moved to a full VLAN setup later on
                                because GRE was adding complexity
                                without any real benefits.<br>
                                (Since we do not expect to have
                                thousands of networks we do not expect
                                to run out of VLANs)<br>
                                <br>
                                Cheers,<br>
                                Robert van Leeuwen<br>
                                <br>
                                <br>
                                <br>
                                <br>
                              </blockquote>
                            </div>
                            <br>
                          </div>
                        </div>
                      </div>
                      <br>
                    </div>
                  </div>
                  <div>_______________________________________________<br>
                    Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
                    Post to     : <a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a><br>
                    Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
                    <br>
                  </div>
                </blockquote>
              </div>
              <br>
            </div>
          </blockquote>
        </div>
        <br>
      </div>
      <br>
      <fieldset></fieldset>
      <br>
    </blockquote>
    <br>
  </div></div></div>

</blockquote></div><br></div>
</blockquote></div>