<div dir="ltr"><div><div><div><div><div><div>Hello Zuo,<br><br></div>thank you the information. You are right, br-int cannot be used in bridge and that was one of my mistakes. <br></div>I was able to solve my issue entirely with the following set-up:<br>
</div>two physical interfaces on each network and compute node: one interface is used for private (<a href="http://192.168.22.0/24">192.168.22.0/24</a>) traffic, the other for public networks.<br><br></div>And things work just as intended! <br>
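<div><br>For the archives, roughly what that looks like (the interface and bridge names here are illustrative, not copied from my actual config; eth1 stands for the NIC dedicated to the public side):<br></div><pre># on each node: a dedicated bridge on the second NIC (not br-int!)
ovs-vsctl add-br br-public
ovs-vsctl add-port br-public eth1

# ovs_neutron_plugin.ini, [ovs] section
network_vlan_ranges = publicnet
bridge_mappings = publicnet:br-public</pre>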
<br></div>Thank you very much for all the information provided; this list is a very helpful resource.<br><br></div>Matej<br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, Apr 25, 2014 at 4:11 AM, Zuo Changqian <span dir="ltr"><<a href="mailto:dummyhacker85@gmail.com" target="_blank">dummyhacker85@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi, Matej. About<div class=""><br><div><br> network_vlan_ranges = physnet1<br> bridge_mappings = physnet1:br-int<br>
<br></div></div><div>I think br-int cannot be used here.<br><br>You may need another physical interface (or something that can function like one) on all compute nodes, let's say ethX, and create a new bridge like:<br>
<br></div><div> ovs-vsctl add-br flatnet-br<br></div><div> ovs-vsctl add-port flatnet-br ethX<br><br></div><div>This must be done on all your compute nodes. On the network node, I think just adding flatnet-br is enough, since there is no VM running there.<br>
<br></div><div>Then change all your ovs_neutron_plugin.ini files like:<br><br></div><div> network_vlan_ranges = flatnet<br></div><div> bridge_mappings = flatnet:flatnet-br<br><br></div><div>Now you can use flatnet as your provider network, and VMs should connect through it directly to the outside physical network. This is based on our VLAN + flat testing environment (we totally disabled the L3 agent and NAT); hope this helps.<br>
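<div><br>For example, a sketch of creating such a network (the network name and CIDR are just placeholders, adjust them to your environment):<br></div><pre>neutron net-create flat-demo --provider:network_type flat \
    --provider:physical_network flatnet --shared
neutron subnet-create flat-demo 203.0.113.0/24 --name flat-demo-subnet</pre>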
</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">2014-04-24 0:29 GMT+08:00 Matej <span dir="ltr"><<a href="mailto:matej@tam.si" target="_blank">matej@tam.si</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="h5"><div dir="ltr"><div>Hello,<br><br><pre>To hopefully move into the right way (first phase with using flat network with private IPs and then moving further to public IPs), I have removed all previous routers and networks, <br>
my plan now is to use only the hardware router (IP 192.168.22.1) and a flat network type.</pre><br><br>I have added the following two lines to /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini on the Controller and Compute nodes:<br>
<br>network_vlan_ranges = physnet1<br>bridge_mappings = physnet1:br-int<br><br></div><div>My current ovs_neutron_plugin.ini on Controller:<br><br></div><div><div>[ovs]<br>tenant_network_type = gre<br>tunnel_id_ranges = 1:1000<br>
enable_tunneling = True<br>
local_ip = 192.168.22.10<br>integration_bridge = br-int<br>tunnel_bridge = br-tun<br>tunnel_types=gre<br></div>network_vlan_ranges = physnet1<br>bridge_mappings = physnet1:br-int<div><br><br>[agent]<br>polling_interval = 2<br>
<br>[securitygroup]<br>
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver<br><br></div></div><div>My current ovs_neutron_plugin.ini on Compute:<br><br></div><div><div>[ovs]<br>tenant_network_type = gre<br>
tunnel_id_ranges = 1:1000<br>
enable_tunneling = True<br></div>local_ip = 192.168.22.11<br>tunnel_bridge = br-tun<br>integration_bridge = br-int<br>tunnel_types = gre<br>network_vlan_ranges = physnet1<br>bridge_mappings = physnet1:br-int<div>
<br><br>[agent]<br>polling_interval = 2<br>
<br>[securitygroup]<br>firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver<br><br></div></div><div>My first goal is to get VMs to receive IP addresses from the subnet <a href="http://192.168.22.0/24" target="_blank">192.168.22.0/24</a>, namely from the pool 192.168.22.201-192.168.22.254 (see the allocation_pools below).<br>
</div><div><pre>Now I am able to create a net:

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 43796de1-ea43-4cbe-809a-0554ed4de55f |
| name                      | privat                               |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | db596734-3f9a-4699-abe5-7887a2a15b88 |
| tenant_id                 | a0edd2a531bb41e6b17e0fd644bfd494     |
+---------------------------+--------------------------------------+
</pre><pre>And a subnet:

+------------------+---------------------------------------------------------+
| Field            | Value                                                   |
+------------------+---------------------------------------------------------+
| allocation_pools | {"start": "192.168.22.201", "end": "192.168.22.254"}    |
| cidr             | 192.168.22.0/24                                         |
| dns_nameservers  |                                                         |
| enable_dhcp      | False                                                   |
| gateway_ip       |                                                         |
| host_routes      | {"destination": "0.0.0.0/0", "nexthop": "192.168.22.1"} |
| id               | db596734-3f9a-4699-abe5-7887a2a15b88                    |
| ip_version       | 4                                                       |
| name             | privat-subnet                                           |
| network_id       | 43796de1-ea43-4cbe-809a-0554ed4de55f                    |
| tenant_id        | a0edd2a531bb41e6b17e0fd644bfd494                        |
+------------------+---------------------------------------------------------+
</pre>
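<div>(For reference, a sketch of the commands behind the output above, reconstructed from the fields rather than pasted from my shell history:)<br></div><pre>neutron net-create privat --shared --provider:network_type flat \
    --provider:physical_network physnet1
neutron subnet-create privat 192.168.22.0/24 --name privat-subnet \
    --no-gateway --disable-dhcp \
    --allocation-pool start=192.168.22.201,end=192.168.22.254 \
    --host-route destination=0.0.0.0/0,nexthop=192.168.22.1</pre>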
<pre>I am not using DHCP. Then I start a CirrOS instance:

+--------------------------------------+------+--------+------------+-------------+-----------------------+
| ID                                   | Name | Status | Task State | Power State | Networks              |
+--------------------------------------+------+--------+------------+-------------+-----------------------+
| 10925a36-fbcb-4348-b569-a3fcd5b242a2 | c1   | ACTIVE | -          | Running     | privat=192.168.22.203 |
+--------------------------------------+------+--------+------------+-------------+-----------------------+
</pre><pre>Then I log in to the CirrOS instance via the console and set the IP manually (sudo ifconfig eth0 inet 192.168.22.203 netmask 255.255.255.0), but no traffic goes through.
</pre>
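<div>(For completeness: with enable_dhcp=False the subnet's host_routes entry is never delivered to the guest, so the default route has to be added by hand as well. A sketch of the full guest-side setup, using the hardware router from above as gateway:)<br></div><pre># inside the CirrOS guest, via the console
sudo ifconfig eth0 192.168.22.203 netmask 255.255.255.0 up
sudo route add default gw 192.168.22.1
ping 192.168.22.1   # should answer once L2 connectivity works</pre>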
<pre>I have also tried updating the network with router:external=True, but with no success.<br><br></pre><pre>What am I doing wrong here? I am in the phase of building a new infrastructure and can *afford* changes, but after spending so much time on these networking issues I really hope to be able to move forward.<br>
</pre><pre>Thank you for all the ideas in advance.<span><font color="#888888"><br>Matej<br></font></span></pre></div><div><br></div></div><div><div><div class="gmail_extra">
<br><br><div class="gmail_quote">On Wed, Apr 23, 2014 at 10:47 AM, Robert van Leeuwen <span dir="ltr"><<a href="mailto:Robert.vanLeeuwen@spilgames.com" target="_blank">Robert.vanLeeuwen@spilgames.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>> neutron net-create public --tenant_id a0edd2a531bb41e6b17e0fd644bfd494 --provider:network_type flat --provider:physical_network default --shared True<br>
> Invalid input for provider:physical_network. Reason: '[u'default', u'True']' is not a valid string.<br>
><br>
> For being able to use --provider:physical_network I need bridge_mappings in configuration, right? When I add it, my existing GRE network stops working.<br>
> It seems I am lost here ...<br>
<br>
</div>You should be able to run bridge-mapped networks and GRE tunnels at the same time.<br>
Adding the bridge map config should not break GRE. (always do this in a test setup first ;)<br>
We used to do this up to Folsom (maybe even Grizzly; I do not remember the exact timelines).<br>
<br>
We moved to a full VLAN setup later on because GRE was adding complexity without any real benefits.<br>
(Since we do not expect to have thousands of networks, we do not expect to run out of VLANs.)<br>
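<br>A sketch of what such a combined [ovs] section can look like (bridge and physnet names are illustrative, and local_ip etc. stay per-node as in the configs quoted earlier; the key point is that the mapping targets a dedicated bridge, not br-int):<br><pre>[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
tunnel_types = gre
# flat/VLAN provider networks ride their own dedicated bridge
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth1</pre>
And I believe the parse error quoted at the top comes from passing a value to --shared, which is a bare flag in the neutron CLI; something like this should parse:<br><pre>neutron net-create public --provider:network_type flat \
    --provider:physical_network physnet1 --shared</pre>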
<br>
Cheers,<br>
Robert van Leeuwen<br>
<br>
</blockquote></div><br></div>
</div></div><br></div></div><div class="">_______________________________________________<br>
Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
Post to : <a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a><br>
Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
<br></div></blockquote></div><br></div>
</blockquote></div><br></div>