[Openstack] Public IPs without NAT

Matej matej at tam.si
Fri Apr 25 08:48:29 UTC 2014


Hello Zuo,

thank you for the information. You are right, br-int cannot be used in
bridge_mappings, and that was one of my mistakes.
I was able to solve my issue entirely with the following set-up: two
physical interfaces on each network and compute node, one used for private
(192.168.22.0/24) traffic and the other for the public networks.
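
For the archives, the resulting mapping looks roughly like this (a sketch;
br-public and eth1 are illustrative names, eth1 being the public-facing
interface):

  # on each network and compute node: attach the public NIC to an OVS bridge
  ovs-vsctl add-br br-public
  ovs-vsctl add-port br-public eth1

  # ovs_neutron_plugin.ini
  network_vlan_ranges = publicnet
  bridge_mappings = publicnet:br-public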

And things work just as intended!

Thank you very much for all the information provided; this list is a very
helpful resource.

Matej


On Fri, Apr 25, 2014 at 4:11 AM, Zuo Changqian <dummyhacker85 at gmail.com> wrote:

> Hi, Matej. About
>
>
>   network_vlan_ranges = physnet1
>   bridge_mappings = physnet1:br-int
>
> I think br-int cannot be used here.
>
> You may need another physical interface (or something that can function
> like one) on all compute nodes, let's say ethX, and create a new bridge:
>
>   ovs-vsctl add-br flatnet-br
>   ovs-vsctl add-port flatnet-br ethX
>
> This must be done on all your compute nodes. On the network node, I think
> just adding flatnet-br is enough, since no VMs run there.
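>
> (A sketch of a quick check: after this,
>
>   ovs-vsctl show
>
> on each node should list flatnet-br with ethX among its ports.)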
>
> Then change your ovs_neutron_plugin.ini on all nodes like:
>
>   network_vlan_ranges = flatnet
>   bridge_mappings = flatnet:flatnet-br
>
> Now you can use flatnet as your provider network, and VMs should connect
> through it directly to the outside physical network. This is based on our
> VLAN + flat testing environment (we disabled the L3 agent and NAT
> entirely); hope this helps.
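>
> As a sketch (the network name is just an example), the provider network
> would then be created with something like:
>
>   neutron net-create publicnet --provider:network_type flat \
>     --provider:physical_network flatnet --shared
>
> Note that --shared takes no value; passing "--shared True" is presumably
> why the stray token ended up in provider:physical_network in the error you
> hit earlier.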
>
> 2014-04-24 0:29 GMT+08:00 Matej <matej at tam.si>:
>
>> Hello,
>>
>> To hopefully move in the right direction (a first phase using a flat
>> network with private IPs, then moving on to public IPs), I have removed
>> all previous routers and networks. My plan now is to use only the
>> hardware router (IP 192.168.22.1) and a flat network type.
>>
>> I have added the following two lines to
>> /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini on Controller and
>> Compute:
>>
>> network_vlan_ranges = physnet1
>> bridge_mappings = physnet1:br-int
>>
>> My current ovs_neutron_plugin.ini on Controller:
>>
>> [ovs]
>> tenant_network_type = gre
>> tunnel_id_ranges = 1:1000
>> enable_tunneling = True
>> local_ip = 192.168.22.10
>> integration_bridge = br-int
>> tunnel_bridge = br-tun
>> tunnel_types = gre
>> network_vlan_ranges = physnet1
>> bridge_mappings = physnet1:br-int
>>
>>
>> [agent]
>> polling_interval = 2
>>
>> [securitygroup]
>> firewall_driver =
>> neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>>
>> My current ovs_neutron_plugin.ini on Compute:
>>
>> [ovs]
>> tenant_network_type = gre
>> tunnel_id_ranges = 1:1000
>> enable_tunneling = True
>> local_ip = 192.168.22.11
>> tunnel_bridge = br-tun
>> integration_bridge = br-int
>> tunnel_types = gre
>> network_vlan_ranges = physnet1
>> bridge_mappings = physnet1:br-int
>>
>>
>> [agent]
>> polling_interval = 2
>>
>> [securitygroup]
>> firewall_driver =
>> neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>>
>> My first goal is to get VMs having IP addresses from the subnet
>> 192.168.22.0/24, namely from the pool 192.168.22.201-192.168.22.254.
>>
>> Now I am able to create a net:
>>
>> +---------------------------+--------------------------------------+
>> | Field                     | Value                                |
>> +---------------------------+--------------------------------------+
>> | admin_state_up            | True                                 |
>> | id                        | 43796de1-ea43-4cbe-809a-0554ed4de55f |
>> | name                      | privat                               |
>> | provider:network_type     | flat                                 |
>> | provider:physical_network | physnet1                             |
>> | provider:segmentation_id  |                                      |
>> | router:external           | False                                |
>> | shared                    | True                                 |
>> | status                    | ACTIVE                               |
>> | subnets                   | db596734-3f9a-4699-abe5-7887a2a15b88 |
>> | tenant_id                 | a0edd2a531bb41e6b17e0fd644bfd494     |
>> +---------------------------+--------------------------------------+
>>
>>
>> And a subnet:
>>
>> +------------------+---------------------------------------------------------+
>> | Field            | Value                                                   |
>> +------------------+---------------------------------------------------------+
>> | allocation_pools | {"start": "192.168.22.201", "end": "192.168.22.254"}    |
>> | cidr             | 192.168.22.0/24                                         |
>> | dns_nameservers  |                                                         |
>> | enable_dhcp      | False                                                   |
>> | gateway_ip       |                                                         |
>> | host_routes      | {"destination": "0.0.0.0/0", "nexthop": "192.168.22.1"} |
>> | id               | db596734-3f9a-4699-abe5-7887a2a15b88                    |
>> | ip_version       | 4                                                       |
>> | name             | privat-subnet                                           |
>> | network_id       | 43796de1-ea43-4cbe-809a-0554ed4de55f                    |
>> | tenant_id        | a0edd2a531bb41e6b17e0fd644bfd494                        |
>> +------------------+---------------------------------------------------------+
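>>
>> (For reference, a subnet like this can be created with roughly the
>> following neutron CLI call, values as in the table above:)
>>
>>   neutron subnet-create privat 192.168.22.0/24 --name privat-subnet \
>>     --no-gateway --disable-dhcp \
>>     --allocation-pool start=192.168.22.201,end=192.168.22.254 \
>>     --host-route destination=0.0.0.0/0,nexthop=192.168.22.1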
>>
>>
>> I am not using DHCP. Then I start a CirrOS instance:
>>
>> +--------------------------------------+------+--------+------------+-------------+-----------------------+
>> | ID                                   | Name | Status | Task State | Power State | Networks              |
>> +--------------------------------------+------+--------+------------+-------------+-----------------------+
>> | 10925a36-fbcb-4348-b569-a3fcd5b242a2 | c1   | ACTIVE | -          | Running     | privat=192.168.22.203 |
>> +--------------------------------------+------+--------+------------+-------------+-----------------------+
>>
>>
>> Then I log in to the CirrOS instance via the console and set the IP
>> 192.168.22.203 (sudo ifconfig eth0 inet 192.168.22.203 netmask
>> 255.255.255.0), but no traffic goes through.
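>>
>> (With DHCP disabled the host_routes entry above never reaches the guest,
>> so a fully static guest configuration would look roughly like:
>>
>>   sudo ifconfig eth0 inet 192.168.22.203 netmask 255.255.255.0
>>   sudo route add default gw 192.168.22.1
>>
>> with 192.168.22.1 being the hardware router.)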
>>
>> I have also tried updating the network's router:external attribute to True, but with no success.
>>
>> What am I doing wrong here? I am in the phase of building a new infrastructure and can *afford* changes, but after spending so much time on these networking issues I really hope to be able to move on.
>>
>>
>> Thank you for all the ideas in advance.
>> Matej
>>
>> On Wed, Apr 23, 2014 at 10:47 AM, Robert van Leeuwen <
>> Robert.vanLeeuwen at spilgames.com> wrote:
>>
>>> > neutron net-create public --tenant_id a0edd2a531bb41e6b17e0fd644bfd494
>>>  --provider:network_type flat --provider:physical_network default --shared
>>> True
>>> > Invalid input for provider:physical_network. Reason: '[u'default',
>>> u'True']' is not a valid string.
>>> >
>>> > For being able to use --provider:physical_network I need
>>> bridge_mappings in configuration, right? When I add it, my existing GRE
>>> network stops working.
>>> > It seems I am lost here ...
>>>
>>> You should be able to run bridge-mapped networks and GRE tunnels at the
>>> same time; adding the bridge-mapping config should not break GRE (always
>>> try this in a test setup first ;). We used to do this up to Folsom (maybe
>>> even Grizzly, I do not remember the exact timelines).
>>>
>>> We moved to a full VLAN setup later on because GRE was adding complexity
>>> without any real benefits. (Since we do not expect to have thousands of
>>> networks, we do not expect to run out of VLANs.)
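>>>
>>> For illustration, a minimal sketch of such a VLAN setup in
>>> ovs_neutron_plugin.ini (the range and names are examples, not our exact
>>> config):
>>>
>>>   [ovs]
>>>   tenant_network_type = vlan
>>>   network_vlan_ranges = physnet1:100:199
>>>   bridge_mappings = physnet1:br-eth1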
>>>
>>> Cheers,
>>> Robert van Leeuwen
>>>