[Openstack] Public IPs without NAT
Matej
matej at tam.si
Sat May 3 16:51:46 UTC 2014
Hi, sorry for the delay.
Below are my Neutron and Nova configuration files.
nova.conf on Controller:
[DEFAULT]
neutron_metadata_proxy_shared_secret = pa55
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://Controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=pa55
neutron_admin_auth_url=http://Controller:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
novncproxy_base_url=http://Controller:6080/vnc_auto.html
novncproxy_host=0.0.0.0
novncproxy_port=6080
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
metadata_host=192.168.22.10
metadata_listen=0.0.0.0
my_ip=192.168.22.10
vncserver_listen=192.168.22.10
vncserver_proxyclient_address=192.168.22.10
auth_protocol = http
auth_strategy=keystone
rpc_backend = nova.rpc.impl_kombu
rabbit_host = Controller
rabbit_password = pa55
rabbit_port = 5672
rabbit_use_ssl=false
rabbit_userid=guest
[database]
connection = mysql://nova:pa55@Controller/nova
[keystone_authtoken]
auth_host = Controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password
neutron.conf on Controller:
[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://Controller:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = 234dbccecd994909a1453620c7b9c09d
nova_admin_password = pa55
nova_admin_auth_url = http://Controller:5000/v2.0/
debug = false
verbose = false
auth_host = Controller
admin_tenant_name = service
admin_user = neutron
admin_password = pa55
auth_port = 35357
auth_protocol = http
rabbit_host = Controller
rabbit_password = pa55
rabbit_port = 5672
rabbit_use_ssl=false
rabbit_userid=guest
state_path = /var/lib/neutron
lock_path = $state_path/lock
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
api_paste_config = /etc/neutron/api-paste.ini
fake_rabbit = False
notification_driver = neutron.openstack.common.notifier.rpc_notifier
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_host = Controller
admin_tenant_name = service
admin_user = neutron
admin_password = pa55
#auth_url = http://Controller:35357/v2.0
auth_port = 35357
auth_protocol = http
auth_strategy = keystone
signing_dir = $state_path/keystone-signing
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = Controller
rabbit_port = 5672
rabbit_password = pa55
[database]
connection = mysql://neutron:pa55@Controller/neutron
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
If you need any other information, I will be glad to help.
Best regards,
Matej
On Fri, Apr 25, 2014 at 10:31 PM, Amit <sameidea at gmail.com> wrote:
> Good idea! Thank you
>
> Can you please share the changes to the nova and neutron configs that go
> with this?
>
> I have a Havana cluster with nova-network and am trying to migrate my dev
> cloud to neutron with a flat physical network.
>
> Regards
> Amit
> On Apr 25, 2014 11:54 AM, "Matej" <matej at tam.si> wrote:
>
>> Hello Amit, I am replying to the group as well; perhaps someone will
>> find this useful one day :-)
>>
>> I have two physical networks, let's say they are 192.168.22.0/24 and
>> 102.203.103.80/29. I have a HW router that is the gateway for both
>> networks, and there are 2 NICs on every node (compute and
>> network/controller combined, in my case). Each of those 2 NICs is
>> connected to the appropriate port on the router.
>>
>>
>> OVS configuration (ovs_neutron_plugin.ini):
>> [ovs]
>> debug = False
>> tenant_network_type = gre
>> tunnel_id_ranges = 1:1000
>> enable_tunneling = True
>> local_ip = 192.168.22.10
>> integration_bridge = br-int
>> tunnel_bridge = br-tun
>> network_vlan_ranges = physnet1,physnet2
>> bridge_mappings = physnet1:br-em1,physnet2:br-em2
>>
>> [agent]
>> polling_interval = 2
>>
>> [securitygroup]
>> firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>>
>> br-em1 is the bridge for the em1 interface, br-em2 is the bridge for em2.
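>>
>> For reference, those bridges were created with something like the
>> following (a rough sketch; the interface names match my nodes, adjust
>> to yours):
>>
>> # one OVS bridge per physical NIC, on every node
>> ovs-vsctl add-br br-em1
>> ovs-vsctl add-port br-em1 em1
>> ovs-vsctl add-br br-em2
>> ovs-vsctl add-port br-em2 em2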
>>
>>
>> Networks are created normally via neutron, for example the public net:
>>
>> net-create --provider:physical_network=physnet1
>> --provider:network_type=flat --shared public_net
>>
>> subnet-create public_net 102.203.103.80/29 --name public_subnet
>>   --no-gateway --host-route destination=0.0.0.0/0,nexthop=102.203.103.81
>>   --allocation-pool start=102.203.103.83,end=102.203.103.86
>>   --dns-nameservers list=true 8.8.8.8
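>>
>> After that, an instance can be booted straight onto the public network,
>> something like this (the image and flavor names are just placeholders):
>>
>> nova boot --flavor m1.tiny --image cirros --nic net-id=<public_net-id> test-vm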
>>
>> That's just the basics; if you need any other information and I am able
>> to help, I will be happy to.
>>
>> Best regards,
>> Matej
>>
>> On Fri, Apr 25, 2014 at 11:58 AM, amit gupta <sameidea at gmail.com> wrote:
>>
>>>
>>> Hi Matej,
>>>
>>> Great! Glad to hear that.
>>>
>>> I have been trying to do this as well, so can you please summarize how
>>> you did it and also post some configurations.
>>>
>>> Regards,
>>> Amit
>>>
>>>
>>> On 4/25/2014 1:48 AM, Matej wrote:
>>>
>>> Hello Zuo,
>>>
>>> thank you for the information. You are right, br-int cannot be used in
>>> bridge_mappings, and that was one of my mistakes.
>>> I was able to solve my issue entirely with the following set-up:
>>> two physical interfaces on each network and compute node; one physical
>>> interface is used for private (192.168.22.0/24) traffic, the other for
>>> the public networks.
>>>
>>> And things work just as intended!
>>>
>>> Thank you very much for all the information provided; this list is a
>>> very helpful resource.
>>>
>>> Matej
>>>
>>>
>>> On Fri, Apr 25, 2014 at 4:11 AM, Zuo Changqian
>>> <dummyhacker85 at gmail.com> wrote:
>>>
>>>> Hi, Matej. About
>>>>
>>>>
>>>> network_vlan_ranges = physnet1
>>>> bridge_mappings = physnet1:br-int
>>>>
>>>> I think br-int cannot be used here.
>>>>
>>>> You may need another physical interface (or something that can
>>>> function like one) on all compute nodes, let's say ethX, and create a
>>>> new bridge like:
>>>>
>>>> ovs-vsctl add-br flatnet-br
>>>> ovs-vsctl add-port flatnet-br ethX
>>>>
>>>> This must be done on all your compute nodes. On the network node, I
>>>> think just adding flatnet-br is enough, since there are no VMs running
>>>> there.
>>>>
>>>> Then change all your ovs_neutron_plugin.ini files like this:
>>>>
>>>> network_vlan_ranges = flatnet
>>>> bridge_mappings = flatnet:flatnet-br
>>>>
>>>> Now you can use flatnet as your provider network, and VMs should
>>>> connect through it directly to the outside physical network. This is
>>>> based on our VLAN + flat testing environment (we totally disabled the
>>>> L3 agent and NAT); hope this helps.
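>>>>
>>>> Creating the provider network would then look roughly like this (a
>>>> sketch; the network name is just an example):
>>>>
>>>> neutron net-create flat-public --provider:network_type=flat
>>>>   --provider:physical_network=flatnet --shared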
>>>>
>>>> 2014-04-24 0:29 GMT+08:00 Matej <matej at tam.si>:
>>>>
>>>>> Hello,
>>>>>
>>>>> To hopefully move in the right direction (first phase using a flat
>>>>> network with private IPs, then moving on to public IPs), I have
>>>>> removed all previous routers and networks; my plan now is to use only
>>>>> the hardware router (IP 192.168.22.1) and a flat network type.
>>>>>
>>>>> I have added the following two lines to
>>>>> /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini on Controller and
>>>>> Compute:
>>>>>
>>>>> network_vlan_ranges = physnet1
>>>>> bridge_mappings = physnet1:br-int
>>>>>
>>>>> My current ovs_neutron_plugin.ini on Controller:
>>>>>
>>>>> [ovs]
>>>>> tenant_network_type = gre
>>>>> tunnel_id_ranges = 1:1000
>>>>> enable_tunneling = True
>>>>> local_ip = 192.168.22.10
>>>>> integration_bridge = br-int
>>>>> tunnel_bridge = br-tun
>>>>> tunnel_types=gre
>>>>> network_vlan_ranges = physnet1
>>>>> bridge_mappings = physnet1:br-int
>>>>>
>>>>>
>>>>> [agent]
>>>>> polling_interval = 2
>>>>>
>>>>> [securitygroup]
>>>>> firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>>>>>
>>>>> My current ovs_neutron_plugin.ini on Compute:
>>>>>
>>>>> [ovs]
>>>>> tenant_network_type = gre
>>>>> tunnel_id_ranges = 1:1000
>>>>> enable_tunneling = True
>>>>> local_ip = 192.168.22.11
>>>>> tunnel_bridge = br-tun
>>>>> integration_bridge = br-int
>>>>> tunnel_types = gre
>>>>> network_vlan_ranges = physnet1
>>>>> bridge_mappings = physnet1:br-int
>>>>>
>>>>>
>>>>> [agent]
>>>>> polling_interval = 2
>>>>>
>>>>> [securitygroup]
>>>>> firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
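>>>>>
>>>>> After editing these files, the OVS agent has to be restarted to pick
>>>>> up the new mappings. The exact service name varies by distro; on
>>>>> Ubuntu it is something like:
>>>>>
>>>>> service neutron-plugin-openvswitch-agent restart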
>>>>>
>>>>> My first goal is to get VMs IP addresses from the subnet
>>>>> 192.168.22.0/24, namely from the pool 192.168.22.201-192.168.22.254.
>>>>>
>>>>> Now I am able to create a net:
>>>>>
>>>>> +---------------------------+--------------------------------------+
>>>>> | Field                     | Value                                |
>>>>> +---------------------------+--------------------------------------+
>>>>> | admin_state_up            | True                                 |
>>>>> | id                        | 43796de1-ea43-4cbe-809a-0554ed4de55f |
>>>>> | name                      | privat                               |
>>>>> | provider:network_type     | flat                                 |
>>>>> | provider:physical_network | physnet1                             |
>>>>> | provider:segmentation_id  |                                      |
>>>>> | router:external           | False                                |
>>>>> | shared                    | True                                 |
>>>>> | status                    | ACTIVE                               |
>>>>> | subnets                   | db596734-3f9a-4699-abe5-7887a2a15b88 |
>>>>> | tenant_id                 | a0edd2a531bb41e6b17e0fd644bfd494     |
>>>>> +---------------------------+--------------------------------------+
>>>>>
>>>>> And a subnet:
>>>>>
>>>>> +------------------+---------------------------------------------------------+
>>>>> | Field            | Value                                                   |
>>>>> +------------------+---------------------------------------------------------+
>>>>> | allocation_pools | {"start": "192.168.22.201", "end": "192.168.22.254"}    |
>>>>> | cidr             | 192.168.22.0/24                                         |
>>>>> | dns_nameservers  |                                                         |
>>>>> | enable_dhcp      | False                                                   |
>>>>> | gateway_ip       |                                                         |
>>>>> | host_routes      | {"destination": "0.0.0.0/0", "nexthop": "192.168.22.1"} |
>>>>> | id               | db596734-3f9a-4699-abe5-7887a2a15b88                    |
>>>>> | ip_version       | 4                                                       |
>>>>> | name             | privat-subnet                                           |
>>>>> | network_id       | 43796de1-ea43-4cbe-809a-0554ed4de55f                    |
>>>>> | tenant_id        | a0edd2a531bb41e6b17e0fd644bfd494                        |
>>>>> +------------------+---------------------------------------------------------+
>>>>>
>>>>> I am not using DHCP, and then I start a CirrOS instance:
>>>>>
>>>>> +--------------------------------------+------+--------+------------+-------------+-----------------------+
>>>>> | ID                                   | Name | Status | Task State | Power State | Networks              |
>>>>> +--------------------------------------+------+--------+------------+-------------+-----------------------+
>>>>> | 10925a36-fbcb-4348-b569-a3fcd5b242a2 | c1   | ACTIVE | -          | Running     | privat=192.168.22.203 |
>>>>> +--------------------------------------+------+--------+------------+-------------+-----------------------+
>>>>>
>>>>> Then I log in to the CirrOS instance via the console and set the IP
>>>>> 192.168.22.203 (sudo ifconfig eth0 inet 192.168.22.203 netmask
>>>>> 255.255.255.0), but no traffic goes through.
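>>>>>
>>>>> (Note: with DHCP disabled the subnet's host route never reaches the
>>>>> VM, so presumably a default route also has to be added by hand before
>>>>> any off-subnet traffic can flow, e.g.:
>>>>>
>>>>> sudo route add default gw 192.168.22.1
>>>>>
>>>>> inside the instance.)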
>>>>> I have also tried updating the network's router:external attribute to
>>>>> True, but with no success.
>>>>>
>>>>> What am I doing wrong here? I am in the phase of building a new
>>>>> infrastructure and can *afford* changes, but after spending so much
>>>>> time on these networking issues I really hope I will be able to move
>>>>> forward.
>>>>>
>>>>> Thank you in advance for all the ideas.
>>>>> Matej
>>>>>
>>>>> On Wed, Apr 23, 2014 at 10:47 AM, Robert van Leeuwen
>>>>> <Robert.vanLeeuwen at spilgames.com> wrote:
>>>>>
>>>>>> > neutron net-create public --tenant_id a0edd2a531bb41e6b17e0fd644bfd494
>>>>>> > --provider:network_type flat --provider:physical_network default --shared True
>>>>>> > Invalid input for provider:physical_network. Reason: '[u'default',
>>>>>> > u'True']' is not a valid string.
>>>>>> >
>>>>>> > To be able to use --provider:physical_network I need bridge_mappings
>>>>>> > in the configuration, right? When I add it, my existing GRE network
>>>>>> > stops working.
>>>>>> > It seems I am lost here ...
>>>>>>
>>>>>> You should be able to run bridge-mapped networks and GRE tunnels at
>>>>>> the same time; adding the bridge-mapping config should not break GRE
>>>>>> (always try this in a test setup first ;). We used to do this up to
>>>>>> Folsom (maybe even Grizzly, I do not remember the exact timelines).
>>>>>>
>>>>>> We moved to a full VLAN setup later on because GRE was adding
>>>>>> complexity without any real benefits. (Since we do not expect to have
>>>>>> thousands of networks, we do not expect to run out of VLANs.)
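>>>>>>
>>>>>> For reference, a VLAN-based mapping in ovs_neutron_plugin.ini looks
>>>>>> roughly like this (illustrative names and VLAN range, not our exact
>>>>>> config):
>>>>>>
>>>>>> [ovs]
>>>>>> tenant_network_type = vlan
>>>>>> network_vlan_ranges = physnet1:1000:2999
>>>>>> bridge_mappings = physnet1:br-eth1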
>>>>>>
>>>>>> Cheers,
>>>>>> Robert van Leeuwen
>>>>>>