[Openstack] network question on openstack installation
Uwe Sauter
uwe.sauter.de at gmail.com
Mon Jun 29 21:16:20 UTC 2015
Yes. Just keep in mind that if you extend your configuration with a new config file, then you must change your init script / unit file to reference that file. And it would probably be a good idea to re-sync the DB with that additional file as an option. Or you keep your plugin configuration in a single file and be happy with the current layout.
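On a systemd-based distribution, one way to reference an additional config file without editing the packaged unit is a drop-in. This is only an illustrative sketch; the `ExecStart` path and the extra file name `my_extra_plugin.ini` are assumptions, not part of this thread:

```ini
# /etc/systemd/system/neutron-server.service.d/extra-config.conf
# Hypothetical drop-in: clears the packaged ExecStart and re-declares it
# with an additional --config-file argument so neutron-server also loads
# the extra plugin configuration.
[Service]
ExecStart=
ExecStart=/usr/bin/neutron-server --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugin.ini \
    --config-file /etc/neutron/my_extra_plugin.ini
```

After adding the drop-in, run `systemctl daemon-reload` before restarting the service so systemd picks up the changed unit.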
On 29 June 2015 at 22:47:22 CEST, "Yngvi Páll Þorfinnsson" <yngvith at siminn.is> wrote:
>Hi Uwe
>
>I just ran this a few minutes ago, i.e. did the "population of the DB"
>again, according to the manual.
>Shouldn't this be enough?
>
>
>root at controller2:/# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
>> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron
>
>INFO [alembic.migration] Context impl MySQLImpl.
>INFO [alembic.migration] Will assume non-transactional DDL.
>INFO [alembic.migration] Running upgrade None -> havana,
>havana_initial
>INFO [alembic.migration] Running upgrade havana -> e197124d4b9, add
>unique constraint to members
>INFO [alembic.migration] Running upgrade e197124d4b9 -> 1fcfc149aca4,
>Add a unique constraint on (agent_type, host) columns to prevent a race
>condition when an agent entry is 'upserted'.
>INFO [alembic.migration] Running upgrade 1fcfc149aca4 -> 50e86cb2637a,
>nsx_mappings
>INFO [alembic.migration] Running upgrade 50e86cb2637a -> 1421183d533f,
>NSX DHCP/metadata support
>INFO [alembic.migration] Running upgrade 1421183d533f -> 3d3cb89d84ee,
>nsx_switch_mappings
>INFO [alembic.migration] Running upgrade 3d3cb89d84ee -> 4ca36cfc898c,
>nsx_router_mappings
>INFO [alembic.migration] Running upgrade 4ca36cfc898c -> 27cc183af192,
>ml2_vnic_type
>INFO [alembic.migration] Running upgrade 27cc183af192 -> 50d5ba354c23,
>ml2 binding:vif_details
>INFO [alembic.migration] Running upgrade 50d5ba354c23 -> 157a5d299379,
>ml2 binding:profile
>INFO [alembic.migration] Running upgrade 157a5d299379 -> 3d2585038b95,
>VMware NSX rebranding
>INFO [alembic.migration] Running upgrade 3d2585038b95 -> abc88c33f74f,
>lb stats
>INFO [alembic.migration] Running upgrade abc88c33f74f -> 1b2580001654,
>nsx_sec_group_mapping
>INFO [alembic.migration] Running upgrade 1b2580001654 -> e766b19a3bb,
>nuage_initial
>INFO [alembic.migration] Running upgrade e766b19a3bb -> 2eeaf963a447,
>floatingip_status
>INFO [alembic.migration] Running upgrade 2eeaf963a447 -> 492a106273f8,
>Brocade ML2 Mech. Driver
>INFO [alembic.migration] Running upgrade 492a106273f8 -> 24c7ea5160d7,
>Cisco CSR VPNaaS
>INFO [alembic.migration] Running upgrade 24c7ea5160d7 -> 81c553f3776c,
>bsn_consistencyhashes
>INFO [alembic.migration] Running upgrade 81c553f3776c -> 117643811bca,
>nec: delete old ofc mapping tables
>INFO [alembic.migration] Running upgrade 117643811bca -> 19180cf98af6,
>nsx_gw_devices
>INFO [alembic.migration] Running upgrade 19180cf98af6 -> 33dd0a9fa487,
>embrane_lbaas_driver
>INFO [alembic.migration] Running upgrade 33dd0a9fa487 -> 2447ad0e9585,
>Add IPv6 Subnet properties
>INFO [alembic.migration] Running upgrade 2447ad0e9585 -> 538732fa21e1,
>NEC Rename quantum_id to neutron_id
>INFO [alembic.migration] Running upgrade 538732fa21e1 -> 5ac1c354a051,
>n1kv segment allocs for cisco n1kv plugin
>INFO [alembic.migration] Running upgrade 5ac1c354a051 -> icehouse,
>icehouse
>INFO [alembic.migration] Running upgrade icehouse -> 54f7549a0e5f,
>set_not_null_peer_address
>INFO [alembic.migration] Running upgrade 54f7549a0e5f -> 1e5dd1d09b22,
>set_not_null_fields_lb_stats
>INFO [alembic.migration] Running upgrade 1e5dd1d09b22 -> b65aa907aec,
>set_length_of_protocol_field
>INFO [alembic.migration] Running upgrade b65aa907aec -> 33c3db036fe4,
>set_length_of_description_field_metering
>INFO [alembic.migration] Running upgrade 33c3db036fe4 -> 4eca4a84f08a,
>Remove ML2 Cisco Credentials DB
>INFO [alembic.migration] Running upgrade 4eca4a84f08a -> d06e871c0d5,
>set_admin_state_up_not_null_ml2
>INFO [alembic.migration] Running upgrade d06e871c0d5 -> 6be312499f9,
>set_not_null_vlan_id_cisco
>INFO [alembic.migration] Running upgrade 6be312499f9 -> 1b837a7125a9,
>Cisco APIC Mechanism Driver
>INFO [alembic.migration] Running upgrade 1b837a7125a9 -> 10cd28e692e9,
>nuage_extraroute
>INFO [alembic.migration] Running upgrade 10cd28e692e9 -> 2db5203cb7a9,
>nuage_floatingip
>INFO [alembic.migration] Running upgrade 2db5203cb7a9 -> 5446f2a45467,
>set_server_default
>INFO [alembic.migration] Running upgrade 5446f2a45467 -> db_healing,
>Include all tables and make migrations unconditional.
>INFO [alembic.migration] Context impl MySQLImpl.
>INFO [alembic.migration] Will assume non-transactional DDL.
>INFO [alembic.autogenerate.compare] Detected server default on column
>'cisco_ml2_apic_epgs.provider'
>INFO [alembic.autogenerate.compare] Detected removed index
>'cisco_n1kv_vlan_allocations_ibfk_1' on 'cisco_n1kv_vlan_allocations'
>INFO [alembic.autogenerate.compare] Detected server default on column
>'cisco_n1kv_vxlan_allocations.allocated'
>INFO [alembic.autogenerate.compare] Detected removed index
>'cisco_n1kv_vxlan_allocations_ibfk_1' on 'cisco_n1kv_vxlan_allocations'
>INFO [alembic.autogenerate.compare] Detected removed index
>'embrane_pool_port_ibfk_2' on 'embrane_pool_port'
>INFO [alembic.autogenerate.compare] Detected removed index
>'firewall_rules_ibfk_1' on 'firewall_rules'
>INFO [alembic.autogenerate.compare] Detected removed index
>'firewalls_ibfk_1' on 'firewalls'
>INFO [alembic.autogenerate.compare] Detected server default on column
>'meteringlabelrules.excluded'
>INFO [alembic.autogenerate.compare] Detected server default on column
>'ml2_port_bindings.host'
>INFO [alembic.autogenerate.compare] Detected added column
>'nuage_routerroutes_mapping.destination'
>INFO [alembic.autogenerate.compare] Detected added column
>'nuage_routerroutes_mapping.nexthop'
>INFO [alembic.autogenerate.compare] Detected server default on column
>'poolmonitorassociations.status'
>INFO [alembic.autogenerate.compare] Detected added index
>'ix_quotas_tenant_id' on '['tenant_id']'
>INFO [alembic.autogenerate.compare] Detected NULL on column
>'tz_network_bindings.phy_uuid'
>INFO [alembic.autogenerate.compare] Detected NULL on column
>'tz_network_bindings.vlan_id'
>INFO [neutron.db.migration.alembic_migrations.heal_script] Detected
>removed foreign key u'nuage_floatingip_pool_mapping_ibfk_2' on table
>u'nuage_floatingip_pool_mapping'
>INFO [alembic.migration] Running upgrade db_healing -> 3927f7f7c456,
>L3 extension distributed mode
>INFO [alembic.migration] Running upgrade 3927f7f7c456 -> 2026156eab2f,
>L2 models to support DVR
>INFO [alembic.migration] Running upgrade 2026156eab2f -> 37f322991f59,
>removing_mapping_tables
>INFO [alembic.migration] Running upgrade 37f322991f59 -> 31d7f831a591,
>add constraint for routerid
>INFO [alembic.migration] Running upgrade 31d7f831a591 -> 5589aa32bf80,
>L3 scheduler additions to support DVR
>INFO [alembic.migration] Running upgrade 5589aa32bf80 -> 884573acbf1c,
>Drop NSX table in favor of the extra_attributes one
>INFO [alembic.migration] Running upgrade 884573acbf1c -> 4eba2f05c2f4,
>correct Vxlan Endpoint primary key
>INFO [alembic.migration] Running upgrade 4eba2f05c2f4 -> 327ee5fde2c7,
>set_innodb_engine
>INFO [alembic.migration] Running upgrade 327ee5fde2c7 -> 3b85b693a95f,
>Drop unused servicedefinitions and servicetypes tables.
>INFO [alembic.migration] Running upgrade 3b85b693a95f -> aae5706a396,
>nuage_provider_networks
>INFO [alembic.migration] Running upgrade aae5706a396 -> 32f3915891fd,
>cisco_apic_driver_update
>INFO [alembic.migration] Running upgrade 32f3915891fd -> 58fe87a01143,
>cisco_csr_routing
>INFO [alembic.migration] Running upgrade 58fe87a01143 -> 236b90af57ab,
>ml2_type_driver_refactor_dynamic_segments
>INFO [alembic.migration] Running upgrade 236b90af57ab -> 86d6d9776e2b,
>Cisco APIC Mechanism Driver
>INFO [alembic.migration] Running upgrade 86d6d9776e2b -> 16a27a58e093,
>ext_l3_ha_mode
>INFO [alembic.migration] Running upgrade 16a27a58e093 -> 3c346828361e,
>metering_label_shared
>INFO [alembic.migration] Running upgrade 3c346828361e -> 1680e1f0c4dc,
>Remove Cisco Nexus Monolithic Plugin
>INFO [alembic.migration] Running upgrade 1680e1f0c4dc -> 544673ac99ab,
>add router port relationship
>INFO [alembic.migration] Running upgrade 544673ac99ab -> juno, juno
>root at controller2:/#
>
>-----Original Message-----
>From: Uwe Sauter [mailto:uwe.sauter.de at gmail.com]
>Sent: 29 June 2015 20:45
>To: Yngvi Páll Þorfinnsson; YANG LI
>Cc: openstack at lists.openstack.org
>Subject: Re: [Openstack] network question on openstack installation
>
>Go to http://docs.openstack.org/ , select the OpenStack version, then
>the installation guide for your distribution and navigate to
>
>6. Add a networking component
> - OpenStack Networking (neutron)
> - Install and configure controller node
>
>and follow the database related stuff:
>- create DB
>- grant privileges to neutron user
>- populate the DB (search for "neutron-db-manage")
>
>Depending on your distribution, you probably need to use sudo.
>
>The number of config files depends on the file layout you are working
>with, so this is not an exact answer.
>
>I did the following:
>
>[on controller node]
>
>cd /etc/neutron
>cat plugins/ml2/ml2_conf.ini > plugin.ini
>cd plugins/ml2
>rm -f ml2_conf.ini
>ln -s ../../plugin.ini ml2_conf.ini
>
>
>[on network and compute nodes]
>
>cd /etc/neutron
>cat plugins/ml2/ml2_conf.ini > plugin.ini
>cat plugins/openvswitch/ovs_neutron_plugin.ini >> plugin.ini
>cd plugins/ml2
>rm -f ml2_conf.ini
>ln -s ../../plugin.ini ml2_conf.ini
>cd ../openvswitch
>rm -f ovs_neutron_plugin.ini
>ln -s ../../plugin.ini ovs_neutron_plugin.ini
>
>
>
>Then I applied the sed trick that is needed on RDO because of a
>packaging bug with Juno (it changes the systemd unit file to load
>/etc/neutron/plugin.ini instead of
>/etc/neutron/plugins/ml2/ml2_plugin.ini).
>And then I ran 'su -s /bin/sh -c "neutron-db-manage --config-file
>/etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade
>juno" neutron' on the controller node.
>
>As far as I understand the configuration system used in Neutron, each
>config file is merged into the same configuration namespace; if an
>option appears in the same [] section of more than one file, the load
>order alone determines which value wins (the file loaded last takes
>precedence).
>So you could have just one ini file in addition to
>/etc/neutron/neutron.conf (my way) or several. But then you need to
>load them all when syncing the DB *and* when starting the services.
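The "last file loaded wins" behavior described above can be illustrated with a small self-contained sketch using Python's stdlib configparser. This is only an analogy for the merge-order rule (oslo.config, which Neutron actually uses, also gives precedence to later --config-file arguments for duplicate options); the file names and option values here are made up:

```python
import configparser
import os
import tempfile

# Two config files that define options in the same [ml2] section.
first = "[ml2]\ntype_drivers = vlan\ntenant_network_types = vlan\n"
second = "[ml2]\ntenant_network_types = vxlan\n"  # later file overrides this option

tmpdir = tempfile.mkdtemp()
paths = []
for name, content in [("a.ini", first), ("b.ini", second)]:
    path = os.path.join(tmpdir, name)
    with open(path, "w") as f:
        f.write(content)
    paths.append(path)

cfg = configparser.ConfigParser()
cfg.read(paths)  # files are merged in order; later values win

print(cfg["ml2"]["type_drivers"])          # only set in the first file
print(cfg["ml2"]["tenant_network_types"])  # overridden by the second file
```

Running this prints `vlan` and then `vxlan`: the option defined in both files takes its value from the file read last, while options defined in only one file survive the merge untouched.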
>
>
>Hopefully this brightens the dark side of Neutron configuration,
>
> Uwe
>
>
>
>
>
>On 29.06.2015 at 22:19, Yngvi Páll Þorfinnsson wrote:
>> Hi Uwe
>> No, I didn't drop the keystone ;-)
>>
>> But is this the correct way to resync neutron?
>>
>> # neutron-db-manage --config-file /etc/neutron/neutron.conf \
>>     --config-file /etc/neutron/plugins/ml2/ml2_plugin.ini
>>
>> I mean, how many config files are necessary in the command?
>>
>> best regards
>> Yngvi
>>
>> -----Original Message-----
>> From: Uwe Sauter [mailto:uwe.sauter.de at gmail.com]
>> Sent: 29 June 2015 18:08
>> To: YANG LI
>> Cc: openstack at lists.openstack.org
>> Subject: Re: [Openstack] network question on openstack installation
>>
>> It depends on your switch… some drop tagged packets on an access
>> port, others allow tagged packets if the packet's VLAN ID equals the
>> configured VLAN ID.
>>
>> I'd reconfigure the provider network type to "flat", but that's
>> personal taste. You could also reconfigure the switch port to be a
>> trunking port (with only one VLAN ID). Currently you're in between
>> those configurations, which might or might not work later on…
>>
>>
>> Regards,
>>
>> Uwe
>>
>>
>> On 29.06.2015 at 19:42, YANG LI wrote:
>>> Thank you, Uwe. Our provider network actually is untagged, but I did
>>> specify a VLAN ID when I created our external network and everything
>>> still works. Will this cause issues later on?
>>>
>>> neutron net-create --provider:network_type=vlan \
>>>   --provider:segmentation_id=<vlan id> \
>>>   --provider:physical_network=physnet1 \
>>>   --router:external=true public
>>>
>>>
>>> Thanks,
>>> Yang
>>>> On Jun 27, 2015, at 11:17 AM, Uwe Sauter <uwe.sauter.de at gmail.com> wrote:
>>>>
>>>> Hi Yang,
>>>>
>>>> it depends on whether your provider network is tagged or untagged.
>>>> If it is untagged (the switch port is an "access" port) then you
>>>> don't specify the VLAN ID for the external network (as it will get
>>>> tagged by the switch). If the provider network is tagged (the
>>>> switch port is a "trunk" port) then you have to configure the VLAN
>>>> ID, because the switch might refuse the traffic (depending on
>>>> whether there is a default VLAN ID defined on the switch's port).
>>>>
>>>> Regards,
>>>>
>>>> Uwe
>>>>
>>>>
>>>>
>>>> On 27.06.2015 at 13:47, YANG LI wrote:
>>>>> Thank you so much, James! This is so helpful. Another confusion I
>>>>> have is about network_vlan_ranges. Is this the network VLAN ID
>>>>> range? If so, does it have to match the external network? For
>>>>> example, we only have one external VLAN we can use as our provider
>>>>> network, and that VLAN ID is 775 (xxx.xxx.xxx.0/26). Should I
>>>>> define network_vlan_ranges as follows?
>>>>>
>>>>> [ml2]
>>>>> type_drivers=vlan
>>>>> tenant_network_types = vlan
>>>>> mechanism_drivers=openvswitch
>>>>> #
>>>>> [ml2_type_vlan]
>>>>>
>>>>> # this tells Openstack that the internal name "physnet1" provides
>>>>> # the vlan range 100-199
>>>>> network_vlan_ranges=physnet1:775
>>>>> #
>>>>>
>>>>> Thanks,
>>>>> Yang
>>>>> Sent from my iPhone
>>>>>
>>>>> On Jun 26, 2015, at 8:54 AM, "James Denton"
>>>>> <james.denton at rackspace.com> wrote:
>>>>>
>>>>>> You can absolutely have instances in the same network span
>>>>>> different compute nodes. As an admin, you can run ‘nova show
><instanceid>’ and see the host in the output:
>>>>>>
>>>>>> root at controller01:~# nova show 7bb18175-87da-4d1f-8dca-2ef07fee9d21 | grep host
>>>>>> | OS-EXT-SRV-ATTR:host | compute02 |
>>>>>>
>>>>>> That info is not available to non-admin users by default.
>>>>>>
>>>>>> James
>>>>>>
>>>>>>> On Jun 26, 2015, at 7:38 AM, YANG LI <yangli at clemson.edu> wrote:
>>>>>>>
>>>>>>> Thanks, James, for the explanation. It makes more sense now. It
>>>>>>> is possible that instances on the same tenant network reside on
>>>>>>> different compute nodes, right? How do I tell which compute node
>>>>>>> an instance is on?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Yang
>>>>>>>
>>>>>>>> On Jun 24, 2015, at 10:27 AM, James Denton
>>>>>>>> <james.denton at rackspace.com> wrote:
>>>>>>>>
>>>>>>>> Hello.
>>>>>>>>
>>>>>>>>> all three nodes will have eth0 on the management/api network.
>>>>>>>>> Since I am using the ml2 plugin with vlan for tenant networks,
>>>>>>>>> I think all compute nodes should have eth1 as a second nic on
>>>>>>>>> the provider network. Is this correct? I understand the
>>>>>>>>> provider network is for instances to get external access to
>>>>>>>>> the internet, but how does an instance living on compute1
>>>>>>>>> communicate with an instance living on compute2? Do they also
>>>>>>>>> go through the provider network?
>>>>>>>>
>>>>>>>> In short, yes. If you’re connecting instances to vlan “provider”
>>>>>>>> networks, traffic between instances on different compute nodes
>>>>>>>> will traverse the “provider bridge”, get tagged out eth1, and
>>>>>>>> hit the physical switching fabric. Your external gateway device
>>>>>>>> could also sit in that vlan, and the default route on the
>>>>>>>> instance would direct external traffic to that device.
>>>>>>>>
>>>>>>>> In reality, every network has ‘provider’ attributes that
>>>>>>>> describe the network type, segmentation id, and bridge
>>>>>>>> interface (for vlan/flat only). So tenant networks that
>>>>>>>> leverage vlans would have provider attributes set by Neutron
>>>>>>>> automatically based on the configuration set in the ML2 config
>>>>>>>> file. If you use Neutron routers that connect to both ‘tenant’
>>>>>>>> vlan-based networks and external ‘provider’ networks, all of
>>>>>>>> that traffic could traverse the same provider bridge on the
>>>>>>>> controller/network node, but would be tagged accordingly based
>>>>>>>> on the network (i.e. vlan 100 for the external network, vlan
>>>>>>>> 200 for the tenant network).
>>>>>>>>
>>>>>>>> Hope that’s not too confusing!
>>>>>>>>
>>>>>>>> James
>>>>>>>>
>>>>>>>>> On Jun 24, 2015, at 8:54 AM, YANG LI <yangli at clemson.edu> wrote:
>>>>>>>>>
>>>>>>>>> I am working on installing OpenStack from scratch, but got
>>>>>>>>> confused by the network part. I want to have one controller
>>>>>>>>> node and two compute nodes.
>>>>>>>>>
>>>>>>>>> the controller node will only handle following services:
>>>>>>>>> glance-api
>>>>>>>>> glance-registry
>>>>>>>>> keystone
>>>>>>>>> nova-api
>>>>>>>>> nova-cert
>>>>>>>>> nova-conductor
>>>>>>>>> nova-consoleauth
>>>>>>>>> nova-novncproxy
>>>>>>>>> nova-scheduler
>>>>>>>>> qpid
>>>>>>>>> mysql
>>>>>>>>> neutron-server
>>>>>>>>>
>>>>>>>>> compute nodes will have following services:
>>>>>>>>> neutron-dhcp-agent
>>>>>>>>> neutron-l3-agent
>>>>>>>>> neutron-metadata-agent
>>>>>>>>> neutron-openvswitch-agent
>>>>>>>>> neutron-ovs-cleanup
>>>>>>>>> openvswitch
>>>>>>>>> nova-compute
>>>>>>>>>
>>>>>>>>> all three nodes will have eth0 on the management/api network.
>>>>>>>>> Since I am using the ml2 plugin with vlan for tenant networks,
>>>>>>>>> I think all compute nodes should have eth1 as a second nic on
>>>>>>>>> the provider network. Is this correct? I understand the
>>>>>>>>> provider network is for instances to get external access to
>>>>>>>>> the internet, but how does an instance living on compute1
>>>>>>>>> communicate with an instance living on compute2? Do they also
>>>>>>>>> go through the provider network?
>>>>>>>>> _______________________________________________
>>>>>>>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>>>>>> Post to : openstack at lists.openstack.org
>>>>>>>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>>
--
This message was sent from my Android mobile phone with K-9 Mail.