[Openstack] network question on openstack installation
Yngvi Páll Þorfinnsson
yngvith at siminn.is
Mon Jun 29 23:35:14 UTC 2015
root at network2:/# nmap -sO compute5
Starting Nmap 6.40 ( http://nmap.org ) at 2015-06-29 23:29 GMT
Nmap scan report for compute5 (172.22.14.17)
Host is up (0.00014s latency).
rDNS record for 172.22.14.17: compute5.siminn.is
Not shown: 245 closed protocols
PROTOCOL STATE SERVICE
1 open icmp
2 open|filtered igmp
6 open tcp
17 open udp
47 open|filtered gre
96 open scc-sp
103 open|filtered pim
136 open|filtered udplite
166 open unknown
182 open unknown
255 open|filtered unknown
MAC Address: 0C:C4:7A:1E:77:7E (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 310.51 seconds
-----Original Message-----
From: Yngvi Páll Þorfinnsson
Sent: 29 June 2015 23:35
To: 'Uwe Sauter'; 'YANG LI'
Cc: 'openstack at lists.openstack.org'
Subject: RE: [Openstack] network question on openstack installation
It's taking up to 5 minutes to finish
root at compute5:/# nmap -sO 172.22.14.14
Starting Nmap 6.40 ( http://nmap.org ) at 2015-06-29 23:28 GMT
Warning: 172.22.14.14 giving up on port because retransmission cap hit (10).
Nmap scan report for network2.siminn.is (172.22.14.14)
Host is up (0.000091s latency).
Not shown: 246 closed protocols
PROTOCOL STATE SERVICE
1 open icmp
2 open|filtered igmp
6 open tcp
7 open|filtered cbt
17 open udp
36 open xtp
47 open|filtered gre
103 open|filtered pim
136 open|filtered udplite
255 open|filtered unknown
MAC Address: 0C:C4:7A:1E:77:3D (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 307.19 seconds
-----Original Message-----
From: Yngvi Páll Þorfinnsson
Sent: 29 June 2015 23:28
To: 'Uwe Sauter'; YANG LI
Cc: openstack at lists.openstack.org
Subject: RE: [Openstack] network question on openstack installation
Well, this one finally finished.
Should I use the tunnel IP or the mgmt IP?
root at compute5:/# nmap -sO 172.22.15.14
Starting Nmap 6.40 ( http://nmap.org ) at 2015-06-29 23:21 GMT
Warning: 172.22.15.14 giving up on port because retransmission cap hit (10).
Nmap scan report for 172.22.15.14
Host is up (0.00017s latency).
Not shown: 247 closed protocols
PROTOCOL STATE SERVICE
1 open icmp
2 open|filtered igmp
6 open tcp
17 open udp
47 open|filtered gre
54 open narp
103 open|filtered pim
136 open|filtered udplite
144 open unknown
MAC Address: 0C:C4:7A:1E:77:3D (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 297.15 seconds
root at compute5:/#
-----Original Message-----
From: Uwe Sauter [mailto:uwe.sauter.de at gmail.com]
Sent: 29 June 2015 23:18
To: Yngvi Páll Þorfinnsson; YANG LI
Cc: openstack at lists.openstack.org
Subject: RE: [Openstack] network question on openstack installation
Hm, I'm running out of ideas. Can you run those two commands to verify that GRE traffic can pass the firewalls:
Network node: nmap -sO <IP of compute node>
Compute node: nmap -sO <IP of network node>
In both cases, that's a capital O, not a zero.
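If those scans show protocol 47 as closed rather than open|filtered, a rule like the following is one way to open it. This is only a sketch: the 172.22.15.0/24 tunnel subnet matches the one used elsewhere in this thread and may differ in your setup.

```shell
# Allow GRE (IP protocol 47) between the tunnel endpoints.
# Run on both the network node and the compute node.
iptables -I INPUT 1 -p 47 -s 172.22.15.0/24 -d 172.22.15.0/24 -j ACCEPT
```

Remember to add the equivalent rule to your persistent firewall configuration so it survives a reboot.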
On 30 June 2015 01:07:04 CEST, "Yngvi Páll Þorfinnsson" <yngvith at siminn.is> wrote:
>OK, so I ran the command you sent me, and now it looks like it's allowed:
>
>Controller:
>
>root at controller2:/# iptables -L -nv --line-numbers
>Chain INPUT (policy ACCEPT 6437 packets, 2609K bytes)
>num pkts bytes target prot opt in out source
> destination
>1 54411 17M nova-api-INPUT all -- * * 0.0.0.0/0
> 0.0.0.0/0
>2 0 0 ACCEPT 47 -- * * 172.22.15.0/24
> 172.22.15.0/24
>
>Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>num pkts bytes target prot opt in out source
> destination
>1 0 0 nova-filter-top all -- * * 0.0.0.0/0
> 0.0.0.0/0
>2 0 0 nova-api-FORWARD all -- * * 0.0.0.0/0
> 0.0.0.0/0
>
>Chain OUTPUT (policy ACCEPT 6228 packets, 2573K bytes)
>num pkts bytes target prot opt in out source
> destination
>1 51439 16M nova-filter-top all -- * * 0.0.0.0/0
> 0.0.0.0/0
>2 51439 16M nova-api-OUTPUT all -- * * 0.0.0.0/0
> 0.0.0.0/0
>
>Chain nova-api-FORWARD (1 references)
>num pkts bytes target prot opt in out source
> destination
>
>Chain nova-api-INPUT (1 references)
>num pkts bytes target prot opt in out source
> destination
>1 0 0 ACCEPT tcp -- * * 0.0.0.0/0
> 172.22.14.22 tcp dpt:8775
>
>Chain nova-api-OUTPUT (1 references)
>num pkts bytes target prot opt in out source
> destination
>
>Chain nova-api-local (1 references)
>num pkts bytes target prot opt in out source
> destination
>
>Chain nova-filter-top (2 references)
>num pkts bytes target prot opt in out source
> destination
>1 51439 16M nova-api-local all -- * * 0.0.0.0/0
> 0.0.0.0/0
>root at controller2:/#
>
>
>Network:
>
>root at network2:/# iptables -L -nv --line-numbers
>Chain INPUT (policy ACCEPT 8 packets, 512 bytes)
>num pkts bytes target prot opt in out source
> destination
>1 2215 700K neutron-openvswi-INPUT all -- * *
>0.0.0.0/0 0.0.0.0/0
>2 0 0 ACCEPT 47 -- * * 172.22.15.0/24
> 172.22.15.0/24
>
>Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>num pkts bytes target prot opt in out source
> destination
>1 0 0 neutron-filter-top all -- * * 0.0.0.0/0
> 0.0.0.0/0
>2 0 0 neutron-openvswi-FORWARD all -- * *
>0.0.0.0/0 0.0.0.0/0
>
>Chain OUTPUT (policy ACCEPT 5 packets, 664 bytes)
>num pkts bytes target prot opt in out source
> destination
>1 1827 332K neutron-filter-top all -- * * 0.0.0.0/0
> 0.0.0.0/0
>2 1827 332K neutron-openvswi-OUTPUT all -- * *
>0.0.0.0/0 0.0.0.0/0
>
>Chain neutron-filter-top (2 references)
>num pkts bytes target prot opt in out source
> destination
>1 1827 332K neutron-openvswi-local all -- * *
>0.0.0.0/0 0.0.0.0/0
>
>Chain neutron-openvswi-FORWARD (1 references)
>num pkts bytes target prot opt in out source
> destination
>
>Chain neutron-openvswi-INPUT (1 references)
>num pkts bytes target prot opt in out source
> destination
>
>Chain neutron-openvswi-OUTPUT (1 references)
>num pkts bytes target prot opt in out source
> destination
>
>Chain neutron-openvswi-local (1 references)
>num pkts bytes target prot opt in out source
> destination
>
>Chain neutron-openvswi-sg-chain (0 references)
>num pkts bytes target prot opt in out source
> destination
>
>Chain neutron-openvswi-sg-fallback (0 references)
>num pkts bytes target prot opt in out source
> destination
>1 0 0 DROP all -- * * 0.0.0.0/0
> 0.0.0.0/0
>root at network2:/#
>
>compute:
>
>root at compute5:/# iptables -L -nv --line-numbers
>Chain INPUT (policy ACCEPT 24 packets, 7092 bytes)
>num pkts bytes target prot opt in out source
> destination
>1 0 0 ACCEPT udp -- virbr0 * 0.0.0.0/0
> 0.0.0.0/0 udp dpt:53
>2 0 0 ACCEPT 47 -- * * 172.22.15.0/24
> 172.22.15.0/24
>3 0 0 ACCEPT tcp -- virbr0 * 0.0.0.0/0
> 0.0.0.0/0 tcp dpt:53
>4 0 0 ACCEPT udp -- virbr0 * 0.0.0.0/0
> 0.0.0.0/0 udp dpt:67
>5 0 0 ACCEPT tcp -- virbr0 * 0.0.0.0/0
> 0.0.0.0/0 tcp dpt:67
>
>Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>num pkts bytes target prot opt in out source
> destination
>1 0 0 ACCEPT all -- * virbr0 0.0.0.0/0
> 192.168.122.0/24 ctstate RELATED,ESTABLISHED
>2 0 0 ACCEPT all -- virbr0 * 192.168.122.0/24
> 0.0.0.0/0
>3 0 0 ACCEPT all -- virbr0 virbr0 0.0.0.0/0
> 0.0.0.0/0
>4 0 0 REJECT all -- * virbr0 0.0.0.0/0
> 0.0.0.0/0 reject-with icmp-port-unreachable
>5 0 0 REJECT all -- virbr0 * 0.0.0.0/0
> 0.0.0.0/0 reject-with icmp-port-unreachable
>
>Chain OUTPUT (policy ACCEPT 14 packets, 2340 bytes)
>num pkts bytes target prot opt in out source
> destination
>1 0 0 ACCEPT udp -- * virbr0 0.0.0.0/0
> 0.0.0.0/0 udp dpt:68
>
>
>I tried once more to create an instance, but it failed as well.
>
>Best regards
>Yngvi
>
>From: Uwe Sauter [mailto:uwe.sauter.de at gmail.com]
>Sent: 29 June 2015 23:01
>To: Yngvi Páll Þorfinnsson; YANG LI
>Cc: openstack at lists.openstack.org
>Subject: RE: [Openstack] network question on openstack installation
>
>I'm not sure if there is something wrong but on both hosts I don't see
>any rule that accepts GRE traffic. You need to allow GRE traffic on
>your internal network so that tunneling can actually work. Without that
>it's like having your network configured but not plugged in...
>On 30 June 2015 00:50:28 CEST, "Yngvi Páll Þorfinnsson" <yngvith at siminn.is> wrote:
>Network node:
>
>
>root at network2:/# iptables -L -nv --line-numbers
>Chain INPUT (policy ACCEPT 1286 packets, 351K bytes)
>num pkts bytes target prot opt in out source
> destination
>1 1171 338K neutron-openvswi-INPUT all -- * *
>0.0.0.0/0 0.0.0.0/0
>
>
>Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>num pkts bytes target prot opt in out source
> destination
>1 0 0 neutron-filter-top all -- * * 0.0.0.0/0
> 0.0.0.0/0
>2 0 0 neutron-openvswi-FORWARD all -- * *
>0.0.0.0/0 0.0.0.0/0
>
>
>Chain OUTPUT (policy ACCEPT 1124 packets, 180K bytes)
>num pkts bytes target prot opt in out source
> destination
>1 988 164K neutron-filter-top all -- * * 0.0.0.0/0
> 0.0.0.0/0
>2 988 164K neutron-openvswi-OUTPUT all -- * *
>0.0.0.0/0 0.0.0.0/0
>
>
>Chain neutron-filter-top (2 references)
>num pkts bytes target prot opt in out source
> destination
>1 988 164K neutron-openvswi-local all -- * *
>0.0.0.0/0 0.0.0.0/0
>
>
>Chain neutron-openvswi-FORWARD (1 references)
>num pkts bytes target prot opt in out source
> destination
>
>
>Chain neutron-openvswi-INPUT (1 references)
>num pkts bytes target prot opt in out source
> destination
>
>
>Chain neutron-openvswi-OUTPUT (1 references)
>num pkts bytes target prot opt in out source
> destination
>
>
>Chain neutron-openvswi-local (1 references)
>num pkts bytes target prot opt in out source
> destination
>
>
>Chain neutron-openvswi-sg-chain (0 references)
>num pkts bytes target prot opt in out source
> destination
>
>
>Chain neutron-openvswi-sg-fallback (0 references)
>num pkts bytes target prot opt in out source
> destination
>1 0 0 DROP all -- * * 0.0.0.0/0
> 0.0.0.0/0
>root at network2:/#
>
>
>Controller node
>
>
>root at controller2:/# iptables -L -nv --line-numbers
>Chain INPUT (policy ACCEPT 25498 packets, 7540K bytes)
>num pkts bytes target prot opt in out source
> destination
>1 25498 7540K nova-api-INPUT all -- * * 0.0.0.0/0
> 0.0.0.0/0
>
>
>Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>num pkts bytes target prot opt in out source
> destination
>1 0 0 nova-filter-top all -- * * 0.0.0.0/0
> 0.0.0.0/0
>2 0 0 nova-api-FORWARD all -- * * 0.0.0.0/0
> 0.0.0.0/0
>
>
>Chain OUTPUT (policy ACCEPT 24216 packets, 7244K bytes)
>num pkts bytes target prot opt in out source
> destination
>1 24216 7244K nova-filter-top all -- * * 0.0.0.0/0
> 0.0.0.0/0
>2 24216 7244K nova-api-OUTPUT all -- * * 0.0.0.0/0
> 0.0.0.0/0
>
>
>Chain nova-api-FORWARD (1 references)
>num pkts bytes target prot opt in out source
> destination
>
>
>Chain nova-api-INPUT (1 references)
>num pkts bytes target prot opt in out source
> destination
>1 0 0 ACCEPT tcp -- * * 0.0.0.0/0
> 172.22.14.22 tcp dpt:8775
>
>
>Chain nova-api-OUTPUT (1 references)
>num pkts bytes target prot opt in out source
> destination
>
>
>Chain nova-api-local (1 references)
>num pkts bytes target prot opt in out source
> destination
>
>
>Chain nova-filter-top (2 references)
>num pkts bytes target prot opt in out source
> destination
>1 24216 7244K nova-api-local all -- * * 0.0.0.0/0
> 0.0.0.0/0
>root at controller2:/#
>
>
>
>
>
>
>From: Uwe Sauter [mailto:uwe.sauter.de at gmail.com]
>Sent: 29 June 2015 22:46
>To: Yngvi Páll Þorfinnsson; YANG LI
>Cc: openstack at lists.openstack.org
>Subject: RE: [Openstack] network question on openstack installation
>
>
>One more thing. Please provide
>
>iptables -L -nv --line-numbers
>
>for network and compute nodes.
>On 30 June 2015 00:25:45 CEST, "Yngvi Páll Þorfinnsson" <yngvith at siminn.is> wrote:
>In fact, I don't think I'll need more than one "external network". So,
>am I on the wrong path here, i.e. when I'm configuring the external
>network as a VLAN?
>
>
>Best regards
>Yngvi
>
>
>From: Yngvi Páll Þorfinnsson
>Sent: 29 June 2015 21:57
>To: Uwe Sauter; YANG LI
>Cc: openstack at lists.openstack.org
>Subject: Re: [Openstack] network question on openstack installation
>
>
>OK, I only found one fresh error.
>Compute node: nova-compute.log (as usual, when I create an instance):
>
>
>grep ERR nova-compute.log
>2015-06-29 21:11:11.801 4166 ERROR nova.compute.manager [-] [instance:
>af901a2b-2462-4c19-b1f1-237371fd8177] Instance failed to spawn
>
>
>I've put the neutron agent-show and neutron (sub)net-list output in the
>attached file.
>
>
>Best regards
>Yngvi
>
>
>
>
>From: Uwe Sauter [mailto:uwe.sauter.de at gmail.com]
>Sent: 29 June 2015 21:34
>To: Yngvi Páll Þorfinnsson; YANG LI
>Cc: openstack at lists.openstack.org
>Subject: RE: [Openstack] network question on openstack installation
>
>
>Can you check for ERRORs in:
>Network node: neutron server log, neutron openvswitch agent log, openvswitch log
>Nova controller node: nova api log, nova scheduler log
>Compute node: nova compute log, neutron openvswitch agent log, openvswitch log
>
>Also please list again neutron agent-show for the different agents and
>neutron net-show and neutron subnet-show for your (sub)networks.
>On 29 June 2015 23:24:48 CEST, "Yngvi Páll Þorfinnsson" <yngvith at siminn.is> wrote:
>Thanks a lot for your effort Uwe ;-)
>It's really helpful!
>
>
>Now I keep creating instances, and I get the same error.
>I still see a strange message in the
>neutron server.log file when I try to create an instance:
>
>
>2015-06-29 21:11:11.576 1960 DEBUG
>neutron.plugins.ml2.drivers.mech_openvswitch
>[req-1e603e4b-61e6-4896-8f81-208daba8569b None] Checking segment:
>{'segmentation_id': 1102L, 'physical_network': u'external', 'id':
>u'11fab5ad-c457-4175-9e5a-f505fc0e072d', 'network_type': u'vlan'} for
>mappings: {u'external': u'br-ex'} with tunnel_types: [u'gre']
>check_segment_for_agent
>/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_openv
>switch.py:52
>
>
>But still this segment is not listed by 'neutron net-list' nor
>'neutron subnet-list'!
>
>
>
>
>root at controller2:/# source admin-openrc.sh
>root at controller2:/#
>root at controller2:/# neutron net-list
>+--------------------------------------+-------------+-----------------------------------------------------+
>| id                                   | name        | subnets                                             |
>+--------------------------------------+-------------+-----------------------------------------------------+
>| b43da44a-42d5-4b1f-91c2-d06a923deb29 | ext_net1101 | c40fa8e3-cd8e-4566-ade6-5f3eabed121c 157.157.8.0/24 |
>| 3446e54b-346f-45e5-89a2-1ec4eef251ab | demo-net    | 2c79bb00-0ace-4319-8151-81210ee3dfb2 172.22.18.0/24 |
>+--------------------------------------+-------------+-----------------------------------------------------+
>root at controller2:/#
>root at controller2:/# neutron subnet-list
>+--------------------------------------+--------------------+----------------+---------------------------------------------------+
>| id                                   | name               | cidr           | allocation_pools                                  |
>+--------------------------------------+--------------------+----------------+---------------------------------------------------+
>| c40fa8e3-cd8e-4566-ade6-5f3eabed121c | ext_net1101-subnet | 157.157.8.0/24 | {"start": "157.157.8.51", "end": "157.157.8.250"} |
>| 2c79bb00-0ace-4319-8151-81210ee3dfb2 | demo-subnet        | 172.22.18.0/24 | {"start": "172.22.18.2", "end": "172.22.18.254"}  |
>+--------------------------------------+--------------------+----------------+---------------------------------------------------+
>
>
>
>
>And looking up that segment id directly finds nothing:
>
>
>root at controller2:/# neutron net-show 11fab5ad-c457-4175-9e5a-f505fc0e072d
>Unable to find network with name '11fab5ad-c457-4175-9e5a-f505fc0e072d'
>root at controller2:/#
>root at controller2:/# source demo-openrc.sh
>root at controller2:/# neutron net-show 11fab5ad-c457-4175-9e5a-f505fc0e072d
>Unable to find network with name '11fab5ad-c457-4175-9e5a-f505fc0e072d'
>
>
>This is after I dropped the neutron DB and resynced it...
>
>
>Best regards
>Yngvi
>
>
>From: Uwe Sauter [mailto:uwe.sauter.de at gmail.com]
>Sent: 29 June 2015 21:16
>To: Yngvi Páll Þorfinnsson; YANG LI
>Cc: openstack at lists.openstack.org
>Subject: RE: [Openstack] network question on openstack installation
>
>
>Yes. Just keep in mind that if you extend your configuration with a new
>config file, then you must change your init script / unit file to
>reference that file. And it would probably be a good idea to re-sync
>the DB with that additional file as an option. Or you keep your plugin
>configuration in a single file and be happy with the current layout.
>On 29 June 2015 22:47:22 CEST, "Yngvi Páll Þorfinnsson" <yngvith at siminn.is> wrote:
>
>Hi Uwe
>
>I just ran this some minutes ago, i.e. did the "population of db"
>again, according to the manual.
>Shouldn't this be enough?
>
>
>root at controller2:/# su -s /bin/sh -c "neutron-db-manage \
>  --config-file /etc/neutron/neutron.conf \
>  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron
>
>INFO [alembic.migration] Context impl MySQLImpl.
>INFO [alembic.migration] Will assume non-transactional DDL.
>INFO [alembic.migration] Running upgrade None -> havana,
>havana_initial INFO [alembic.migration] Running upgrade havana ->
>e197124d4b9, add unique constraint to members INFO [alembic.migration]
>Running upgrade e197124d4b9 -> 1fcfc149aca4, Add a unique constraint on
>(agent_type, host) columns to prevent a race condition when an agent
>
>entry is 'upserted'.
>INFO [alembic.migration] Running upgrade 1fcfc149aca4 -> 50e86cb2637a,
>nsx_mappings INFO [alembic.migration] Running upgrade 50e86cb2637a ->
>1421183d533f, NSX DHCP/metadata support INFO [alembic.migration]
>Running upgrade 1421183d533f -> 3d3cb89d84ee, nsx_switch_mappings INFO
>[alembic.migration] Running upgrade 3d3cb89d84ee -> 4ca36cfc898c,
>nsx_router_mappings INFO [alembic.migration] Running upgrade
>4ca36cfc898c -> 27cc183af192, ml2_vnic_type INFO [alembic.migration]
>Running upgrade 27cc183af192 -> 50d5ba354c23,
>ml2 binding:vif_details
>INFO [alembic.migration] Running upgrade 50d5ba354c23 -> 157a5d299379,
>ml2 binding:profile
>INFO [alembic.migration] Running upgrade 157a5d299379 -> 3d2585038b95,
>VMware NSX rebranding INFO [alembic.migration] Running upgrade
>3d2585038b95 -> abc88c33f74f, lb stats INFO
>
>[alembic.migration]
>
>Running upgrade abc88c33f74f -> 1b2580001654,
>
>nsx_sec_group_mapping
>INFO [alembic.migration] Running upgrade 1b2580001654 -> e766b19a3bb,
>nuage_initial INFO [alembic.migration] Running upgrade e766b19a3bb ->
>2eeaf963a447, floatingip_status INFO [alembic.migration] Running
>upgrade 2eeaf963a447 -> 492a106273f8, Brocade ML2 Mech. Driver INFO
>[alembic.migration] Running upgrade 492a106273f8 -> 24c7ea5160d7, Cisco
>CSR VPNaaS INFO [alembic.migration] Running upgrade 24c7ea5160d7 ->
>81c553f3776c, bsn_consistencyhashes INFO [alembic.migration] Running
>upgrade 81c553f3776c -> 117643811bca,
>nec: delete old ofc mapping tables
>INFO [alembic.migration] Running upgrade 117643811bca -> 19180cf98af6,
>nsx_gw_devices INFO [alembic.migration] Running upgrade 19180cf98af6
>-> 33dd0a9fa487, embrane_lbaas_driver INFO [alembic.migration] Running
>upgrade 33dd0a9fa487 -> 2447ad0e9585, Add IPv6 Subnet properties INFO
>[alembic.migration] Running upgrade 2447ad0e9585
>
>-> 538732fa21e1, NEC Rename quantum_id to neutron_id
>INFO [alembic.migration] Running upgrade 538732fa21e1 -> 5ac1c354a051,
>n1kv segment allocs for cisco n1kv plugin INFO [alembic.migration]
>Running upgrade 5ac1c354a051 -> icehouse, icehouse INFO
>[alembic.migration] Running upgrade icehouse -> 54f7549a0e5f,
>set_not_null_peer_address INFO [alembic.migration] Running upgrade
>54f7549a0e5f -> 1e5dd1d09b22, set_not_null_fields_lb_stats INFO
>[alembic.migration] Running upgrade 1e5dd1d09b22 -> b65aa907aec,
>set_length_of_protocol_field INFO [alembic.migration] Running upgrade
>b65aa907aec -> 33c3db036fe4, set_length_of_description_field_metering
>INFO [alembic.migration] Running upgrade 33c3db036fe4 -> 4eca4a84f08a,
>Remove ML2 Cisco Credentials DB INFO [alembic.migration] Running
>upgrade 4eca4a84f08a -> d06e871c0d5,
>set_admin_state_up_not_null_ml2
>INFO
>
>[alembic.migration] Running upgrade d06e871c0d5 ->
>
>6be312499f9, set_not_null_vlan_id_cisco INFO [alembic.migration]
>Running upgrade 6be312499f9 -> 1b837a7125a9, Cisco APIC Mechanism
>Driver INFO [alembic.migration] Running upgrade 1b837a7125a9 ->
>10cd28e692e9, nuage_extraroute INFO [alembic.migration] Running
>upgrade 10cd28e692e9 -> 2db5203cb7a9, nuage_floatingip INFO
>[alembic.migration] Running upgrade 2db5203cb7a9 -> 5446f2a45467,
>set_server_default INFO [alembic.migration] Running upgrade
>5446f2a45467 -> db_healing, Include all tables and make migrations
>unconditional.
>INFO [alembic.migration] Context impl MySQLImpl.
>INFO [alembic.migration] Will assume non-transactional DDL.
>INFO [alembic.autogenerate.compare] Detected server default on column
>'cisco_ml2_apic_epgs.provider'
>INFO [alembic.autogenerate.compare] Detected removed index
>'cisco_n1kv_vlan_allocations_ibfk_1' on 'cisco_n1kv_vlan_allocations'
>INFO [alembic.autogenerate.compare] Detected server
>
>default on column 'cisco_n1kv_vxlan_allocations.allocated'
>INFO [alembic.autogenerate.compare] Detected removed index
>'cisco_n1kv_vxlan_allocations_ibfk_1' on 'cisco_n1kv_vxlan_allocations'
>INFO [alembic.autogenerate.compare] Detected removed index
>'embrane_pool_port_ibfk_2' on 'embrane_pool_port'
>INFO [alembic.autogenerate.compare] Detected removed index
>'firewall_rules_ibfk_1' on 'firewall_rules'
>INFO [alembic.autogenerate.compare] Detected removed index
>'firewalls_ibfk_1' on 'firewalls'
>INFO [alembic.autogenerate.compare] Detected server default on column
>'meteringlabelrules.excluded'
>INFO [alembic.autogenerate.compare] Detected server default on column
>'ml2_port_bindings.host'
>INFO [alembic.autogenerate.compare] Detected added column
>'nuage_routerroutes_mapping.destination'
>INFO [alembic.autogenerate.compare] Detected added column
>'nuage_routerroutes_mapping.nexthop'
>INFO
>
>[alembic.autogenerate.compare] Detected server default
>
>on column 'poolmonitorassociations.status'
>INFO [alembic.autogenerate.compare] Detected added index
>'ix_quotas_tenant_id' on '['tenant_id']'
>INFO [alembic.autogenerate.compare] Detected NULL on column
>'tz_network_bindings.phy_uuid'
>INFO [alembic.autogenerate.compare] Detected NULL on column
>'tz_network_bindings.vlan_id'
>INFO [neutron.db.migration.alembic_migrations.heal_script] Detected
>removed foreign key u'nuage_floatingip_pool_mapping_ibfk_2' on table
>u'nuage_floatingip_pool_mapping'
>INFO [alembic.migration] Running upgrade db_healing -> 3927f7f7c456,
>L3 extension distributed mode
>INFO [alembic.migration] Running upgrade 3927f7f7c456 -> 2026156eab2f,
>L2 models to support DVR
>INFO [alembic.migration] Running upgrade 2026156eab2f -> 37f322991f59,
>removing_mapping_tables INFO [alembic.migration] Running upgrade
>37f322991f59 -> 31d7f831a591, add constraint for routerid INFO
>[alembic.migration] Running upgrade
>
>31d7f831a591 -> 5589aa32bf80, L3 scheduler additions to support DVR
>INFO [alembic.migration] Running upgrade 5589aa32bf80 -> 884573acbf1c,
>Drop NSX table in favor of the extra_attributes one INFO
>[alembic.migration] Running upgrade 884573acbf1c -> 4eba2f05c2f4,
>correct Vxlan Endpoint primary key INFO [alembic.migration] Running
>upgrade 4eba2f05c2f4 -> 327ee5fde2c7, set_innodb_engine INFO
>[alembic.migration] Running upgrade 327ee5fde2c7 -> 3b85b693a95f, Drop
>unused servicedefinitions and servicetypes tables.
>INFO [alembic.migration] Running upgrade 3b85b693a95f -> aae5706a396,
>nuage_provider_networks INFO [alembic.migration] Running upgrade
>aae5706a396 -> 32f3915891fd, cisco_apic_driver_update INFO
>[alembic.migration] Running upgrade 32f3915891fd -> 58fe87a01143,
>cisco_csr_routing INFO [alembic.migration] Running upgrade
>58fe87a01143 -> 236b90af57ab,
>
>ml2_type_driver_refactor_dynamic_segments
>INFO
>
>[alembic.migration] Running upgrade 236b90af57ab -> 86d6d9776e2b, Cisco
>APIC Mechanism Driver INFO [alembic.migration] Running upgrade
>86d6d9776e2b -> 16a27a58e093, ext_l3_ha_mode INFO [alembic.migration]
>Running upgrade 16a27a58e093 -> 3c346828361e, metering_label_shared
>INFO [alembic.migration] Running upgrade 3c346828361e -> 1680e1f0c4dc,
>Remove Cisco Nexus Monolithic Plugin INFO [alembic.migration] Running
>upgrade 1680e1f0c4dc -> 544673ac99ab, add router port relationship INFO
>[alembic.migration] Running upgrade 544673ac99ab -> juno, juno
>root at controller2:/#
>
>-----Original Message-----
>From: Uwe Sauter [mailto:uwe.sauter.de at gmail.com]
>Sent: 29 June 2015 20:45
>To: Yngvi Páll Þorfinnsson; YANG LI
>Cc: openstack at lists.openstack.org
>Subject: Re: [Openstack] network
>
>question
>
>on
>
>openstack installation
>
>Go to http://docs.openstack.org/ , select the OpenStack version, then
>the installation guide for your distribution and navigate to
>
>6. Add a networking component
> - OpenStack Networking (neutron)
> - Install and configure controller node
>
>and follow the database related stuff:
>- create DB
>- grant privileges to neutron user
>- populate the DB (search for "neutron-db-manage")
>
>Depending on your distribution, you probably need to use sudo.
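As a sketch of the first two database steps above (the NEUTRON_DBPASS placeholder follows the install guide's convention and must be replaced with your own password):

```shell
# Create the neutron database and grant the neutron user access,
# both locally and from the other hosts.
mysql -u root -p <<'SQL'
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
SQL
```

The third step, populating the DB, is the neutron-db-manage invocation that appears elsewhere in this thread.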
>
>The number of config files depends on the file layout you are working
>with, so this is not an exact answer.
>
>I did the following:
>
>[on controller node]
>
>cd /etc/neutron
>cat plugins/ml2/ml2_conf.ini > plugin.ini
>cd plugins/ml2
>rm -f ml2_conf.ini
>ln -s ../../plugin.ini ml2_conf.ini
>
>
>[on network and compute nodes]
>
>cd /etc/neutron
>cat plugins/ml2/ml2_conf.ini > plugin.ini
>cat plugins/openvswitch/ovs_neutron_plugin.ini >> plugin.ini
>cd plugins/ml2
>rm -f ml2_conf.ini
>ln -s ../../plugin.ini ml2_conf.ini
>cd ../openvswitch
>rm -f ovs_neutron_plugin.ini
>ln -s ../../plugin.ini ovs_neutron_plugin.ini
>
>
>
>Then I did that sed trick that is needed on RDO because of a packaging
>bug with Juno (changes systemd unit file to load
>/etc/neutron/plugin.ini instead of
>/etc/neutron/plugins/ml2/ml2_plugin.ini).
>And then 'su -s /bin/sh -c "neutron-db-manage --config-file
>/etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade
>juno" neutron' on the controller node.
>
>As far as I understand the configuration system used in Neutron, each
>config file is merged into the same configuration namespace; only the
>order determines which value wins if the same option exists in the
>same [] section of more than one file.
>So you could have just one ini file in addition to
>/etc/neutron/neutron.conf (my way) or several. But then you need to
>load them all when sync'ing the DB *and* starting the services.
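For example, with a split layout the same set of files must be passed both to the DB sync and to the server. A sketch, using the file paths discussed earlier in this thread:

```shell
# Every ini file must be listed both when syncing the DB and when
# starting the services; later files override earlier ones.
CONF="--config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini"

su -s /bin/sh -c "neutron-db-manage $CONF upgrade juno" neutron
neutron-server $CONF
```

If your init script or unit file starts neutron-server for you, it is that script that must carry the full --config-file list.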
>
>
>Hopefully this brightens the dark side of Neutron configuration,
>
> Uwe
>
>
>
>
>
>On 29.06.2015 at 22:19 Yngvi Páll Þorfinnsson wrote:
>
> HI Uwe
> No, I didn't drop the keystone ;-)
>
> But is this the correct way to resync neutron ?
>
> # neutron-db-manage --config-file /etc/neutron/neutron.conf \
> #   --config-file /etc/neutron/plugins/ml2/ml2_plugin.ini
>
> I mean, how many config files are necessary to have in the cmd?
>
> best regards
> Yngvi
>
> -----Original Message-----
>From: Uwe Sauter [mailto:uwe.sauter.de at gmail.com]
> Sent: 29 June 2015 18:08
> To: YANG LI
>Cc: openstack at lists.openstack.org
> Subject: Re:
>
>[Openstack] network question on openstack installation
>
>It depends on your switch… some drop tagged packets on an access port,
>others allow tagged packets if the packet's VLAN ID equals the
>configured VLAN ID.
>
> I'd reconfigure the provider network type to "flat", but that's
> personal taste. You could also reconfigure the switch port to be a
> trunking port (with only one VLAN ID). Currently you're in between those
> configurations, which might or might not work later on…
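For reference, the "flat" variant mentioned here would look roughly like this in the ML2 configuration. A sketch only; the physnet1 label matches the mapping used later in this thread, and your type_drivers list may differ:

```ini
[ml2]
type_drivers = flat,vlan
tenant_network_types = vlan

[ml2_type_flat]
# physical network label allowed to carry flat (untagged) networks
flat_networks = physnet1
```

The external network would then be created with --provider:network_type=flat and no segmentation_id.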
>
>
> Regards,
>
> Uwe
>
>
> On 29.06.2015 at 19:42 YANG LI wrote:
>
> thank you, Uwe. Our provider network actually is untagged, but I did
>specify a VLAN ID when I created our external network and everything
>still works. Will this cause issues later on?
>
> neutron net-create --provider:network_type=vlan \
>   --provider:segmentation_id=<vlan id> \
>   --provider:physical_network=physnet1 \
>   --router:external=true public
>
>
> Thanks,
> Yang
>
>On Jun 27, 2015, at 11:17 AM, Uwe Sauter <uwe.sauter.de at gmail.com> wrote:
>
> Hi Yang,
>
>it depends on whether your provider network is tagged or untagged. If
>it is untagged (the switch port is an "access"
> port) then you don't specify the VLAN ID for the external network (as
>it will get tagged by the switch). If the provider network is tagged
>(the switch port is a "trunk" port) then you have to configure the VLAN
>ID because the switch might refuse the traffic (depending if there is a
>default VLAN ID defined on the switch's port).
>
>Regards,
>
> Uwe
>
>
>
> On 27.06.2015 at 13:47 YANG LI wrote:
>
> Thank you so much, James! This is so helpful. Another confusion I have
>is about network_vlan_ranges. Is this the network VLAN id range?
>If so, does it have to match the external network? For example, we only have
>one external VLAN we can use as our provider network and that VLAN id
>is 775 (xxx.xxx.xxx.0/26). Should I define network_vlan_ranges as
>follows?
>
> [ml2]
> type_drivers=vlan
> tenant_network_types = vlan
> mechanism_drivers=openvswitch
> #
> [ml2_type_vlan]
> # this tells Openstack that the internal name "physnet1" provides
> # the vlan range 100-199
> network_vlan_ranges=physnet1:775
> #
>
> Thanks,
> Yang
> Sent from my iPhone
>
> On Jun 26, 2015, at 8:54 AM, "James Denton" <james.denton at rackspace.com> wrote:
>
> You can absolutely have instances in the same network span different
>compute nodes. As an admin, you can run ‘nova show <instanceid>’ and
>see the host in the output:
>
> root at controller01:~# nova show 7bb18175-87da-4d1f-8dca-2ef07fee9d21 | grep host
> | OS-EXT-SRV-ATTR:host | compute02 |
>
> That info is not available to non-admin users by default.
>
> James
>
> On Jun 26, 2015, at 7:38 AM, YANG LI <yangli at clemson.edu> wrote:
>
> Thanks, James, for the explanation. It makes more sense now.
>It is possible that instances on the same tenant network
>reside on different compute nodes, right? How do I tell which compute
>node an instance is on?
>
> Thanks,
> Yang
>
> On Jun 24, 2015, at 10:27 AM, James Denton <james.denton at rackspace.com> wrote:
>
> Hello.
>
> All three nodes will have eth0 on the management/api network. Since I am
> using the ml2 plugin with vlan for tenant networks, I think all compute
> nodes should have eth1 as the second nic on the provider network. Is this
> correct? I understand the provider network is for instances to get external
> access to the internet, but how does an instance living on compute1 communicate
> with an instance living on compute2? Do they also go through the provider
> network?
>
> In short, yes. If you’re connecting instances to vlan “provider”
> networks, traffic between instances on different compute nodes will
>traverse the “provider bridge”, get tagged out eth1, and hit the
>physical switching fabric. Your external gateway device could also sit
>in that vlan, and the default route on the instance would direct
>external traffic to that device.
>
> In reality, every network has ‘provider’ attributes that describe the
> network type, segmentation id, and bridge interface (for vlan/flat
> only). So tenant networks that leverage vlans would have provider
> attributes set by Neutron automatically based on the configuration set
> in the ML2 config file. If you
>
>use Neutron routers that connect to both ‘tenant’ vlan-based networks
>and external ‘provider’ networks, all of that traffic could traverse
>the same provider bridge on the controller/network node, but would be
>tagged accordingly based on the network (ie. vlan 100 for external
>network, vlan 200 for tenant network).
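The vlan 100 / vlan 200 example above could be expressed as follows. A sketch only; the names are illustrative, and with vlan tenant networks Neutron would normally pick the segmentation id itself from network_vlan_ranges:

```shell
# External "provider" network on vlan 100
neutron net-create ext-net --router:external=true \
  --provider:network_type=vlan \
  --provider:physical_network=physnet1 \
  --provider:segmentation_id=100

# Tenant network explicitly pinned to vlan 200 (admin only)
neutron net-create tenant-net \
  --provider:network_type=vlan \
  --provider:physical_network=physnet1 \
  --provider:segmentation_id=200
```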
>
> Hope that’s not too confusing!
>
> James
>
> On Jun 24, 2015, at 8:54 AM, YANG LI <yangli at clemson.edu> wrote:
>
> I am working on installing openstack from scratch, but got confused by the
>network part. I want to have one controller node and two compute nodes.
>
> the controller node will only handle following services:
> glance-api
> glance-registry
> keystone
> nova-api
> nova-cert
>
>nova-conductor
> nova-consoleauth
> nova-novncproxy
> nova-scheduler
> qpid
> mysql
> neutron-server
>
> compute nodes will have following services:
> neutron-dhcp-agent
> neutron-l3-agent
> neutron-metadata-agent
> neutron-openvswitch-agent
> neutron-ovs-cleanup
> openvswitch
> nova-compute
>
> All three nodes will have eth0 on the management/api network. Since I am
>using the ml2 plugin with vlan for tenant networks, I think all compute
>nodes should have eth1 as the second nic on the provider network. Is this
>correct? I understand the provider network is for instances to get external
>access to the internet, but how does an instance living on compute1 communicate
>with an instance living on compute2? Do they also go through the provider
>network?
>
>________________________________
>
>Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>Post to     : openstack at lists.openstack.org
>Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>--
>This message was sent from my Android mobile phone with K-9 Mail.
--
This message was sent from my Android mobile phone with K-9 Mail.
More information about the Openstack mailing list