[Openstack] [Neutron] Compute node can't connect to control node

柯俊兆 kojiunjau at gmail.com
Thu Feb 18 12:59:34 UTC 2016


Hi Luke

Thanks for your reply.
It works now!

But Neutron seems more difficult than nova-network.
I can't start a new instance in a different availability zone; it looks
like I need to create a network and a router first.
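For reference, creating a network and router can be sketched with the 2016-era neutron CLI; the names and CIDR below are assumptions for illustration, not from this thread:

```shell
# Create a tenant network, a subnet, and a router, then wire them together.
# "private", "private-subnet", "router1", and "public" are assumed names.
neutron net-create private
neutron subnet-create private 10.0.0.0/24 --name private-subnet
neutron router-create router1
neutron router-interface-add router1 private-subnet
neutron router-gateway-set router1 public
```

Note that routers need the L3 agent running, and the local.conf quoted later in this thread sets Q_L3_ENABLED=False on the compute node, so the router would have to live on the control node.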

Where can I find documentation that covers OpenStack administration in
more detail? I searched http://docs.openstack.org , but it is too brief
and the examples don't cover what I need.

=============== n-cond log ===============
ubuntu at asus:/opt/stack/logs$ grep 31a32c85-fc13-46db-93db-f5e1ee836fe2
n-cond.log
2016-02-18 20:39:58.253 ERROR nova.scheduler.utils
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] [instance:
31a32c85-fc13-46db-93db-f5e1ee836fe2] Error from last host: neutron (node
neutron): [u'Traceback (most recent call last):\n', u'  File
"/opt/stack/nova/nova/compute/manager.py", line 1920, in
_do_build_and_run_instance\n    filter_properties)\n', u'  File
"/opt/stack/nova/nova/compute/manager.py", line 2086, in
_build_and_run_instance\n    instance_uuid=instance.uuid,
reason=six.text_type(e))\n', u'RescheduledException: Build of instance
31a32c85-fc13-46db-93db-f5e1ee836fe2 was re-scheduled: Binding failed for
port df818cd8-0667-462e-aa04-4a3c6ba8510a, please check neutron logs for
more information.\n']
2016-02-18 20:39:58.325 WARNING nova.scheduler.utils
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] [instance:
31a32c85-fc13-46db-93db-f5e1ee836fe2] Setting instance to ERROR state.
2016-02-18 20:39:58.524 DEBUG nova.network.neutronv2.api
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] [instance:
31a32c85-fc13-46db-93db-f5e1ee836fe2] deallocate_for_instance() from
(pid=8643) deallocate_for_instance
/opt/stack/nova/nova/network/neutronv2/api.py:796
2016-02-18 20:39:58.525 DEBUG keystoneauth.session
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] REQ: curl -g -i -X
GET
http://192.168.1.130:9696/v2.0/ports.json?device_id=31a32c85-fc13-46db-93db-f5e1ee836fe2
-H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
"X-Auth-Token: {SHA1}6a4d1e094116a0ccb022e35216bb44d47d191277" from
(pid=8643) _http_log_request
/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:248
2016-02-18 20:39:58.562 DEBUG nova.network.neutronv2.api
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] [instance:
31a32c85-fc13-46db-93db-f5e1ee836fe2] Instance cache missing network info.
from (pid=8643) _get_preexisting_port_ids
/opt/stack/nova/nova/network/neutronv2/api.py:1651
2016-02-18 20:39:58.623 DEBUG nova.network.base_api
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] [instance:
31a32c85-fc13-46db-93db-f5e1ee836fe2] Updating instance_info_cache with
network_info: [] from (pid=8643) update_instance_cache_with_nw_info
/opt/stack/nova/nova/network/base_api.py:43
=============== n-cond log ===============
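The "Binding failed for port" error in the trace above usually means the L2 agent on the compute node could not bind the port. A hedged sketch of where to look, using the 2016-era clients with admin credentials sourced (the port UUID is the one from the n-cond log; the log path matches the SCREEN_LOGDIR in the local.conf below):

```shell
# Is the Open vSwitch agent on the compute node listed and alive (":-)")?
neutron agent-list
# A binding:vif_type of "binding_failed" on the port confirms the agent side failed.
neutron port-show df818cd8-0667-462e-aa04-4a3c6ba8510a
# The root cause is usually in the L2 agent log on the compute node.
grep -i error /opt/stack/logs/q-agt.log
```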

=============== n-sch log ===============
2016-02-18 20:39:58.272 DEBUG nova.scheduler.host_manager
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] Update host state
with instances: {} from (pid=8710) _locked_update
/opt/stack/nova/nova/scheduler/host_manager.py:176
2016-02-18 20:39:58.272 DEBUG oslo_concurrency.lockutils
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] Lock "(u'neutron',
u'neutron')" released by "nova.scheduler.host_manager._locked_update" ::
held 0.001s from (pid=8710) inner
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
2016-02-18 20:39:58.273 DEBUG nova.filters
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] Starting with 3
host(s) from (pid=8710) get_filtered_objects
/opt/stack/nova/nova/filters.py:70
2016-02-18 20:39:58.273 INFO nova.scheduler.filters.retry_filter
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] Host [u'neutron',
u'neutron'] fails.  Previously tried hosts: [[u'neutron', u'neutron']]
2016-02-18 20:39:58.273 DEBUG nova.filters
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] Filter RetryFilter
returned 2 host(s) from (pid=8710) get_filtered_objects
/opt/stack/nova/nova/filters.py:104
2016-02-18 20:39:58.273 DEBUG
nova.scheduler.filters.availability_zone_filter
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] Availability Zone
'x86_64' requested. (asus, asus) ram:3337 disk:882688 io_ops:0 instances:1
has AZs: nova from (pid=8710) host_passes
/opt/stack/nova/nova/scheduler/filters/availability_zone_filter.py:60
2016-02-18 20:39:58.273 DEBUG
nova.scheduler.filters.availability_zone_filter
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] Availability Zone
'x86_64' requested. (pismo-192-168-1-98, pismo-192-168-1-98) ram:7573
disk:47104 io_ops:0 instances:0 has AZs: nova from (pid=8710) host_passes
/opt/stack/nova/nova/scheduler/filters/availability_zone_filter.py:60
2016-02-18 20:39:58.274 INFO nova.filters
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] Filter
AvailabilityZoneFilter returned 0 hosts
2016-02-18 20:39:58.274 DEBUG nova.filters
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] Filtering removed
all hosts for the request with instance ID
'31a32c85-fc13-46db-93db-f5e1ee836fe2'. Filter results: [('RetryFilter',
[(u'asus', u'asus'), (u'pismo-192-168-1-98', u'pismo-192-168-1-98')]),
('AvailabilityZoneFilter', None)] from (pid=8710) get_filtered_objects
/opt/stack/nova/nova/filters.py:129
2016-02-18 20:39:58.274 INFO nova.filters
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] Filtering removed
all hosts for the request with instance ID
'31a32c85-fc13-46db-93db-f5e1ee836fe2'. Filter results: ['RetryFilter:
(start: 3, end: 2)', 'AvailabilityZoneFilter: (start: 2, end: 0)']
2016-02-18 20:39:58.274 DEBUG nova.scheduler.filter_scheduler
[req-ef9f4cc6-115a-4e95-9a3d-d1656fc2ceb7 admin admin] There are 0 hosts
available but 1 instances requested to build. from (pid=8710)
select_destinations /opt/stack/nova/nova/scheduler/filter_scheduler.py:71
=============== n-sch log ===============
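The scheduler trace shows why the boot fails: the request asked for availability zone 'x86_64', but both remaining hosts only advertise the default AZ 'nova', so AvailabilityZoneFilter removed them all. One way to make that zone exist is a host aggregate; this is a sketch, the aggregate name is an assumption, and the host names are taken from the log above:

```shell
# Create an aggregate whose availability zone is "x86_64", then add the hosts.
nova aggregate-create x86_64-agg x86_64
nova aggregate-add-host x86_64-agg asus
nova aggregate-add-host x86_64-agg pismo-192-168-1-98
```

Alternatively, boot without specifying an availability zone and let the scheduler pick a host from 'nova'.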

Best Regards,
Jau


2016-02-18 19:46 GMT+08:00 Hinds, Luke (Nokia - GB/Bristol) <
luke.hinds at nokia.com>:

> Seems your interfaces are not set up correctly. The IP address should
> be under br-ex
>
> For example,
>
> ifcfg-br-ex:
>
> DEVICE=br-ex
> DEVICETYPE=ovs
> TYPE=OVSBridge
> BOOTPROTO=static
> IPADDR=192.168.1.98
> NETMASK=255.255.255.0
> GATEWAY=192.168.1.1
> ONBOOT=yes
>
> ifcfg-eth0:
>
> DEVICE=eth0
> HWADDR=52:54:00:92:05:AE # your hwaddr (find via ip link show eth0)
> TYPE=OVSPort
> DEVICETYPE=ovs
> OVS_BRIDGE=br-ex
> ONBOOT=yes
>
> Running 'ifconfig' or 'ip addr' should then show the IP address plumbed
> to br-ex
>
>
> On Thu, 2016-02-18 at 18:58 +0800, EXT 柯俊兆 wrote:
> > Hi All
> >
> > When I set up Neutron for the OpenStack network, I ran into some
> > problems and don't know why.
> >
> > I referenced "Neutron Networking with Open vSwitch and Provider
> > Networks" to set up my OpenStack network.
> >
> > The control node works fine and NIC eth0's IP address was assigned to
> > br-ex successfully.
> > On the compute node, assigning eth0 to br-ex failed, and none of the
> > compute node's API services can connect to the control node.
> >
> > My compute node's local.conf is below
> > ================ start ================
> > ubuntu at pismo-192-168-1-98:~/OpenStack/devstack$ more local.conf
> > [[local|localrc]]
> > ADMIN_PASSWORD=admin
> > DATABASE_PASSWORD=admin
> > RABBIT_PASSWORD=admin
> > SERVICE_PASSWORD=admin
> > SERVICE_TOKEN=admin
> >
> > DEST=/opt/stack
> >
> > LOGFILE=/opt/stack/logs/stack.sh.log
> > VERBOSE=True
> > LOG_COLOR=True
> > SCREEN_LOGDIR=/opt/stack/logs
> >
> > MULTI_HOST=True
> > DATABASE_TYPE=mysql
> >
> > HOST_IP=192.168.1.98
> > FLAT_INTERFACE=eth1
> > FIXED_RANGE=10.0.0.0/24
> > FIXED_NETWORK_SIZE=256
> > FLOATING_RANGE=192.168.1.0/24
> > Q_FLOATING_ALLOCATION_POOL=start=192.168.1.221,end=192.168.1.230
> > SERVICE_HOST=192.168.1.130
> > MYSQL_HOST=192.168.1.130
> > RABBIT_HOST=192.168.1.130
> > GLANCE_HOSTPORT=192.168.1.130:9292
> > ENABLED_SERVICES=n-cpu,rabbit,q-agt
> >
> > ## Open vSwitch provider networking options
> > PHYSICAL_NETWORK=default
> > OVS_PHYSICAL_BRIDGE=br-ex
> > PUBLIC_INTERFACE=eth0
> > Q_USE_PROVIDER_NETWORKING=True
> > Q_L3_ENABLED=False
> >
> >
> > ================ end ================
> >
> >
> > My compute node's NIC info is below
> > ================ start ================
> > ubuntu at pismo-192-168-1-98:~/OpenStack/devstack$ ifconfig
> > br-ex     Link encap:Ethernet  HWaddr 30:0e:d5:c7:59:3e
> >           inet6 addr: fe80::1cde:14ff:fed6:1fa1/64 Scope:Link
> >           UP BROADCAST RUNNING  MTU:1500  Metric:1
> >           RX packets:289 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:18096 (18.0 KB)  TX bytes:648 (648.0 B)
> >
> > br-int    Link encap:Ethernet  HWaddr 46:6b:f2:e5:c9:41
> >           inet6 addr: fe80::b801:4cff:feb6:cbc2/64 Scope:Link
> >           UP BROADCAST RUNNING  MTU:1500  Metric:1
> >           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)
> >
> > br-tun    Link encap:Ethernet  HWaddr 36:0f:ae:d8:c6:43
> >           inet6 addr: fe80::f4e4:b9ff:feab:5d1d/64 Scope:Link
> >           UP BROADCAST RUNNING  MTU:1500  Metric:1
> >           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)
> >
> > eth0      Link encap:Ethernet  HWaddr 30:0e:d5:c7:59:3e
> >           inet addr:192.168.1.98  Bcast:192.168.1.255
> > Mask:255.255.255.0
> >           inet6 addr: fe80::320e:d5ff:fec7:593e/64 Scope:Link
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:40871 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:28210 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:1000
> >           RX bytes:7796067 (7.7 MB)  TX bytes:5086230 (5.0 MB)
> >           Interrupt:109
> >
> > eth1      Link encap:Ethernet  HWaddr 30:0e:d5:c7:59:3f
> >           inet addr:10.0.0.98  Bcast:10.0.0.255  Mask:255.255.255.0
> >           inet6 addr: fe80::320e:d5ff:fec7:593f/64 Scope:Link
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:2538 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:3135 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:1000
> >           RX bytes:189631 (189.6 KB)  TX bytes:531277 (531.2 KB)
> >           Interrupt:112
> >
> > lo        Link encap:Local Loopback
> >           inet addr:127.0.0.1  Mask:255.0.0.0
> >           inet6 addr: ::1/128 Scope:Host
> >           UP LOOPBACK RUNNING  MTU:65536  Metric:1
> >           RX packets:3960 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:3960 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:346996 (346.9 KB)  TX bytes:346996 (346.9 KB)
> >
> >
> > ================ end ================
> >
> > My questions are:
> > 1. In the Open vSwitch provider networking options,
> > PHYSICAL_NETWORK=default  <-- does this mean it uses my NIC or my
> > public network?
> >
> > 2. Is PUBLIC_INTERFACE the same as FLAT_INTERFACE?
> >
> > 3. Why does the connection to SERVICE_HOST fail?
> > I can ping and connect to SERVICE_HOST from other servers, but the
> > compute node can't.
> >
> > 4. My compute node's br-ex wasn't assigned eth0's IP, so it can't
> > reach the outside network? If so, how do I fix it?
> >
> >
> > Best Regards,
> > Jau
> > _______________________________________________
> > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > Post to     : openstack at lists.openstack.org
> > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
