[Openstack-operators] Neutron Issues

Chris Sarginson csargiso at gmail.com
Tue May 2 21:23:45 UTC 2017


If you're using openvswitch: in Newton the default OpenFlow driver for the
openvswitch agent changed to the native one (built on the python ryu
library). I think it's been mentioned on here recently, so it's probably
worth having a poke through the archives for more information. I'd check
your neutron openvswitch agent logs for errors pertaining to OpenFlow
configuration specifically, and if you see anything, it's probably worth
applying the following config to your ml2 ini file under the [OVS] section:

of_interface = ovs-ofctl

https://docs.openstack.org/mitaka/config-reference/networking/networking_options_reference.html

Then restart the neutron openvswitch agent and watch the logs; hopefully
this is of some use to you.
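A minimal sketch of applying that override. The config path, service name,
and the bridge_mappings value below are assumptions (adjust for your
deployment); the demo edits a scratch copy of the file so it can run
anywhere rather than touching a live node:

```shell
# Sketch only - all paths and values here are assumptions. On a real
# Newton node the file is usually
# /etc/neutron/plugins/ml2/openvswitch_agent.ini; this demo uses a
# scratch copy instead.
CONF=$(mktemp)
printf '[ovs]\nbridge_mappings = provider:br-provider\n' > "$CONF"

# Fall back to the older ovs-ofctl OpenFlow driver instead of the
# native (ryu-based) default:
printf 'of_interface = ovs-ofctl\n' >> "$CONF"

grep '^of_interface' "$CONF"   # → of_interface = ovs-ofctl

# On the real node you would then restart and watch the agent, e.g.:
#   systemctl restart neutron-openvswitch-agent
#   tail -f /var/log/neutron/openvswitch-agent.log
```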

On Tue, 2 May 2017 at 21:30 Steve Powell <spowell at silotechgroup.com> wrote:

> I forgot to mention I’m running Newton and my neutron.conf file is below
> and I’m running haproxy.
>
>
>
>     [DEFAULT]
>     core_plugin = ml2
>     service_plugins = router
>     allow_overlapping_ips = True
>     notify_nova_on_port_status_changes = True
>     notify_nova_on_port_data_changes = True
>     transport_url = rabbit://openstack:#############@x.x.x.x
>     auth_strategy = keystone
>
>     [agent]
>     root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
>
>     [cors]
>
>     [cors.subdomain]
>
>     [database]
>     connection = mysql+pymysql://neutron:###########################@10.10.6.220/neutron
>
>     [keystone_authtoken]
>     auth_url = http://x.x.x.x:35357/v3
>     auth_uri = https://xxx.xxxx.xxx:5000/v3
>     memcached_servers = x.x.x.x:11211
>     auth_type = password
>     project_domain_name = Default
>     user_domain_name = Default
>     project_name = service
>     username = neutron
>     password = ##################################################
>
>     [matchmaker_redis]
>
>     [nova]
>     auth_url = http://x.x.x.x:35357/v3
>     auth_type = password
>     project_domain_name = Default
>     user_domain_name = Default
>     region_name = RegionOne
>     project_name = service
>     username = nova
>     password = ###################################################
>
>     [oslo_concurrency]
>
>     [oslo_messaging_amqp]
>
>     [oslo_messaging_notifications]
>
>     [oslo_messaging_rabbit]
>
>     [oslo_messaging_zmq]
>
>     [oslo_middleware]
>     enable_proxy_headers_parsing = True
>     enable_http_proxy_to_wsgi = True
>
>     [oslo_policy]
>
>     [qos]
>
>     [quotas]
>
>     [ssl]
>
> *From:* Steve Powell [mailto:spowell at silotechgroup.com]
> *Sent:* Tuesday, May 2, 2017 4:16 PM
> *To:* openstack-operators at lists.openstack.org
> *Subject:* [Openstack-operators] Neutron Issues
>
>
>
>
>
> Hello Ops!
>
>
>
> I have a major issue slapping me in the face and seek any assistance
> possible. When trying to spin up an instance, whether from the command
> line, manually in Horizon, or with a HEAT template I receive the following
> error in nova and, where applicable, heat logs:
>
>
>
> Failed to allocate the network(s), not rescheduling.
>
>
>
> I see in the neutron logs that the request makes it through to completion,
> but that info is obviously not making it back to nova.
>
>
>
> INFO neutron.notifiers.nova [-] Nova event response: {u'status':
> u'completed', u'code': 200, u'name': u'network-changed', u'server_uuid':
> u'6892bb9e-4256-4fc9-a313-331f0c576a03'}
>
>
>
> What am I missing? Why would the response from neutron not make it back to
> nova?
>
>
>
>
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

