<div class="moz-cite-prefix">On 3/27/14, 7:15 PM, Tom Verdaat wrote:<br>
</div>
> Thanks for your reply.
>
> I checked that. Followed these instructions too:
> https://ask.openstack.org/en/question/6695/ml2-neutron-plugin-installation-and-configuration/?answer=7259#post-id-7259
> Using the vxlan type driver and the openvswitch mechanism driver.
> Agents are up on both the networking node and the compute node.
>
> Pasted the agent information and configuration files below. vm-1 is
> the networking node, vm-4 the compute node. Tried configuring a
> public network for floating IPs but I get the same problem with a
> simple internal tenant network.
>
> Any idea why it won't work?
If you run "nova service-list" as admin, do you see the nova-compute
services with the exact same host IDs ("vm-1" and "vm-4") as the L2
agents?
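
For example, something along these lines (illustrative output, not
taken from your deployment; columns trimmed):

# nova service-list
+----------------+------+----------+---------+-------+
| Binary         | Host | Zone     | Status  | State |
+----------------+------+----------+---------+-------+
| nova-scheduler | vm-1 | internal | enabled | up    |
| nova-compute   | vm-4 | nova     | enabled | up    |
+----------------+------+----------+---------+-------+

If the names don't match exactly (say, nova-compute registers with an
FQDN while the agent reports a short hostname), ML2 can't find a live
L2 agent for the port's host, and binding fails with
binding:vif_type=binding_failed.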

Also, not sure if this is related, but I noticed that l2_population
is True on vm-1, and False on vm-4. Both these should be True if the
l2population mechanism driver is enabled on the server, and False
otherwise.
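
For what it's worth, that flag lives in the [agent] section of the
config file each OVS agent reads (ml2_conf.ini or
ovs_neutron_plugin.ini, depending on how the agent is started), along
these lines:

[agent]
# Should match across all OVS agents, and be True only when the
# l2population mechanism driver is listed in mechanism_drivers
# on the server.
l2_population = True

Since your ml2_conf.ini lists l2population in mechanism_drivers, that
would mean setting it to True on vm-4 and restarting the OVS agent
there.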

-Bob

> Thanks,
>
> Tom
> ---
>
> # neutron agent-list
> +--------------------------------------+--------------------+------+-------+----------------+
> | id                                   | agent_type         | host | alive | admin_state_up |
> +--------------------------------------+--------------------+------+-------+----------------+
> | e3b5c1b3-307c-4c27-8ea2-30e253da776b | Loadbalancer agent | vm-1 | :-)   | True           |
> | 8aa54d4a-27db-4ec9-a49f-0766479ce35c | Metering agent     | vm-1 | :-)   | True           |
> | e63e47c7-b78c-42b3-b28e-62e6ada26be2 | DHCP agent         | vm-1 | :-)   | True           |
> | 994c8598-ee7d-46ae-9681-b7718758199c | L3 agent           | vm-1 | :-)   | True           |
> | c26c302c-19a1-44af-b941-061a11e559eb | Open vSwitch agent | vm-1 | :-)   | True           |
> | d80bac01-f651-4ad3-8564-d6933ee9a919 | Open vSwitch agent | vm-4 | :-)   | True           |
> +--------------------------------------+--------------------+------+-------+----------------+
>
> # neutron agent-show 994c8598-ee7d-46ae-9681-b7718758199c
> +---------------------+-------------------------------------------------------------------------------+
> | Field               | Value                                                                         |
> +---------------------+-------------------------------------------------------------------------------+
> | admin_state_up      | True                                                                          |
> | agent_type          | L3 agent                                                                      |
> | alive               | True                                                                          |
> | binary              | neutron-l3-agent                                                              |
> | configurations      | {                                                                             |
> |                     |      "router_id": "",                                                         |
> |                     |      "gateway_external_network_id": "",                                       |
> |                     |      "handle_internal_only_routers": true,                                    |
> |                     |      "use_namespaces": true,                                                  |
> |                     |      "routers": 0,                                                            |
> |                     |      "interfaces": 0,                                                         |
> |                     |      "floating_ips": 0,                                                       |
> |                     |      "interface_driver": "neutron.agent.linux.interface.OVSInterfaceDriver", |
> |                     |      "ex_gw_ports": 0                                                         |
> |                     | }                                                                             |
> | created_at          | 2014-03-27 00:34:10.528993                                                    |
> | description         |                                                                               |
> | heartbeat_timestamp | 2014-03-27 09:47:34.777233                                                    |
> | host                | vm-1                                                                          |
> | id                  | 994c8598-ee7d-46ae-9681-b7718758199c                                          |
> | started_at          | 2014-03-27 08:47:30.366770                                                    |
> | topic               | l3_agent                                                                      |
> +---------------------+-------------------------------------------------------------------------------+
>
> # neutron agent-show c26c302c-19a1-44af-b941-061a11e559eb
> +---------------------+--------------------------------------+
> | Field               | Value                                |
> +---------------------+--------------------------------------+
> | admin_state_up      | True                                 |
> | agent_type          | Open vSwitch agent                   |
> | alive               | True                                 |
> | binary              | neutron-openvswitch-agent            |
> | configurations      | {                                    |
> |                     |      "tunnel_types": [               |
> |                     |           "vxlan"                    |
> |                     |      ],                              |
> |                     |      "tunneling_ip": "10.12.0.20",   |
> |                     |      "bridge_mappings": {},          |
> |                     |      "l2_population": true,          |
> |                     |      "devices": 0                    |
> |                     | }                                    |
> | created_at          | 2014-03-27 00:35:17.555910           |
> | description         |                                      |
> | heartbeat_timestamp | 2014-03-27 09:33:35.133166           |
> | host                | vm-1                                 |
> | id                  | c26c302c-19a1-44af-b941-061a11e559eb |
> | started_at          | 2014-03-27 00:35:22.920690           |
> | topic               | N/A                                  |
> +---------------------+--------------------------------------+
>
> # neutron agent-show d80bac01-f651-4ad3-8564-d6933ee9a919
> +---------------------+--------------------------------------+
> | Field               | Value                                |
> +---------------------+--------------------------------------+
> | admin_state_up      | True                                 |
> | agent_type          | Open vSwitch agent                   |
> | alive               | True                                 |
> | binary              | neutron-openvswitch-agent            |
> | configurations      | {                                    |
> |                     |      "tunnel_types": [               |
> |                     |           "vxlan"                    |
> |                     |      ],                              |
> |                     |      "tunneling_ip": "10.12.0.23",   |
> |                     |      "bridge_mappings": {},          |
> |                     |      "l2_population": false,         |
> |                     |      "devices": 0                    |
> |                     | }                                    |
> | created_at          | 2014-03-27 09:13:24.616733           |
> | description         |                                      |
> | heartbeat_timestamp | 2014-03-27 09:33:44.360747           |
> | host                | vm-4                                 |
> | id                  | d80bac01-f651-4ad3-8564-d6933ee9a919 |
> | started_at          | 2014-03-27 09:13:24.616733           |
> | topic               | N/A                                  |
> +---------------------+--------------------------------------+
>
>
> ==============================================================================
> /etc/neutron/neutron.conf
> ==============================================================================
>
> [DEFAULT]
> # Default log level is INFO
> # verbose and debug have the same result.
> # One of them will set DEBUG log level output
> # debug = False
> debug = False
> # verbose = False
> verbose = False
>
> # Where to store Neutron state files. This directory must be writable by
> # the user executing the agent.
> state_path = /var/lib/neutron
>
> # Where to store lock files
> lock_path = $state_path/lock
>
> # log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
> # log_date_format = %Y-%m-%d %H:%M:%S
>
> # use_syslog                           -> syslog
> # log_file and log_dir                 -> log_dir/log_file
> # (not log_file) and log_dir           -> log_dir/{binary_name}.log
> # use_stderr                           -> stderr
> # (not use_stderr) and (not log_file)  -> stdout
> # publish_errors                       -> notification system
>
> # use_syslog = False
> use_syslog = False
> # syslog_log_facility = LOG_USER
>
> # use_stderr = True
> # log_file =
> # log_dir =
> log_dir = /var/log/neutron
>
> # publish_errors = False
>
> # Address to bind the API server
> # bind_host = 0.0.0.0
> bind_host = 0.0.0.0
>
> # Port to bind the API server to
> # bind_port = 9696
> bind_port = 9696
>
> # Path to the extensions. Note that this can be a colon-separated list of
> # paths. For example:
> # api_extensions_path = extensions:/path/to/more/extensions:/even/more/extensions
> # The __path__ of neutron.extensions is appended to this, so if your
> # extensions are in there you don't need to specify them here
> # api_extensions_path =
>
> # Neutron plugin provider module
> core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
>
> # Advanced service modules
> # service_plugins =
> service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin,neutron.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.vpn.plugin.VPNDriverPlugin,neutron.services.metering.metering_plugin.MeteringPlugin
>
> # Paste configuration file
> # api_paste_config = api-paste.ini
>
> # The strategy to be used for auth.
> # Supported values are 'keystone' (default), 'noauth'.
> # auth_strategy = keystone
> auth_strategy = keystone
>
> # Base MAC address. The first 3 octets will remain unchanged. If the
> # 4th octet is not 00, it will also be used. The others will be
> # randomly generated.
> # 3 octet
> # base_mac = fa:16:3e:00:00:00
> base_mac = fa:16:3e:00:00:00
> # 4 octet
> # base_mac = fa:16:3e:4f:00:00
>
> # Maximum amount of retries to generate a unique MAC address
> # mac_generation_retries = 16
> mac_generation_retries = 16
>
> # DHCP lease duration (in seconds)
> # dhcp_lease_duration = 86400
> dhcp_lease_duration = 120
>
> # Allow sending resource operation notification to DHCP agent
> # dhcp_agent_notification = True
>
> # Enable or disable bulk create/update/delete operations
> # allow_bulk = True
> allow_bulk = True
> # Enable or disable pagination
> # allow_pagination = False
> # Enable or disable sorting
> # allow_sorting = False
> # Enable or disable overlapping IPs for subnets
> # Attention: the following parameter MUST be set to False if Neutron is
> # being used in conjunction with nova security groups
> # allow_overlapping_ips = False
> allow_overlapping_ips = False
> # Ensure that configured gateway is on subnet
> # force_gateway_on_subnet = False
>
>
> # RPC configuration options. Defined in rpc __init__
> # The messaging module to use, defaults to kombu.
> # rpc_backend = neutron.openstack.common.rpc.impl_kombu
> rpc_backend = neutron.openstack.common.rpc.impl_kombu
> # Size of RPC thread pool
> # rpc_thread_pool_size = 64
> # Size of RPC connection pool
> # rpc_conn_pool_size = 30
> # Seconds to wait for a response from call or multicall
> # rpc_response_timeout = 60
> # Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
> # rpc_cast_timeout = 30
> # Modules of exceptions that are permitted to be recreated
> # upon receiving exception data from an rpc call.
> # allowed_rpc_exception_modules = neutron.openstack.common.exception, nova.exception
> # AMQP exchange to connect to if using RabbitMQ or QPID
> # control_exchange = neutron
> control_exchange = neutron
>
> # If passed, use a fake RabbitMQ provider
> # fake_rabbit = False
>
> # Configuration options if sending notifications via kombu rpc (these are
> # the defaults)
> # SSL version to use (valid only if SSL enabled)
> # kombu_ssl_version =
> # SSL key file (valid only if SSL enabled)
> # kombu_ssl_keyfile =
> # SSL cert file (valid only if SSL enabled)
> # kombu_ssl_certfile =
> # SSL certification authority file (valid only if SSL enabled)
> # kombu_ssl_ca_certs =
> # IP address of the RabbitMQ installation
> # rabbit_host = localhost
> rabbit_host = 127.0.0.1
> # Password of the RabbitMQ server
> # rabbit_password = guest
> rabbit_password = <<removed>>
> # Port where RabbitMQ server is running/listening
> # rabbit_port = 5672
> rabbit_port = 5672
> # RabbitMQ single or HA cluster (host:port pairs i.e.: host1:5672, host2:5672)
> # rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port'
> # rabbit_hosts = localhost:5672
> rabbit_hosts = 127.0.0.1:5672
> # User ID used for RabbitMQ connections
> # rabbit_userid = guest
> rabbit_userid = openstack
> # Location of a virtual RabbitMQ installation.
> # rabbit_virtual_host = /
> rabbit_virtual_host = /
> # Maximum retries with trying to connect to RabbitMQ
> # (the default of 0 implies an infinite retry count)
> # rabbit_max_retries = 0
> # RabbitMQ connection retry interval
> # rabbit_retry_interval = 1
> # Use HA queues in RabbitMQ (x-ha-policy: all). You need to
> # wipe the RabbitMQ database when changing this option. (boolean value)
> # rabbit_ha_queues = false
> rabbit_ha_queues = False
>
> # QPID
> # rpc_backend=neutron.openstack.common.rpc.impl_qpid
> # Qpid broker hostname
> # qpid_hostname = localhost
> # Qpid broker port
> # qpid_port = 5672
> # Qpid single or HA cluster (host:port pairs i.e.: host1:5672, host2:5672)
> # qpid_hosts is defaulted to '$qpid_hostname:$qpid_port'
> # qpid_hosts = localhost:5672
> # Username for qpid connection
> # qpid_username = ''
> # Password for qpid connection
> # qpid_password = ''
> # Space separated list of SASL mechanisms to use for auth
> # qpid_sasl_mechanisms = ''
> # Seconds between connection keepalive heartbeats
> # qpid_heartbeat = 60
> # Transport to use, either 'tcp' or 'ssl'
> # qpid_protocol = tcp
> # Disable Nagle algorithm
> # qpid_tcp_nodelay = True
>
> # ZMQ
> # rpc_backend=neutron.openstack.common.rpc.impl_zmq
> # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
> # The "host" option should point or resolve to this address.
> # rpc_zmq_bind_address = *
>
> # ============ Notification System Options =====================
>
> # Notifications can be sent when networks/subnets/ports are
> # created, updated or deleted. There are three methods of sending
> # notifications: logging (via the log_file directive), rpc (via a
> # message queue) and noop (no notifications sent, the default)
>
> # notification_driver can be defined multiple times
> # Do nothing driver
> # notification_driver = neutron.openstack.common.notifier.no_op_notifier
> # Logging driver
> # notification_driver = neutron.openstack.common.notifier.log_notifier
> # RPC driver. DHCP agents need it.
> notification_driver = neutron.openstack.common.notifier.rpc_notifier
>
> # default_notification_level is used to form actual topic name(s) or to set logging level
> # default_notification_level = INFO
>
> # default_publisher_id is a part of the notification payload
> # host = myhost.com
> # default_publisher_id = $host
>
> # Defined in rpc_notifier, can be comma-separated values.
> # The actual topic names will be %s.%(default_notification_level)s
> # notification_topics = notifications
>
> # Default maximum number of items returned in a single response.
> # A value < 0 means no max limit; otherwise the value must be
> # greater than 0. If the number of items requested is greater than
> # pagination_max_limit, the server will just return pagination_max_limit
> # items.
> # pagination_max_limit = -1
>
> # Maximum number of DNS nameservers per subnet
> # max_dns_nameservers = 5
>
> # Maximum number of host routes per subnet
> # max_subnet_host_routes = 20
>
> # Maximum number of fixed ips per port
> # max_fixed_ips_per_port = 5
>
> # =========== items for agent management extension =============
> # Seconds to regard the agent as down; should be at least twice
> # report_interval, to be sure the agent is down for good
> # agent_down_time = 9
> agent_down_time = 9
> # =========== end of items for agent management extension =====
>
> # =========== items for agent scheduler extension =============
> # Driver to use for scheduling network to DHCP agent
> # network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler
> # Driver to use for scheduling router to a default L3 agent
> # router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
> router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
> # Driver to use for scheduling a loadbalancer pool to an lbaas agent
> # loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.agent_scheduler.ChanceScheduler
>
> # Allow auto scheduling of networks to DHCP agents. It will schedule
> # non-hosted networks to the first DHCP agent that sends a
> # get_active_networks message to the neutron server
> # network_auto_schedule = True
>
> # Allow auto scheduling of routers to L3 agents. It will schedule
> # non-hosted routers to the first L3 agent that sends a sync_routers
> # message to the neutron server
> # router_auto_schedule = True
>
> # Number of DHCP agents scheduled to host a network. This enables redundant
> # DHCP agents for configured networks.
> # dhcp_agents_per_network = 1
> dhcp_agents_per_network = 1
>
> # =========== end of items for agent scheduler extension =====
>
> # =========== WSGI parameters related to the API server ==============
> # Number of separate worker processes to spawn. The default, 0, runs the
> # worker thread in the current process. Greater than 0 launches that number
> # of child processes as workers. The parent process manages them.
> # api_workers = 0
> api_workers = 0
> # Sets the value of TCP_KEEPIDLE in seconds to use for each server socket
> # when starting the API server. Not supported on OS X.
> # tcp_keepidle = 600
>
> # Number of seconds to keep retrying to listen
> # retry_until_window = 30
>
> # Number of backlog requests to configure the socket with.
> # backlog = 4096
>
> # Enable SSL on the API server
> # use_ssl = False
>
> # Certificate file to use when starting the API server securely
> # ssl_cert_file = /path/to/certfile
>
> # Private key file to use when starting the API server securely
> # ssl_key_file = /path/to/keyfile
>
> # CA certificate file to use when starting the API server securely to
> # verify connecting clients. This is an optional parameter only required if
> # API clients need to authenticate to the API server using SSL certificates
> # signed by a trusted CA
> # ssl_ca_file = /path/to/cafile
> # ======== end of WSGI parameters related to the API server ==========
> report_interval = 4
>
> [quotas]
> # resource name(s) that are supported in quota features
> # quota_items = network,subnet,port
>
> # default number of resources allowed per tenant; a negative value means unlimited
> # default_quota = -1
>
> # number of networks allowed per tenant; a negative value means unlimited
> # quota_network = 10
>
> # number of subnets allowed per tenant; a negative value means unlimited
> # quota_subnet = 10
>
> # number of ports allowed per tenant; a negative value means unlimited
> # quota_port = 50
>
> # number of security groups allowed per tenant; a negative value means unlimited
> # quota_security_group = 10
>
> # number of security group rules allowed per tenant; a negative value means unlimited
> # quota_security_group_rule = 100
>
> # default driver to use for quota checks
> # quota_driver = neutron.db.quota_db.DbQuotaDriver
>
> [agent]
> # Use "sudo neutron-rootwrap /etc/neutron/rootwrap.conf" to use the real
> # root filter facility.
> # Change to "sudo" to skip the filtering and just run the command directly
> root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
>
> # =========== items for agent management extension =============
> # seconds between nodes reporting state to server; should be less than
> # agent_down_time, best if it is half or less than agent_down_time
> # report_interval = 4
>
> # =========== end of items for agent management extension =====
>
> [keystone_authtoken]
> auth_host = 127.0.0.1
> auth_port = 35357
> auth_protocol = http
> admin_tenant_name = services
> admin_user = neutron
> admin_password = <<removed>>
> signing_dir = $state_path/keystone-signing
> auth_uri = http://127.0.0.1:5000/
>
> [database]
> # This line MUST be changed to actually run the plugin.
> # Example:
> # connection = mysql://root:pass@127.0.0.1:3306/neutron
> # Replace 127.0.0.1 above with the IP address of the database used by the
> # main neutron server. (Leave it as is if the database runs on this host.)
> connection = sqlite:////var/lib/neutron/ovs.sqlite
>
> # The SQLAlchemy connection string used to connect to the slave database
> # slave_connection =
>
> # Database reconnection retry times - in event connectivity is lost
> # set to -1 implies an infinite retry count
> # max_retries = 10
> max_retries = 10
>
> # Database reconnection interval in seconds - if the initial connection to the
> # database fails
> # retry_interval = 10
> retry_interval = 10
>
> # Minimum number of SQL connections to keep open in a pool
> # min_pool_size = 1
>
> # Maximum number of SQL connections to keep open in a pool
> # max_pool_size = 10
>
> # Timeout in seconds before idle sql connections are reaped
> # idle_timeout = 3600
> idle_timeout = 3600
>
> # If set, use this value for max_overflow with sqlalchemy
> # max_overflow = 20
>
> # Verbosity of SQL debugging information. 0=None, 100=Everything
> # connection_debug = 0
>
> # Add python stack traces to SQL as comment strings
> # connection_trace = False
>
> # If set, use this value for pool_timeout with sqlalchemy
> # pool_timeout = 10
>
> [service_providers]
> # Specify service providers (drivers) for advanced services like loadbalancer, VPN, Firewall.
> # Must be in form:
> # service_provider=<service_type>:<name>:<driver>[:default]
> # List of allowed service types includes LOADBALANCER, FIREWALL, VPN
> # Combination of <service type> and <name> must be unique; <driver> must also be unique
> # This is a multiline option; example for default provider:
> # service_provider=LOADBALANCER:name:lbaas_plugin_driver_path:default
> # example of non-default provider:
> # service_provider=FIREWALL:name2:firewall_driver_path
> # --- Reference implementations ---
> service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
>
> [QUOTAS]
> quota_firewall_rule=-1
> quota_subnet=10
> quota_router=10
> quota_firewall=1
> quota_security_group=10
> quota_network=10
> default_quota=-1
> quota_firewall_policy=1
> quota_security_group_rule=100
> quota_floatingip=50
> quota_port=50
> quota_driver=neutron.db.quota_db.DbQuotaDriver
>
> [AGENT]
> root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf
>
>
> ==============================================================================
> /etc/neutron/plugins/ml2/ml2_conf.ini
> ==============================================================================
>
> [ml2]
> # (ListOpt) List of network type driver entrypoints to be loaded from
> # the neutron.ml2.type_drivers namespace.
> #
> # type_drivers = local,flat,vlan,gre,vxlan
> type_drivers = vxlan
> # Example: type_drivers = flat,vlan,gre,vxlan
>
> # (ListOpt) Ordered list of network_types to allocate as tenant
> # networks. The default value 'local' is useful for single-box testing
> # but provides no connectivity between hosts.
> #
> # tenant_network_types = local
> tenant_network_types = vxlan
> # Example: tenant_network_types = vlan,gre,vxlan
>
> # (ListOpt) Ordered list of networking mechanism driver entrypoints
> # to be loaded from the neutron.ml2.mechanism_drivers namespace.
> # mechanism_drivers =
> mechanism_drivers = openvswitch,l2population
> # Example: mechanism_drivers = openvswitch,mlnx
> # Example: mechanism_drivers = arista
> # Example: mechanism_drivers = cisco,logger
> # Example: mechanism_drivers = openvswitch,brocade
> # Example: mechanism_drivers = linuxbridge,brocade
>
> [ml2_type_flat]
> # (ListOpt) List of physical_network names with which flat networks
> # can be created. Use * to allow flat networks with arbitrary
> # physical_network names.
> #
> # flat_networks =
> # Example: flat_networks = physnet1,physnet2
> # Example: flat_networks = *
>
> [ml2_type_vlan]
> # (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
> # specifying physical_network names usable for VLAN provider and
> # tenant networks, as well as ranges of VLAN tags on each
> # physical_network available for allocation as tenant networks.
> #
> # network_vlan_ranges =
> # Example: network_vlan_ranges = physnet1:1000:2999,physnet2
>
> [ml2_type_gre]
> # (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating
> # ranges of GRE tunnel IDs that are available for tenant network allocation
> # tunnel_id_ranges =
>
> [ml2_type_vxlan]
> # (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
> # ranges of VXLAN VNI IDs that are available for tenant network allocation.
> #
> # vni_ranges =
> vni_ranges = 101:1677
>
> # (StrOpt) Multicast group for the VXLAN interface. When configured, will
> # enable sending all broadcast traffic to this multicast group. When left
> # unconfigured, will disable multicast VXLAN mode.
> #
> # vxlan_group =
> vxlan_group = 224.0.0.1
> # Example: vxlan_group = 239.1.1.1
>
> [securitygroup]
> firewall_driver=True
>
>
> ==============================================================================
> /etc/neutron/l3_agent.ini
> ==============================================================================
>
> [DEFAULT]
> # Show debugging output in log (sets DEBUG log level output)
> # debug = False
> debug = False
>
> # L3 requires that an interface driver be set. Choose the one that best
> # matches your plugin.
> # interface_driver =
> interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
>
> # Example of interface_driver option for OVS based plugins (OVS, Ryu, NEC)
> # that support the L3 agent
> # interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
>
> # Use veth for an OVS interface or not.
> # Supports kernels with limited namespace support
> # (e.g. RHEL 6.5) so long as ovs_use_veth is set to True.
> # ovs_use_veth = False
>
> # Example of interface_driver option for LinuxBridge
> # interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
>
> # Allow overlapping IPs (must have a kernel built with CONFIG_NET_NS=y and
> # an iproute2 package that supports namespaces).
> # use_namespaces = True
> use_namespaces = True
>
> # If use_namespaces is set to False then the agent can only configure one
> # router. This is done by setting the specific router_id.
> # router_id =
>
> # Each L3 agent can be associated with at most one external network. This
> # value should be set to the UUID of that external network. If empty,
> # the agent will enforce that only a single external network exists and
> # use that external network id
> # gateway_external_network_id =
>
> # Indicates that this L3 agent should also handle routers that do not have
> # an external network gateway configured. This option should be True only
> # for a single agent in a Neutron deployment, and may be False for all
> # agents if all routers must have an external network gateway
> # handle_internal_only_routers = True
> handle_internal_only_routers = True
>
> # Name of bridge used for external network traffic. This should be set to
> # an empty value for the linux bridge
> # external_network_bridge = br-ex
> external_network_bridge = br-ex
>
> # TCP port used by Neutron metadata server
> # metadata_port = 9697
> metadata_port = 9697
>
> # Send this many gratuitous ARPs for HA setup. Set it below or equal to 0
> # to disable this feature.
> # send_arp_for_ha = 3
> send_arp_for_ha = 3
>
> # seconds between re-syncs of routers' data, if needed
> # periodic_interval = 40
> periodic_interval = 40
>
> # seconds to wait after starting the agent before starting to sync
> # routers' data
> # periodic_fuzzy_delay = 5
> periodic_fuzzy_delay = 5
>
> # enable_metadata_proxy, which is true by default, can be set to False
> # if the Nova metadata server is not available
> # enable_metadata_proxy = True
> enable_metadata_proxy = True
>
> # Location of the Metadata Proxy UNIX domain socket
> # metadata_proxy_socket = $state_path/metadata_proxy
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">2014-03-27 22:32 GMT+01:00 Sławek
Kapłoński <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:slawek@kaplonski.pl" target="_blank">slawek@kaplonski.pl</a>></span>:<br>
> > Hello,
> >
> > I think you should check which mechanism drivers you have set up in
> > the ml2 config file, and also check whether the OVS agent on the
> > compute host is working correctly.
> >
> > --
> > Best regards
> > Sławek Kapłoński
> >
> > On Thursday, 27 March 2014 21:09:19, Tom Verdaat wrote:
> > > Hi all,
> > >
> > > I've been trying to get a multi-host OpenStack Havana deployment to
> > > work on Ubuntu 13.10 with Neutron using the ML2 plugin, the OVS
> > > agent, and VXLAN for tenant networks. Created networks, subnets and
> > > routers inside Neutron, and according to Neutron they are all active
> > > and up. All ports however are DOWN when created, with the error
> > > "binding:vif_type=binding_failed".
> > >
> > > Haven't been able to find useful stuff on this issue or the vif_type
> > > parameter in general online so far. Can anyone tell me what this
> > > error is about and how I can fix it? What might I have done wrong?
> > > Where do I start?
> > >
> > > Thanks a lot!
> > >
> > > Tom

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack